Microservices for Big Data: Flipping the Paradigm

“Microservices are essentially applications that are broken down into small pieces. Using microservices, businesses can prevent large-scale failure by isolating problems, and save on computing resources,” commented Jack Norris, Senior Vice President of Data and Applications at MapR. Microservices are now altering a major Data Management paradigm of the last 30 years.

In a recent DATAVERSITY® interview Norris said, “This class of applications delivers real-time analytics, high-frequency decision making and other solution architectures that require immediate operations on large volumes of data.” Working together as a cohesive unit instead of isolated in silos, “this architecture leads to greater responsiveness, better decisions, less complexity, lower costs, and lower risks.” Norris said that MapR’s company focus is “to be the foundation at the center of this data revolution.”

MapR started in 2009, and according to Norris, “We’ve been on a straight line of innovation since.” With a technical team rooted in enterprise storage and a CTO who had worked on Google’s BigTable group, MapR’s founders understood the need for innovation at the data platform layer. Their goals were to provide high-scale enterprise storage in conjunction with real-time database operations, and to create one platform that converges file system, database, and stream. “If you have that as an underpinning, that really opens things up and [it] will transform how applications are developed and delivered in the enterprise.”

Traditionally, Norris said, data is stored and organized by a series of application-driven processes that extract, transform, and load data from hundreds of separate data silos, with analytic functions in a separate data store and production operational systems kept separate from the analytics. When we look at the volume and variety of data, he said, “we tend to start at the endpoint and then marvel at it.” The reality is that “most of that content is machine-generated content, most of that data was created one event at a time,” and it takes at least a day before any of that production data finds its way into the data warehouse.
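The day-long delay Norris describes can be sketched in miniature. The following toy batch ETL job (the silo, warehouse, and field names here are illustrative, not any vendor's schema) shows why: events accumulate one at a time in an operational silo, but analytics sees nothing until the scheduled extract-transform-load pass runs.

```python
# Minimal sketch of a traditional batch ETL step, assuming a toy
# in-memory "silo" of raw operational records and a separate
# "warehouse" list standing in for the analytic data store.

def extract(silo):
    """Pull raw event records out of an operational silo."""
    return list(silo)

def transform(records):
    """Normalize raw records into the warehouse schema."""
    return [{"user": r["u"], "amount": float(r["amt"])} for r in records]

def load(warehouse, rows):
    """Append transformed rows to the warehouse store."""
    warehouse.extend(rows)

# Events were created one at a time during the day...
silo = [{"u": "alice", "amt": "9.99"}, {"u": "bob", "amt": "4.50"}]
warehouse = []

# ...but only become visible to analytics when the nightly batch runs.
load(warehouse, transform(extract(silo)))
```

In a real deployment this pass is scheduled (nightly, hourly), which is exactly the gap between event creation and warehouse availability that the article refers to.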

Microservices that process operational and analytical data simultaneously remove these separations and delays, creating a “revolutionary” level of responsiveness to real-time events. That responsiveness can be applied in a variety of contexts for a multitude of benefits: improving the customer experience, anticipating time lost to equipment breakdowns, or blunting the impact of fraud and security risks as they’re happening.
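A minimal sketch of that idea: one service consumes an event stream and updates both an operational view and a running analytic in the same pass, so neither waits on a batch window. The in-memory queue here stands in for a real stream (a Kafka or MapR Streams topic, for example), and all names are illustrative rather than any vendor's API.

```python
# Hedged sketch of a stream-driven microservice that maintains an
# operational view (latest balance per account) and an analytic view
# (running counts and totals) in a single pass over the events.
from queue import Queue

def run_service(stream, operational, analytics):
    """Consume events until a None sentinel signals stream close."""
    while True:
        event = stream.get()
        if event is None:
            break
        # Operational update: latest balance per account.
        operational[event["account"]] = event["balance"]
        # Analytical update in the same pass: running totals.
        analytics["events"] += 1
        analytics["volume"] += event["balance"]

# Feed a few events, then close the stream with a sentinel.
stream = Queue()
for e in [{"account": "a1", "balance": 100.0},
          {"account": "a2", "balance": 250.0},
          {"account": "a1", "balance": 80.0}]:
    stream.put(e)
stream.put(None)

operational, analytics = {}, {"events": 0, "volume": 0.0}
run_service(stream, operational, analytics)
```

Because both views are updated per event, a fraud check or customer-facing dashboard reading `operational` and `analytics` sees current data immediately rather than after the next batch load.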

Norris stated that IDC and Gartner are predicting flat IT budget growth for the next five years, “and that kind of flat, stable growth masks some big changes underneath.” With spending on legacy technologies expected to keep declining while spending on next-generation technologies rises, the typical CIO can’t wait the usual year and a half for an application to roll out, he said. Growing discussion of containers and reusability, along with rising interest in microservices, stems from a desire to lower costs and increase productivity without sacrificing innovation, Norris said, but “the stumbling block is the data.” Ephemeral, lightweight applications are considered good candidates, “but if it requires sharing data, if it requires a stable application, or in-depth analytics, those are typically passed over for deployment” due to constraints with data.
