JVM developers moving to containers and microservices to keep up with fast data

Developers have never had it so good, swimming as they do in a sea of cheap, flexible cloud hardware and open source software. Yet, as a new Lightbend survey of 2,100 JVM developers suggests, there has never been a more precarious time for Java Virtual Machine (JVM) developers: the traditional Java EE app server may be gasping its last breath. Perhaps this isn't surprising, given that machine learning and microservices are completely changing how we program.

Yet, it's disconcerting for the thousands of engineers who have built their careers on what appears to be a dying art. As the survey report concludes, "The old world where JVM language developers relied on operators to do the work around deploying applications is in the midst of major upheaval, as the entire Java EE stack built around heavyweight app servers is losing relevance."

So long, and thanks for all the fish?

I spoke to Lightbend CEO Mark Brewer to get more background on the survey results and learn more about how JVM developers are grappling with modern data realities. He said the challenge stems less from the volume of so-called "big data" and more from meeting new requirements in speed and performance, which he calls the new era of "Fast Data."


This fast data world threatens to completely upend the traditional Java app server, as well as the ops teams and developers who love them.

Though a new slant on big data isn't really necessary—Gartner's "three Vs" of big data already include velocity (in addition to volume and variety)—Brewer can be excused for fixating on speed. After all, as he stressed, this is the first time that "any application can take advantage of data not even written to disk—as it's still moving from its source to the application or database."

This means applications don't have to wait for data to land before querying it; they can process it while it's still moving. It also means that speed increasingly defines applications.
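
To make "processing data in motion" concrete, here is a minimal sketch using Akka Streams, Lightbend's streaming toolkit (assuming Akka 2.6+, where the implicit ActorSystem provides the materializer). The tick-driven source of random readings and the rolling average are purely illustrative stand-ins for a real event feed:

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.duration._
import scala.util.Random

object InMotionSketch extends App {
  // Akka 2.6+ materializes streams from the implicit ActorSystem.
  implicit val system: ActorSystem = ActorSystem("fast-data-sketch")

  // A stand-in for an endless feed of sensor readings arriving every 100 ms.
  val readings: Source[Double, _] =
    Source.tick(0.seconds, 100.millis, ()).map(_ => Random.nextDouble() * 100)

  // The data is processed as it flows past -- no waiting for it to land on
  // disk, no batch query afterwards.
  readings
    .sliding(10)                        // rolling window of the last 10 readings
    .map(w => w.sum / w.size)           // moving average computed "in motion"
    .runWith(Sink.foreach(avg => println(f"rolling average: $avg%.2f")))
}
```

The point is less the specific operators than the shape of the program: the computation is attached to the moving data itself rather than to a query issued after the data comes to rest.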

Machine learning, anomaly detection, analytics—all of these big data use cases put a premium on speed. Nor are they alone: IoT, mesh devices, home automation, self-driving cars, telemetry, and many other emerging use cases rely on processing data while it's still "in motion," Brewer said.


None of which makes JVM developers' lives any easier.

From a JVM developer's standpoint, Brewer told me, this trend has made applications richer but also forced developers to be smarter about data. Indeed, "Where batch jobs seldom last for more than a few hours, a streaming pipeline often runs continuously, and this always-on requirement is unprecedented in how developers have their applications interacting with real-time data," Brewer said.
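
A minimal sketch of that contrast, again in Akka Streams and purely illustrative: the bounded source below completes like a batch job, while the unbounded one runs until it is stopped from outside, which is the "always-on" behavior Brewer describes.

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}

object BatchVsStreamingSketch extends App {
  implicit val system: ActorSystem = ActorSystem("batch-vs-streaming")
  import system.dispatcher

  // Batch-style: a bounded source. The stream completes once the last element
  // has been processed, much like a nightly job that finishes after a few hours.
  Source(1 to 1000)
    .map(_ * 2)
    .runWith(Sink.ignore)
    .foreach(_ => println("batch finished"))

  // Streaming: an unbounded source. This pipeline never completes on its own --
  // the "always-on" requirement for applications built on real-time data.
  Source.unfold(0L)(n => Some((n + 1, n)))   // a hypothetical endless event counter
    .map(event => s"processed event $event")
    .runWith(Sink.foreach(println))
}
```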

 


