
What Spark’s Structured Streaming really means

Last year was a banner year for Spark. Big names like Cloudera and IBM jumped on the bandwagon, companies like Uber and Netflix rolled out major deployments, and Databricks' aggressive release schedule delivered a raft of improvements and new features. Yet real competition for Spark also emerged, led by Apache Flink and Google Cloud Dataflow (aka Apache Beam).

Flink and Dataflow bring new innovations and target some of Spark's weaker aspects, particularly memory management and streaming support. Spark has not been standing still in the face of this competition, however; big efforts were made last year to improve Spark's memory management and query optimizer.

Moreover, this year will usher in Spark 2.0 -- and with it a new twist for streaming applications, which Databricks calls "Structured Streaming."

Structured Streaming is a collection of additions to Spark Streaming rather than a huge change to Spark itself. In other words, for all of you Spark jockeys out there: The fundamental concept of microbatching at the core of Spark's streaming architecture endures.

If you've read one of my previous Spark articles or attended any of my talks over the past year or so, you'll have noticed that I make the same point again and again: Use DataFrames whenever you can, not Spark's RDD primitive.

DataFrames get the benefit of the Catalyst query optimizer, and as of Spark 1.6, typed DataFrames (Datasets) can take advantage of dedicated encoders that allow significantly faster serialization and deserialization (an order of magnitude faster than the default Java serializer). Furthermore, in Spark 2.0, DataFrames come to Spark Streaming through the simple concept of an infinite DataFrame. Creating such a DataFrame from a stream is simple:

(Note: The Structured Streaming APIs are in constant flux, so while the code snippets in this article provide a general idea of how the code will look, they may change between now and the release of Spark 2.0.)
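With that caveat, here's a minimal sketch of what creating a streaming DataFrame might look like. The readStream entry point and the socket source reflect the API as it currently stands, and the host and port values are placeholders:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("StructuredStreamingSketch")
  .getOrCreate()

// An unbounded DataFrame of text lines arriving on a socket;
// it grows as new rows stream in.
val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()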

This results in a streaming DataFrame that can be manipulated in exactly the same way as the more familiar batch DataFrame -- using custom user-defined functions (UDFs), for example. Behind the scenes, those results will be updated as new data flows from the stream source. Instead of a disparate set of avenues into data, you'll have one unified API for both batch and streaming sources. And of course, all your queries on the DataFrames will call on the Catalyst Optimizer to produce efficient operations across the cluster.
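Continuing the sketch above, a query against the streaming lines DataFrame reads exactly like its batch counterpart. The upper UDF and the console sink here are illustrative assumptions, not part of any fixed API:

import org.apache.spark.sql.functions.udf
import spark.implicits._

// A user-defined function applied to the streaming DataFrame just as
// it would be applied to a batch DataFrame.
val upper = udf((s: String) => s.toUpperCase)
val shouted = lines.select(upper($"value").as("shout"))

// Starting the query keeps the output updated as new rows arrive.
val query = shouted.writeStream
  .format("console")
  .outputMode("append")
  .start()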

That's all well and good, as far as it goes; it eases development across batch and streaming applications. But the real importance of Structured Streaming is Spark's abstraction of repeated queries (RQ). In essence, RQ states that the majority of streaming applications can be seen as asking the same question over and over again (for example, "How many people visited my website in the past five minutes?").
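That five-minute question might be sketched as a windowed aggregation, reusing the session and imports from the snippets above. The rate test source used here arrived in a later Spark release and stands in for any timestamped stream of visits; the column names are assumptions:

import org.apache.spark.sql.functions.window
import spark.implicits._

// Pretend each row is a site visit stamped with its event time.
val visits = spark.readStream
  .format("rate")                 // test source emitting (timestamp, value)
  .option("rowsPerSecond", 10)
  .load()

// The repeated query: visitors per five-minute window, recomputed
// continuously as new data arrives.
val counts = visits
  .groupBy(window($"timestamp", "5 minutes"))
  .count()

counts.writeStream
  .outputMode("complete")         // emit the full updated result each trigger
  .format("console")
  .start()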



