Real-Time Big Data Processing with Spark and MemSQL


I got an opportunity to work extensively with big data and analytics at Myntra, an e-commerce store based in India. Data-driven intelligence is one of the core values at Myntra, so processing data and surfacing meaningful insights for the company is of utmost importance.

Every day, millions of users visit Myntra via the app or website, generating billions of clickstream events. It's very important for the data platform team to scale to such a huge number of incoming events, ingest them in real-time with minimal or no loss, and process the unstructured or semi-structured data to generate insights.

We use a varied set of technologies and in-house products to achieve the above, including Go, Kafka, Secor, Spark, Scala, Java, S3, Presto, and Redshift.

As more and more business decisions come to be based on data and insights, batch and offline reporting is simply no longer enough. We needed real-time user behavior analysis, real-time traffic numbers, real-time notification performance, and more, all available with minimal latency. That meant ingesting, filtering, and processing data in real-time, and persisting it in a write-fast, performant data store for dashboarding and reporting.


Meterial is a pipeline that does exactly this, and more: it provides a feedback loop so other teams can act on the data in real-time.

Our event collectors, written in Go, sit behind an Amazon ELB to receive events from the app and website. They add a timestamp to each incoming clickstream event and push it into Kafka. From Kafka, the Meterial ingestion layer, built on Apache Spark Streaming, ingests around four million events per minute, filters and transforms the incoming events based on a configuration file, and persists them to a MemSQL row store every minute.
