Real-Time Big Data Processing with Spark and MemSQL


I had the opportunity to work extensively with big data and analytics at Myntra, an e-commerce company based in India. Data-driven intelligence is one of Myntra's core values, so crunching and processing data and reporting meaningful insights to the company is of utmost importance.

Every day, millions of users visit Myntra via the app or website, generating billions of clickstream events. It is critical for the data platform team to scale to this volume of incoming events, ingest them in real time with minimal or no loss, and process the unstructured and semi-structured data to generate insights.

We use a varied set of technologies and in-house products to achieve this, including Go, Kafka, Secor, Spark, Scala, Java, S3, Presto, and Redshift.

As more and more business decisions came to be based on data and insights, batch and offline reporting was simply not enough. We required real-time user-behavior analysis, real-time traffic metrics, real-time notification performance, and more, all available with minimal latency. We needed to ingest, filter, and process data in real time, and to persist it in a fast-write, performant data store for dashboarding and reporting.


Meterial is a pipeline that does exactly this and more, providing a feedback loop that lets other teams act on the data in real time.

Our event collectors, written in Go, sit behind Amazon ELB to receive events from the app and website. They add a timestamp to each incoming clickstream event and push it into Kafka. From Kafka, the Meterial ingestion layer, built on Apache Spark Streaming, ingests around four million events per minute, filters and transforms the incoming events based on a configuration file, and persists them to a MemSQL rowstore every minute.



