Why some Data Lakes are built to last
Hadoop-based data lakes can be game-changers, but too many are underperforming. Here's a checklist to make your data lake a wild success.
CIO | Jun 10, 2016 7:08 AM PT
Hadoop-based data lakes can be game-changers: better, cheaper and faster integrated enterprise information. Knowledge workers can access data directly, project cycles are measured in days rather than months, and business users can leverage a shared data source rather than creating stand-alone sandboxes or warehouses.
Unfortunately, more than a few data lake projects are off track. Data is going in but it’s not coming out, at least not at the pace envisioned. What’s the chokepoint? It tends to be some combination of lack of manageability, data quality and security concerns, performance unpredictability, and shortage of skilled data engineers.
What distinguishes data lakes that are "enterprise class," i.e., the ones that are built to last and attract hundreds of users and uses? First, let's look at the features that are Table Stakes: what makes a data lake a data lake. Next, we'll describe the capabilities that make a first-class data lake, one that is built to last.
Hadoop – the open source software framework for distributed storage and distributed processing of very large data sets on computer clusters. Base Apache Hadoop contains the libraries and utilities needed by other Hadoop modules; HDFS, a distributed file system that stores data on commodity machines; a resource-management platform for managing compute; and an implementation of the MapReduce programming model for large-scale data processing.
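The MapReduce model mentioned above can be sketched in a few lines of plain Python. This is a single-process simulation of the map, shuffle/sort, and reduce phases, not the Hadoop API itself, using the classic word-count example:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    """Mapper: emit a (word, 1) pair for every word in every input line."""
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle/sort: group intermediate pairs by key, as the framework does
    between the map and reduce phases."""
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield key, [value for _, value in group]

def reduce_phase(grouped):
    """Reducer: sum the counts for each word."""
    for word, counts in grouped:
        yield word, sum(counts)

lines = ["big data big lake", "data lake"]
result = dict(reduce_phase(shuffle(map_phase(lines))))
# result -> {"big": 2, "data": 2, "lake": 2}
```

In real Hadoop each phase runs in parallel across the cluster's machines, with HDFS supplying the input splits and the framework handling the shuffle over the network.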
Commodity Compute Clusters – whether on premises or in the cloud, Hadoop runs on low-cost commodity servers that rack, stack, and virtualize. Scaling is easy and inexpensive. The economics of open source massively parallel software combined with low-cost hardware deliver the promise of intelligent applications on truly big data.
All Data / Raw Data – the data lake design philosophy is to land and store all data from source systems in raw format: structured enterprise data from operational systems, semi-structured machine-generated and web log data, social media data, et al.
Schema-less writes – this point in particular is a breakthrough. Whereas traditional data warehouses are throttled by the time and complexity of data modeling, data lakes land data in source format. Instead of weeks (or worse), data can be gathered and offered up in short order. Schemas are applied on read, pushing that analytic or modeling work to analysts.
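The schema-on-read idea can be illustrated with a small Python sketch: records are landed exactly as received, and a schema is imposed only at query time by the reader. The field names and sample records here are hypothetical, chosen just for illustration:

```python
import json

# Raw events are landed in source format -- no upfront modeling, no rejects.
raw_landing = [
    '{"user": "u1", "amount": "19.99", "ts": "2016-06-01"}',
    '{"user": "u2", "amount": "5.00"}',  # "ts" missing; the lake accepts it anyway
]

# The schema exists only on the read side, defined by the analyst.
read_schema = {"user": str, "amount": float, "ts": str}

def read_with_schema(raw_lines, schema):
    """Parse raw records and cast them to the reader's schema;
    fields absent from a record come back as None instead of failing the load."""
    for line in raw_lines:
        record = json.loads(line)
        yield {column: cast(record[column]) if column in record else None
               for column, cast in schema.items()}

rows = list(read_with_schema(raw_landing, read_schema))
# rows[0]["amount"] -> 19.99 (cast on read); rows[1]["ts"] -> None
```

A warehouse would have forced that casting and null-handling into an upfront load job; here the write path stays trivial and the interpretation cost is paid only by the queries that need it.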
Open source tools – (e.g., Spark, Pig, Hive, Python, Sqoop, Flume, MapReduce, R, Kafka, Impala, YARN, Kite, and many more) the evolving toolkit of programming, querying, and scripting languages and frameworks for ingesting and integrating data, building analytic apps, and accessing data.
If the Table Stakes listed above define a data landing area, the following capabilities differentiate a data lake that is extensible, manageable, and industrial strength:
Defined Data and Refined Data – where basic data lakes contain raw data, advanced lakes contain Defined and Refined data as well. Defined Data has a schema, and that schema is registered in Hadoop's HCatalog. Since most data comes from source systems with structured schemas, it is eminently practical to leverage them.
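The point of registering a schema is that every tool on the cluster then reads the dataset the same way. In Hadoop this registration happens via Hive DDL against HCatalog; the hypothetical in-memory catalog below (dataset and column names invented for illustration) sketches the idea that "defined" data is simply raw data plus a shared, discoverable schema:

```python
# A toy stand-in for HCatalog: a shared registry mapping dataset names
# to column definitions, so readers don't each reinvent the schema.
catalog = {}

def register_schema(dataset, columns):
    """Promote a raw dataset to 'defined' by publishing its schema."""
    catalog[dataset] = columns

def describe(dataset):
    """Look a schema up by name, as Hive or Pig would via HCatalog."""
    return catalog[dataset]

# Register the (hypothetical) web-log dataset once, centrally.
register_schema("web_logs", {"ip": "string", "url": "string", "status": "int"})

schema = describe("web_logs")
# schema["status"] -> "int": every consumer now agrees on the column types
```

With the real HCatalog, the equivalent registration is a `CREATE TABLE` statement over the files already sitting in HDFS, which is why leveraging the source systems' existing structured schemas is so cheap.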