Opinion
Why some Data Lakes are built to last
Hadoop-based data lakes can be game-changers, but too many are underperforming. Here's a checklist to make your data lake a wild success.
CIO | Jun 10, 2016 7:08 AM PT
Hadoop-based data lakes can be game changers: better, cheaper, and faster integrated enterprise information. Knowledge workers can access data directly, project cycles are measured in days rather than months, and business users can leverage a shared data source rather than creating stand-alone sandboxes or warehouses.
Unfortunately, more than a few data lake projects are off track. Data is going in but it's not coming out, at least not at the pace envisioned. What's the chokepoint? It tends to be some combination of lack of manageability, data quality and security concerns, performance unpredictability, and a shortage of skilled data engineers.
What distinguishes data lakes that are "enterprise class," i.e., the ones that are built to last and attract hundreds of users and uses? First, let's look at the features that are table stakes, i.e., what makes a data lake a data lake at all. Next we will describe the capabilities that distinguish a first-class data lake, one that is built to last.
Table stakes
Hadoop – the open source software framework for distributed storage and distributed processing of very large data sets on computer clusters. The base Apache Hadoop framework contains the libraries and utilities needed by other Hadoop modules; HDFS, a distributed file system that stores data on commodity machines; a resource-management platform (YARN) for managing compute; and an implementation of the MapReduce programming model for large-scale data processing.
Commodity Compute Clusters – whether on premises or in the cloud, Hadoop runs on low-cost commodity servers that are easy to rack, stack, and virtualize. Scaling is easy and inexpensive. The economics of open source massively parallel software combined with low-cost hardware deliver on the promise of intelligent applications on truly big data.
All Data / Raw Data – the data lake design philosophy is to land and store all data in raw format from source systems: structured enterprise data from operational systems, semi-structured machine-generated and web log data, social media data, and so on.
Schema-less Writes – this point in particular is a breakthrough. Whereas traditional data warehouses are throttled by the time and complexity of data modeling, data lakes land data in source format. Instead of weeks (or worse), data can be gathered and offered up in short order. Schemas are applied on read, pushing that analytic or modeling work to analysts (see the sketch after this list).
Open Source Tools – (e.g., Spark, Pig, Hive, Python, Sqoop, Flume, MapReduce, R, Kafka, Impala, YARN, Kite, and many more) the evolving toolkit of programming, querying, and scripting languages and frameworks for ingesting and integrating data, building analytic apps, and accessing data.
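
Schema-on-read is easy to see in code. Below is a minimal sketch using PySpark (Spark and Python being among the tools listed above); the file path, field names, and view name are hypothetical. The point is simply that the schema travels with the reader rather than with the files:

# Schema-on-read: raw JSON landed in the lake is read as-is, and a
# schema is applied only at query time. Paths and fields are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("schema-on-read").getOrCreate()

# The schema lives with the reader, not with the files on disk.
clickstream_schema = StructType([
    StructField("user_id", StringType()),
    StructField("url", StringType()),
    StructField("event_time", TimestampType()),
])

# Apply the schema while reading the raw landing zone.
events = (spark.read
          .schema(clickstream_schema)
          .json("hdfs:///lake/raw/weblogs/"))  # hypothetical landing path

events.createOrReplaceTempView("weblogs")
spark.sql("SELECT url, COUNT(*) AS hits FROM weblogs GROUP BY url").show()

Nothing about the raw files changed; a different analyst could read the same files tomorrow with a different schema.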
Enterprise class
If the table stakes listed above define a data landing area, the following differentiate a data lake that is extensible, manageable, and industrial strength:
Defined Data and Refined Data – whereas basic data lakes contain raw data, advanced lakes contain Defined and Refined data as well. Defined Data has a schema, and that schema is registered in Hadoop's HCatalog. Since most data comes from source systems with structured schemas, it is eminently practical to leverage those (see the sketch below).
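
To make "Defined Data" concrete, here is a hedged sketch of registering a schema over files already sitting in the lake, so downstream users can query them by name. The table name, columns, and path are hypothetical; the DDL goes through Spark's Hive support, which records the definition in the shared metastore that HCatalog exposes:

# Registering "Defined Data": the raw Parquet files are untouched; only
# the shared metadata layer gains a schema. Names and paths are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("define-data")
         .enableHiveSupport()
         .getOrCreate())

spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS orders_defined (
        order_id    STRING,
        customer_id STRING,
        amount      DECIMAL(12,2),
        order_ts    TIMESTAMP
    )
    STORED AS PARQUET
    LOCATION 'hdfs:///lake/defined/orders/'
""")

# Any user with metastore access can now query the defined data by name.
spark.sql("SELECT COUNT(*) AS orders FROM orders_defined").show()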
