Next Generation Analytics: The Collision of Classic & Big Data Analytics

Classic analytics, traditionally supported by a data warehouse, yields focus and insight by helping an organization understand its own past actions. One among many examples is measuring the supply chain of materials into a product or service brought to market, which is typical across industries from telecom and financial services to pharmaceuticals and retail. Others measure sales across product channels and customer demographics, or cash flow in, out, and through an organization. Most organizations leverage some sort of data warehouse or a set of business intelligence tools to dissect their own transactional data. Mature organizations conduct analytics in real time, and some even use predictive modeling to forecast and help make decisions.
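As a minimal sketch of this kind of classic, warehouse-style dissection of transactional data, the snippet below aggregates revenue by product channel and customer demographic. The table and column names are hypothetical, invented purely for illustration.

```python
# A minimal sketch of "classic" analytics over warehouse-style transactional data.
# All table and column names are hypothetical.
import pandas as pd

# Transactional sales extract, as it might come out of a data warehouse.
sales = pd.DataFrame({
    "channel":     ["online", "retail", "online", "partner", "retail"],
    "demographic": ["18-34", "35-54", "35-54", "55+", "18-34"],
    "revenue":     [1200.0, 850.0, 430.0, 990.0, 610.0],
})

# Classic BI-style dissection: revenue by product channel and customer demographic.
summary = (
    sales.groupby(["channel", "demographic"], as_index=False)["revenue"]
         .sum()
         .sort_values("revenue", ascending=False)
)
print(summary)
```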

Big Data analytics shifts the focus from analyzing internal mechanisms to the events that happen outside an organization. It is now possible to leverage data to understand events external to an organization. Tapping into social media, news feeds, and product review data can provide insight into how customers view an organization's products and services. In some cases, tapping into machine logs will yield insight into how an organization's key stakeholders (customers, employees, etc.) deliver or use the final products. Big Data management systems like Hadoop help manage the storage of this information. Moving this information into a system like Hadoop can be challenging, and teasing this massive amount of data into actionable intelligence can seem impossible.
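A rough sketch of that first landing step is shown below: pushing raw external text (machine logs, article dumps, product reviews) into HDFS with the standard hdfs dfs command-line tool, leaving the data unstructured at load time. The local and HDFS paths are assumptions for illustration only.

```python
# Land raw external text files into an HDFS directory using the "hdfs dfs" CLI.
# Paths are hypothetical; structure is imposed later (e.g., via Hive), not here.
import subprocess
from pathlib import Path

LOCAL_DIR = Path("/data/incoming/logs")   # hypothetical local drop zone
HDFS_DIR = "/raw/external/logs"           # hypothetical HDFS landing zone

# Make sure the landing directory exists in HDFS.
subprocess.run(["hdfs", "dfs", "-mkdir", "-p", HDFS_DIR], check=True)

# Push each raw file as-is; -f overwrites any previous copy.
for path in LOCAL_DIR.glob("*.log"):
    subprocess.run(["hdfs", "dfs", "-put", "-f", str(path), HDFS_DIR], check=True)
```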

Big Data analytics requires approaches and techniques that go beyond classic data warehousing ETL. Specifically, repurposing data quality techniques can help solve Big Data integration challenges. Recently I set up some of the leading discovery and visualization tools, Tableau and Qlik Sense, to access data in Hadoop. These tools require access through Hive, a native SQL-like interface to the data. To get the Hadoop data into Hive, we had to structure it, which meant parsing the text from the logs and articles into traditional relational columns and rows. Processing massive amounts of text data from a nearly limitless pool of file formats is resource intensive and almost an irrational endeavor.
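To make the structuring step concrete, here is a simplified sketch of parsing free-form log lines into columns and rows that a Hive external table (and therefore Tableau or Qlik Sense, via the Hive connector) can query. The log format, field names, and file paths are hypothetical, not taken from the project described above.

```python
# Parse free-form log lines into tab-delimited rows suitable for a Hive external table.
# The log format and field names below are hypothetical.
import csv
import re

LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+) (?P<host>\S+) (?P<level>\w+) (?P<message>.*)"
)

def parse_line(line: str):
    """Return one relational row (tuple) per log line, or None if it doesn't parse."""
    match = LOG_PATTERN.match(line)
    return match.groups() if match else None

with open("app.log") as src, open("app_structured.tsv", "w", newline="") as dst:
    writer = csv.writer(dst, delimiter="\t")
    for line in src:
        row = parse_line(line)
        if row:                      # discard lines that don't fit the schema
            writer.writerow(row)

# The resulting tab-delimited files can back a Hive external table, e.g.:
#   CREATE EXTERNAL TABLE app_logs (ts STRING, host STRING, level STRING, message STRING)
#   ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
#   LOCATION '/raw/external/logs_structured';
```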