DevOps For Data Science: Why Analytics Ops Is Key To Value

It may be a stretch to call data science commonplace, but the question “what’s next?” comes up often with regard to analytics, and the conversation frequently turns straight to Artificial Intelligence and deep learning. Instead, a tough-love review of the current reality may be in order.

The simple truth is that, as currently configured, data-centric companies will struggle to cross the divide between what passes today for effective data science and a mode of operation where analytics is woven into the fabric of the business and benefits from continuous improvement. Today, data science is all too often a process in which new insights and models are developed as a one-time effort or deployed to production on an ad hoc basis, and then require constant babysitting for monitoring and updating.

This is not to imply that companies are not on the right path with their data science initiatives, but merely an acknowledgement that the steps they have taken so far have brought them to the edge of a chasm they will have to cross. To the credit of more progressive organizations, creating an industrial-caliber data lake to store large volumes of data in varying forms is an essential, foundational step. Building on that, developing data democratization systems that give ready access to anyone seeking insight is critical. There is no doubt that companies that have achieved those two steps already reap benefits.


Nevertheless, that’s as far as most have come and, more significant for the future, that’s all they have prepared themselves to accomplish. Today many companies have the data, and data scientists equipped to do analysis and build models that can be carefully engineered to plug into a usable business application. But every deployment of a model is a custom, fragile, one-off job, and quality assurance for models is a manual effort. Change the model and the whole thing needs to be rebuilt. Useful analyses are often performed once but can’t be reproduced, or worse, get recreated periodically but inconsistently. And if a new version of a model doesn’t work well, it can be a painful struggle to restore the previous version, let alone test models systematically so that they improve continuously.
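To make the contrast concrete, here is a minimal sketch of what versioned, repeatable model deployment could look like. The registry layout, file names, and helper functions are hypothetical illustrations, not any particular product’s API:

```python
import json
import shutil
from pathlib import Path

# Hypothetical on-disk model registry: one directory per version, plus a
# CURRENT pointer file that serving code reads to find the live model.
REGISTRY = Path("model_registry")


def publish(model_file: Path, version: str, metrics: dict) -> None:
    """Store a trained model artifact and its evaluation metrics together."""
    slot = REGISTRY / version
    slot.mkdir(parents=True, exist_ok=True)
    shutil.copy(model_file, slot / "model.bin")
    (slot / "metrics.json").write_text(json.dumps(metrics))


def promote(version: str) -> None:
    """Point serving at a published version; deployment is a pointer change."""
    if not (REGISTRY / version).exists():
        raise ValueError(f"unknown version: {version}")
    (REGISTRY / "CURRENT").write_text(version)


def current_version() -> str:
    """Return the version that serving code should load."""
    return (REGISTRY / "CURRENT").read_text()
```

With artifacts and metrics stored together per version, rolling back a bad model is just promoting the previous version again, rather than rebuilding the whole deployment by hand.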

It’s not enough to know how to wrestle with raw data. Companies need an infrastructure capable of continuously testing and improving models, starting with governed, well-understood analytic data sets as input. In such an environment, normalized data lets one do any kind of data science at any time.
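One way to picture continuous testing against a governed data set is a promotion gate: a retrained candidate replaces the incumbent model only if it scores at least as well on the same held-out evaluation data. The sketch below uses synthetic data and scikit-learn purely for illustration; the data set, metric, and margin are assumptions, not a prescribed setup:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for a governed analytic data set; in practice this would be a
# versioned, documented extract, not synthetic data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_eval, y_train, y_eval = train_test_split(X, y, random_state=0)


def evaluate(model) -> float:
    """Score a model on the shared held-out evaluation set."""
    return accuracy_score(y_eval, model.predict(X_eval))


def should_promote(candidate, incumbent, margin: float = 0.0) -> bool:
    """Promote only if the candidate beats the incumbent on the same data."""
    return evaluate(candidate) >= evaluate(incumbent) + margin


incumbent = LogisticRegression(max_iter=1000).fit(X_train, y_train)
candidate = LogisticRegression(C=0.5, max_iter=1000).fit(X_train, y_train)
print("promote candidate:", should_promote(candidate, incumbent))
```

Run on every retrain, a gate like this turns model quality from a one-off manual check into a repeatable, automated decision.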


It’s been done before.
