Having worked with big data since its early days, I’ve had the privilege of watching many organizations work through the opportunities, challenges, and changes it has driven. In that time I’ve seen things work out well and, sadly, sometimes not so well. In the spirit of sharing best practices, the team and I thought it would be helpful to write a series of blogs bringing together big data lessons to help companies learn from others.
In this entry, I’ll identify (and demystify) the top five most common misconceptions about big data that I’ve heard, and provide some tips on how to get past them. So without further ado, here we go:
Misconception #5: Tackling big data is just doing more of exactly what you’re doing today
When I first talk with organizations new to big data, what they usually want to do is make all their existing reports run faster, or deliver the same runtimes against more data. This is a well-intentioned ask driven by a classic IT challenge: do more with less. But it misses the value of big data. Can we make a report run faster? Absolutely. Can we meet existing SLAs with orders of magnitude more data? Definitely. Will the business open their purse strings and spend more money on a big data project which promises to deliver the status quo with incremental improvements? Hmm.
The business opportunities for big data can be significant. One of the more straightforward examples which didn’t involve any exotic new practices or people is Guess Inc. They were able to re-engineer their data pipeline to completely transform the experience of managing their retail stores. In the old world the store managers had a weekly printed report. In the new world they have real-time, dynamic information about their store, their customers, and brand & loyalty programs. So Guess was able to overhaul the process of decision-making. If they’d just focused on doing more of the same, this wouldn’t have happened.
Misconception #4: Tackling big data means throwing out everything and starting new
At the other extreme are the organizations I speak with who are convinced that they have to start from scratch. They might bring in new leadership, and maybe some consultants, and look to create an entirely new data architecture from the ground up. The obvious issue with this is the high risk of undertaking an unknown path to big data. “But Walt,” the astute reader may observe, “you just told me that big data meant not doing what I’m doing today. Now you’re telling me that it’s not about starting from scratch either. So which is it?”
Our most successful customers take a balanced approach. With few exceptions, most businesses weren’t born yesterday. They have accumulated knowledge of their business and solutions which are likely keeping the lights on. So while it’s very high risk to throw everything out and start over because the end value of that effort is unknown, it may be even higher risk to be myopic and focus exclusively on making incremental improvements to existing things. Striking a balance between entrepreneurship and predictability is important in this situation. Increasingly we see companies opening innovation centers – internal incubators where new or existing staff can experiment and find better ways of solving problems. When managed well, these can be engines of change.
Misconception #3: The value is five years away
This belief is often a symptom of companies who’ve tried ambitious IT transformations before, only to see the projects either fail outright or become “zombie projects” which never really end, consume resources, and never deliver anything of tangible value. Past experience can often predict future events. Unless it doesn’t. The notion of a balanced approach can help here also.