In 2011, McKinsey & Co. published a study trumpeting that "the use of big data will underpin new waves of productivity growth and consumer surplus" and called out five areas ripe for a big data bonanza. In personal location data, for example, McKinsey projected a $600 billion increase in economic surplus for consumers. In health care, $300 billion in additional annual value was waiting for that next Hadoop batch process to run.
Five years later, according to a follow-up McKinsey report, we're still waiting for the hype to be fulfilled. A big part of the problem, the report intones, is, well, us: "Developing the right business processes and building capabilities, including both data infrastructure and talent" is hard and mostly unrealized. All that work with Hadoop, Spark, Hive, Kafka, and so on has produced less benefit than we thought it would.
In part that's because keeping up with all that open source software and stitching it together is a full-time job in itself. But you can also blame the bugbear that stalks every enterprise: institutional inertia. Not to worry, though: The same developers who made open source the lingua franca of enterprise development are now making big data a reality through the public cloud.
On the surface the numbers look pretty good. According to a recent SyncSort survey, a majority of respondents (62 percent) are looking to Hadoop for advanced/predictive analytics, with data discovery and visualization (57 percent) also commanding attention.
Yet when you examine this investment more closely, a comparatively modest return emerges in the real world. By McKinsey's estimates, we're still falling short for a variety of reasons:
These aren't the only areas measured by McKinsey, but they provide a good sampling of big data's impact across a range of industries. To date, that impact has been muted. This brings us to the most significant hole in big data's progress: culture.