As every form of data continues to grow, and some forms of data, like sensor data and human data, approach explosion status, the conversation is shifting. In the past few years, everyone was entranced by the astonishing metrics of data growth. How many times have you read the claim that 90% of the world’s data was created in the last two years? Or perhaps now it’s up to 95% since that claim was made in 2013?
Today, the most meaningful conversations are focused on the analytics fueled by that data, and specifically, analytics at scale. No example is more compelling than that of Toby Bloom of the New York Genome Center, who took the stage at HPE Discover in Las Vegas with Meg Whitman. Seeking answers in her fight against cancer, Toby passionately proclaimed that the answer is in the data, and that the New York Genome Center “analyzes the hell out of that data.” When you’re talking about human genome mapping, that means analytics at scale almost beyond imagination.
In the Spotlight keynote I hosted at HPE Discover, Dr. Liz Worthey of the HudsonAlpha Institute for Biotechnology told us that a single human genome, printed out, would fill 175 books totaling 262,000 pages in a font size of no more than 6. But that’s not the hard part. The average difference between two human genomes is a mere 565 pages, or 0.2%. Now combine millions of individual genomes and seek the answers to distinct forms of cancer as correlated with genomic variants, and we see the true meaning of analytics at scale.
But even that example is not enough.