Today’s DBA may feel somewhat like Doctor Who—traversing alternate application “universes” in which different rules and demands weigh on their decisions.
One universe—the universe of Big Data—is one in which people understandably tend to stress the “Big” part of the data. NoSQL and Hadoop offerings, which often take center stage in these conversations, excel at handling massive scale. As long as you’re willing to sacrifice immediate consistency, these solutions work well for use cases that don’t require up-to-the-second answers.
For the Big Data applications that are all the rage now—evaluating behavioral trends, weather patterns and the like—it doesn’t hurt if consistency lags behind the initial computation, and small discrepancies are forgivable.
Yet, there is a whole other “universe” in which even the slightest bit of inaccuracy spells doom for your application. These “high-value” workloads involve several steps, each of which must be performed immediately and accurately or else money, inventory or other assets may be lost.
This is the universe in which e-commerce, online gaming, ad tech and other applications reside—one where there simply is no room for error, lest you lose customers. But what happens if you need to handle millions of these types of transactions at once?
To put it another way, what happens when the universes of high-value transactions and Big Data collide?
To survive in such a high-value, heavy-workload “multiverse,” companies need the best of both MySQL and NoSQL. They need the ACID properties that come with the former—the guarantees that ensure database transactions are processed reliably—delivered at the impressive scale characteristic of NoSQL.
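The value of ACID for those multi-step, high-value workloads can be sketched concretely. The following is a minimal illustration using Python’s built-in sqlite3 module (standing in for any ACID-compliant store); the table and column names are hypothetical, chosen to mirror the e-commerce scenario above:

```python
import sqlite3

# An in-memory database stands in for the transactional store;
# the schema here is a hypothetical e-commerce example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (sku TEXT PRIMARY KEY, qty INTEGER)")
conn.execute("CREATE TABLE orders (sku TEXT, qty INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('widget', 5)")
conn.commit()

def place_order(sku, qty):
    """Debit inventory and record the sale as one atomic unit.

    Either both steps commit or neither does -- the 'A' in ACID.
    No partial state (money taken but no order, or vice versa) can leak out.
    """
    try:
        # 'with conn' opens a transaction: it commits on success
        # and rolls back automatically if an exception is raised.
        with conn:
            cur = conn.execute(
                "UPDATE inventory SET qty = qty - ? WHERE sku = ? AND qty >= ?",
                (qty, sku, qty),
            )
            if cur.rowcount == 0:
                raise ValueError("insufficient stock")
            conn.execute("INSERT INTO orders VALUES (?, ?)", (sku, qty))
        return True
    except ValueError:
        return False

place_order("widget", 3)   # succeeds: inventory drops 5 -> 2, one order row written
place_order("widget", 10)  # fails: the rollback leaves no partial debit and no order row
```

The second call is the point: the stock check fails mid-transaction, and the rollback guarantees the database never shows a half-completed sale—exactly the behavior a high-value workload cannot live without.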
However, achieving massive scale while retaining complete data accuracy is not easy. MySQL’s (and by extension most MySQL drop-in replacements’) single-node (server) architecture hits a wall quickly; once you max out capacity, you have to move to a bigger server or employ unnatural feats to scale.