The four dynamic forces shaping AI
It’s time to debate scenarios that will shift the balance of data, compute resources, algorithms, and talent.
Inner circle at Stonehenge (source: Kristian H. Resset on Wikimedia Commons).
To learn more about the state of AI today and where we might be headed in coming years, download the free report “What is Artificial Intelligence?,” by Mike Loukides and Ben Lorica.
There are four basic ingredients for making AI: data, compute resources (i.e., hardware), algorithms (i.e., software), and the talent to put it all together. In this era of deep learning ascendancy, it has become conventional wisdom that data is the most differentiating and defensible of these resources; companies like Google and Facebook spend billions to develop and provide consumer services, largely in order to amass information about their users and the world they inhabit. While the original strategic motivation behind these services was to monetize that data via ad targeting, both of these companies—and others who are desperate to follow their lead—now view the creation of AI as an equally important justification for their massive collection efforts.
Abundance and scarcity of ingredients
While all four pieces are necessary to build modern AI systems, what we’ll call their “scarcity” varies widely. Scarcity is driven in large part by the balance of supply and demand: either a tight supply of a limited resource or a heavy need for it can render it more scarce. When it comes to the ingredients that go into AI, these supply and demand levels can be influenced by a wide range of forces—not just technical changes, but also social, political, and economic shifts.
Fictional depictions can help to draw out the form and implications of technological change more clearly. So, before turning to our present condition, I want to briefly explore one of my favorite sci-fi treatments of AI, from David Marusek’s tragically under-appreciated novel Counting Heads. Marusek paints a 22nd-century future where an AI’s intelligence, and its value, scales directly with the amount of “neural paste” it runs on—and that stuff isn’t cheap. Given this hardware (wetware?) expense, the most consequential intelligences, known as mentars, are sponsored by—and allied with—only the most powerful entities: government agencies, worker guilds, and corporations owned by the super-rich “affs” who really run the world. In this scenario, access to a powerful mentar is both a signifier and a source of power and influence.
Translating this world into our language of AI ingredients, in Counting Heads it is the hardware substrate that is far and away the scarcest resource. While training a new mentar takes time and skill, both the talent and data needed to do so are relatively easy to come by. And the algorithms are so commonplace as to be beneath mention.
With this fictional example in mind, let’s take stock of the relative abundance and scarcity of these four ingredients in today’s AI landscape:
The algorithms and even the specific software libraries (e.g., TensorFlow, Torch, Theano) have become, by and large, a matter of public record: they are simply there for the taking on GitHub and the arXiv.
Massive compute resources (e.g., Amazon AWS, Google, Microsoft Azure) aren’t without cost, but are nevertheless fully commoditized and easily accessible to any individual or organization with modest capital. A run-of-the-mill AWS instance, running about $1 an hour, would have been at or near the top of the world supercomputer rankings in the early 1990s.
The talent to build the most advanced systems is much harder to come by, however. Individuals who can work fluently with the most effective methods are in genuinely short supply, and those who can advance the state of the art are rarer still.