Pundits are quick to hype AI and machine learning as the future of everything. But anyone who has found themselves screaming at Siri over its failure to understand the most basic of queries knows we have a long, ponderous way to go before we "arrive." That's why I find Gil Press' summary of the recent O'Reilly AI Conference so helpful and important.
Some of the observations are banal ("AI is not going to exterminate us, AI is going to empower us"), but others capture the essence of what makes AI so promising...and beguiling.
The first observation ("AI is difficult") seems obvious, but the reasons behind it are not. The first thing that makes AI and machine learning difficult comes down to trust. The reason, as Press captured in a statement by Peter Norvig, Google's director of research, is that we can't see inside the machine to really understand what is happening: "What is produced [by machine learning] is not code but more or less a black box—you can peek in a little bit, we have some idea of what's going on, but not a complete idea."
The second reason, according to Press, comes down to the difficulty inherent in "teaching" a machine enough about the world to allow it to "understand" context. As Yann LeCun, director of AI research at Facebook, put it, to truly grok the world "machines need to understand how the world works, learn a large amount of background knowledge, perceive the state of the world at any given moment, and be able to reason and plan."
Compounding the difficulty is that any data we feed into a machine is necessarily biased by the person, or people, who select and supply it. In the very act of trying to set machines free to objectively process data about the world around them, we imbue them with our own subjectivities.