The future is here… or is it?
With so many articles in the media claiming that humanity is on the cusp of full AI (artificial intelligence), it’s no wonder we believe that the future — full of robots, drones and self-driving vehicles, with diminishing human control over these machines — is right on our doorstep.
But are we really approaching the singularity as fast as we think we are?
It’s not hard to get that impression when the likes of Elon Musk, Stephen Hawking and leading university departments and research centers around the world voice serious concern about the potential risks of AI and are taking action now to avoid a doomsday scenario. Some even predict that, by the year 2030, machines will develop consciousness on par with human intelligence.
In fact, Dr. Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.” By that reckoning, the future is here, and it may soon outstrip us.
Yet the truth is, we are far from achieving true AI — something as reactive, dynamic, self-improving and powerful as human intelligence. And I’m not talking a-hundred-years far, but possibly centuries or millennia; we might never get there at all.
Here are some reasons.
Full AI, or superintelligence, should possess the full range of human cognitive abilities. This includes self-awareness, sentience and consciousness, as these are all features of human cognition.
Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”
Today, AI exists only in narrow forms, each specializing in a single area. For instance, there’s an AI that can beat the world champion at chess, but that is the only thing it does.
Even when scientists build neural networks that mimic the layered way the brain understands and analyzes information and builds concepts, they don’t know exactly what is going on inside, or why a network interprets things the way it does.
From the perspective of science, neural networks are just a bunch of math and equations, just a bunch of numbers. And we all know that human intelligence and the human brain are far more than that.
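The claim that a neural network is “just a bunch of numbers” can be made concrete. Below is a minimal sketch in Python, assuming nothing about any particular system: the weights, the sigmoid activation and the input values are arbitrary illustrative choices. At bottom, an artificial “neuron” is a weighted sum passed through a simple squashing function, and a “layer” is several such sums side by side.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum plus bias, squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation, output in (0, 1)

def layer(inputs, weight_rows, biases):
    """A layer is just several neurons applied to the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Illustrative (made-up) weights and inputs: two inputs feeding two neurons.
out = layer([0.5, -1.0], [[0.2, 0.8], [-0.5, 0.3]], [0.1, 0.0])
print(out)
```

Nothing in this arithmetic hints at understanding or awareness; scaling it up to millions of weights changes the size of the sums, not their nature, which is precisely the point being made above.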
“I don’t see any sign that we’re close to a singularity,” said Ernest Davis, a New York University computer scientist. “While AI can trounce the best chess or Jeopardy player and do other specialized tasks, it’s still light years behind the average 7-year-old in terms of common sense, vision, language and intuition about how the physical world works.”
The failure to recognize the distinction between this narrow intelligence and full AI could be contributing to the existential worries of Hawking and Musk, both of whom believe that we are already well on the path toward developing full AI.
“To achieve the singularity, it isn’t enough to just run today’s software faster,” Microsoft co-founder Paul Allen wrote in 2011. “We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this.”
Essentially, the most extreme promises of full AI are based on a flawed premise: that we understand human intelligence and consciousness.