The amazing artificial intelligence we were promised is coming, finally

We have been hearing predictions for decades of a takeover of the world by artificial intelligence. In 1957, Herbert A. Simon predicted that within 10 years a digital computer would be the world’s chess champion. That didn’t happen until 1997. And despite Marvin Minsky’s 1970 prediction that “in from three to eight years we will have a machine with the general intelligence of an average human being,” we still consider that a feat of science fiction.

The pioneers of artificial intelligence were surely off on the timing, but they weren’t wrong; AI is coming.  It is going to be in our TV sets and driving our cars; it will be our friend and personal assistant; it will take the role of our doctor. There have been more advances in AI over the past three years than there were in the previous three decades.

Even technology leaders such as Apple have been caught off guard by the rapid evolution of machine learning, the technology that powers AI.  At its recent Worldwide Developers Conference, Apple opened up its AI systems so that independent developers could help it create technologies that rival what Google and Amazon have already built.  Apple is way behind.

The AI of the past used brute-force computing to analyze data and present them in a way that seemed human. The programmer supplied the intelligence in the form of decision trees and algorithms. Imagine that you were trying to build a machine that could play tic-tac-toe: you would give it specific rules on what move to make, and it would follow them. That is essentially how IBM’s Deep Blue computer beat chess grandmaster Garry Kasparov in 1997, evaluating some 200 million positions per second, far more than any human could consider.
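To make the contrast concrete, here is a minimal sketch of that rule-based approach in Python (the board encoding and the ordering of the rules are illustrative choices for this example, not taken from any particular system). Every move the program makes traces back to a rule a human wrote down:

```python
# Rule-based tic-tac-toe player: all of the "intelligence" is
# hand-coded rules; nothing is learned from examples.

WIN_LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
             (0,3,6), (1,4,7), (2,5,8),   # columns
             (0,4,8), (2,4,6)]            # diagonals

def winning_move(board, player):
    """Return a square that completes a line for `player`, or None."""
    for a, b, c in WIN_LINES:
        line = [board[a], board[b], board[c]]
        if line.count(player) == 2 and line.count(" ") == 1:
            return (a, b, c)[line.index(" ")]
    return None

def choose_move(board, me="X", opponent="O"):
    """Hand-written decision rules, applied in priority order."""
    move = winning_move(board, me)            # rule 1: win if you can
    if move is None:
        move = winning_move(board, opponent)  # rule 2: block the opponent
    if move is None and board[4] == " ":      # rule 3: take the center
        move = 4
    if move is None:                          # rule 4: first free corner, then any square
        move = next(i for i in (0, 2, 6, 8, 1, 3, 5, 7) if board[i] == " ")
    return move

board = list("OO X     ")
print(choose_move(board))  # prints 2: rule 2 blocks O's top row
```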

Today’s AI uses machine learning: you give the computer examples of previous games and let it learn from them. The computer is taught what to learn and how to learn, and it makes its own decisions. What’s more, the new AIs model the human mind itself, using techniques similar to our own learning processes. Before, it could take millions of lines of computer code to perform tasks such as handwriting recognition; now it can be done in hundreds of lines. All that is required is a large number of examples from which the computer can teach itself.
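As a rough illustration of that shift, here is a minimal sketch in Python using scikit-learn’s bundled set of about 1,800 labeled digit images (the library, network size, and train/test split are arbitrary choices for the example). A dozen lines suffice, because the knowledge comes from the labeled examples rather than from hand-written rules:

```python
# Handwriting recognition in a handful of lines: the program learns
# from labeled examples instead of following programmer-written rules.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # ~1,800 8x8 images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # the computer teaches itself from the examples
print(f"accuracy on unseen digits: {model.score(X_test, y_test):.2%}")
```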

The new programming techniques use neural networks, which are modeled on the human brain: information is processed in layers, and the connections between those layers are strengthened or weakened based on what is learned.
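A toy version makes the idea tangible. The sketch below, in plain NumPy with an arbitrary two-layer shape and learning rate (an assumption-laden illustration, not any production system), pushes the four XOR examples through two layers and nudges the connection weights after every pass:

```python
import numpy as np

# Tiny two-layer neural network learning XOR: information flows through
# layers, and connection weights are strengthened or weakened by each error.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))  # input  -> hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))  # hidden -> output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass, layer by layer
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # error at the output layer
    d_h = d_out @ W2.T * h * (1 - h)         # error pushed back one layer
    W2 -= 0.5 * h.T @ d_out                  # adjust the connections...
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h                    # ...in proportion to their blame
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(3).ravel())  # should approach [0, 1, 1, 0] after training
```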
