Artificial Intelligence, Deep Learning, and Neural Networks, Explained

Artificial Intelligence (AI), deep learning, and neural networks represent incredibly exciting and powerful machine learning-based techniques used to solve many real-world problems. For a primer on machine learning, you may want to read this five-part series that I wrote.

While human-like deductive reasoning, inference, and decision-making by a computer are still a long way off, there have been remarkable gains in the application of AI techniques and associated algorithms.

The concepts discussed here are extremely technical and complex, drawing on mathematics, statistics, probability theory, physics, signal processing, machine learning, computer science, psychology, linguistics, and neuroscience.

That said, this article is not meant to provide such a technical treatment, but rather to explain these concepts at a level that can be understood by most non-practitioners, while also serving as a reference or review for technical folks.

The primary motivation and driving force for these areas of study, and for developing these techniques further, is that the solutions required to solve certain problems are incredibly complicated, not well understood, and difficult to determine manually.

Increasingly, we rely on these techniques and machine learning to solve these problems for us, without requiring explicit programming instructions. This is critical for two reasons. First, we likely wouldn't know how to write the programs required to model and solve many of the problems that AI techniques can solve. Second, even if we did know how to write them, those programs would be inordinately complex and nearly impossible to get right.

Luckily, machine learning and AI algorithms, along with properly selected and prepared training data, are able to do this for us.
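To make the "learning from data instead of explicit programming" idea concrete, here is a minimal sketch in Python. It is an invented toy example (the data and the underlying rule are made up for illustration, not taken from this article): a simple perceptron that infers a decision rule from labeled examples, rather than having us hand-code the classification logic.

```python
# Toy training data: (feature_1, feature_2) -> label.
# The hidden rule happens to be "1 if feature_1 + feature_2 > 1, else 0",
# but the program is never told this; it must infer it from examples.
training_data = [
    ((0.1, 0.2), 0),
    ((0.9, 0.8), 1),
    ((0.4, 0.3), 0),
    ((0.7, 0.6), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(features):
    """Classify a point using the current learned weights."""
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if activation > 0 else 0

# Classic perceptron update rule: nudge the weights toward each mistake.
for _ in range(20):
    for features, label in training_data:
        error = label - predict(features)
        for i, x in enumerate(features):
            weights[i] += learning_rate * error * x
        bias += learning_rate * error

print(predict((0.95, 0.9)))  # expected: 1
print(predict((0.05, 0.1)))  # expected: 0
```

The point is not the perceptron itself but the workflow: we supply labeled examples and a learning rule, and the program discovers the weights that solve the problem, instead of us writing the decision logic by hand.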

So with that, let’s get started!

In order to define AI, we must first define the concept of intelligence in general. A paraphrased definition based on Wikipedia is:

While there are many different definitions of intelligence, they all essentially involve learning, understanding, and the application of the knowledge learned to achieve one or more goals.

It’s therefore a natural extension to say that AI can be described as intelligence exhibited by machines. So what does that mean exactly, when is it useful, and how does it work?

A familiar example of an AI solution is IBM's Watson, which was made famous by beating the two greatest Jeopardy! champions in history, and which is now being used as a question answering computing system for commercial applications. Apple's Siri and Amazon's Alexa are similar examples as well.

In addition to speech recognition and natural language (processing, generation, and understanding) applications, AI is also used for other recognition tasks (pattern, text, audio, image, video, facial, …), autonomous vehicles, medical diagnoses, gaming, search engines, spam filtering, crime fighting, marketing, robotics, remote sensing, computer vision, transportation, music recognition, classification, and so on.

Something worth mentioning is a concept known as the AI effect. This describes the case where, once an AI application has become somewhat mainstream, it's no longer considered by many to be AI. It happens because people tend to stop thinking of the solution as involving real intelligence, and see it as merely an application of normal computing.

This is despite the fact that these applications still fit the definition of AI, regardless of how widespread their usage becomes. The key takeaway here is that today's AI is not necessarily tomorrow's AI, at least not in some people's minds.

As mentioned, AI has many different goals, with different techniques used for each.
