Artificial Intelligence, Deep Learning, and Neural Networks, Explained

Artificial intelligence (AI), deep learning, and neural networks represent incredibly exciting and powerful machine learning-based techniques used to solve many real-world problems. For a primer on machine learning, you may want to read this five-part series that I wrote.

While human-like deductive reasoning, inference, and decision-making by a computer are still a long way off, there have been remarkable gains in the application of AI techniques and their associated algorithms.

The concepts discussed here are extremely technical, complex, and based on mathematics, statistics, probability theory, physics, signal processing, machine learning, computer science, psychology, linguistics, and neuroscience.

That said, this article is not meant to provide such a technical treatment, but rather to explain these concepts at a level that can be understood by most non-practitioners, while also serving as a reference or review for technical folks.

The primary motivation and driving force for these areas of study, and for developing these techniques further, is that the solutions required to solve certain problems are incredibly complicated, not well understood, and difficult to determine manually.

Increasingly, we rely on these techniques and machine learning to solve these problems for us, without requiring explicit programming instructions. This is critical for two reasons. The first is that we likely wouldn't be able to, or at least wouldn't know how to, write the programs required to model and solve many of the problems that AI techniques can solve. The second is that, even if we did know how to write those programs, they would be inordinately complex and nearly impossible to get right.

Luckily, machine learning and AI algorithms, along with properly selected and prepared training data, are able to do this for us.
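To make that idea concrete, here is a minimal sketch, in plain Python with hypothetical toy data, of learning from examples rather than from explicit instructions: a tiny perceptron infers word weights for a spam filter from labeled messages, instead of a programmer writing those rules by hand.

```python
# A minimal sketch of learning from data instead of hand-written rules.
# The training examples below are hypothetical toy data, for illustration only.

# Labeled training data: (text, label) where 1 = spam, 0 = not spam.
examples = [
    ("win a free prize now", 1),
    ("free money click here", 1),
    ("meeting agenda for monday", 0),
    ("lunch with the team", 0),
]

weights = {}  # word weights, learned rather than specified by a programmer

def score(text):
    """Sum the learned weights of the words in a message."""
    return sum(weights.get(word, 0.0) for word in text.split())

# Perceptron training: nudge word weights whenever the current model
# misclassifies an example.
for _ in range(10):  # a few passes over the training data
    for text, label in examples:
        predicted = 1 if score(text) > 0 else 0
        for word in text.split():
            weights[word] = weights.get(word, 0.0) + (label - predicted)

# The "program" that classifies new text was learned, not explicitly written:
print(score("free prize inside"))    # positive score -> flagged as spam
print(score("monday team meeting"))  # non-positive -> treated as legitimate
```

The classifier's behavior comes entirely from the training data: swap in different examples and the same code learns a different "program", which is exactly the shift away from explicit programming described above.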

So with that, let’s get started!

In order to define AI, we must first define the concept of intelligence in general. A paraphrased definition based on Wikipedia is:

While there are many different definitions of intelligence, they all essentially involve learning, understanding, and the application of the knowledge learned to achieve one or more goals.

It’s therefore a natural extension to say that AI can be described as intelligence exhibited by machines. So what does that mean exactly, when is it useful, and how does it work?

A familiar example of an AI solution is IBM's Watson, which was made famous by beating the two greatest Jeopardy! champions in history, and which is now being used as a question-answering computing system for commercial applications. Apple's Siri and Amazon's Alexa are similar examples.

In addition to speech recognition and natural language (processing, generation, and understanding) applications, AI is also used for other recognition tasks (pattern, text, audio, image, video, facial, …), autonomous vehicles, medical diagnoses, gaming, search engines, spam filtering, crime fighting, marketing, robotics, remote sensing, computer vision, transportation, music recognition, classification, and so on.

Something worth mentioning is a concept known as the AI effect. This describes the case where, once an AI application has become somewhat mainstream, many people no longer consider it AI. It happens because people tend to stop thinking of the solution as involving real intelligence, and to see it instead as just an application of normal computing.

This is despite the fact that these applications still fit the definition of AI, regardless of how widespread their usage becomes. The key takeaway here is that today's AI is not necessarily tomorrow's AI, at least not in some people's minds.

As mentioned, AI has many different goals, and different techniques are used to pursue each of them.
