As a marketer, it can sometimes seem like the work my engineering colleagues are doing is magic — pure computer science sorcery.
I mean, they’re building a chatbot that can communicate using human language and learn from the conversations it’s having.
(Meanwhile, I’m busy contemplating Oxford commas and racking my brain over whether or not “racking” should actually be spelled “wracking.”)
I’m not going to sugarcoat it. AI can be a tough topic to wrap your head around. And with all of the various branches — machine learning, deep learning, natural language processing — it’s not a topic that you can hope to master by reading a single blog post. Or even a single book.
So the purpose of this post isn’t to provide you with an exhaustive, engineering-degree-level understanding of AI. Instead, it’s to “translate” some of the most commonly used AI terms into everyday language so you can understand them at a basic level.
Moving forward, it’s likely that AI will be playing a more significant role in the work marketers do. Becoming fluent in AI-speak now can help prepare us for the road ahead.
Or if you’re talking about AI as a discipline, it’s figuring out how to make computers do things that humans need intelligence to do.
Using logic, forming hypotheses, solving problems — those are a few examples of activities that we typically think of as requiring a human level of intelligence. When a computer or computer program is able to do those types of things, it’s considered artificially intelligent.
Or at least, some people would consider it artificially intelligent.
As computer science professor Toshinori Munakata wrote in Fundamentals of the New Artificial Intelligence, “There is no standard definition of exactly what artificial intelligence is. If you ask five computing professionals to define ‘AI,’ you are likely to get five different answers.”
Not really helping my cause here, professor.
Historically, the Turing test has been the gold standard for determining whether or not a computer is truly intelligent.
First described by computing pioneer Alan Turing in a 1950 paper, the Turing test invites a participant to exchange messages, in real time, with an unseen party. In some cases that unseen party is another human; in other cases it’s a computer. If the participant is unable to distinguish the computer from the human, the computer is said to have passed the Turing test and can be considered intelligent.
So that settles it, right? When a computer’s behavior becomes indistinguishable from the behavior of a human, we can say that AI has been achieved.
Not so fast. A separate camp of AI researchers argues that framing AI as a quest to understand and imitate human intelligence is the wrong approach.
After all, they argue, humans didn’t achieve “artificial flight” by building machines that flap their wings like birds, bats, or bugs. Instead of imitating nature, humans relied on other engineering principles to create the planes we fly around in today.
Under this approach, the goal of AI isn’t to build computers that behave like humans, but to build highly flexible, rational computers that can perceive their environments and take actions that maximize their chances of achieving some goal.
This, as it turns out, is in line with how Amazon’s AI-powered assistant Alexa “thinks” about AI.
When I asked Alexa to define AI, she replied that it’s “the branch of computer science that deals with writing computer programs that can solve problems creatively.”
On a related note, when I asked Alexa if she could pass the Turing test, she replied, “I don’t need to pass that — I’m not pretending to be human.”
Or if you’re talking about machine learning as a discipline, it’s the branch of AI that explores how to create programs that can automatically improve with experience.
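To make “improving with experience” concrete, here’s a deliberately tiny sketch in Python — not how real machine learning systems are built, just the core idea. The program starts with a bad guess, measures how wrong it is on each example it sees, and nudges its guess accordingly. The data and the underlying rule (y = 2x) are made up for illustration.

```python
# A toy illustration of a program that "improves with experience":
# it guesses a multiplier, measures its error on each example,
# and adjusts the guess a little after every mistake.

def train(examples, learning_rate=0.01, passes=50):
    weight = 0.0  # the program's initial (bad) guess
    for _ in range(passes):
        for x, y in examples:
            prediction = weight * x
            error = prediction - y
            # Nudge the guess in the direction that reduces the error
            weight -= learning_rate * error * x
    return weight

# Made-up "experience": each pair is (input, correct answer),
# generated by the hidden rule y = 2x.
examples = [(1, 2), (2, 4), (3, 6)]
learned = train(examples)
print(round(learned, 2))  # close to 2.0 — the program "learned" the rule
```

The more examples (and the more passes over them) the program gets, the closer its guess lands to the true rule — which is the "automatically improve with experience" part of the definition above, stripped down to a dozen lines.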