The hype around artificial intelligence (AI) is ramping up, especially as big tech companies like Amazon, Google, Facebook, IBM and Microsoft attempt to commercialize its use. Agencies are also starting to figure out how they can leverage AI to make their clients’ marketing and advertising efforts more effective. eMarketer’s Bryan Yeager spoke with Josh Sutton, global head of the artificial intelligence practice at Publicis.Sapient, to demystify how AI is used for marketing and what its future may hold.
eMarketer: At a high level, what role does the artificial intelligence practice play within Publicis.Sapient?
Josh Sutton: We’ve been looking at AI for the better part of four years now, with a real focus on understanding what people actually mean when they say “AI.”
Within the practice, we primarily do two things. One is that we help companies look across the landscape of all the different big data, machine learning and other artificial intelligence platforms that are available to understand which ones can help them best achieve their business objectives. We’re really being the engineers that help people figure out how to put together the best engine to meet their specific objectives.
The other focus area is looking at how we can apply artificial intelligence to the advertising space. Being part of Publicis Groupe, we’re obviously looking at what we can do to transform our own industry. We want to leverage various AI platforms to push the boundaries on what can be done, both to improve things that are done today and to innovate and create new models that hadn’t previously been available.
eMarketer: You said that AI is widely used but widely misunderstood. Give us a sense of how you view the AI landscape as it stands today.
Sutton: We look at the whole host of companies out there claiming some degree of artificial intelligence and group them into three primary categories.
The first isn’t AI in the traditional sense. It’s more in the big data category: the platforms that enable aggregation of large datasets to which one can apply machine learning. These range from the bespoke systems people have built in-house to the Palantirs of the world. It’s an area where there’s been a lot of focus and a lot of noise, but quite frankly, probably not the business results some people expected over the past few years, because the tools to extract real value from that information haven’t existed yet.
That leads to the second tier of tools, which are the ones you’re hearing all about today: machine learning platforms. Machine learning platforms are, at the end of the day, a very straightforward and simple concept. They take large amounts of data, apply algorithms to identify trends or common occurrences, and effectively predict insights or common responses so one can better estimate the likelihood of those occurring again.
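The loop Sutton describes — aggregate past observations, find common occurrences, estimate the likelihood of them recurring — can be sketched with a toy frequency model. This is purely illustrative; the function names and the campaign data are invented for the sketch, not drawn from the interview:

```python
from collections import Counter

def train(observations):
    """Count how often each outcome appears in historical data."""
    counts = Counter(observations)
    total = len(observations)
    # Likelihood of each outcome = its share of past occurrences.
    return {outcome: n / total for outcome, n in counts.items()}

def predict_likelihood(model, outcome):
    """Estimate the chance this outcome occurs again (0 if unseen)."""
    return model.get(outcome, 0.0)

# Hypothetical campaign data: which ad format each past visitor clicked.
clicks = ["video", "banner", "video", "video", "native"]
model = train(clicks)
print(predict_likelihood(model, "video"))  # 0.6
```

Real machine learning platforms replace the simple counting step with far more sophisticated algorithms, but the shape — learn from past data, output a likelihood for the future — is the same.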
Within that space, there are a few leading players, and I think the market is driving toward more consolidation. We view IBM, Microsoft and Google as the big three that are openly commercializing their platforms to varying degrees. IBM has probably been the most aggressive with Watson, with Google dancing a fine line between open-sourcing some of its platforms and keeping some in-house. Microsoft is right there in the middle. Amazon and Facebook are also both making very substantial investments, but doing it in more of an in-house black box model [vs. the other three players].
The third tier of platforms are what I call human interface tools. These include causal AI tools—platforms that understand the world the same way a human does. The main players in the market are the platforms that came out of MIT [Open Mind Common Sense] and Stanford [Cyc]. At this stage, they’re not creating insights that are particularly compelling above and beyond what a person would.