Artificial Intelligence Comes to Hollywood
- by 7wData
Last September, when the 20th Century Fox sci-fi thriller Morgan premiered, artificial intelligence (AI) took center stage for the first time not as a plot point but as a tool. The film studio revealed that it had used IBM’s Watson, a supercomputer endowed with AI capabilities, to make the movie’s trailer. IBM research scientists “taught” Watson about horror movie trailers by feeding it 100 such trailers, cut into scenes. Watson then analyzed the data in terms of visuals, audio and emotions to “learn” what makes a horror trailer scary. Then the scientists fed in the entire 90-minute film. According to Engadget, Watson “instantly zeroed in on 10 scenes totaling six minutes of footage.”
The media buzz that followed both overstated and understated what had actually happened. A human editor assembled the trailer from the scenes Watson selected, so AI didn’t actually edit anything. But the experiment was a benchmark, tantalizing the Hollywood creatives (and studio executives) curious about how artificial intelligence might change entertainment.
Talk of AI is still a bit premature; machine learning is a more accurate description of today’s products. The first person to posit that machines could actually learn was computer gaming pioneer Arthur Samuel, in 1959. Based on pattern recognition and dependent on enough data to train the computer, machine learning suits repetitive, well-defined tasks. Philip Hodgetts, who founded two companies integrating machine learning, Intelligent Assistance and Lumberjack System, notes that “there’s a big leap from doing a task really well to a generalized intelligence that can do multiple self-directed tasks.” Most experts agree that autonomous cars are the closest thing we have today to a real-world artificial intelligence.
Machine learning already plays an important role in a growing number of applications aimed at the media and entertainment business, nearly all of them invisible to the end user. Perhaps the most obvious are the applications aimed at distribution of digital media. Iris.TV, which partners with media companies ranging from Time Warner’s Telepictures Productions to Hearst Digital Media, uses machine learning to create what it dubs “personalized video programming.” The company ingests a client’s digital assets and builds a taxonomy and structure, with the metadata forming the basis of recommendations. Its APIs, which integrate with most video players, learn what the user watches, then create a playlist based on those preferences. The results are impressive: The Hollywood Reporter, for example, grew its video views from 80 million in October 2016 to 210 million in February 2017.
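The mechanics behind that kind of personalized playlist can be illustrated with a minimal sketch. This is not Iris.TV’s actual system or API; it is a hypothetical content-based recommender in which each video carries metadata tags from a taxonomy, and unwatched videos are ranked by how well their tags overlap with what the user has already watched. All names and data here are invented for illustration.

```python
from collections import Counter

def recommend_playlist(watch_history, catalog, length=5):
    """Rank unwatched videos by overlap with the tags the user watches most."""
    # Tally how often each metadata tag appears in the user's viewing history.
    tag_counts = Counter(tag for video in watch_history for tag in catalog[video])
    # Score every unwatched video by the summed weight of its tags.
    scores = {
        video: sum(tag_counts[tag] for tag in tags)
        for video, tags in catalog.items()
        if video not in watch_history
    }
    # Highest-scoring videos first, trimmed to the requested playlist length.
    return sorted(scores, key=scores.get, reverse=True)[:length]

# Hypothetical catalog: video id -> metadata tags drawn from the taxonomy.
catalog = {
    "clip1": {"celebrity", "interview"},
    "clip2": {"trailer", "horror"},
    "clip3": {"celebrity", "red-carpet"},
    "clip4": {"trailer", "sci-fi"},
}

# A viewer who watched clip1 ("celebrity", "interview") gets clip3 first,
# since it shares the "celebrity" tag.
playlist = recommend_playlist({"clip1"}, catalog, length=2)
```

Production systems layer far more onto this (collaborative filtering, recency, editorial rules), but the core idea is the same: metadata forms the basis of the recommendation, and the playlist adapts as the tag counts shift with each view.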
Machine learning also plays an increasingly significant role in video post-production — much more so than production, which is still a hands-on, very human job. “The production process is dependent on bipedal mobility,” notes Hodgetts wryly. “We’ve motorized cranes and so on, but it’ll be harder to replace a runner on set.” Even so, the process of creating digital imagery will feel the impact of machine learning in the not-so-distant future. Adobe, for example, is working with the Beckman Institute for Advanced Science and Technology to use a kind of machine learning to teach a software algorithm how to distinguish and eliminate backgrounds. With the goal of automating compositing, the software has been taught to do so via a dataset of 49,300 training images.
Today’s machine learning-enhanced tools fall under the umbrella of cognitive services, a term that covers any off-the-shelf programs that have already been trained at a task, whether it’s facial recognition or motion detection. At NAB 2017, Finnish company Valossa will debut its Alexa-integrated real-time video recognition platform, Val.ai.
Val.ai is intended to solve the problem of discoverability.