The common thread running through most of the announcements at IBM Insight at World of Watson 2016 was, not surprisingly, IBM Watson. IBM is embedding Watson’s cognitive capabilities throughout its solution portfolio.
Several more key announcements emerged from IBM Insight at World of Watson 2016.
As discussed in a blog early in the week of 24 October 2016, IBM announced the general availability of IBM Watson Data Platform (WDP), an advanced, open source–based, cloud data and analytics platform that helps simplify and automate data-driven business innovation. WDP offers a single, cloud-based development platform for team data science that integrates all data for cognitively powered decision making.
In addition, it provides a self-service, task-oriented environment for teams of data scientists, data engineers and other professionals to collaboratively develop, iterate and deploy sophisticated artificial intelligence (AI), cognitive computing, machine learning and other advanced analytics. Related announcements, also discussed in that previous blog, are the expansion of WDP Plan, the general availability of IBM Data Science Experience (DSX), the closed beta of IBM Watson Machine Learning Service and the expansion of WDP’s open partner ecosystem.
IBM also announced new cloud services that apply cognitive capabilities to cloud video technology to help uncover new data and insights that can increase audience engagement. These new IBM cloud-based services help clients deliver more personalized, audience-targeted viewing experiences by providing a deep understanding of video content and audience preferences. They use cognitive technology to automatically mine and analyze the complex data in video so companies can better understand and deliver the content consumers want. Several new services leverage innovations from IBM research and development (R&D) labs and the cloud video platform capabilities of Clearleap and Ustream.
One such capability combines Watson application programming interfaces (APIs) with IBM Cloud Video streaming solutions to track near-real-time audience reaction to live events by analyzing social media feeds. It combines the Watson Speech to Text and AlchemyLanguage APIs with IBM Cloud Video technology to track consumer feedback while an event is occurring: it processes natural language in streaming video and concurrently analyzes social media feeds to provide word-by-word analysis of audience sentiment toward a live event. The service is now in the demonstration phase with clients; companies can use it to gauge and adjust to audience reaction before a speaker has even left the stage.
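The pipeline described above — transcribe speech, score each word, and keep a running sentiment tally as the event unfolds — can be sketched in miniature. This is an illustrative stand-in, not the Watson implementation: the real service calls the Speech to Text and AlchemyLanguage APIs (which require IBM Cloud credentials), whereas here a tiny hand-rolled lexicon substitutes for the cognitive scoring step and the transcript is assumed to arrive as a stream of words.

```python
# Minimal sketch of word-by-word live sentiment tracking.
# POSITIVE/NEGATIVE are toy lexicons standing in for the
# AlchemyLanguage sentiment API; the word stream stands in
# for Watson Speech to Text output.

POSITIVE = {"great", "love", "amazing", "excellent", "exciting"}
NEGATIVE = {"boring", "bad", "terrible", "disappointing", "slow"}

def score_word(word):
    """Score a single transcript token: +1 positive, -1 negative, 0 neutral."""
    w = word.lower().strip(".,!?")
    if w in POSITIVE:
        return 1
    if w in NEGATIVE:
        return -1
    return 0

def running_sentiment(words):
    """Yield (word, cumulative sentiment) pairs as the transcript streams in."""
    total = 0
    for word in words:
        total += score_word(word)
        yield word, total

transcript = "This keynote is amazing but the demo feels slow".split()
for word, sentiment in running_sentiment(transcript):
    print(f"{word:>10}  cumulative sentiment: {sentiment:+d}")
```

Because the tally updates with every word rather than per document, an operator watching the feed could react mid-event — the property the announcement highlights.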
Another feature uses cognitive technology to automatically segment videos into meaningful scenes, making targeted content easier to find and deliver. A new pilot project from IBM Research understands semantics and patterns in language and images and can identify higher-level concepts, such as when a show or movie changes topics. It can automatically segment videos into meaningful chapters rather than at potentially arbitrary breaks in the action. A leading content provider is already piloting this service to improve the categorization of videos, the indexing of specific chapters and searches for relevant content. It enables rich metadata services that can be used to create highly specific content pairings for viewers, down to the segment, increasing engagement and time spent.
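The chaptering idea can be illustrated with a simple heuristic. The IBM Research pilot understands semantics in both language and images; the sketch below is a much cruder stand-in that works only on caption text, placing a chapter break wherever the word overlap between adjacent time windows of the transcript drops sharply — a rough proxy for a topic change. The caption data and the 0.1 threshold are invented for illustration.

```python
# Toy topic-shift chaptering over caption text. Each element of
# `windows` is the caption text for one fixed time window of video.

def jaccard(a, b):
    """Word-overlap (Jaccard) similarity between two token sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def chapter_breaks(windows, threshold=0.1):
    """Return window indices where a new chapter should start.

    A break is placed where adjacent windows share too few words,
    suggesting the content has moved on to a new topic.
    """
    token_sets = [set(w.lower().split()) for w in windows]
    breaks = [0]
    for i in range(1, len(token_sets)):
        if jaccard(token_sets[i - 1], token_sets[i]) < threshold:
            breaks.append(i)
    return breaks

captions = [
    "the election results are coming in tonight",
    "results from the election show a close race",
    "now for sports the home team won the game",
    "the game went into overtime before the team won",
]
print(chapter_breaks(captions))  # a break lands where news turns to sports
```

A production system would fuse visual cues (shot boundaries, faces, scene composition) with the language signal, but the output shape is the same: chapter boundaries that follow meaning rather than arbitrary timestamps.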
IBM announced plans to integrate its cognitive technologies with the IBM Cloud Video platform to provide deep insights into audience preferences and sentiment. The goal is to give media and entertainment clients detailed insight into consumer viewing habits, such as other shows or networks watched, devices used for viewing and other interests of specific audiences. This approach is expected to involve integrating IBM Cloud Video solutions with the IBM Media Insights Platform, a cognitive solution.