Developers interested in creating apps that understand text or speech can look to Google's cloud for tools: the Cloud Natural Language API and the Cloud Speech API are now in open beta.
Continuing its machine learning evangelism, Google on Wednesday said the two APIs, introduced in March, have advanced to open beta status.
The first of these is the Cloud Natural Language API, which lets developers parse the meaning and structure of text. With initial support for English, Spanish, and Japanese, the API provides tools to understand the sentiment expressed in text, the relevant entities discussed (e.g. people, places, events, products, and media), and the syntax of the text.
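As a rough illustration of the kind of call involved, here is a minimal sketch of a sentiment-analysis request to the API over REST. The endpoint path, field names, and the `classify` threshold are assumptions drawn from the API's general shape (the beta version's paths and response fields may differ), and `API_KEY` is a placeholder:

```python
# Hypothetical sketch of a Cloud Natural Language sentiment call.
# Endpoint path, JSON field names, and thresholds are assumptions;
# API_KEY is a placeholder, not a real credential.
import json
from urllib import request

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = ("https://language.googleapis.com/v1/"
            "documents:analyzeSentiment?key=" + API_KEY)

def build_request(text):
    """Build the JSON body for a sentiment-analysis call."""
    return {"document": {"type": "PLAIN_TEXT", "content": text}}

def classify(score, threshold=0.25):
    """Map a sentiment score in [-1, 1] to a coarse label.

    The threshold is illustrative, not part of the API."""
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

def analyze(text):
    """POST the document and return the overall sentiment score."""
    body = json.dumps(build_request(text)).encode("utf-8")
    req = request.Request(ENDPOINT, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        result = json.load(resp)
    return result["documentSentiment"]["score"]
```

A review-mining pipeline of the sort described above would run `analyze` over each review and bucket the results with something like `classify`.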
It is, in short, a way to help software understand human language. For companies, potential applications include gauging how people feel about a product, based on the sentiment expressed in online reviews, or how customers feel about a support interaction, based on analysis of transcribed calls.
As an example of how sentiment analysis can be applied, Google in a separate blog post explored sentiment data for stories published in the New York Times over the first two weeks of July. Google's Cloud Natural Language API found US news stories had the most negative sentiment, while Arts stories were the most positive. This lends support to arguments that news consumption is bad for you.
The second is the Cloud Speech API, which offers a way to turn spoken content into text in over 80 languages. Speech-to-text transcription is what enables voice-based interaction in products like Amazon's Echo, Apple's Siri, and the forthcoming Google Home. The API offers app developers a way to accept commands by voice and to direct those commands at any network-accessible device.
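To make the shape of such a call concrete, here is a minimal sketch of how a recognition request might be assembled and its response read. The field names follow the API's current synchronous-recognition REST method; the beta endpoint and fields may differ, and the audio parameters are illustrative assumptions:

```python
# Hypothetical sketch of a Cloud Speech recognition request/response.
# Field names and parameter values are assumptions based on the
# synchronous REST method; no network call is made here.
import base64

def build_recognize_request(audio_bytes, language="en-US",
                            sample_rate=16000):
    """Build the JSON body for a synchronous recognition call."""
    return {
        "config": {
            "encoding": "LINEAR16",          # raw 16-bit PCM (assumed)
            "sampleRateHertz": sample_rate,
            "languageCode": language,        # one of the 80+ languages
        },
        "audio": {
            # Short clips can be sent inline as base64 text.
            "content": base64.b64encode(audio_bytes).decode("ascii"),
        },
    }

def first_transcript(response):
    """Pull the top-ranked transcript out of a recognition response."""
    results = response.get("results", [])
    if not results:
        return None
    return results[0]["alternatives"][0]["transcript"]
```

A voice-command app along the lines the article describes would post the built request, take `first_transcript` of the reply, and route the resulting text to whatever device should act on it.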