Baidu has a new trick for teaching AI the meaning of language
Earlier this month, a Chinese tech giant quietly dethroned Microsoft and Google in an ongoing competition in AI. The company was Baidu, China’s closest equivalent to Google, and the competition was the General Language Understanding Evaluation, otherwise known as GLUE.
GLUE is a widely accepted benchmark for how well an AI system understands human language. It consists of nine different tests for things like picking out the names of people and organizations in a sentence and figuring out what a pronoun like “it” refers to when there are multiple potential antecedents. A language model that scores highly on GLUE, therefore, can handle diverse reading comprehension tasks. Out of a full score of 100, the average person scores around 87 points. Baidu is now the first team to surpass 90 with its model, ERNIE.
The public leaderboard for GLUE is constantly changing, and another team will likely top Baidu soon. But what’s notable about Baidu’s achievement is that it illustrates how AI research benefits from a diversity of contributors. Baidu’s researchers had to develop a technique specifically for the Chinese language to build ERNIE (which stands for “Enhanced Representation through kNowledge IntEgration”). It just so happens, however, that the same technique makes it better at understanding English as well.
To appreciate ERNIE, consider the model it was inspired by: Google’s BERT. (Yes, they’re both named after the Sesame Street characters.)
Before BERT (“Bidirectional Encoder Representations from Transformers”) was created in late 2018, natural-language models weren’t that great. They were good at predicting the next word in a sentence—thus well suited for applications like autocomplete—but they couldn’t sustain a single train of thought over even a small passage. This was because they didn’t comprehend meaning, such as what the word “it” might refer to.
But BERT changed that. Previous models learned to predict and interpret the meaning of a word by considering only the context that appeared before or after it—never both at the same time. They were, in other words, unidirectional.
BERT, by contrast, considers the context before and after a word all at once, making it bidirectional. It does this using a technique known as “masking.” In a given passage of text, BERT randomly hides 15% of the words and then tries to predict them from the remaining ones. This allows it to make more accurate predictions because it has twice as many cues to work from. In the sentence “The man went to the ___ to buy milk,” for example, both the beginning and the end of the sentence give hints at the missing word. The ___ is a place you can go and a place you can buy milk.
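The masking idea described above can be sketched in a few lines of plain Python. This is a simplified illustration, not Google’s or Baidu’s actual code: real BERT masks subword tokens (and sometimes swaps in random words instead of a mask symbol), but the core move — hide roughly 15% of tokens and keep the answers as training targets — looks like this:

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """Hide ~15% of tokens, BERT-style, and record the answers.

    Returns the masked token list and a dict mapping each masked
    position to the original word the model must predict.
    """
    rng = random.Random(seed)
    masked = list(tokens)
    targets = {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked[i] = MASK
            targets[i] = tok
    return masked, targets

sentence = "The man went to the store to buy milk".split()
masked, targets = mask_tokens(sentence, seed=3)
print(" ".join(masked))
# During training, the model sees only the masked sentence and is
# scored on recovering each word in `targets`, using context from
# both sides of the blank — which is what makes it bidirectional.
```

Because the model must reconstruct “store” from both “went to the” on the left and “to buy milk” on the right, it is forced to learn how the two halves of the sentence constrain each other.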
The use of masking is one of the core innovations behind dramatic improvements in natural-language tasks and is part of the reason why models like OpenAI’s infamous GPT-2 can write extremely convincing prose without deviating from a central thesis.
When Baidu researchers began developing their own language model, they wanted to build on the masking technique.