Written by Matt Southern
Google Search is capable of understanding human language with the assistance of multiple AI models that all work together to find the most relevant results.
Information about how these AI models work is explained in simple terms by Pandu Nayak, Google’s Vice President of Search, in a new article on the company’s official blog.
Nayak demystifies the following AI models, which play a major role in how Google returns search results:
- RankBrain
- Neural matching
- BERT
- MUM
None of these models works alone. They complement one another, performing different tasks to understand queries and match them to the content searchers are looking for.
Here are the key takeaways from Google’s behind-the-scenes look at what its AI models do and how it all translates into better results for searchers.
Google’s AI Models Explained
Google’s first AI system, RankBrain, was launched in 2015.
As the name suggests, RankBrain’s purpose is to figure out the best order for search results by ranking them according to relevance.
Despite being Google’s first deep learning model, RankBrain continues to play a major role in search results today.
RankBrain helps Google understand how words in a search query relate to real-world concepts.
Nayak illustrates how RankBrain works:
“For example, if you search for ‘what’s the title of the consumer at the highest level of a food chain,’ our systems learn from seeing those words on various pages that the concept of a food chain may have to do with animals, and not human consumers.
By understanding and matching these words to their related concepts, RankBrain understands that you’re looking for what’s commonly referred to as an ‘apex predator.’”
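The word-to-concept association Nayak describes can be pictured with a deliberately simplified sketch. Everything below is hypothetical: RankBrain learns these associations from data with deep learning, not from a hand-written lookup table like this one.

```python
# Hypothetical term-to-concept table. RankBrain learns associations
# like these from seeing words used across many pages; this hard-coded
# mapping exists only to illustrate the idea.
CONCEPTS = {
    frozenset({"consumer", "highest", "food", "chain"}): "apex predator",
}

def related_concept(query):
    """Return a concept whose trigger terms all appear in the query."""
    terms = set(query.lower().replace("'", " ").split())
    for trigger, concept in CONCEPTS.items():
        if trigger <= terms:  # every trigger term appears in the query
            return concept
    return None

print(related_concept(
    "what's the title of the consumer at the highest level of a food chain"
))  # apex predator
```

The point of the sketch is only that individual query words get grounded in a shared real-world concept, which the system can then match against pages that never use the query's exact wording.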
Google introduced neural matching to search results in 2018.
Neural matching allows Google to understand how queries relate to pages by drawing on its knowledge of broader concepts.
Rather than looking at individual keywords, neural matching examines whole queries and pages to identify the concepts they represent.
With this AI model, Google can cast a wider net when scanning its index for content that’s relevant to a query.
Nayak illustrates how neural matching works:
“Take the search ‘insights how to manage a green,’ for example. If a friend asked you this, you’d probably be stumped. But with neural matching, we’re able to make sense of it.
By looking at the broader representations of concepts in the query — management, leadership, personality and more — neural matching can decipher that this searcher is looking for management tips based on a popular, color-based personality guide.”
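Matching on concepts rather than literal keywords can be sketched with a toy similarity computation. The hand-made three-dimensional vectors below are pure assumptions for illustration; real neural matching uses learned, high-dimensional representations, not numbers anyone writes by hand.

```python
from math import sqrt

# Hand-made "concept vectors" over three made-up dimensions
# (management, color, personality). Illustrative numbers only.
QUERY = [0.9, 0.7, 0.8]  # "insights how to manage a green"
PAGES = {
    "leadership tips for green personality types": [0.8, 0.8, 0.9],
    "how to grow green vegetables": [0.1, 0.9, 0.0],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norms

# Rank pages by conceptual similarity to the query. The management
# page wins despite sharing fewer literal keywords with the query.
ranked = sorted(PAGES, key=lambda p: cosine(QUERY, PAGES[p]), reverse=True)
print(ranked)
```

Under this toy scoring, the personality-guide page outranks the gardening page even though “green” is a stronger literal match for gardening, which is the behavior the quote above describes.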
BERT was first introduced in 2019 and is now used in all queries.
It’s designed to accomplish two things: retrieving relevant content and ranking it.
BERT can understand how words relate to each other when used in a particular sequence, which ensures important words aren’t left out of a query.
This complex understanding of language allows BERT to rank web content for relevance faster than other AI models.
Nayak illustrates how BERT works in practice:
“For example, if you search for ‘can you get medicine for someone pharmacy,’ BERT understands that you’re trying to figure out if you can pick up medicine for someone else.
Before BERT, we took that short preposition for granted, mostly sharing results about how to fill a prescription. Thanks to BERT, we understand that even small words can have big meanings.”
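Why word order matters can be shown with a toy comparison. This is not how BERT works internally (BERT is a transformer model), but it illustrates why an order-blind, bag-of-words view of a query loses the meaning carried by small words like “for.”

```python
from collections import Counter

def bag_of_words(text):
    """Order-blind view: just a multiset of words."""
    return Counter(text.lower().split())

def bigrams(text):
    """Sequence-aware view: adjacent word pairs."""
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

a = "can you get medicine for someone"
b = "can someone get medicine for you"  # same words, different meaning

# A bag-of-words model cannot tell these queries apart...
print(bag_of_words(a) == bag_of_words(b))  # True
# ...while even a crude sequence-aware representation can.
print(bigrams(a) == bigrams(b))            # False
```

The two queries contain exactly the same words, so any model that ignores order treats them as identical; only a representation that accounts for sequence, as BERT does far more powerfully, can separate them.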
Google’s latest AI milestone in Search, the Multitask Unified Model (MUM), was introduced in 2021.
MUM is a thousand times more powerful than BERT, and capable of both understanding and generating language.
It has a more comprehensive understanding of information and world knowledge, being trained across 75 languages and many different tasks at once.
MUM’s understanding of language spans text and images, with more modalities to come. That’s what it means when you hear MUM referred to as “multi-modal.”
Google is in the early days of realizing MUM’s potential, so its use in search is limited.
Currently, MUM is being used to improve searches for COVID-19 vaccine information. In the coming months it will be utilized in Google Lens as a way to search using a combination of text and images.
Here’s a recap of what Google’s major AI systems are and what they do:
- RankBrain: Ranks content by understanding how keywords relate to real-world concepts.
- Neural matching: Gives Google a broader understanding of concepts, which expands the amount of content Google is able to search through.
- BERT: Allows Google to understand how words can change the meaning of queries when used in a particular sequence.
- MUM: Understands information and world knowledge across dozens of languages and multiple modalities, such as text and images.
These AI systems all work together to find and rank the most relevant content for a query as fast as possible.
Featured Image: IgorGolovniov/Shutterstock