The Evolution of Matching Products to Search Queries

Part of the Ranking on Amazon (A9 Algorithm) Series

Searching for products online has become a regular part of our daily lives. With just a few taps or clicks, we can conveniently browse and purchase items on our phones or laptops. But providing this seamless shopping experience requires immense technological innovation behind the scenes. At the core of online commerce is the search engine, which matches user queries to products among massive catalogs containing billions of items.

In this post, we’ll explore how the technology for matching queries to relevant products has progressed over time – from basic lexical matching, to semantic matching with vector embeddings, to state-of-the-art techniques powered by AI like BERT. The goal of these innovations has been to help shoppers more easily find what they want.

The Rise of Lexical Matching

In the early days of e-commerce, the predominant approach for connecting user queries to products was lexical matching based on an inverted index. This technique builds an index that maps each word to the products containing it; when a query arrives, the search engine looks up each query word and efficiently retrieves the matching products.

For example, if a user searches for “red dress,” the engine would look up “red” and “dress” in the index and return products containing both terms. While fast, this approach is limited: products described with synonyms like “crimson dress” would not be matched.
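To make this concrete, here is a minimal sketch of inverted-index retrieval in Python. The tiny catalog and queries are invented for illustration:

```python
from collections import defaultdict

# Hypothetical mini catalog: product ID -> title.
catalog = {
    1: "red dress",
    2: "crimson dress",
    3: "red running shoes",
}

# Build the inverted index: each word maps to the set of products containing it.
index = defaultdict(set)
for product_id, title in catalog.items():
    for word in title.lower().split():
        index[word].add(product_id)

def search(query):
    """Return products containing every query word (boolean AND retrieval)."""
    word_sets = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*word_sets) if word_sets else set()

print(search("red dress"))  # {1} -- the "crimson dress" (product 2) is missed
```

Note how the query “red dress” misses the crimson dress entirely: exactly the synonym gap described above.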

Lexical matching also has other shortcomings:

  • It lacks semantic understanding, matching on individual words without considering their meaning together. For instance, a search for “comfortable heels” could retrieve painful high heels simply because those words appear.
  • It’s brittle to morphological variations, since plurals (“shoe” vs “shoes”) or verb forms (“run” vs “running”) are seen as completely different words.
  • It’s sensitive to spelling errors, so a misspelled query like “heals” wouldn’t match products correctly listed as “heels.”

While techniques like stemming help a bit with word forms (a quick sketch follows below), lexical matching overall lacks true language understanding. This motivated the development of semantic matching.
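As an illustration, here is how a Porter stemmer from the NLTK library collapses word forms before indexing; the word list is just an example:

```python
# Stemming maps morphological variants to a shared root so that, e.g.,
# "shoes" and "shoe" index to the same term. Requires: pip install nltk
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["shoe", "shoes", "run", "running"]:
    print(f"{word} -> {stemmer.stem(word)}")
# shoe -> shoe, shoes -> shoe, run -> run, running -> run
```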

Introducing Semantic Matching

Semantic matching revolutionized product search by using neural networks to map queries and products into dense vector embeddings that encode meaning and semantics. Queries and products with similar intent have embeddings close together in this vector space.

Pioneering models like DSSM (Deep Structured Semantic Model) used a feedforward network to separately embed queries and products, then computed the cosine similarity of these embeddings to assess relevance.
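The dual-tower idea can be sketched in a few lines of PyTorch. The layer sizes and random inputs below are placeholders, not DSSM’s actual configuration (which used letter-trigram “word hashing” features):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tower(nn.Module):
    """One feedforward tower of a DSSM-style dual encoder (illustrative sizes)."""
    def __init__(self, in_dim=30000, hidden=300, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, out_dim), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

query_tower, product_tower = Tower(), Tower()  # separate towers, no weight sharing

q = torch.rand(1, 30000)  # stand-in query features
p = torch.rand(1, 30000)  # stand-in product features

# Relevance is simply the cosine similarity of the two embeddings.
score = F.cosine_similarity(query_tower(q), product_tower(p))
```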

More recent innovations like Semantic Product Search from Amazon built upon this approach:

  • A 3-part loss function that distinguishes products that were purchased, shown but not purchased, and randomly sampled for a given query, improving matching accuracy.
  • Combining words, n-grams, and character trigrams to capture different granularities of meaning, plus hashing to handle unseen terms.
  • Sharing an embedding layer between the query and product networks to enable matching of words appearing in both.
  • Using average pooling to aggregate embeddings instead of more complex RNNs/CNNs, which worked better for concise product queries (see the sketch after this list).
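Here is a rough sketch of two of these ideas, a shared average-pooling encoder and a 3-part hinge loss, in PyTorch. The vocabulary size, embedding dimension, and margin values are invented placeholders, not the published configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AvgPoolEncoder(nn.Module):
    """Averages token embeddings; the same instance (and thus the same
    embedding table) is used for both queries and products."""
    def __init__(self, vocab_size=100_000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)

    def forward(self, token_ids):                 # (batch, seq_len) of int ids
        vecs = self.embed(token_ids)              # (batch, seq_len, dim)
        mask = (token_ids != 0).unsqueeze(-1)     # ignore padding positions
        return (vecs * mask).sum(1) / mask.sum(1).clamp(min=1)

encoder = AvgPoolEncoder()

def three_part_hinge(q, purchased, seen, rand, hi=0.9, lo=0.55):
    """Push purchased products above `hi`, shown-but-not-purchased products
    between `lo` and `hi`, and random products below `lo`.
    The margins are placeholders, not the paper's values."""
    sim = lambda x: F.cosine_similarity(q, x)
    return (F.relu(hi - sim(purchased))                         # score high
            + F.relu(lo - sim(seen)) + F.relu(sim(seen) - hi)   # middle band
            + F.relu(sim(rand) - lo)).mean()                    # score low
```

Because queries and products pass through the same `encoder`, any word that appears in both a query and a product title contributes through the shared embedding table, which is the point of the shared-layer design.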

These advances showed substantial gains over lexical matching and earlier semantic models, boosting metrics like recall and mean average precision. Some examples of the learned semantic understanding:

  • Matching “sneakers” to a search for “running shoes” based on their semantic similarity.
  • Retrieving burgundy dresses for a query like “red dress” through color synonym understanding.
  • Relating latex-free gloves to a search for latex gloves by comprehending the negation.

Semantic matching greatly expanded the relevance of product results by moving beyond superficial word forms to actual meaning. However, it was still constrained by the relatively shallow representations these earlier neural networks could learn. This led to the rise of BERT.

The Advent of BERT

In 2018, Google open-sourced BERT (Bidirectional Encoder Representations from Transformers), sparking a revolution in natural language processing with its deep, context-aware language understanding.

Given BERT’s exceptional performance on other language tasks, it was a natural evolution to apply it to product matching. Models like TwinBERT adapted BERT for efficient semantic search via dual-encoder architectures. In this approach, queries and products are encoded separately by BERT-based networks, without cross-attention between them. This enabled fast nearest-neighbor search while still leveraging BERT’s linguistic abilities. Although not as accurate as full cross-encoder BERT models, it satisfied the speed demands of product matching.
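The dual-encoder pattern looks roughly like the following, sketched here with the Hugging Face transformers library and a generic bert-base-uncased checkpoint standing in for TwinBERT’s specialized encoders:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Encode texts independently (no query-product cross-attention)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state   # (batch, seq, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)       # mean pooling

# Products can be embedded offline and indexed; only the query is encoded
# at serving time, enabling fast nearest-neighbor lookup.
query_vec = embed(["running shoes"])
product_vecs = embed(["lightweight sneakers", "leather dress shoes"])
scores = torch.nn.functional.cosine_similarity(query_vec, product_vecs)
```

Because the product embeddings never depend on the query, they can be precomputed for the entire catalog, which is what makes this approach fast enough for retrieval despite BERT’s size.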

Recent work at Amazon further advanced BERT-based matching:

  • Pretraining BERT on product text to adapt it to the shopping domain
  • Interaction pretraining to better align search and product representations
  • Distilling the large BERT model into a smaller version for faster inference

Their experiments demonstrated considerable gains over prior BERT models, with 23% better relevance than classic DSSM while maintaining speed.
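The paper’s exact distillation recipe isn’t reproduced here, but the step typically looks like the following generic sketch: the small student is trained to match the large teacher’s softened relevance scores over the same candidate products (the temperature value is a placeholder):

```python
import torch.nn.functional as F

def distillation_loss(student_scores, teacher_scores, temperature=2.0):
    """KL divergence between the student's and teacher's softened score
    distributions over a shared set of candidate products."""
    teacher_probs = F.softmax(teacher_scores / temperature, dim=-1)
    student_logp = F.log_softmax(student_scores / temperature, dim=-1)
    return F.kl_div(student_logp, teacher_probs,
                    reduction="batchmean") * temperature ** 2
```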

The Future of Product Matching

Product matching has seen remarkable innovation, from lexical to semantic to BERT-based techniques. What does the future hold? Some promising directions include:

  • Incorporating user behavior data like searches, clicks and purchases to better model intent
  • Generative models that can synthesize relevant products for rare or ambiguous queries
  • Multi-modal matching combining text, images and structured data
  • Graph-based retrieval leveraging knowledge networks instead of just text

Online commerce continues to grow rapidly, driving the need for ongoing improvements in product matching to help consumers find what they want. Lexical matching served its purpose initially but falls short in today’s world of massive, multifaceted product catalogs. Deep learning paved the path to search based on meaning, and BERT moved us closer to genuine language understanding. While much progress has been made, the journey continues as researchers push neural networks toward ever deeper mastery of product matching at enormous scale.

In summary, matching technology has progressively evolved from simple lexical methods to sophisticated AI that truly comprehends language and semantics. The goal has remained constant – to help people seamlessly discover and purchase products online. And the evolution persists, as researchers push the boundaries of what’s possible, one innovation at a time.
