Getting Ready for the Unforeseen

Handling Unseen Values in Machine Learning: A Case Study on Taboola’s Recommender System

In the world of machine learning, one of the challenges that often arise is dealing with categorical features that represent real-world objects, such as words, items, and categories. However, what happens when we encounter new object values during the inference stage that have never been seen before? How can we ensure that our model can still make sense of these new inputs?

These unseen values, also known as Out of Vocabulary (OOV) values, must be handled appropriately. Different algorithms have different methods for dealing with OOV values, and it’s important to consider the assumptions made about the categorical features as well.

In this blog post, we’ll focus on the application of deep learning to dynamic data, using Taboola’s recommender system as an example. This system encounters new values regularly, such as unique item identifiers and advertiser IDs. These unseen values pose a challenge as they were not present during the model’s training phase.

One solution to handling OOV values is to replace all rare values with a special OOV token before training. By exposing the model to the OOV token during training, it can learn a meaningful embedding for all OOV values and mitigate the risk of overfitting to rare values.
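As a rough illustration of this idea, here is a minimal sketch of frequency-based OOV replacement. The threshold, token name, and helper functions are illustrative assumptions, not Taboola's actual implementation:

```python
from collections import Counter

OOV_TOKEN = "<OOV>"  # illustrative token name


def build_vocab(values, min_count=5):
    """Keep only values seen at least min_count times; everything else maps to OOV."""
    counts = Counter(values)
    vocab = {OOV_TOKEN: 0}
    for value, count in counts.items():
        if count >= min_count:
            vocab[value] = len(vocab)
    return vocab


def encode(values, vocab):
    """Map each value to its embedding index, falling back to the OOV index."""
    return [vocab.get(v, vocab[OOV_TOKEN]) for v in values]


values = ["a"] * 5 + ["b"] + ["c"] * 5
vocab = build_vocab(values, min_count=5)
# "b" is rare and "z" was never seen: both fall back to the OOV index.
print(encode(["a", "b", "z"], vocab))  # [1, 0, 0]
```

Because rare values share the OOV index during training, the model learns one embedding that covers them all, and genuinely new values at inference time reuse that same embedding.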

However, simply using an OOV token may not be enough to ensure optimal performance. Rare items that are injected with the OOV token may not benefit from the model’s memorization capabilities, leading to poorer performance on these items. Furthermore, if the OOV embedding is learned using a distribution specific to rare items, it may not generalize well to the general population of items.

To address this issue, Taboola's recommender system took a different approach. Instead of replacing rare values with the OOV token before training, the model trained on all available values during each epoch. At the end of each epoch, a random set of examples was sampled and the OOV token was injected into them for further training. This allowed the model to benefit from both OOV and non-OOV embeddings, significantly improving performance.
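The training loop described above can be sketched roughly as follows. The model stub, field names, and sampling fraction are illustrative assumptions standing in for the real recommender:

```python
import random

OOV_TOKEN = "<OOV>"  # illustrative token name


class StubModel:
    """Stand-in for the real recommender; it just records what it trains on."""
    def __init__(self):
        self.batches = []

    def fit(self, examples):
        self.batches.append(list(examples))


def train_with_oov_injection(model, examples, num_epochs=2, oov_fraction=0.2, seed=0):
    """Each epoch: one full pass on original values, then one pass on an
    OOV-injected random sample, so the OOV embedding is learned from the
    general item distribution rather than only from rare items."""
    rng = random.Random(seed)
    for _ in range(num_epochs):
        # 1) Train on every example with its original item identifier,
        #    so frequent items keep their dedicated embeddings.
        model.fit(examples)
        # 2) Sample a random subset, overwrite the identifier with the
        #    OOV token, and train on that subset as well.
        sample = rng.sample(examples, max(1, int(len(examples) * oov_fraction)))
        injected = [{**ex, "item_id": OOV_TOKEN} for ex in sample]
        model.fit(injected)


examples = [{"item_id": f"item_{i}", "label": i % 2} for i in range(10)]
model = StubModel()
train_with_oov_injection(model, examples)
```

The key design choice is that every item, frequent or rare, has some chance of being seen as OOV, so the learned OOV embedding reflects the overall item population instead of overfitting to the rare tail.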

By continuously seeking improvements and considering unexpected challenges, the recommender system was able to enhance its performance in production. This case highlights the importance of continuously exploring new approaches and fine-tuning models to achieve optimal results in machine learning applications. To read more about this approach, you can find the original post on engineering.taboola.com.
