Getting Ready for the Unforeseen

Handling Unseen Values in Machine Learning: A Case Study on Taboola’s Recommender System

In machine learning, a common challenge is dealing with categorical features that represent real-world objects, such as words, items, and categories. But what happens when we encounter object values during the inference stage that were never seen during training? How can we ensure that our model still makes sense of these new inputs?

These unseen values, also known as Out of Vocabulary (OOV) values, must be handled appropriately. Different algorithms have different methods for dealing with OOV values, and it’s important to consider the assumptions made about the categorical features as well.

In this blog post, we’ll focus on the application of deep learning to dynamic data, using Taboola’s recommender system as an example. This system encounters new values regularly, such as unique item identifiers and advertiser IDs. These unseen values pose a challenge as they were not present during the model’s training phase.

One solution to handling OOV values is to replace all rare values with a special OOV token before training. By exposing the model to the OOV token during training, it can learn a meaningful embedding for all OOV values and mitigate the risk of overfitting to rare values.
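This pre-training replacement can be sketched as a frequency-based vocabulary build. A minimal sketch in Python (the `min_count` threshold and function names are hypothetical, not from the original post): values seen fewer than `min_count` times collapse into a single shared OOV index, which is also used for anything unseen at inference time.

```python
from collections import Counter

def build_vocab(values, min_count=3, oov_token="<OOV>"):
    """Map each value seen at least min_count times to an index;
    everything rarer shares the single OOV token."""
    counts = Counter(values)
    vocab = {oov_token: 0}
    for value, count in counts.items():
        if count >= min_count:
            vocab[value] = len(vocab)
    return vocab

def encode(values, vocab, oov_token="<OOV>"):
    """Replace rare or unseen values with the OOV index."""
    oov_index = vocab[oov_token]
    return [vocab.get(v, oov_index) for v in values]

training_items = ["a", "a", "a", "b", "c", "c", "c"]
vocab = build_vocab(training_items, min_count=3)
codes = encode(["a", "b", "z", "c"], vocab)  # "b" is rare, "z" is unseen: both map to OOV
```

Because the OOV index already appears during training, the model learns a real embedding for it instead of meeting an uninitialized vector at inference time.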

However, simply using an OOV token may not be enough to ensure optimal performance. Rare items that are injected with the OOV token may not benefit from the model’s memorization capabilities, leading to poorer performance on these items. Furthermore, if the OOV embedding is learned using a distribution specific to rare items, it may not generalize well to the general population of items.

To address this issue, Taboola’s recommender system implemented a new approach. Instead of injecting the OOV token into rare values before training, the model trained on all available values during each epoch. At the end of the epoch, a random set of examples was sampled, and the OOV token was injected into them for an additional training pass. This allowed the model to benefit from both OOV and non-OOV embeddings, improving performance significantly.
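The epoch-level scheme described above can be sketched as follows. This is an illustrative reconstruction, not Taboola's actual code: the `CountingModel` stub, the `inject_fraction` parameter, and the function names are all hypothetical. The key idea is that the OOV-injected pass samples from the *full* example distribution, so the OOV embedding generalizes beyond rare items.

```python
import random

OOV = 0  # index reserved for the shared OOV embedding

class CountingModel:
    """Minimal stand-in for the real recommender; it only records update steps."""
    def __init__(self):
        self.steps = []
    def step(self, x, y):
        self.steps.append((tuple(x), y))

def train_one_epoch(model, examples, inject_fraction=0.1, rng=random):
    # First pass: train on every example with its real item indices,
    # so frequent items keep their own memorized embeddings.
    for x, y in examples:
        model.step(x, y)
    # End-of-epoch pass: sample a random subset of *all* examples and
    # inject the OOV token, so the OOV embedding is learned from the
    # general item distribution rather than from rare items only.
    k = max(1, int(len(examples) * inject_fraction))
    for x, y in rng.sample(examples, k):
        model.step([OOV] * len(x), y)

examples = [([i], i % 2) for i in range(1, 21)]
model = CountingModel()
train_one_epoch(model, examples, inject_fraction=0.1)
# 20 normal steps plus 2 OOV-injected steps are recorded
```

Contrast this with injecting OOV only into rare values before training: there, the OOV embedding is fit to a rare-item distribution that may not match the population it must cover at inference.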

By continuously seeking improvements and considering unexpected challenges, the recommender system was able to enhance its performance in production. This case highlights the importance of continuously exploring new approaches and fine-tuning models to achieve optimal results in machine learning applications. To read more about this approach, you can find the original post on engineering.taboola.com.
