Getting Ready for the Unforeseen

Handling Unseen Values in Machine Learning: A Case Study on Taboola’s Recommender System

In machine learning, a challenge that often arises is dealing with categorical features that represent real-world objects, such as words, items, and categories. But what happens when we encounter object values during inference that were never seen during training? How can we ensure that our model can still make sense of these new inputs?

These unseen values, also known as Out of Vocabulary (OOV) values, must be handled appropriately. Different algorithms have different methods for dealing with OOV values, and it’s important to consider the assumptions made about the categorical features as well.

In this blog post, we’ll focus on the application of deep learning to dynamic data, using Taboola’s recommender system as an example. This system encounters new values regularly, such as unique item identifiers and advertiser IDs. These unseen values pose a challenge as they were not present during the model’s training phase.

One solution to handling OOV values is to replace all rare values with a special OOV token before training. By exposing the model to the OOV token during training, it can learn a meaningful embedding for all OOV values and mitigate the risk of overfitting to rare values.
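This replace-rare-values-before-training idea can be sketched in a few lines. The threshold, token name, and helper functions below are illustrative assumptions, not the post's actual implementation:

```python
from collections import Counter

OOV_TOKEN = "<OOV>"  # hypothetical name for the special token

def replace_rare_values(values, min_count=2):
    """Replace categorical values seen fewer than min_count times with the OOV token."""
    counts = Counter(values)
    return [v if counts[v] >= min_count else OOV_TOKEN for v in values]

def build_vocab(values):
    """Map each surviving value (plus the OOV token) to an embedding index."""
    vocab = {OOV_TOKEN: 0}
    for v in values:
        if v not in vocab:
            vocab[v] = len(vocab)
    return vocab

# Example: "globex" appears only once, so it is mapped to the OOV token
advertiser_ids = ["acme", "acme", "acme", "globex", "initech", "initech"]
cleaned = replace_rare_values(advertiser_ids, min_count=2)
vocab = build_vocab(cleaned)
```

At inference time, any identifier absent from `vocab` would likewise be looked up as `OOV_TOKEN`, so the model falls back on the embedding it learned for rare values during training.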

However, simply using an OOV token may not be enough to ensure optimal performance. Rare items that are injected with the OOV token may not benefit from the model’s memorization capabilities, leading to poorer performance on these items. Furthermore, if the OOV embedding is learned using a distribution specific to rare items, it may not generalize well to the general population of items.

To address this issue, Taboola’s recommender system implemented a new approach. Instead of replacing rare values with the OOV token before training, the model trained on all available values during each epoch; at the end of the epoch, a random set of examples was sampled and the OOV token was injected into those copies for further training. This allowed the model to benefit from both OOV and non-OOV embeddings, improving performance significantly.
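The end-of-epoch injection step might look something like the sketch below. The field name `item_id`, the sampling fraction, and the training-loop shape are assumptions for illustration, not Taboola's actual code:

```python
import random

OOV_TOKEN = "<OOV>"  # hypothetical special token

def inject_oov(examples, fraction, rng=None):
    """Return copies of a randomly sampled fraction of examples with their
    categorical value swapped for the OOV token."""
    rng = rng or random.Random()
    k = max(1, int(len(examples) * fraction))
    sampled = rng.sample(examples, k)
    return [{**ex, "item_id": OOV_TOKEN} for ex in sampled]

def train(examples, train_step, n_epochs=3, oov_fraction=0.1):
    """Each epoch: train on all real values first, then on a small
    OOV-injected sample so the OOV embedding is also learned."""
    for _ in range(n_epochs):
        for ex in examples:
            train_step(ex)
        for ex in inject_oov(examples, oov_fraction):
            train_step(ex)
```

Because the injected examples are fresh copies, the originals keep their real identifiers, so the same items contribute to both their own embeddings and the shared OOV embedding.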

By continuously seeking improvements and anticipating unexpected inputs, the recommender system was able to enhance its performance in production. This case highlights the importance of exploring new approaches and fine-tuning models to achieve optimal results in machine learning applications. To read more about this approach, you can find the original post on engineering.taboola.com.
