Handling Unseen Values in Machine Learning: A Case Study on Taboola’s Recommender System
In the world of machine learning, a common challenge is dealing with categorical features that represent real-world objects, such as words, items, and categories. But what happens when, at inference time, we encounter object values that were never seen during training? How can we ensure that our model can still make sense of these new inputs?
These unseen values, also known as Out of Vocabulary (OOV) values, must be handled appropriately. Different algorithms deal with OOV values in different ways, and the right choice also depends on the assumptions we make about the categorical features.
In this blog post, we’ll focus on the application of deep learning to dynamic data, using Taboola’s recommender system as an example. This system encounters new values regularly, such as unique item identifiers and advertiser IDs. These unseen values pose a challenge as they were not present during the model’s training phase.
One way to handle OOV values is to replace all rare values with a special OOV token before training. Exposing the model to the OOV token during training lets it learn a meaningful embedding for all OOV values and mitigates the risk of overfitting to rare values.
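As a rough illustration of this preprocessing step (the token name and rarity threshold below are assumptions for the example, not values from the original post), rare values can be mapped to an OOV token before training, and unseen values can fall back to the same token at inference time:

```python
from collections import Counter

OOV_TOKEN = "<OOV>"   # hypothetical reserved token
MIN_COUNT = 10        # hypothetical rarity threshold

def replace_rare_values(train_values, min_count=MIN_COUNT):
    """Replace values seen fewer than `min_count` times with the OOV token."""
    counts = Counter(train_values)
    vocab = {v for v, c in counts.items() if c >= min_count}
    mapped = [v if v in vocab else OOV_TOKEN for v in train_values]
    return mapped, vocab

def encode(value, vocab):
    """At inference time, any value outside the training vocabulary
    falls back to the same OOV token."""
    return value if value in vocab else OOV_TOKEN

# Toy usage: "item_2" and "item_3" are rare, "item_999" was never seen.
train_ids = ["item_1", "item_2", "item_1", "item_3"]
mapped_ids, vocab = replace_rare_values(train_ids, min_count=2)
print(mapped_ids)                  # ['item_1', '<OOV>', 'item_1', '<OOV>']
print(encode("item_999", vocab))   # '<OOV>'
```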
However, simply using an OOV token may not be enough to ensure optimal performance. Rare items whose values are replaced by the OOV token may no longer benefit from the model's memorization capabilities, leading to poorer performance on those items. Furthermore, if the OOV embedding is learned from a distribution specific to rare items, it may not generalize well to the general population of items.
To address this issue, Taboola’s recommender system implemented a new approach. Instead of randomly injecting the OOV token before training, the model trained on all available values during each epoch. At the end of the epoch, a random set of examples was sampled, the OOV token was injected into them, and the model was trained on them again. This allowed the model to benefit both from dedicated embeddings for the values it had seen and from an OOV embedding learned from the general population, improving performance significantly.
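Here is a minimal sketch of what such an epoch schedule could look like, assuming a generic `train_step` callable and a reserved OOV embedding index (both hypothetical, not the actual implementation described in the post):

```python
import numpy as np

OOV_INDEX = 0          # assumed: embedding index 0 is reserved for OOV
OOV_SAMPLE_FRAC = 0.1  # assumed fraction of examples reused with OOV injected

def train_one_epoch(train_step, item_ids, features, labels, rng):
    """One epoch: a normal pass over all examples, then an extra pass over a
    random subset whose item ids are replaced by the OOV index.

    `train_step(ids, features, labels)` is a hypothetical callable that runs
    one optimization step of the underlying model.
    """
    # Phase 1: train on every example with its real item id, so frequent
    # items still benefit from the model's memorization capabilities.
    train_step(item_ids, features, labels)

    # Phase 2: sample examples from the full distribution (not only rare
    # items), inject the OOV index, and train on them again so the OOV
    # embedding reflects the general population of items.
    n = len(item_ids)
    idx = rng.choice(n, size=max(1, int(n * OOV_SAMPLE_FRAC)), replace=False)
    oov_ids = np.full(idx.shape, OOV_INDEX, dtype=item_ids.dtype)
    train_step(oov_ids, features[idx], labels[idx])
```

Because the extra pass samples from all examples rather than only rare ones, the OOV embedding is learned from roughly the same distribution the model sees at inference time.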
By continuously seeking improvements and anticipating unexpected challenges, the team was able to enhance the recommender system’s performance in production. This case highlights the importance of exploring new approaches and fine-tuning models to achieve optimal results in machine learning applications. To read more about this approach, you can find the original post on engineering.taboola.com.