
A Single Model to Bring Clarity

Modeling Uncertainty in Recommender Systems: A Unified Approach

When it comes to building effective models for recommender systems, handling uncertainty is key. In a recent series of posts, we explored the different types of uncertainty that can impact your model and discussed various methods for addressing them. Now, in this joint post with Inbar Naor, we’re excited to share how we at Taboola have implemented a neural network that estimates both the probability of an item being relevant to the user and the uncertainty of this prediction.

The neural network we’ve designed consists of several modules, each serving a specific purpose in the model. The item module takes the features of an item, such as its title and thumbnail, and outputs a dense representation that contains important information about the item. The context module considers the context in which the item is being shown and generates a dense representation of that context. The fusion module combines the representations of the item and context to capture their interaction, similar to collaborative filtering. Finally, the estimation module predicts the click-through rate (CTR) of the item and also estimates uncertainty in this prediction.
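To make the modular structure concrete, here is a minimal NumPy sketch of the forward pass. This is not Taboola's actual implementation: the feature dimensions, layer sizes, and randomly initialized weights are all illustrative assumptions, and the estimation head simply emits a sigmoid CTR alongside a positive uncertainty value.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """A single fully connected layer with ReLU activation."""
    return np.maximum(0.0, x @ w + b)

# Hypothetical feature sizes: 16-dim item features, 8-dim context features.
d_item, d_ctx, d_hidden = 16, 8, 32

# Randomly initialized weights stand in for trained parameters.
w_item, b_item = rng.normal(0, 0.1, (d_item, d_hidden)), np.zeros(d_hidden)
w_ctx,  b_ctx  = rng.normal(0, 0.1, (d_ctx, d_hidden)),  np.zeros(d_hidden)
w_fuse, b_fuse = rng.normal(0, 0.1, (2 * d_hidden, d_hidden)), np.zeros(d_hidden)
w_out,  b_out  = rng.normal(0, 0.1, (d_hidden, 2)), np.zeros(2)

def forward(item_features, context_features):
    item_repr = dense(item_features, w_item, b_item)     # item module
    ctx_repr  = dense(context_features, w_ctx, b_ctx)    # context module
    # fusion module: combine the two representations to capture interaction
    fused = dense(np.concatenate([item_repr, ctx_repr]), w_fuse, b_fuse)
    outputs = fused @ w_out + b_out                      # estimation module
    ctr = 1.0 / (1.0 + np.exp(-outputs[0]))              # predicted CTR in (0, 1)
    sigma = np.exp(outputs[1])                           # predicted uncertainty > 0
    return ctr, sigma

ctr, sigma = forward(rng.normal(size=d_item), rng.normal(size=d_ctx))
```

Exponentiating the second output is a common trick to guarantee a strictly positive uncertainty estimate without constraining the raw network output.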

But how does our model handle uncertainty? We’ll walk you through the three types of uncertainty – data uncertainty, model uncertainty, and measurement uncertainty – and show you how each is addressed in our model.

Data uncertainty is handled by explicitly estimating the noise inherent in the data. By adding an output node that predicts the data noise and letting gradients propagate through it during training, the model learns to associate different levels of data uncertainty with different inputs. Additionally, we can estimate a mixture of Gaussians instead of a single Gaussian to capture more complex data distributions and increase the model’s expressiveness.
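One standard way to train such a noise-predicting node (an assumption here, not necessarily Taboola's exact loss) is to minimize the Gaussian negative log-likelihood, where the network outputs both a mean and a log-variance per example; the mixture-of-Gaussians variant generalizes this to several weighted components:

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood of y under N(mu, exp(log_var)).
    Minimizing this lets the model lower its loss on inherently noisy
    examples by predicting a larger variance for them."""
    var = np.exp(log_var)
    return 0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

def mixture_nll(y, weights, mus, log_vars):
    """Negative log-likelihood under a mixture of Gaussians.
    weights must sum to 1; each component has its own mean and log-variance."""
    vars_ = np.exp(log_vars)
    densities = np.exp(-0.5 * (y - mus) ** 2 / vars_) / np.sqrt(2 * np.pi * vars_)
    return -np.log(np.sum(weights * densities))
```

With a single component of weight 1, the mixture loss reduces exactly to the plain Gaussian loss, which is a useful sanity check when implementing it.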

Measurement uncertainty, on the other hand, is related to noisy measurements in the data. By incorporating the measurement noise into the likelihood equation, we can separate data uncertainty from measurement uncertainty and use more data in the training process. This approach not only improves the model’s understanding of the data but also allows for greater flexibility in handling noisy features or labels.
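As a sketch of how measurement noise can enter the likelihood: an empirical CTR label averaged over n impressions carries binomial sampling noise of roughly p(1 - p)/n. That formula is our illustrative assumption here, not necessarily the paper's exact derivation, but it shows the key idea: measurement variance is added to the model's own data-noise estimate, so the two sources stay separable, and the measurement term shrinks as more impressions accumulate.

```python
import numpy as np

def nll_with_measurement_noise(y_observed, mu, log_var_data, n_impressions):
    """Gaussian NLL where the total variance is the sum of the model's
    predicted data noise and the label's measurement noise.
    Measurement variance p(1-p)/n vanishes with many impressions;
    data variance does not, which is what separates the two."""
    var_meas = y_observed * (1.0 - y_observed) / n_impressions
    var_total = np.exp(log_var_data) + var_meas
    return 0.5 * (np.log(2 * np.pi * var_total)
                  + (y_observed - mu) ** 2 / var_total)
```

Because noisy labels are down-weighted by their larger total variance rather than discarded, items with few impressions can still contribute to training.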

Model uncertainty can be addressed by using techniques like dropout at inference time to understand what the model doesn’t know due to lack of data. By testing the model’s certainty over unique titles and sparse regions of the embedding space, we can see how uncertainty changes with exposure to different types of data. Encouraging exploration of these sparse regions can help reduce uncertainty and improve the model’s performance over time.
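The dropout-at-inference idea (Monte Carlo dropout) can be sketched as follows; the toy network, weights, and dropout rate are illustrative assumptions, but the mechanism is the standard one: keep dropout active at prediction time, run many stochastic forward passes, and read model uncertainty off the spread of the results.

```python
import numpy as np

rng = np.random.default_rng(42)

def predict_with_dropout(x, w1, w2, p_drop=0.5):
    """One stochastic forward pass: dropout stays ON at inference."""
    h = np.maximum(0.0, x @ w1)
    mask = rng.random(h.shape) >= p_drop
    h = h * mask / (1.0 - p_drop)  # inverted-dropout scaling
    return h @ w2

# Hypothetical trained weights for a tiny two-layer network.
w1 = rng.normal(size=(4, 64))
w2 = rng.normal(size=(64,))
x = rng.normal(size=4)

# T stochastic passes; their spread approximates model uncertainty.
samples = np.array([predict_with_dropout(x, w1, w2) for _ in range(200)])
mean_pred = samples.mean()
model_uncertainty = samples.std()
```

For an input from a dense, well-covered region of the embedding space the passes tend to agree, while for a rare title in a sparse region they disagree more, which is exactly the signal used to drive exploration.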

In conclusion, by modeling all three types of uncertainty in a unified way, our neural network at Taboola has shown promising results in improving recommendation accuracy and robustness. We hope this post has sparked some ideas on how you can leverage uncertainty in your own applications and training processes. Stay tuned for more insights and updates on our research in recommender systems!

This post is part of a series related to a paper we are presenting at a workshop at this year’s KDD conference on deep density networks and uncertainty in recommender systems. Check out the previous posts in the series for more in-depth discussions on handling uncertainty in models.
