Unveiling Bias in the Train-Test Split Procedure: The Story of Thumbnail Integration
The Pitfalls of Data Leakage in the Train-Test Split Procedure
About a year ago, our team decided to incorporate thumbnails as a new feature in our content recommendation model. This was a significant step as we had been relying solely on item titles and metadata features up until that point. Little did we know, this decision would lead us down a path of data leakage and bias in our train-test split procedure.
Setting the Scene
When working with multiple types of features in a unified model, such as titles and thumbnails, it’s crucial to be aware of the potential for data leakage. In our case, many items shared the same thumbnail or title, so a naive random split would have placed the same thumbnail or title in both sets. Our model could then memorize titles and thumbnails from the training set and perform well on the test set without truly generalizing.
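Concretely, shared titles and thumbnails chain items together into connected components, and each component must land entirely on one side of the split. Here is a minimal sketch of how such components can be computed with union-find; the row structure and the "title"/"thumbnail" keys are illustrative assumptions, not our actual schema.

```python
from collections import defaultdict

def find(parent, x):
    # Find the root of x, compressing the path along the way.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def connected_components(rows):
    """rows: list of dicts with 'title' and 'thumbnail' keys.
    Returns a list of components, each a list of row indices."""
    parent = list(range(len(rows)))

    def union(a, b):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[rb] = ra

    # Link every row to the first row seen with the same title/thumbnail.
    first_seen = defaultdict(dict)  # attribute -> value -> row index
    for i, row in enumerate(rows):
        for attr in ("title", "thumbnail"):
            value = row[attr]
            if value in first_seen[attr]:
                union(first_seen[attr][value], i)
            else:
                first_seen[attr][value] = i

    components = defaultdict(list)
    for i in range(len(rows)):
        components[find(parent, i)].append(i)
    return list(components.values())
```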
First Attempt
Our initial approach to fixing the leakage seemed simple enough. We marked all rows in the dataset as “train”, then repeatedly picked a random row and moved it, together with every row sharing its title or thumbnail (its entire connected component), to “test” until we reached our desired split ratio. However, we soon began to notice unexpected results in our model’s performance on the test set.
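Here is a sketch of that first attempt, reusing connected_components from the snippet above; the function name and the exact stopping rule are assumptions for illustration, not our production code.

```python
import random

def naive_split(rows, test_ratio=0.2, seed=0):
    rng = random.Random(seed)
    components = connected_components(rows)
    # Map each row index to its component.
    comp_of = {}
    for comp in components:
        for i in comp:
            comp_of[i] = tuple(comp)

    labels = ["train"] * len(rows)
    test_count = 0
    while test_count / len(rows) < test_ratio:
        i = rng.randrange(len(rows))   # uniform over *rows*...
        if labels[i] == "test":
            continue
        for j in comp_of[i]:           # ...but flips a whole component
            labels[j] = "test"
        test_count += len(comp_of[i])
    return labels
```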
And Then Things Escalated
Upon further investigation, we discovered that our new split method was biased towards selecting larger components for the test set: since we sampled rows uniformly, a component’s chance of being moved to the test set was proportional to the number of rows it contained. This led to significant discrepancies in model performance between the title-only model and the model that incorporated thumbnails. Our initial assumption that the split method would not affect the title-only model’s performance was proven wrong.
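A toy simulation makes the bias concrete. Suppose there are 90 singleton components and one component of 10 rows; under row-level sampling the large component is picked about 10% of the time, versus roughly 1.1% (1 in 91) if components were sampled uniformly. The numbers here are invented purely for illustration.

```python
import random

sizes = [1] * 90 + [10]   # 90 singleton components and one of size 10
# Expand to a row -> component-id list; component 90 is the large one.
rows = [c for c, s in enumerate(sizes) for _ in range(s)]

hits = sum(random.choice(rows) == 90 for _ in range(100_000))
print(hits / 100_000)  # ~0.10, versus ~0.011 under uniform component sampling
```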
Second Try
Realizing our mistake, we refined our approach by sampling connected components instead of individual rows for the test set. This ensured that each component had an equal probability of being selected for the test set, eliminating the bias we had previously encountered.
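A minimal sketch of the corrected procedure, again assuming the connected_components helper from earlier: shuffling the component list and filling the test set in that order gives every component the same chance of selection, regardless of its size.

```python
import random

def component_split(rows, test_ratio=0.2, seed=0):
    rng = random.Random(seed)
    components = connected_components(rows)
    rng.shuffle(components)   # uniform over components, not rows

    labels = ["train"] * len(rows)
    test_count = 0
    for comp in components:
        if test_count / len(rows) >= test_ratio:
            break
        for i in comp:
            labels[i] = "test"
        test_count += len(comp)
    return labels
```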
Key Takeaway
The way you split your dataset into training and test sets can have a significant impact on the performance and generalization of your model. It’s essential to be mindful of data leakage and bias when working with multiple types of features. By understanding the structure of your dataset and implementing a proper splitting method, you can obtain a more accurate and reliable measure of model performance.
Ultimately, our journey towards incorporating thumbnails into our model served as a valuable learning experience. As we continue to refine our models and explore new features, we will remain vigilant in our approach to data splitting to prevent any issues of bias or data leakage.
Originally published by me at engineering.taboola.com.