Tackling the Cold Start Problem in Recommendation Systems: A Scalable Solution with LLMs on AWS

Cold starts are a significant challenge for recommendation systems, most visibly as the difficulty of serving relevant suggestions to new users or for new items. The problem goes beyond a simple absence of data: it is the scarcity of personalized signals from the very first interactions. When new users arrive or new content appears, systems that fall back on generic segments see dampened click-through and conversion rates, and risk driving users away before the system has had a chance to learn their preferences. Conventional methods such as collaborative filtering or popularity lists are often inadequate for bridging this signal gap, leading to stale recommendations and missed opportunities.

Imagine an alternative approach: one that uses large language models (LLMs) to build detailed interest profiles from the very start. With zero-shot reasoning, we can quickly synthesize rich, context-aware user and item embeddings without waiting for extensive historical interaction data. This post explores how you can turn a cold start into a warm welcome.

Solution Overview

Our cold start solution is built on AWS Trainium chips, accessed through Amazon EC2 Trn instances, to optimize both cost and performance. To streamline model deployment, we use AWS Deep Learning Containers (DLCs), which ship with the AWS Neuron SDK, Neuron-optimized PyTorch modules, and the required Trainium drivers pre-installed.

Sharding large models across multiple Trainium chips is handled by NeuronX Distributed (NxD), a distributed inference library that integrates with vLLM. This enables efficient parallel inference, even for 70B-parameter LLMs. Together, Trainium chips, the Neuron SDK, and vLLM give machine learning engineers a cost-effective, scalable way to experiment with different LLM-encoder configurations and iterate rapidly on recommendation quality metrics, without altering core model code.
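
To make this concrete, the sketch below shows how such a Neuron-backed vLLM engine might be initialized. The model ID, parallel degree, and sequence limits are illustrative assumptions, and the exact Neuron-related arguments depend on the vLLM and Neuron SDK versions in use; this is a sketch, not the post's exact configuration.

from vllm import LLM, SamplingParams

# Illustrative configuration: shard a large model across Neuron cores via tensor parallelism.
llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",   # assumed checkpoint, not confirmed by the post
    device="neuron",              # route inference through the NeuronX Distributed backend
    tensor_parallel_size=32,      # number of Neuron cores to shard the model across
    max_num_seqs=4,               # the Neuron backend compiles for a fixed batch size
    max_model_len=2048,
)

outputs = llm.generate(
    ["Suggest 3-5 related book topics for a reader of space operas."],
    SamplingParams(temperature=0.7, max_tokens=128),
)
print(outputs[0].outputs[0].text)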

In the upcoming sections, we'll walk through a practical end-to-end workflow using Jupyter notebooks, from data loading to generating embeddings and retrieving candidates with FAISS. We'll also examine a reference implementation that shows how to package and deploy Neuron-optimized models on Amazon Elastic Kubernetes Service (Amazon EKS) with autoscaling.

Expanding User Interest Profiles with LLMs

We'll use the Amazon Book Reviews dataset from Kaggle, which contains real-world user reviews and metadata for thousands of books. The dataset is well suited to simulating cold starts, letting us analyze how well our interest expansions, powered by distilled versions of Meta's Llama 8B and 70B models, enrich sparse user profiles.

For instance, if a user has reviewed a single science fiction novel, the LLM can infer related subtopics, such as galactic empires or cyberpunk dystopias, that the user may enjoy. We construct a structured prompt to guide the model:

# `llm` is the vLLM engine described above; `user_review_category` comes from the user's review data
prompt = (
    f"The user has shown interest in: {user_review_category}.\n"
    "Suggest 3–5 related book topics they might enjoy.\n"
    "Respond with a JSON list of topic keywords."
)
# llm.generate() returns one RequestOutput per prompt; the generated text is in outputs[0].text
expanded_topics = llm.generate([prompt])[0].outputs[0].text

By requiring a JSON array response, we avoid free-form, hard-to-parse output and obtain a consistent list of interest expansions. Models like Meta's Llama are adept at connecting related concepts, allowing us to synthesize new recommendation signals even from minimal input.
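
Because the reply is constrained to a JSON array, parsing is straightforward. A small helper along these lines (a sketch; parse_topics is not part of the original workflow) can also guard against the occasional malformed reply:

import json

def parse_topics(raw: str) -> list[str]:
    """Parse the model's JSON reply, falling back to an empty list on malformed output."""
    try:
        topics = json.loads(raw)
    except json.JSONDecodeError:
        return []
    return [str(t) for t in topics] if isinstance(topics, list) else []

expanded = parse_topics(expanded_topics)   # e.g. ["galactic empires", "cyberpunk dystopias", ...]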

Encoding User Interests and Retrieving Relevant Content

After expanding user interests, we need to vectorize these interests and our catalog of books so they can be compared. We experiment with three sizes of the Google T5 encoder (base, large, and XL) to observe how embedding dimensionality affects matching quality.

The process unfolds as follows:

  1. Load the respective encoder.
  2. Encode book summaries and normalize them into a single NumPy matrix.
  3. Construct a FAISS index for rapid nearest neighbor searches.
  4. Query the index with the encoded interest to retrieve the top k recommendations, as in the abbreviated snippet below.

from transformers import T5Tokenizer, T5EncoderModel
import faiss
import numpy as np

# Our dataset of book summaries
content_texts = df["review/summary"].tolist()
encoder_sizes = ["t5-base", "t5-large", "t5-xl"]
top_k = 5

for size in encoder_sizes:
    # 1.–3. Load the encoder, embed the book summaries, and build the FAISS index (elided)
    ...
    # 4. Encode a single expanded interest and query the index
    interest = "space opera with political intrigue"
    ...  # the top_k nearest titles returned by the index become `recommendations`
    print(f"\nTop {top_k} recommendations using {size}:")
    for title in recommendations:
        print(" -", title)

This approach lets us compare how encoder scale affects the spread of the embedding space and the retrieval results.
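
The elided steps amount to turning each text into a single normalized vector and indexing it. Below is a minimal sketch of one way to do this for a single encoder size, reusing content_texts and top_k from the snippet above; the masked mean-pooling strategy, the flat L2 index, and the helper name embed_texts are our assumptions rather than the original notebook's exact code.

import torch
import faiss
from transformers import T5Tokenizer, T5EncoderModel

def embed_texts(texts, tokenizer, model):
    """Encode a list of strings into L2-normalized float32 vectors (masked mean pooling)."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state               # (batch, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)                 # ignore padding tokens
    vecs = ((hidden * mask).sum(dim=1) / mask.sum(dim=1)).numpy().astype("float32")
    faiss.normalize_L2(vecs)   # with normalized vectors, L2 distance ranks like cosine similarity
    return vecs

size = "t5-base"
tokenizer = T5Tokenizer.from_pretrained(size)
model = T5EncoderModel.from_pretrained(size)

catalog_vecs = embed_texts(content_texts, tokenizer, model)       # 2. encode book summaries
index = faiss.IndexFlatL2(catalog_vecs.shape[1])                  # 3. exact nearest-neighbor index
index.add(catalog_vecs)

query_vec = embed_texts(["space opera with political intrigue"], tokenizer, model)
distances, ids = index.search(query_vec, top_k)                   # 4. top-k catalog matches
recommendations = [content_texts[i] for i in ids[0]]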

Measuring and Improving Recommendation Quality

After building FAISS indexes for the various LLM-encoder combinations and computing the mean distances between embeddings, we can see how compactly or loosely each model's embeddings cluster. Larger models, such as the 8B and 70B configurations, yield richer, more discriminative signals for recommendation.
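
As a concrete example of such a statistic (our interpretation of "mean distance", not necessarily the exact metric used in the evaluation), you can average each catalog embedding's distance to its nearest neighbors in the index built in the sketch above:

# Each catalog vector is its own nearest match at distance 0, so skip the first column.
distances, _ = index.search(catalog_vecs, top_k + 1)
mean_neighbor_distance = float(distances[:, 1:].mean())
print(f"{size}: mean top-{top_k} neighbor distance = {mean_neighbor_distance:.4f}")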

In practical terms, we find that an 8-billion-parameter LLM paired with a T5-large encoder strikes an efficient balance between performance and cost, avoiding the often marginal gains of larger models.

Tweaking Tensor Parallel Size for Optimal Cost Performance

To balance performance and cost, we assess how varying the Neuron tensor parallel degree affects latency when serving the Llama 8B model. Our tests show that a tensor parallel size of 16 offers the best trade-off, improving latency while keeping operational costs in check.
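
As an illustration of what such a measurement can look like, the hypothetical sweep below times a fixed batch of prompts at several parallel degrees, reusing the Neuron-backed vLLM initialization shown earlier. The model ID, tensor parallel values, and sequence limits are assumptions, and in practice each configuration is typically benchmarked in a separate process so the Neuron cores are released between runs.

import time
from vllm import LLM, SamplingParams

prompts = ["Suggest 3-5 related book topics they might enjoy."] * 32
params = SamplingParams(max_tokens=128)

for tp in (8, 16, 32):
    # Same Neuron-backed initialization as before, varying only the parallel degree.
    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct",
              device="neuron",
              tensor_parallel_size=tp,
              max_num_seqs=4,
              max_model_len=2048)
    start = time.time()
    llm.generate(prompts, params)
    print(f"tensor_parallel_size={tp}: {time.time() - start:.1f}s for {len(prompts)} prompts")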

Conclusion

This post illustrates how AWS Trainium, the Neuron SDK, and scalable LLM inference can effectively tackle cold start challenges by enriching sparse user profiles, improving recommendation quality from day one.

Crucially, our findings emphasize that bigger models aren’t always better; smaller models like the 8B LLM can achieve impressive results without incurring unnecessary costs. By prioritizing the optimal model-encoder pair, teams can deliver high-quality recommendations while maintaining cost-effective infrastructure.

About the Authors

Yahav Biran is a Principal Architect at AWS, focusing on large-scale AI workloads and contributing to open-source projects. He holds a Ph.D. in Systems Engineering.

Nir Ozeri is a Sr. Solutions Architect Manager at AWS, specializing in application modernization and scalable architecture, leading a team focused on ISV customers.


By employing these strategies, we can turn cold starts into robust, personalized experiences that engage users from their very first interaction.
