

Building a Self-Learning AI System for Amazon’s Product Catalog

The foundation of the shopping experience at Amazon.com is the rich and dynamic Amazon Catalog. The catalog is the definitive source of product information, supplying the attributes that power search, recommendations, and customer discovery. When a seller lists a new product, a multi-step process extracts structured attributes (such as dimensions and materials) and generates titles that match how customers actually search. Because enrichment must balance seller intent with customer needs at enormous scale, the catalog is an ideal arena for self-learning AI.

In this blog post, we'll look at how Amazon's Catalog Team used Amazon Bedrock to build a self-learning system that continuously improves accuracy while driving costs down.

The Challenge

In production generative AI, maintaining model performance demands constant vigilance. With millions of products to process, models run into edge cases, terminology changes, and evolving domain-specific patterns, all of which erode accuracy over time. The traditional remedy, in which applied scientists analyze failures, update prompts, and redeploy, is labor-intensive and cannot keep pace with real-world complexity.

At Amazon Catalog, we faced a pressing question: how do we build an improvement mechanism that is scalable and automatic rather than dependent on constant manual intervention? The trade-offs appeared daunting. Larger models delivered better accuracy but were too expensive to scale across the catalog; smaller models scaled economically but struggled with the intricate, ambiguous cases where sellers needed the most help.

Solution Overview

Our breakthrough originated from an unconventional experiment. Instead of deploying a single, larger model, we took a multi-model approach, running multiple smaller models on identical products. When these models reached consensus on attribute extraction, we trusted the results. When they disagreed, whether because of genuine ambiguity, missing context, or model error, we found a vital signal: disagreements usually indicated complexity rather than simple failure.
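To make the consensus idea concrete, here is a minimal sketch, assuming each worker is a callable that returns a dict of extracted attributes; the voting rule and helper names are illustrative, not Amazon's actual implementation:

```python
from collections import Counter

def extract_with_consensus(product_text: str, workers: list) -> dict:
    """Run several small models on the same product and keep only the
    attribute values they agree on; everything else is flagged as a
    disagreement for deeper analysis."""
    # Each worker is a callable: product_text -> {attribute: value}.
    outputs = [worker(product_text) for worker in workers]

    consensus, disagreements = {}, {}
    all_attrs = {attr for out in outputs for attr in out}
    for attr in all_attrs:
        values = [out[attr] for out in outputs if attr in out]
        top_value, votes = Counter(values).most_common(1)[0]
        if votes == len(workers):      # unanimous: trust the result
            consensus[attr] = top_value
        else:                          # split vote: a learning signal
            disagreements[attr] = values
    return {"consensus": consensus, "disagreements": disagreements}
```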

This realization led us to design a self-learning system that rethinks how generative AI scales. Smaller models handle routine cases through consensus, and a larger model is engaged only when they disagree. That larger model acts as a supervisor, equipped with tools for deeper examination and analysis. Importantly, the supervisor doesn't merely resolve disputes; it distills reusable knowledge into a dynamic knowledge base, preventing entire classes of future disagreements. The expensive model is invoked only when a case promises high learning value.
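The escalation policy can be expressed in a few lines; `supervisor_analyze` and the knowledge-base interface below are hypothetical stand-ins for the supervisor tooling described above:

```python
def resolve(product_text: str, result: dict, knowledge_base, supervisor_analyze):
    """Escalate to the (expensive) supervisor only when the cheap
    workers disagree, and bank what it learns for future cases."""
    if not result["disagreements"]:
        return result["consensus"]          # routine case: no large model needed

    # The supervisor both resolves the case and extracts a reusable lesson.
    verdict = supervisor_analyze(product_text, result["disagreements"])
    knowledge_base.store(verdict["lesson"])  # aims to prevent this class of disputes
    return {**result["consensus"], **verdict["resolved_attributes"]}
```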

Continuous Improvement Loop

The architecture we designed operates on several fronts. Product data flows through generator-evaluator workers, with disagreements sent to the supervisor for analysis. After inference, feedback signals from both sellers (via listing updates) and customers (through returns and negative reviews) are captured. Insights from various sources feed into a hierarchical knowledge base, creating a continuous improvement loop.
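The post-inference feedback capture might be wired up along these lines; the event shape and field names are assumptions for illustration only:

```python
def ingest_feedback(event: dict, knowledge_base):
    """Turn downstream signals (seller listing edits, customer returns,
    negative reviews) into insights for the knowledge base."""
    if event["source"] == "seller" and event["type"] == "listing_update":
        # A seller correcting an attribute implies our extraction was off.
        knowledge_base.store({
            "kind": "seller_correction",
            "attribute": event["attribute"],
            "was": event["old_value"],
            "now": event["new_value"],
        })
    elif event["source"] == "customer" and event["type"] in ("return", "negative_review"):
        # Returns citing "not as described" point at a catalog defect.
        knowledge_base.store({"kind": "customer_signal", "detail": event["reason"]})
```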

Using Amazon Bedrock, our self-learning system leverages multiple model architectures efficiently. For instance, we can deploy smaller models like Amazon Nova Lite for routine tasks while assigning more capable models like Anthropic Claude Sonnet as supervisor agents. Utilizing open-source small models on Amazon EC2 further enhances cost efficiency and control.
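As a minimal sketch of this tiered invocation using the Amazon Bedrock Converse API via boto3 (the model IDs shown are examples; verify the exact IDs and availability in your own account and region):

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Example model IDs; confirm what is enabled in your region.
WORKER_MODEL_ID = "amazon.nova-lite-v1:0"
SUPERVISOR_MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def invoke(model_id: str, prompt: str) -> str:
    """Send a single-turn prompt through the Bedrock Converse API."""
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

# Cheap worker call for routine extraction; the supervisor model
# is reserved for escalations.
attributes = invoke(WORKER_MODEL_ID, "Extract product attributes as JSON: ...")
```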

Insights: Transforming Disagreements into Opportunities

Our perspective shifted significantly during a debugging session. We discovered that disagreements among smaller models often correlated with products that needed additional human review. Rather than viewing these disagreements as failures, we realized they represented valuable learning opportunities. The supervisor agent could automatically investigate these discrepancies at scale, creating a feedback loop that drives meaningful insights and prevents recurring issues.

We also identified that moderate disagreement rates were optimal. Rates that stayed near zero meant the system had stopped surfacing new patterns to learn from, while rates that ran too high pointed to unsolvable ambiguity or workers underpowered for the domain. Monitoring this rate let us refine our models continually while keeping disagreements productive.
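To make "moderate disagreement rate" operational, a rolling-window monitor along these lines could flag both stagnation and overload; the window size and thresholds are illustrative assumptions, not Amazon's values:

```python
from collections import deque

class DisagreementMonitor:
    """Rolling disagreement rate over the last N inferences. A flat,
    near-zero rate suggests learning has stalled; a very high rate
    suggests ambiguity the workers cannot resolve."""

    def __init__(self, window: int = 10_000, low: float = 0.01, high: float = 0.25):
        self.events = deque(maxlen=window)
        self.low, self.high = low, high   # illustrative thresholds

    def record(self, disagreed: bool) -> None:
        self.events.append(disagreed)

    def status(self) -> str:
        rate = sum(self.events) / max(len(self.events), 1)
        if rate < self.low:
            return "stagnant"      # too few learning opportunities surfacing
        if rate > self.high:
            return "overwhelmed"   # workers may be underpowered for the domain
        return "healthy"
```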

Deep Dive: The Operational Dynamics

At the core of our system are lightweight worker models running in parallel: some generate attribute extractions while others evaluate those outputs. By prompting evaluators to critique rather than confirm, the generator-evaluator framework creates a productive adversarial dynamic at inference time.

Disagreements among workers activate a supervisor model that performs deeper analysis and extracts reusable learnings to bolster future performance. This system of checks balances efficiency against quality, ensuring that our models adapt as product information needs evolve.
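A sketch of one generator-evaluator pass, assuming `generate` and `evaluate` wrap small-model calls; the prompt wording is invented for illustration:

```python
def generator_evaluator_pass(product_text: str, generate, evaluate):
    """One inference-time check: a generator proposes attributes and an
    independent evaluator is prompted to find faults rather than agree."""
    proposal = generate(f"Extract product attributes as JSON:\n{product_text}")
    critique = evaluate(
        "You are a skeptical reviewer. List every error or unsupported "
        f"claim in this extraction, or reply APPROVED:\n{proposal}\n\n"
        f"Source listing:\n{product_text}"
    )
    if critique.strip() == "APPROVED":
        return {"attributes": proposal, "escalate": False}
    # Any substantive critique counts as a disagreement for the supervisor.
    return {"attributes": proposal, "critique": critique, "escalate": True}
```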

Building a Knowledge Base for Future Readiness

The supervisor systematically categorizes learnings by the context of each disagreement. A hierarchical memory manager continuously reorganizes this knowledge base so that learnings can be applied directly to future inferences. This structure keeps insights both actionable and scalable across the vast expanse of the Amazon catalog.
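The post doesn't specify the exact schema, but a hierarchical organization might look roughly like this sketch, with product category and attribute as assumed context keys:

```python
from collections import defaultdict

class HierarchicalKnowledgeBase:
    """Store supervisor learnings keyed by disagreement context,
    e.g. product category -> attribute -> lessons, so the most
    specific applicable lessons can be retrieved at inference time."""

    def __init__(self):
        self.lessons = defaultdict(lambda: defaultdict(list))

    def store(self, category: str, attribute: str, lesson: str) -> None:
        self.lessons[category][attribute].append(lesson)

    def retrieve(self, category: str, attribute: str) -> list:
        # Attribute-specific lessons first, then category-wide fallbacks
        # stored under the "*" key.
        return self.lessons[category][attribute] + self.lessons[category]["*"]
```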

Lessons Learned and Best Practices

For our self-learning architecture to thrive, certain conditions proved beneficial:

  • High-Volume Inference: This drives compounded learning from varied inputs.
  • Quality-Critical Applications: Consensus naturally assures quality.
  • Evolving Domains: Constant emergence of new patterns and terminology.

Conversely, it’s less suited for low-volume scenarios or use cases with static rules.

Key Success Factors

  • Defining Disagreements: Establish clear definitions of what counts as a disagreement, and preserve the productive tension between generators and evaluators.
  • Tracking Learning Effectiveness: Monitor disagreement rates consistently; flat rates signal stagnation in learning.
  • Knowledge Organization: Maintain an actionable, accessible structure for learnings to enhance future model performance.

Avoiding Common Pitfalls

Don't chase cost savings at the expense of intelligent scaling. Ensure evaluators critically assess outputs rather than rubber-stamp them, and that supervisors extract generalizable patterns rather than case-specific fixes. Prevent knowledge rot by keeping the knowledge base well structured and pruned.

Deployment Strategies

We explored two primary approaches for deploying our architecture:

  1. Learn-Then-Deploy: Train the system aggressively in a pre-production setting, then audit the accumulated learnings before going live.

  2. Deploy-and-Learn: Launch with well-refined prompts and improve continuously on real production traffic.

Both strategies harness the same architectural principles; the choice depends on the readiness of the application.
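A hypothetical configuration sketch makes the point that the two postures share one architecture and differ mainly in when learnings are audited and applied:

```python
from dataclasses import dataclass

@dataclass
class DeploymentConfig:
    """Same architecture, two rollout postures (names are illustrative)."""
    mode: str                # "learn_then_deploy" or "deploy_and_learn"
    learning_enabled: bool   # supervisor writes to the knowledge base
    audit_before_use: bool   # humans review lessons before they affect inference

LEARN_THEN_DEPLOY = DeploymentConfig("learn_then_deploy", True, True)
DEPLOY_AND_LEARN = DeploymentConfig("deploy_and_learn", True, False)
```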

Conclusion

What began as an experiment in catalog enrichment revealed a profound truth: AI systems can evolve. By reframing disagreements as learning signals, we built an architecture that accumulates domain knowledge through actual usage. Amazon Bedrock's capabilities made this journey possible, illustrating the potential for high-volume AI applications across domains.

In essence, we've moved from asking "which model should we use?" to "how can we enable learning patterns specific to our needs?" In a system where every inference is an opportunity to capture knowledge, we're not simply scaling; we're laying the groundwork for enduring institutional wisdom within our AI infrastructure.


Acknowledgments

We extend our deepest gratitude to Ankur Datta, Zhu Cheng, Xuan Tang, Mohammad Ghasemi, and all members of the team for their exceptional contributions to this work.

About the Authors

Tarik Arici is a Principal Scientist at Amazon, specializing in self-learning generative AI systems aimed at enhancing catalog quality.

Sameer Thombare is a Senior Product Manager at Amazon, focused on optimizing closed-loop systems combining signals from customers, sellers, and supply chains.

Amin Banitalebi is an Applied Science Manager in Amazon Everyday Essentials, with extensive experience in AI and machine learning.

Puneet Sahni is a Senior Principal Engineer at Amazon, working on improving catalog data quality using advanced AI methodologies.

Erdinc Basci has over two decades of experience in technology and leads efforts to enhance generative AI performance at Amazon.

Mey Meenakshisundaram is a Director at Amazon, pioneering advanced machine learning techniques to improve product catalog quality.

