Building an AI-Powered Website Assistant with Amazon Bedrock

Transforming Customer Support with an AI-Powered Assistant

In today’s fast-paced digital landscape, businesses are facing a dual challenge: customers demand quick answers, while support teams are often overwhelmed by inquiries. Traditional support documentation, including product manuals and knowledge base articles, can be cumbersome to navigate. Customers typically spend precious time searching through extensive resources, while support agents juggle 20–30 queries a day, seeking the right information to assist users.

In this blog post, we’ll explore how to address these challenges by building an AI-powered website assistant using Amazon Bedrock and Amazon Bedrock Knowledge Bases. With this innovative solution, both internal teams and external customers can experience the benefits of instant, relevant answers, enhanced knowledge retrieval, and automated support available 24/7.

Solution Overview

Our solution leverages Retrieval-Augmented Generation (RAG) to seamlessly pull relevant information from a knowledge base and deliver it to users based on their access permissions. The key components of this solution include:

  1. Amazon Bedrock Knowledge Bases: This allows content from your company’s website and documents from Amazon S3 to be crawled, indexed, and stored. Multiple data sources can be configured, ensuring clear differentiation between internal and external information and maintaining data security.

  2. Amazon Bedrock Managed LLMs: A large language model generates AI-powered responses to user queries, ensuring quick and efficient resolution of customer inquiries.

  3. Scalable Serverless Architecture: Using Amazon Elastic Container Service (ECS) to host the user interface, along with AWS Lambda for handling requests, this solution scales effortlessly to meet user demands.

  4. Automated CI/CD Deployment: Utilizing the AWS Cloud Development Kit (AWS CDK), we establish a continuous integration and delivery (CI/CD) pipeline for seamless updates and maintenance of the solution.

How It Works

The architecture of the solution is designed for efficiency and ease of use. Let’s outline the workflow:

  1. Data Ingestion:

    • Knowledge Base Processing: Amazon Bedrock Knowledge Bases processes documents uploaded to S3 by chunking them and generating embeddings.
    • Web Crawling: A web crawler extracts and ingests relevant website content.
  2. User Interaction:

    • Internal and external users access the application through their web browsers via Elastic Load Balancing (ELB) after logging in with credentials managed by Amazon Cognito.
  3. Query Resolution:

    • When a user submits a question, a Lambda function is invoked. This function retrieves relevant information from the knowledge base and sends relevant data source IDs to Amazon Bedrock, ensuring only appropriate information is pulled based on user type.
    • The Amazon Nova Lite LLM processes this data to generate a comprehensive response, which is relayed back to the user.
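The query-resolution step above can be sketched as a Lambda handler. This is a minimal illustration, not the post's actual code: the `retrieve_and_generate` API on the `bedrock-agent-runtime` client is real, but the environment variable names, the request payload shape, and the use of the built-in `x-amz-bedrock-kb-data-source-id` metadata key to restrict retrieval by user type are assumptions.

```python
import json
import os

# Hypothetical configuration -- real values come from your deployment.
KB_ID = os.environ.get("KNOWLEDGE_BASE_ID", "kb-example")
MODEL_ARN = os.environ.get(
    "MODEL_ARN",
    "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-lite-v1:0",
)

def build_retrieval_filter(user_type: str, internal_ds_id: str, public_ds_id: str) -> dict:
    """Restrict retrieval to the data sources this user type may see."""
    allowed = [public_ds_id] if user_type == "external" else [internal_ds_id, public_ds_id]
    return {
        "in": {
            # Built-in metadata attribute identifying the originating data source.
            "key": "x-amz-bedrock-kb-data-source-id",
            "value": allowed,
        }
    }

def lambda_handler(event, context):
    import boto3  # SDK is only needed at invocation time

    body = json.loads(event["body"])
    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(
        input={"text": body["question"]},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KB_ID,
                "modelArn": MODEL_ARN,
                "retrievalConfiguration": {
                    "vectorSearchConfiguration": {
                        "filter": build_retrieval_filter(
                            body.get("userType", "external"),
                            os.environ["INTERNAL_DS_ID"],
                            os.environ["PUBLIC_DS_ID"],
                        )
                    }
                },
            },
        },
    )
    return {"statusCode": 200, "body": json.dumps({"answer": response["output"]["text"]})}
```

The filter is the piece that enforces "only appropriate information is pulled based on user type": external callers retrieve only from the public data source, internal callers from both.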

Setting Up the Knowledge Base

To maximize the solution’s effectiveness, begin by creating a knowledge base for sourcing website and operational documentation data. Follow these steps:

  1. In the Amazon Bedrock console, navigate to Knowledge Bases under Builder tools.
  2. Select “Create Knowledge Base with Vector Store” and configure the necessary settings, including the data source as a web crawler and specifying relevant URLs.
  3. Use the Amazon Titan Text Embeddings V2 model for data processing and choose Amazon OpenSearch Serverless for vector storage.
  4. Sync your new data source to crawl and ingest the relevant website content.
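The sync in step 4 can also be triggered programmatically. The following is a sketch using the `bedrock-agent` control-plane client's `start_ingestion_job` and `get_ingestion_job` calls, assuming the knowledge base and data source already exist; the IDs and poll interval are placeholders.

```python
import time

# Ingestion-job states treated as final (per the Bedrock ingestion job lifecycle).
TERMINAL_STATES = {"COMPLETE", "FAILED"}

def is_terminal(status: str) -> bool:
    """True once an ingestion job has finished, successfully or not."""
    return status in TERMINAL_STATES

def sync_data_source(kb_id: str, ds_id: str, poll_seconds: int = 15) -> str:
    """Start an ingestion job for one data source and poll until it finishes."""
    import boto3  # SDK is only needed when actually calling AWS

    client = boto3.client("bedrock-agent")
    job = client.start_ingestion_job(knowledgeBaseId=kb_id, dataSourceId=ds_id)
    job_id = job["ingestionJob"]["ingestionJobId"]
    status = job["ingestionJob"]["status"]
    while not is_terminal(status):
        time.sleep(poll_seconds)
        status = client.get_ingestion_job(
            knowledgeBaseId=kb_id, dataSourceId=ds_id, ingestionJobId=job_id
        )["ingestionJob"]["status"]
    return status
```

Polling until a terminal state is useful in automation, since queries against a data source mid-ingestion may return stale or partial results.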

Configuring Internal Data Sources

Next, set up your S3 bucket to pull internal documentation:

  1. Choose to add a new data source in your Knowledge Base settings, selecting Amazon S3.
  2. Sync the bucket containing your operational documents for data indexing.
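The same two steps can be scripted with the `bedrock-agent` client's `create_data_source` call. This is a sketch, not the post's code: the data source name is a placeholder, and the configuration shown covers only the minimal S3 settings.

```python
def bucket_arn(bucket_name: str) -> str:
    """S3 bucket ARNs follow a fixed, region-less format."""
    return f"arn:aws:s3:::{bucket_name}"

def add_s3_data_source(kb_id: str, bucket_name: str, name: str = "internal-docs") -> str:
    """Attach an S3 bucket to an existing knowledge base; returns the new data source ID."""
    import boto3  # SDK is only needed when actually calling AWS

    client = boto3.client("bedrock-agent")
    resp = client.create_data_source(
        knowledgeBaseId=kb_id,
        name=name,
        dataSourceConfiguration={
            "type": "S3",
            "s3Configuration": {"bucketArn": bucket_arn(bucket_name)},
        },
    )
    return resp["dataSource"]["dataSourceId"]
```

Keep the returned data source ID: it is what the retrieval filter uses to separate internal from public content.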

Deploying the Infrastructure

To deploy the necessary infrastructure, download the code from the repository and update the parameters.json file with your knowledge base and data source IDs. Follow the README instructions for a step-by-step deployment process through AWS CDK.
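Updating parameters.json can be scripted before running `cdk deploy`. The key names below are illustrative, since the post does not show the file's schema; match them to the repository's actual parameters.

```python
import json
from pathlib import Path

def set_parameters(path, kb_id, internal_ds_id, public_ds_id):
    """Merge deployment IDs into parameters.json, creating the file if needed.

    The key names here are assumptions; align them with the repository's schema.
    """
    p = Path(path)
    params = json.loads(p.read_text()) if p.exists() else {}
    params.update(
        {
            "knowledgeBaseId": kb_id,
            "internalDataSourceId": internal_ds_id,
            "publicDataSourceId": public_ds_id,
        }
    )
    p.write_text(json.dumps(params, indent=2))
    return params
```

With the file in place, `cdk deploy` (run as described in the README) provisions the stack with those IDs.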

Interacting with the AI Assistant

Once deployed, users can interact with the AI assistant to resolve queries efficiently. Internal users, for example, can access both operational guides and public documents, while external users are limited to publicly available content.
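A client-side interaction might look like the sketch below. Everything here is an assumption rather than the repository's documented API: the endpoint URL, the JSON payload shape, and bearer-token auth (a Cognito-issued JWT in this architecture).

```python
import json
import urllib.request

def build_payload(question: str) -> bytes:
    """Serialize the question into the (assumed) request body."""
    return json.dumps({"question": question}).encode("utf-8")

def ask(endpoint: str, question: str, token: str) -> str:
    """POST a question to the assistant and return the answer text."""
    req = urllib.request.Request(
        endpoint,
        data=build_payload(question),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # Cognito-issued token
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["answer"]
```

The same call works for both audiences; what differs is the scope of content retrieved, which the backend decides from the authenticated user's type rather than from anything the client sends.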

Clean Up Resources

If you decide to discontinue the solution, simply follow the provided steps to remove the associated resources through the Amazon Bedrock console.

Conclusion

This post illustrated how to create an AI-powered website assistant capable of delivering swift answers and enhancing customer support experiences. By leveraging Amazon Bedrock, you can streamline knowledge retrieval and automate support processes effectively.

For those keen to dive deeper into generative AI and LLMs, consider exploring the hands-on course “Generative AI with LLMs.” This three-week program, tailored for data scientists and engineers, offers foundational insights into building generative AI applications using Amazon Bedrock.

About the Authors

Shashank Jain is a Cloud Application Architect at AWS, focusing on generative AI solutions and application architecture.
Jeff Li is a Senior Cloud Application Architect passionate about modernizing applications for business innovation.
Ranjith Kurumbaru Kandiyil is a Data and AI/ML Architect at AWS, specializing in implementing AI/ML solutions for complex business challenges.

With this robust AI-powered solution, the future of efficient customer support is just a few clicks away!
