Building an AI-Powered Website Assistant with Amazon Bedrock
In today’s fast-paced digital landscape, businesses face a dual challenge: customers demand quick answers, while support teams are often overwhelmed by inquiries. Traditional support documentation, including product manuals and knowledge base articles, can be cumbersome to navigate. Customers spend precious time searching through extensive resources, while support agents juggle 20–30 queries a day, seeking the right information to assist users.
In this blog post, we’ll explore how to address these challenges by building an AI-powered website assistant using Amazon Bedrock and Amazon Bedrock Knowledge Bases. With this innovative solution, both internal teams and external customers can experience the benefits of instant, relevant answers, enhanced knowledge retrieval, and automated support available 24/7.
Solution Overview
Our solution leverages Retrieval-Augmented Generation (RAG) to seamlessly pull relevant information from a knowledge base and deliver it to users based on their access permissions. The key components of this solution include:
- Amazon Bedrock Knowledge Bases: Allows content from your company’s website and documents from Amazon S3 to be crawled, indexed, and stored. Multiple data sources can be configured, ensuring clear differentiation between internal and external information while maintaining data security.
- Amazon Bedrock Managed LLMs: A large language model generates AI-powered responses to user queries, ensuring quick and efficient resolution of customer inquiries.
- Scalable Serverless Architecture: Amazon Elastic Container Service (Amazon ECS) hosts the user interface and AWS Lambda handles requests, so the solution scales effortlessly to meet user demand.
- Automated CI/CD Deployment: The AWS Cloud Development Kit (AWS CDK) establishes a continuous integration and delivery (CI/CD) pipeline for seamless updates and maintenance of the solution.
How It Works
The architecture of the solution is designed for efficiency and ease of use. Let’s outline the workflow:
- Data Ingestion:
  - Knowledge Base Processing: Amazon Bedrock Knowledge Bases processes documents uploaded to Amazon S3 by chunking them and generating embeddings.
  - Web Crawling: A web crawler extracts and ingests relevant website content.
- User Interaction: Internal and external users access the application through their web browsers via Elastic Load Balancing (ELB) after logging in with credentials managed by Amazon Cognito.
- Query Resolution:
  - When a user submits a question, a Lambda function is invoked. The function retrieves relevant information from the knowledge base, sending Amazon Bedrock only the data source IDs appropriate to the user type, so each user sees only information they are authorized to access.
  - The Amazon Nova Lite LLM processes this data to generate a comprehensive response, which is relayed back to the user.
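The query-resolution step above can be sketched as a small payload builder that a Lambda handler might use. This is a minimal, hedged sketch, not the post's actual implementation: the knowledge base ID, data source IDs, and model ARN are placeholders, and the filter relies on the implicit `x-amz-bedrock-kb-data-source-id` metadata field that Amazon Bedrock Knowledge Bases attaches to each chunk.

```python
# Placeholders -- not values from this post.
PUBLIC_DS_ID = "DSPUBLIC123"      # hypothetical web-crawler data source ID
INTERNAL_DS_ID = "DSINTERNAL456"  # hypothetical S3 data source ID
KB_ID = "KB0000000000"            # hypothetical knowledge base ID
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-lite-v1:0"

def allowed_data_sources(user_type):
    """Internal users may see both data sources; everyone else only public."""
    if user_type == "internal":
        return [PUBLIC_DS_ID, INTERNAL_DS_ID]
    return [PUBLIC_DS_ID]

def build_rag_request(question, user_type):
    """Build kwargs for a bedrock-agent-runtime retrieve_and_generate call,
    restricting retrieval to the data sources this user may access."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KB_ID,
                "modelArn": MODEL_ARN,
                "retrievalConfiguration": {
                    "vectorSearchConfiguration": {
                        "filter": {
                            "in": {
                                # Implicit metadata field added by Bedrock KBs.
                                "key": "x-amz-bedrock-kb-data-source-id",
                                "value": allowed_data_sources(user_type),
                            }
                        }
                    }
                },
            },
        },
    }

# In the Lambda handler the request would then be sent with, e.g.:
#   boto3.client("bedrock-agent-runtime").retrieve_and_generate(
#       **build_rag_request(question, user_type))
```

Keeping the builder pure (no AWS calls) makes the access-control logic easy to unit test independently of Bedrock.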
Setting Up the Knowledge Base
To maximize the solution’s effectiveness, begin by creating a knowledge base for sourcing website and operational documentation data. Follow these steps:
- In the Amazon Bedrock console, navigate to Knowledge Bases under Builder tools.
- Select “Create Knowledge Base with Vector Store” and configure the necessary settings, including the data source as a web crawler and specifying relevant URLs.
- Use the Amazon Titan Text Embeddings V2 model for data processing and choose Amazon OpenSearch Serverless for vector storage.
- Sync your new data source to crawl and ingest the relevant website content.
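If you prefer to script these steps rather than use the console, the web-crawler data source configuration can be sketched as below. This is an illustrative assumption of the `bedrock-agent` API shape, not code from the post; the seed URL and knowledge base ID are placeholders.

```python
def web_crawler_data_source(seed_url):
    """Build a dataSourceConfiguration dict for a web-crawler data source."""
    return {
        "type": "WEB",
        "webConfiguration": {
            "sourceConfiguration": {
                "urlConfiguration": {"seedUrls": [{"url": seed_url}]}
            },
            "crawlerConfiguration": {
                # Restrict the crawl to the seed URL's host.
                "scope": "HOST_ONLY",
            },
        },
    }

# The configuration would then be passed to, e.g.:
#   boto3.client("bedrock-agent").create_data_source(
#       knowledgeBaseId="KB0000000000",  # placeholder
#       name="website-crawler",
#       dataSourceConfiguration=web_crawler_data_source("https://example.com"),
#   )
```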
Configuring Internal Data Sources
Next, set up your S3 bucket to pull internal documentation:
- Choose to add a new data source in your Knowledge Base settings, selecting Amazon S3.
- Sync the bucket containing your operational documents for data indexing.
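The S3 data source and its sync can similarly be scripted. The sketch below assumes the `bedrock-agent` API shape; the bucket ARN, prefix, and IDs are hypothetical placeholders.

```python
def s3_data_source(bucket_arn, prefixes=None):
    """Build a dataSourceConfiguration dict for an S3 data source,
    optionally limited to specific key prefixes."""
    cfg = {"type": "S3", "s3Configuration": {"bucketArn": bucket_arn}}
    if prefixes:
        cfg["s3Configuration"]["inclusionPrefixes"] = prefixes
    return cfg

# Indexing is then triggered the same way the console's Sync button is:
#   client = boto3.client("bedrock-agent")
#   client.start_ingestion_job(
#       knowledgeBaseId="KB0000000000",  # placeholder
#       dataSourceId="DSINTERNAL456",    # placeholder
#   )
```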
Deploying the Infrastructure
To deploy the necessary infrastructure, download the code from the repository and update the parameters.json file with your knowledge base and data source IDs. Follow the README instructions for a step-by-step deployment process through AWS CDK.
Interacting with the AI Assistant
Once deployed, users can interact with the AI assistant to resolve queries efficiently. Internal users, for example, can access both operational guides and public documents, while external users are limited to publicly available content.
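One way the application could derive this access tier is from the Cognito groups on the signed-in user's token. The group name below is an assumption for illustration, not a value from the post.

```python
def user_type_from_groups(cognito_groups):
    """Map a Cognito user's group memberships to an access tier: internal
    users see operational and public content; everyone else defaults to
    external and sees only public content."""
    return "internal" if "internal-users" in cognito_groups else "external"
```

Defaulting to "external" is the safe failure mode: a user with missing or unexpected group claims is never granted internal access by accident.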
Clean Up Resources
If you decide to discontinue the solution, simply follow the provided steps to remove the associated resources through the Amazon Bedrock console.
Conclusion
This post illustrated how to create an AI-powered website assistant capable of delivering swift answers and enhancing customer support experiences. By leveraging Amazon Bedrock, you can streamline knowledge retrieval and automate support processes effectively.
For those keen to dive deeper into generative AI and LLMs, consider exploring the hands-on course "Generative AI with LLMs." This three-week program, tailored for data scientists and engineers, offers foundational insights into building generative AI applications using Amazon Bedrock.
About the Authors
Shashank Jain is a Cloud Application Architect at AWS, focusing on generative AI solutions and application architecture.
Jeff Li is a Senior Cloud Application Architect passionate about modernizing applications for business innovation.
Ranjith Kurumbaru Kandiyil is a Data and AI/ML Architect at AWS, specializing in implementing AI/ML solutions for complex business challenges.
With this robust AI-powered solution, the future of efficient customer support is just a few clicks away!