Implementing Amazon Bedrock Guardrails: Ensuring Safe and Compliant Generative AI in Healthcare Insurance Applications
Navigating Generative AI Safeguards with Amazon Bedrock Guardrails in Healthcare
As organizations increasingly adopt generative AI, they face significant challenges in ensuring their applications uphold their intended safety standards. While foundation models (FMs) deliver powerful capabilities, they also present unique risks, ranging from harmful content generation to prompt injection attacks and model hallucinations. This post explores how Amazon Bedrock Guardrails addresses these challenges, particularly in the context of a healthcare insurance use case.
The Role of Amazon Bedrock Guardrails
Amazon Bedrock Guardrails has helped organizations such as MAPFRE, KONE, Fiserv, PagerDuty, and Aha! build secure generative AI applications. Just as traditional applications require multi-layered security, Amazon Bedrock Guardrails applies safeguards at the model, prompt, and application levels, blocking up to 88% more undesirable and harmful multimodal content. It also filters over 75% of hallucinated responses in use cases such as Retrieval Augmented Generation (RAG) and summarization, and it is the first safeguard to use Automated Reasoning to help prevent factual errors.
In this article, we will demonstrate how to implement safeguards using Amazon Bedrock Guardrails within a healthcare insurance setting.
Solution Overview
Consider an AI assistant designed to streamline interactions between policyholders and healthcare insurance firms. This AI-powered solution enables policyholders to check coverage details, submit claims, find in-network providers, and understand their benefits through natural conversations. By providing round-the-clock support, the assistant handles routine inquiries, freeing human agents to focus on more complex cases.
To ensure the secure and compliant operation of this AI assistant, we leverage Amazon Bedrock Guardrails to establish a crucial safety framework. This not only protects users but also builds trust in the AI system, encouraging greater adoption and enhancing the overall customer experience.
Implementing Safeguards Using Amazon Bedrock Guardrails
Prerequisites
Before we dive into the configuration, ensure you have access to the console with the appropriate permissions for Amazon Bedrock. If you haven’t set up Amazon Bedrock yet, refer to the Getting Started guide in the Amazon Bedrock console.
Creating a Guardrail
- Navigate to the Amazon Bedrock console and choose "Guardrails" in the navigation pane.
- Choose "Create guardrail."
- In the "Provide guardrail details" section, enter a name (e.g., MyHealthCareGuardrail), an optional description, and a message to display if the guardrail blocks a user prompt, then choose "Next."
Configuring Multimodal Content Filters
Security remains paramount in building AI applications. Amazon Bedrock Guardrails can now detect and filter both text and image content across six protection categories: Hate, Insults, Sexual, Violence, Misconduct, and Prompt Attacks.
- For maximum protection, especially in sensitive sectors like healthcare, set the filter strength to "High" across all categories for both text and image content.
- Enable prompt attack protection to guard against attempts to override system instructions, and use input tagging so system prompts are classified correctly, then choose "Next."
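In code, this configuration maps to the contentPolicyConfig parameter of create_guardrail. The sketch below assumes "High" strength on text and image for every category; note that prompt attack detection is evaluated on inputs only, so its output strength must be NONE:

```python
# High-strength filters for text and image across the five content
# categories; passed as contentPolicyConfig=... in create_guardrail.
content_policy = {
    "filtersConfig": [
        {
            "type": category,
            "inputStrength": "HIGH",
            "outputStrength": "HIGH",
            "inputModalities": ["TEXT", "IMAGE"],
            "outputModalities": ["TEXT", "IMAGE"],
        }
        for category in ["HATE", "INSULTS", "SEXUAL", "VIOLENCE", "MISCONDUCT"]
    ]
}

# Prompt attack detection applies to inputs only, so the output
# strength must be set to NONE.
content_policy["filtersConfig"].append(
    {
        "type": "PROMPT_ATTACK",
        "inputStrength": "HIGH",
        "outputStrength": "NONE",
        "inputModalities": ["TEXT", "IMAGE"],
    }
)
```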
Denied Topics
In healthcare applications, clear boundaries are essential, particularly around medical advice.
- In the "Add denied topics" section, create a new topic called "Disease Diagnosis" and add example phrases representing diagnostic queries, then choose "Confirm."
- This configuration ensures our application remains focused on insurance-related queries while avoiding discussions of medical diagnostics, blocking requests like "Do I have diabetes?" or "What’s causing my headache?"
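Expressed via the API, the denied topic becomes a topicPolicyConfig entry; the definition text below is an illustrative assumption:

```python
# Denied topic: blocks diagnostic queries while leaving insurance
# questions untouched; passed as topicPolicyConfig=... in create_guardrail.
topic_policy = {
    "topicsConfig": [
        {
            "name": "Disease Diagnosis",
            "definition": (
                "Requests to diagnose, confirm, or explain medical "
                "conditions or symptoms, which require a licensed clinician."
            ),
            "examples": [
                "Do I have diabetes?",
                "What's causing my headache?",
            ],
            "type": "DENY",
        }
    ]
}
```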
Word Filters
To maintain a professional discourse and relevance in responses:
- In the "Add word filters" section, input custom words or phrases to filter. For instance, include terms like "stocks," "investment strategies," and "financial performance," then choose "Next."
Sensitive Information Filters
Configure filters to block email addresses, phone numbers, and other personally identifiable information (PII), ensuring compliance with regulations like HIPAA.
- Set filters for blocking email addresses and phone numbers, then choose "Next."
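Via the API, this maps to sensitiveInformationPolicyConfig. A minimal sketch blocking the two PII types mentioned above (ANONYMIZE is an alternative action that masks values instead of blocking the request):

```python
# PII handling: block emails and phone numbers outright; passed as
# sensitiveInformationPolicyConfig=... in create_guardrail.
sensitive_info_policy = {
    "piiEntitiesConfig": [
        {"type": "EMAIL", "action": "BLOCK"},
        {"type": "PHONE", "action": "BLOCK"},
    ]
}
```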
Contextual Grounding Checks
Use contextual grounding and relevance checks to validate model responses, detect hallucinations, and ensure alignment with reference sources.
- Set thresholds for contextual grounding and relevance checks (we suggest 0.7), then choose "Next."
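Programmatically, both checks live in contextualGroundingPolicyConfig; the 0.7 thresholds mirror the suggestion above:

```python
# Responses scoring below the threshold on grounding or relevance are
# blocked; passed as contextualGroundingPolicyConfig=... in create_guardrail.
grounding_policy = {
    "filtersConfig": [
        {"type": "GROUNDING", "threshold": 0.7},
        {"type": "RELEVANCE", "threshold": 0.7},
    ]
}
```

To combine everything, pass these dictionaries as the contentPolicyConfig, topicPolicyConfig, wordPolicyConfig, sensitiveInformationPolicyConfig, and contextualGroundingPolicyConfig arguments to the create_guardrail call from the first sketch.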
Automated Reasoning Checks
Automated Reasoning checks validate model responses against a formally defined policy, helping to detect hallucinations and confirm the factual accuracy of answers.
- Create an Automated Reasoning policy by choosing "Automated Reasoning" under Safeguards.
- Upload a relevant document defining the correct solution space (e.g., an HR guideline or insurance coverage policy document).
- Once defined, attach the policy to your guardrail.
Testing Your Guardrail
Now it’s time to test your configured guardrail within your healthcare insurance application.
- On the Amazon Bedrock console, open the guardrail details page and select the model you want to test with.
- Enter prompts to see how your configured guardrails intervene. Test denied topics, word filters, PII detection, and finally, the Automated Reasoning checks to ensure everything functions as intended.
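You can also test outside the console by attaching the guardrail to a model invocation. The following sketch uses the Converse API with the working (DRAFT) guardrail version; the model ID, region, and placeholder guardrail ID are assumptions (in practice, use the value returned by the earlier create_guardrail sketch):

```python
import boto3

# Runtime client for model invocation (region is an assumption).
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

guardrail_id = "your-guardrail-id"  # from the create_guardrail sketch

response = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model
    messages=[{"role": "user", "content": [{"text": "Do I have diabetes?"}]}],
    guardrailConfig={
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": "DRAFT",  # test before publishing a version
        "trace": "enabled",  # include the guardrail's assessment in the response
    },
)
# A blocked prompt returns the guardrail's configured message instead of
# a model-generated answer.
print(response["output"]["message"]["content"][0]["text"])
```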
Independent API Usage
Amazon Bedrock Guardrails can also assess prompts and model responses outside of Amazon Bedrock itself using the ApplyGuardrail API.
- Test the configuration using the ApplyGuardrail API to validate user inputs without invoking a hosted model.
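A minimal sketch of that call, assuming the guardrail ID from the earlier example and the draft version:

```python
import boto3

# Region is an assumption; use your own.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Evaluate a user input on its own; no foundation model is invoked.
result = runtime.apply_guardrail(
    guardrailIdentifier="your-guardrail-id",  # from the create_guardrail sketch
    guardrailVersion="DRAFT",
    source="INPUT",  # use "OUTPUT" to evaluate a model response instead
    content=[{"text": {"text": "What's causing my headache?"}}],
)
print(result["action"])  # "GUARDRAIL_INTERVENED" or "NONE"
```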
Conclusion
In this blog post, we explored how Amazon Bedrock Guardrails can effectively block harmful and undesirable multimodal content in a healthcare insurance call center scenario. We guided you through setting up and testing various guardrails and highlighted the flexibility of the ApplyGuardrail API for broader model application.
Ready to enhance safety and compliance in your AI applications? Learn more about Amazon Bedrock Guardrails and how to implement mandatory guardrails for model inference calls, helping to consistently enforce security measures across AI interactions.
Authors:
- Divya Muralidharan, Solutions Architect at AWS, with a passion for using technology to drive growth and value.
- Rachna Chadha, Principal Technologist at AWS, dedicated to ethical AI use, particularly within the healthcare domain.