

Implementing Amazon Bedrock Guardrails: Ensuring Safe and Compliant Generative AI in Healthcare Insurance Applications


Overview of Amazon Bedrock Guardrails

Challenges in Generative AI and their Solutions

Prerequisites for Setting Up Guardrails

Creating and Configuring Your Guardrail

Multimodal Content Filters for Enhanced Security

Establishing Denied Topics to Maintain Compliance

Implementing Word Filters to Focus Discussions

Filtering Sensitive Information for Patient Privacy

Automated Reasoning Checks to Validate Responses

Testing Your Guardrail for Effectiveness

Using the Independent API for Broader Applications

Conclusion: Enhancing Safety and Compliance in AI Interactions

References and Further Reading

About the Authors

Navigating Generative AI Safeguards with Amazon Bedrock Guardrails in Healthcare

As organizations increasingly adopt generative AI, they face significant challenges in ensuring their applications uphold the safety standards they were designed to meet. While foundation models (FMs) deliver powerful capabilities, they also introduce unique risks, ranging from harmful content generation to prompt injection attacks and model hallucinations. This post explores how Amazon Bedrock Guardrails addresses these challenges, using a healthcare insurance use case as the running example.

The Role of Amazon Bedrock Guardrails

Amazon Bedrock Guardrails has helped organizations such as MAPFRE, KONE, Fiserv, PagerDuty, and Aha! build secure generative AI applications. Just as traditional applications require multi-layered security, Amazon Bedrock Guardrails applies safeguards at the model, prompt, and application levels, blocking up to 88% more undesirable and harmful multimodal content. It also filters over 75% of hallucinated responses in use cases such as Retrieval Augmented Generation (RAG) and summarization, and is the first safeguard to use Automated Reasoning to help prevent factual errors.

In this article, we will demonstrate how to implement safeguards using Amazon Bedrock Guardrails within a healthcare insurance setting.

Solution Overview

Consider an AI assistant designed to streamline interactions between policyholders and a healthcare insurance firm. This AI-powered solution lets policyholders check coverage details, submit claims, find in-network providers, and understand their benefits through natural conversations. By providing round-the-clock support, the assistant handles routine inquiries and frees human agents to focus on more complex cases.

To ensure the secure and compliant operation of this AI assistant, we leverage Amazon Bedrock Guardrails to establish a crucial safety framework. This not only protects users but also builds trust in the AI system, encouraging greater adoption and enhancing the overall customer experience.

Implementing Safeguards Using Amazon Bedrock Guardrails

Prerequisites

Before diving into the configuration, make sure you have access to the AWS Management Console with the appropriate IAM permissions for Amazon Bedrock. If you haven't set up Amazon Bedrock yet, refer to the Getting Started guide in the Amazon Bedrock console.

Creating a Guardrail

  1. Navigate to the Amazon Bedrock console and select "Guardrails" in the navigation pane.
  2. Click "Create guardrail."
  3. In the "Provide guardrail details" section, enter a name (e.g., MyHealthCareGuardrail), an optional description, and a message to display if the guardrail blocks a user prompt, then click "Next."
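If you prefer to script this step rather than use the console, the same guardrail can be created with the AWS SDK. Here is a minimal sketch using boto3; the name, description, and blocked messages are illustrative placeholders, and the policy configurations from the following sections are passed to this same call:

```python
import boto3

# Guardrails are managed through the Amazon Bedrock control-plane client
bedrock = boto3.client("bedrock")

response = bedrock.create_guardrail(
    name="MyHealthCareGuardrail",  # illustrative, matching the console example
    description="Safeguards for a healthcare insurance AI assistant",
    # Messages shown to the user when the guardrail blocks a prompt or a response
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
    # Policy configurations from the sections below plug into this same call, e.g.:
    # contentPolicyConfig=..., topicPolicyConfig=..., wordPolicyConfig=...,
    # sensitiveInformationPolicyConfig=..., contextualGroundingPolicyConfig=...,
)

guardrail_id = response["guardrailId"]  # needed later for testing and ApplyGuardrail
print(f"Created guardrail {guardrail_id}, version {response['version']}")
```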

Configuring Multimodal Content Filters

Security remains paramount in building AI applications. Amazon Bedrock Guardrails can now detect and filter both text and image content across six protection categories: Hate, Insults, Sexual, Violence, Misconduct, and Prompt Attacks.

  1. For maximum protection, especially in sensitive sectors like healthcare, set your confidence thresholds to "High" across all categories for both text and image content.
  2. Enable prompt attack protection to prevent system instruction tampering, and utilize input tagging for accurate classification of system prompts, then choose "Next."
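These console settings correspond to the contentPolicyConfig parameter of create_guardrail. A sketch of the High-strength configuration described above; note that prompt-attack detection applies only to inputs, so its output strength must be NONE, and the image modality fields assume a boto3 release recent enough to include image filtering:

```python
# High-strength text and image filtering for the five harmful-content categories
harm_filters = [
    {
        "type": category,
        "inputStrength": "HIGH",
        "outputStrength": "HIGH",
        "inputModalities": ["TEXT", "IMAGE"],   # image support needs a recent boto3
        "outputModalities": ["TEXT", "IMAGE"],
    }
    for category in ("HATE", "INSULTS", "SEXUAL", "VIOLENCE", "MISCONDUCT")
]

content_policy = {
    "filtersConfig": harm_filters + [
        # Prompt-attack detection applies to inputs only, so outputStrength is NONE
        {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
    ]
}
```

Pass this dictionary as contentPolicyConfig=content_policy in the create_guardrail call shown earlier.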

Denied Topics

In healthcare applications, clear boundaries are essential, particularly around medical advice.

  1. In the "Add denied topics" section, create a new topic called "Disease Diagnosis", add example phrases representing diagnostic queries, then choose "Confirm."

This configuration keeps the application focused on insurance-related queries while avoiding medical diagnostics, blocking requests like "Do I have diabetes?" or "What's causing my headache?"
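The equivalent SDK configuration is the topicPolicyConfig parameter; the definition and example phrases below are illustrative:

```python
topic_policy = {
    "topicsConfig": [
        {
            "name": "Disease Diagnosis",
            # A plain-language definition of what the denied topic covers
            "definition": (
                "Requests to diagnose, confirm, or rule out a medical "
                "condition based on symptoms the user describes."
            ),
            # Sample phrases that help the guardrail recognize the topic
            "examples": [
                "Do I have diabetes?",
                "What's causing my headache?",
                "Is this rash a sign of something serious?",
            ],
            "type": "DENY",  # the only supported topic policy type
        }
    ]
}
```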

Word Filters

To maintain a professional discourse and relevance in responses:

  1. In the "Add word filters" section, input custom words or phrases to filter. For instance, include terms like "stocks," "investment strategies," and "financial performance," then choose "Next."
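Programmatically, the same filter corresponds to wordPolicyConfig; the managed profanity list shown here is an optional extra:

```python
word_policy = {
    # Custom words and phrases to block in both prompts and responses
    "wordsConfig": [
        {"text": "stocks"},
        {"text": "investment strategies"},
        {"text": "financial performance"},
    ],
    # Optionally enable the AWS-managed profanity list as well
    "managedWordListsConfig": [{"type": "PROFANITY"}],
}
```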

Sensitive Information Filters

Configure filters to block email addresses, phone numbers, and other personally identifiable information (PII), ensuring compliance with regulations like HIPAA.

  1. Set filters for blocking email addresses and phone numbers, then choose "Next."
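In the SDK, this maps to sensitiveInformationPolicyConfig. A sketch blocking the two PII types mentioned above; ANONYMIZE is the alternative action, which masks the entity instead of blocking the content:

```python
pii_policy = {
    "piiEntitiesConfig": [
        # BLOCK rejects the content outright; ANONYMIZE would mask the entity
        {"type": "EMAIL", "action": "BLOCK"},
        {"type": "PHONE", "action": "BLOCK"},
    ]
}
```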

Contextual Grounding Checks

Use contextual grounding and relevance checks to validate model responses, detect hallucinations, and ensure alignment with reference sources.

  1. Set thresholds for contextual grounding and relevance checks (we suggest 0.7), then choose "Next."
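The corresponding SDK parameter is contextualGroundingPolicyConfig, with one filter per check:

```python
grounding_policy = {
    "filtersConfig": [
        # Responses scoring below a threshold are blocked
        {"type": "GROUNDING", "threshold": 0.7},   # grounded in the reference source
        {"type": "RELEVANCE", "threshold": 0.7},   # relevant to the user's query
    ]
}
```

Higher thresholds are stricter: they catch more hallucinations but may also block more legitimate responses, so tune them against representative traffic.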

Automated Reasoning Checks

Automated Reasoning checks can help detect hallucinations and validate the accuracy of the model’s responses.

  1. Create an Automated Reasoning policy by choosing "Automated Reasoning" under Safeguards.
  2. Upload a relevant document defining the correct solution space (e.g., an HR guideline or insurance coverage policy document).
  3. Once defined, attach the policy to your guardrail.

Testing Your Guardrail

Now it’s time to test your configured guardrail within your healthcare insurance application.

  1. On the Amazon Bedrock console, go to the guardrail details page and select the model you wish to apply.
  2. Enter prompts to see how your configured guardrails intervene. Test denied topics, word filters, PII detection, and finally, the Automated Reasoning checks to ensure everything functions as intended.
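You can run the same tests programmatically by attaching the guardrail to a model invocation. A minimal sketch using the Converse API in boto3; the model ID is an example, the guardrail ID is a placeholder, and "DRAFT" targets the working draft version:

```python
import boto3

runtime = boto3.client("bedrock-runtime")

response = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model
    messages=[{"role": "user", "content": [{"text": "Do I have diabetes?"}]}],
    guardrailConfig={
        "guardrailIdentifier": "<your-guardrail-id>",  # placeholder
        "guardrailVersion": "DRAFT",   # test against the working draft
        "trace": "enabled",            # include assessment details in the response
    },
)

# A stopReason of "guardrail_intervened" indicates the denied topic fired
print(response["stopReason"])
print(response["output"]["message"]["content"][0]["text"])
```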

Independent API Usage

Amazon Bedrock Guardrails can also assess prompts and model responses outside of Amazon Bedrock itself using the ApplyGuardrail API.

  1. Test the configuration using the ApplyGuardrail API to validate user inputs without invoking a hosted model.
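A sketch of validating a user input this way with boto3; note that ApplyGuardrail runs on the bedrock-runtime client and never invokes a foundation model:

```python
import boto3

runtime = boto3.client("bedrock-runtime")

response = runtime.apply_guardrail(
    guardrailIdentifier="<your-guardrail-id>",  # placeholder
    guardrailVersion="DRAFT",
    source="INPUT",  # assess a user prompt; use "OUTPUT" for model responses
    content=[{"text": {"text": "What's causing my headache?"}}],
)

# GUARDRAIL_INTERVENED means a policy matched; NONE means the input passed
print(response["action"])
```

Because it is decoupled from model inference, the same guardrail can screen inputs and outputs for models hosted outside Amazon Bedrock, including self-managed or third-party models.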

Conclusion

In this blog post, we explored how Amazon Bedrock Guardrails can effectively block harmful and undesirable multimodal content in a healthcare insurance assistant scenario. We guided you through setting up and testing various guardrails and highlighted the flexibility of the ApplyGuardrail API for broader model application.

Ready to enhance safety and compliance in your AI applications? Learn more about Amazon Bedrock Guardrails and how to implement mandatory guardrails for model inference calls, helping to consistently enforce security measures across AI interactions.


Authors:

  • Divya Muralidharan, Solutions Architect at AWS, with a passion for using technology to drive growth and value.
  • Rachna Chadha, Principal Technologist at AWS, dedicated to ethical AI use, particularly within the healthcare domain.
