Streamlining Medical Record Analysis: How Care Access Transformed Operations with Amazon Bedrock’s Prompt Caching
This post is co-written with Michelle Tat, Christopher Penrose, Rasmus Buchmann, and Daniel Hansen from Care Access.
Healthcare organizations are increasingly challenged with efficiently processing vast quantities of medical records. With stringent security and compliance requirements, it can be daunting to maintain operational efficiency while ensuring patient data privacy. Care Access, a leader in global health services, faced these challenges while scaling its health screening program, but turned them into opportunities for growth through innovative solutions like prompt caching with Amazon Bedrock.
The Challenge
Every day, Care Access processes 300 to 500 medical records, each requiring deep analysis. The conventional approach used multiple prompts per record, reprocessing the same data for every unique question. As the number of new participants sharing their medical records surged, a scalable solution became imperative.
Care Access: A Leader in Health Screening
Care Access is dedicated to improving global health by providing clinical research services and health screenings directly to communities facing barriers to care. They serve nearly 15,000 new participants monthly, offering advanced testing and connecting individuals to relevant health resources, including clinical trials. However, the rapid success brought logistical challenges that demanded a new approach to medical record processing.
Enter Amazon Bedrock Prompt Caching
Care Access implemented a Large Language Model (LLM) solution via Amazon Bedrock, which dramatically enhanced their record analysis process. The breakthrough came with the introduction of prompt caching—a technology that enables the storage and reuse of static medical record content, allowing only the dynamic analysis questions to vary across requests. This capability was pivotal in reducing operational costs while enhancing processing speeds.
What is Prompt Caching?
Prompt caching lets the static portions of a prompt (here, the medical record content) be stored and reused, avoiding repeated reprocessing of the same information. In Care Access's case, only the specific questions asked about a record require fresh computation, dramatically streamlining the analysis process.
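As an illustration of this static/dynamic split, the sketch below builds a message in the Amazon Bedrock Converse API format, placing a cache checkpoint after the record content so only the question varies per request. The helper name and prompt wording are illustrative, not Care Access's actual code.

```python
# Minimal sketch: split a prompt into a cached static prefix (the medical
# record) and a dynamic suffix (the question), using the Bedrock Converse
# API message format. The cachePoint content block marks where the
# reusable prefix ends.

def build_cached_messages(record_text: str, question: str) -> list:
    """Build a Converse-API message list: static record first, a cache
    checkpoint after it, then the per-request question."""
    return [
        {
            "role": "user",
            "content": [
                {"text": f"Medical record:\n{record_text}"},
                {"cachePoint": {"type": "default"}},  # everything above is cached
                {"text": question},
            ],
        }
    ]
```

A `bedrock-runtime` client would pass this list as the `messages` argument to `converse`; on subsequent calls with the same prefix, the cached portion is reused rather than reprocessed.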
How It Works
- Medical Record Retrieval: Electronic health records (EHRs) are retrieved from an Amazon S3 bucket and prepared for efficient processing.
- Prompt Cache Management: Static content is cached as a prefix, while dynamic analysis questions vary by query.
- LLM Inference: Each record is analyzed multiple times with various questions using the cached content, optimizing time and resources.
- Output Processing: The results are compiled into manageable JSON outputs for further analysis, ultimately matching participants to clinical trials.
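The steps above can be sketched as follows. This is a simplified illustration, not Care Access's production code: the bucket name, model ID, and prompt wording are assumptions, and the Bedrock call is factored out so the caching logic is visible on its own.

```python
from typing import Callable, Dict, List

def fetch_record(bucket: str, key: str) -> str:
    """Step 1: retrieve a prepared EHR document from an Amazon S3 bucket."""
    import boto3  # deferred import so the pure logic below runs without AWS access
    body = boto3.client("s3").get_object(Bucket=bucket, Key=key)["Body"]
    return body.read().decode("utf-8")

def analyze_record(record_text: str, questions: List[str],
                   ask: Callable[[list], str]) -> Dict[str, str]:
    """Steps 2-4: cache the static record as a prompt prefix, vary only the
    question per request, and compile the answers into a JSON-friendly dict."""
    answers = {}
    for question in questions:
        messages = [{
            "role": "user",
            "content": [
                {"text": f"Medical record:\n{record_text}"},
                {"cachePoint": {"type": "default"}},  # static prefix cached here
                {"text": question},
            ],
        }]
        answers[question] = ask(messages)
    return answers

def bedrock_ask(messages: list,
                model_id: str = "anthropic.claude-3-5-sonnet-20240620-v1:0") -> str:
    """Step 3: one Converse-API inference call (model ID is an assumption)."""
    import boto3
    resp = boto3.client("bedrock-runtime").converse(modelId=model_id, messages=messages)
    return resp["output"]["message"]["content"][0]["text"]
```

In use, `analyze_record(fetch_record("my-bucket", "record.txt"), questions, bedrock_ask)` runs every question against one cached copy of the record; only the short question suffix is reprocessed per call.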
Data Security and Privacy
Compliance with stringent privacy standards is a cornerstone of Care Access’s operations. Drawing from AWS solutions, they ensure:
- HIPAA Compliance: Adherence to high privacy and security standards for handling personal health information (PHI).
- Data Minimization: Only essential PHI is processed, with unnecessary data discarded.
- Audit Trails: Amazon CloudWatch provides detailed audit logs for all data access.
Impact of Prompt Caching
The implementation of prompt caching led to transformative advantages for Care Access:
- Cost Optimization: A staggering 86% reduction in Amazon Bedrock costs.
- Performance Improvements: Processing time per record was reduced by 66%, saving multiple hours daily.
- Operational Benefits: Enhanced token reuse reduced consumption, leading to faster response times while maintaining context integrity.
Josh Brandoff, Head of Applied Machine Learning & Analytics at Care Access, said, “AWS was a fantastic partner as we launched our first generation of LLM-powered solutions. Amazon Bedrock quickly integrated with our existing architecture, allowing us to scale efficiently.”
Best Practices and Recommendations
Through their experience, Care Access identified critical implementation strategies for success:
- Token Threshold Strategy: Automate caching when records exceed a minimum size to optimize processing.
- Default Caching Approach: Enable caching by default for varying prompt sizes.
- Cache Optimization: Structure prompts to separate static and dynamic content effectively.
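The token threshold strategy could be sketched as below. Both the characters-per-token estimate and the 1,024-token default minimum are assumptions for illustration; models supported by Bedrock prompt caching impose their own minimum cacheable prefix sizes, which should be checked in the service documentation.

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token for English text.
    A real implementation might use the model's own tokenizer instead."""
    return len(text) // 4

def should_insert_cache_point(record_text: str,
                              min_cacheable_tokens: int = 1024) -> bool:
    """Only mark a record's content for caching when it clears the model's
    minimum cacheable prefix size; below that, a cache point adds no benefit."""
    return estimate_tokens(record_text) >= min_cacheable_tokens
```

Gating the cache point this way keeps short records on the ordinary request path while long records, where reprocessing is most expensive, are cached automatically.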
Conclusion
Care Access has successfully transformed its medical record processing challenges into significant organizational strengths through strategic implementation of prompt caching with Amazon Bedrock. This innovative technological adjustment allows them to maintain high standards of compliance and privacy while enabling rapid program growth.
For organizations facing similar challenges in healthcare data processing, this case illustrates how the right technological strategy can not only address immediate operational needs but also support long-term mission objectives. Care Access continues to connect communities with critical health resources and clinical research opportunities, paving the way for a healthier future.
For more information about implementing prompt caching on Amazon Bedrock, visit Prompt caching for faster model inference.
About Care Access
Care Access is dedicated to improving global health by bridging gaps in access to healthcare and empowering communities with advanced health screenings and research opportunities.
Visit www.CareAccess.com for more details.
About the Authors
- Deepthi Paruchuri: Senior Solutions Architect at AWS, specializing in GenAI and Analytics.
- Nishanth Mudkey: Specialist Solutions Architect for Data, AI/ML at AWS.
- Pijush Chatterjee: GenAI/ML Specialist at AWS with extensive data and analytics experience.
- Michelle Tat, Christopher Penrose, Rasmus Buchmann, and Daniel Hansen: Experts at Care Access focused on optimizing machine learning and LLM technologies.
Together, they showcase the intersection of technology and healthcare innovation.