The Promise and Pitfalls of Generative AI in Pharmaceutical Software Development
By Ricardo Torres-Rivera, PMP, CEO and President, Xevalics Consulting, LLC
Generative artificial intelligence (AI) tools are revolutionizing software development, enabling individuals to create functional applications through simple natural language prompts. This phenomenon, often dubbed “vibe coding,” empowers professionals across industries, particularly in pharmaceuticals, to sidestep traditional coding barriers and build tailored solutions without formal training. While the potential for increased efficiency is exciting, a crucial distinction remains between software that merely functions and software that meets the stringent quality standards paramount in regulated environments.
The Promise of Generative AI
Imagine a pharmaceutical landscape where quality assurance teams craft their own deviation tracking dashboards, scientists develop specialized data analysis tools for stability studies, and operational staff automate batch record reviews—all without depending solely on an IT department. The allure of generative AI lies in granting these domain experts the ability to innovate independently.
However, the blind spot emerges when we conflate domain expertise with software engineering discipline. Those trained in software development are well-acquainted with defining requirements, version control, testing procedures, and documentation. These best practices are not bureaucratic hurdles; they are fundamental to ensuring quality and reliability, particularly in environments where patient safety and regulatory compliance are at stake.
The Regression to Inspection at the End
For decades, the pharmaceutical industry has prioritized embedding quality into processes over mere inspection post-production. Pioneers like W. Edwards Deming advocated for a culture where quality is integrated from the outset. As Deming articulated, reliance on inspection to achieve quality creates waste and defects that are difficult to rectify later.
Regulatory frameworks have since embraced this philosophy, emphasizing that quality must be planned and embedded rather than merely inspected. The potential regression towards an “inspect-at-the-end” model, facilitated by vibe coding, threatens to undo this progress, as the predominant workflow of generating, testing, and deploying without thorough documentation or validation becomes the norm.
Accountability Challenges in Pharma
Adopting generative AI introduces significant accountability issues. When non-technical personnel create software solutions without predefined requirements or proper validation, the risks multiply. As Philip B. Crosby observed, quality is conformance to requirements; software that conforms to ambiguously defined requirements ultimately conforms to the wrong target.
Concerns about prompt accuracy and control further exacerbate accountability issues. Dependence on generative AI outputs could lead to inconsistencies in critical biopharmaceutical processes where reproducibility is vital for regulatory compliance.
Risks of Uncontrolled Innovation
While democratizing code creation is beneficial, it does not eliminate the necessity of built-in quality assurance. Generative AI introduces risks that traditional software validation approaches may not address, including:
- Prompts as Executable Logic: In a regulated environment, the informal nature of prompts can instigate risks, as small changes in wording can yield significantly different outputs.
- Non-deterministic Outputs: AI models yield probabilistic outputs rather than fixed results, complicating processes that require consistency and traceability.
- Model Drift: AI models are subject to unannounced updates, jeopardizing the reliability and validity of the generated outputs for pharmaceutical processes.
- Context Dependence: Changes in the external context fed into AI systems can lead to shifting conclusions, further compounding regulatory challenges.
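To make the first and third risks above concrete, consider how a team might treat prompts as controlled artifacts rather than informal text. The following is a minimal sketch in Python, not a reference to any real library or regulatory template: all names (`PromptRecord`, `audit_entry`, the model identifier) are hypothetical. The idea is simply that checksumming the prompt and pinning the model version makes wording changes and model drift visible in an audit trail.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class PromptRecord:
    """A controlled record of a prompt sent to a generative model."""
    prompt_text: str
    model_id: str  # pin the exact model version so unannounced updates are detectable
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    @property
    def checksum(self) -> str:
        # A stable hash of the prompt: any wording change yields a new checksum,
        # so "small changes in wording" can no longer pass unnoticed.
        return hashlib.sha256(self.prompt_text.encode("utf-8")).hexdigest()

def audit_entry(record: PromptRecord) -> str:
    """Serialize the record for an append-only audit log."""
    return json.dumps(
        {
            "checksum": record.checksum,
            "model_id": record.model_id,
            "recorded_at": record.recorded_at,
        },
        sort_keys=True,
    )

# Two prompts differing by one word produce different checksums,
# flagging the change for review before outputs are trusted.
a = PromptRecord("Summarize all deviations in batch 42.", "example-model-2024-06")
b = PromptRecord("Summarize critical deviations in batch 42.", "example-model-2024-06")
print(a.checksum != b.checksum)  # True
```

This does not solve non-determinism or context dependence, but it restores one basic control the article calls for: a traceable, reviewable record of what logic was actually executed.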
Moving Forward
The widening accountability gap highlights the urgent need for the pharmaceutical sector to apply the rigorous quality principles established by industry pioneers to emerging technologies. As the sector accelerates its adoption of generative AI, it must do so with mindful quality assurance practices that anticipate and mitigate inherent risks.
The question remains: will our industry harness the principles of Deming, Crosby, and Juran with the same intensity devoted to drug manufacturing and validation?
In the upcoming Part 2 of this series, we will explore regulatory inquiries about generative AI’s role, redefine quality in the context of AI, and provide actionable strategies to build accountability into the development process. As we embrace innovation, let’s ensure quality remains at the forefront rather than an afterthought.
Transparency Statement: This article was developed with AI tools for research and drafting. The author has reviewed, edited, and takes full responsibility for the content.
Regulatory Disclaimer: This article is for informational purposes only and does not constitute regulatory guidance or legal advice. Organizations are responsible for interpreting regulatory applicability and compliance.
About the Author:
Ricardo Torres-Rivera, PMP, is the CEO and President of Xevalics Consulting, LLC, specializing in computer systems validation, computer software assurance, and quality compliance in regulated life sciences organizations. As a recognized speaker and expert in quality management, he champions the integration of robust quality processes in the face of digital innovation.