Prioritizing Generative AI Projects with Responsible AI Practices
In recent years, businesses have increasingly recognized the need for a robust project prioritization methodology tailored to generative AI. As innovative use cases proliferate, companies must weigh each project's business value against cost, effort, and potential risks, including concerns unique to generative AI such as hallucination and regulatory uncertainty. The following discussion outlines how organizations can incorporate responsible AI practices into their prioritization strategies to address these issues systematically.
Responsible AI Overview
The AWS Well-Architected Framework defines responsible AI as the practice of designing, developing, and deploying AI solutions to maximize benefits while minimizing risks. The framework identifies eight dimensions of responsible AI: fairness, explainability, privacy and security, safety, controllability, veracity and robustness, governance, and transparency. By systematically assessing these dimensions at crucial points in a project’s lifecycle, teams can implement risk mitigation strategies and continuously monitor them.
Incorporating responsible AI practices from the project prioritization stage onward gives a clearer picture of the risks involved and the mitigation effort required. This proactive approach reduces the chance of substantial rework later in the development cycle, which can delay project timelines, erode customer trust, and jeopardize regulatory compliance.
Generative AI Prioritization
While various prioritization methods exist, one effective approach is the Weighted Shortest Job First (WSJF) method, popularized by the Scaled Agile Framework (SAFe). The WSJF formula is simple:
Priority = Cost of Delay / Job Size
Cost of Delay
The cost of delay consists of three components:
- Direct Value: Business impact, such as revenue or cost savings.
- Timeliness: Urgency of project delivery—what value is lost by delaying it?
- Adjacent Opportunities: Potential new opportunities that arise from project completion.
Job Size
Job size encompasses the effort required to deliver the project, including both development cost and any necessary infrastructure. This is also where responsible AI comes into play: the job size should account for the risks identified during the initial assessment and the development effort needed to mitigate them.
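To make the calculation concrete, here is a minimal sketch in Python. The function name and the 1-to-5 scoring scale are illustrative assumptions, not part of SAFe itself:

```python
def wsjf_priority(direct_value: int, timeliness: int,
                  adjacent_opportunities: int, job_size: int) -> float:
    """Weighted Shortest Job First: cost of delay divided by job size.

    Each argument is a relative score (for example, on a 1-5 scale).
    Cost of delay is the sum of its three components.
    """
    cost_of_delay = direct_value + timeliness + adjacent_opportunities
    return cost_of_delay / job_size
```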
Example Scenario
Let’s examine two hypothetical generative AI projects:
- Project 1: Automating product descriptions using a large language model (LLM).
- Project 2: Creating visual brand assets using a text-to-image model.
First Pass Prioritization
Using the WSJF method, we assign each factor a score from 1 to 5. The cost of delay is the sum of direct value, timeliness, and adjacent opportunities, and the final score is that sum divided by the job size.
| Project | Direct Value | Timeliness | Adjacent Opportunities | Job Size | Score |
|---|---|---|---|---|---|
| Project 1 | 3 | 2 | 2 | 2 | 3.5 |
| Project 2 | 3 | 4 | 3 | 2 | 5 |
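Plugging the table values into the sketch above reproduces both scores:

```python
first_pass = {
    "Project 1": dict(direct_value=3, timeliness=2,
                      adjacent_opportunities=2, job_size=2),
    "Project 2": dict(direct_value=3, timeliness=4,
                      adjacent_opportunities=3, job_size=2),
}

for name, factors in first_pass.items():
    print(name, wsjf_priority(**factors))
# Project 1 3.5
# Project 2 5.0
```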
At first glance, Project 2 appears more compelling, largely because manually creating visual assets is more time-consuming than writing product descriptions, which raises its timeliness and adjacent opportunity scores.
Risk Assessment
Next, we conduct a responsible AI risk assessment. Each project is evaluated across the dimensions described earlier, highlighting specific risks, their severity (L denotes low), and suggested mitigations.
| Project | Dimension | Severity Level | Mitigation |
|---|---|---|---|
| Project 1 | Fairness | L | Implement guardrails |
| Project 1 | Privacy | L | Data governance |
| Project 2 | Fairness | L | Implement checks |
| Project 2 | Safety | L | Guardrails for content |
| … | … | … | … |
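One illustrative way to carry these findings into the second pass is to record them as structured data and derive additional job-size effort from severity. The record type and the severity-to-effort mapping below are placeholder assumptions, and since the table above is truncated, the numbers are not meant to reproduce the second-pass job sizes exactly:

```python
from dataclasses import dataclass

@dataclass
class RiskFinding:
    project: str
    dimension: str   # one of the eight responsible AI dimensions
    severity: str    # "L", "M", or "H"
    mitigation: str

findings = [
    RiskFinding("Project 1", "Fairness", "L", "Implement guardrails"),
    RiskFinding("Project 1", "Privacy", "L", "Data governance"),
    RiskFinding("Project 2", "Fairness", "L", "Implement checks"),
    RiskFinding("Project 2", "Safety", "L", "Guardrails for content"),
]

# Placeholder assumption: each severity level adds a fixed effort increment.
EFFORT_POINTS = {"L": 0.5, "M": 1.0, "H": 2.0}

def mitigation_effort(project: str) -> float:
    """Total extra effort implied by a project's risk findings."""
    return sum(EFFORT_POINTS[f.severity] for f in findings
               if f.project == project)
```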
Second Pass Prioritization
After assessing the risks, we reevaluate each project's job size to include the identified mitigation effort:
| Project | Job Size | Score |
|---|---|---|
| Project 1 | 3 | 2.3 |
| Project 2 | 5 | 2 |
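Rerunning the first-pass sketch with the revised job sizes shows the ranking flip:

```python
second_pass = {
    "Project 1": dict(direct_value=3, timeliness=2,
                      adjacent_opportunities=2, job_size=3),
    "Project 2": dict(direct_value=3, timeliness=4,
                      adjacent_opportunities=3, job_size=5),
}

for name, factors in second_pass.items():
    print(name, round(wsjf_priority(**factors), 1))
# Project 1 2.3
# Project 2 2.0
```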
With mitigation effort folded into the job size, Project 1 now scores higher. This outcome matches intuition: an erroneous or off-brand image has a broader impact, and demands more mitigation work, than a poorly written product description.
Conclusion
Integrating responsible AI practices into the prioritization of generative AI projects can change the outcome by surfacing mitigation work that is not apparent up front. As organizations continue adopting generative AI, developing a responsible AI policy becomes vital: it sharpens decision-making, safeguards against future risks, and fosters trust and compliance with evolving regulations. For those interested in moving responsible AI from theory into practice, further resources are available online.
About the Author
Randy DeFauw is a Sr. Principal Solutions Architect at AWS with over 20 years of technology experience. He has a rich background in autonomous vehicles and has worked with clients ranging from startups to Fortune 50 companies. Randy holds an MSEE and an MBA, actively contributes to K-12 STEM education initiatives, and shares insights at conferences and in industry publications.