Growing Trust in Generative AI: Insights from the IDC Data and AI Impact Report
Trust in Generative AI: Insights from Recent Research
Recent research commissioned by SAS reveals a promising trend: global trust in generative AI is on the rise, even as significant gaps in AI safeguard investments persist. The findings are outlined in the IDC Data and AI Impact Report: The Trust Imperative, which surveyed 2,375 respondents in IT and business roles across multiple regions.
A Closer Look at Trust Levels
The report highlights that IT and business leaders currently place greater confidence in generative AI than in other forms of AI. Notably, 48% of respondents reported "complete trust" in generative AI, compared with 33% for agentic AI and just 18% for traditional machine learning-based AI. The pattern is striking given the well-documented explainability challenges that generative AI technologies face.
Kathy Lange, Research Director of the AI and Automation Practice at IDC, pointed to an apparent contradiction: "Forms of AI with human-like interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy." This raises a fundamental question: is the trust in generative AI warranted, and are businesses applying the necessary safeguards and governance practices?
The Surge in Visibility and Application
Generative AI is gaining prominence, with 81% of organizations reporting its use, surpassing traditional AI at 66%. Broader adoption, however, brings new risks around responsible deployment, and the study emphasizes the need for organizations to implement oversight measures as generative AI becomes a staple of everyday operations.
Trust Gaps and Investment Discrepancies
Interestingly, the report reveals a disconnect between high levels of trust in AI and actual investment in safeguards. While 78% of organizations express complete trust in AI, only 40% have allocated resources to formal governance, explainability efforts, or ethical safeguards. Concerns persist around data privacy (62%), transparency and explainability (57%), and ethical usage (56%).
Additionally, trust in emerging technologies like quantum AI is growing, with 26% of respondents expressing complete confidence. However, most applications of quantum AI are still in early development stages.
The ROI Connection
The research underscores that organizations prioritizing trustworthy AI practices, defined as those investing in governance frameworks, responsible AI policies, and relevant technologies, are 60% more likely to report doubling their return on investment on AI projects. Yet only 2% of respondents cited developing an AI governance framework as a top priority.
Respondents were categorized as trustworthy AI "leaders" or "followers." Leaders, those investing in practices that enhance AI reliability, were 1.6 times as likely as followers to report doubled returns on their AI investments.
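The "60% more likely" and "1.6 times as likely" figures describe the same uplift. The minimal sketch below uses a hypothetical baseline rate for followers (the 20% is an assumption for illustration, not a number from the report) to show how the two phrasings line up.

```python
# Illustrative arithmetic only; the 20% baseline is hypothetical, not from the IDC report.
follower_rate = 0.20                     # assumed share of "followers" reporting doubled ROI
leader_rate = follower_rate * 1.6        # "leaders" are 1.6 times as likely
relative_increase = (leader_rate - follower_rate) / follower_rate

print(f"Follower rate: {follower_rate:.0%}")           # 20%
print(f"Leader rate: {leader_rate:.0%}")               # 32%
print(f"Relative increase: {relative_increase:.0%}")   # 60%, i.e. "60% more likely"
```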
Challenges with Data Management
Data quality and governance emerged as critical elements for trust in AI. Respondents identified three primary obstacles to successful AI implementation: weak data infrastructure, with 49% reporting that noncentralized or suboptimal cloud data environments hinder progress; poor governance processes, cited by 44%; and insufficient AI expertise, with 41% noting shortages of skilled specialists.
Access to relevant data sources was deemed the leading challenge for AI deployment by 58% of participants, followed by concerns about data privacy and compliance (49%) and data quality (46%).
Bryan Harris, Chief Technology Officer at SAS, emphasized the importance of trust in AI for societal benefit. He stated, "To achieve this, the AI industry must increase the success rate of implementations, humans must critically review AI results, and leadership must empower the workforce with AI."
Conclusion
This research illustrates a complex landscape where trust in generative AI is rising, but investment in governance and ethical safeguards is lacking. As AI systems become more integrated into critical operations, ensuring the foundational quality of data and strategic governance will be essential for maximizing returns while minimizing risks. The future of AI largely hinges on the industry’s ability to foster trust through responsible practices and transparency.