The Pitfalls of Generative AI in Climate Security Research

Recent studies indicate a remarkable shift in how students and policymakers engage with research: in 2025, over 90% reported turning to generative artificial intelligence (GenAI) to sift through vast quantities of data. In the climate security discourse, which spans more than 1,000 academic articles and an even larger body of grey literature, the appeal of using GenAI for efficient information processing is obvious. The risks, however, far outweigh the short-term advantages.

Climate Security’s Complex Landscape

With climate change increasingly shaping security policy, reliable methods of information retrieval and analysis are essential. Policymakers are calling for more robust climate policies, a shift toward greener military practices, and larger defense budgets, all justified by findings from climate security research. Basing such measures on biased or unreliable GenAI outputs could lead to severe missteps.

As a researcher deeply entrenched in climate security, I embarked on my own examination of GenAI’s performance. The results I found were not just disappointing; they were alarmingly misleading.

The Illusion of Accurate Information

One of the first tests I conducted involved asking Microsoft Copilot to extract information from the reference lists of all 13 articles in the Journal of Peace Research’s 2021 special issue on Security Implications of Climate Change. The initial results were implausible, leading to a revelation: Copilot had been generating simulated reference data instead of sourcing actual information. This lack of transparency is deeply concerning, especially when decisions made based on such results could significantly impact policy.

The situation worsened when I requested an overview of the articles. Only two of the 13 articles listed in Copilot's response actually belonged to the special issue; the rest were fictitious or misattributed. In one case, Copilot attributed an imaginary article to authors who had published legitimate work elsewhere, conflating their real contributions with a non-existent title.

ChatGPT’s Missteps

I didn’t stop at testing Microsoft Copilot; I also turned to ChatGPT for a similar task. While it accurately identified the first article and its authors, the subsequent entries were either unrelated pieces from other journals or entirely fabricated. Alarmingly, ChatGPT even invented a non-existent author, highlighting the risks of reliance on generative AI for fact-based inquiries.

When I probed ChatGPT about one of its fictitious creations, I received an elaborate yet erroneous summary brimming with jargon and high-level buzzwords, but fundamentally devoid of any factual basis.

The Human Element in Research

What does this imply for researchers, students, and policymakers? The confidence exuded by both platforms is misleading. While I was able to quickly identify the hallucinations and inaccuracies due to my familiarity with the climate security literature, less experienced users may easily take these responses at face value. In a field where the stakes are extraordinarily high, a reliance on GenAI as a primary source of information is precarious.

Ultimately, these findings reinforce the essential role of human oversight in research. GenAI can serve as a tool for assistance, but it should never replace rigorous academic inquiry and critical thinking.

A Call for Caution

As we continue to explore the potential of GenAI in the research landscape, the lessons from my experiments serve as a vital reminder: the stakes in climate security and related fields are immense, and the costs of misinformation can be catastrophic. While GenAI offers a semblance of efficiency, the integrity of our research and the responsibility of our policymakers must remain paramount.

As we stride into an era increasingly defined by AI technologies, let us insist on maintaining our critical faculties, ensuring that our decisions are grounded in reliable, verifiable sources—not merely the confident output of generative AI.


This blog post draws on recent assessments conducted by Tobias Ide, an established authority on the intersections of climate change, peace, and conflict. His findings call for a reassessment of AI's role in research and policy formation.
