The Imperative for a Multilateral Synthetic Media Disclosure Agreement: Addressing the Dangers of Generative AI in Information Ecosystems
In the past decade, few technological advancements have transformed society as profoundly as generative AI. This powerful tool has altered how we work, communicate, and produce information, ushering in an era of convenience and heightened productivity. With these advancements, however, come significant challenges, particularly the structural vulnerabilities associated with synthetic media. The ability of AI systems to replicate authentic human communication at scale and with striking realism poses a serious threat to the integrity of our information landscape.
The Rise of Synthetic Media and Potential Dystopia
In early 2019, Chinese scholar Li Bicheng envisioned a troubling future in which AI systems could create realistic personas that simulate human activity to manipulate political opinion and further agendas (Irving, 2024). Today, we stand on the precipice of making this dystopian vision a reality. The capabilities of generative AI have advanced to the point where distinguishing authentic from synthetic information is increasingly difficult, contributing to a phenomenon often described as "truth decay."
The Information Crisis
The danger posed by synthetic media stems not from its mere existence but from its unregulated circulation. As modern AI systems become adept at generating realistic content, anyone, from state organizations to private actors, can produce and distribute synthetic material. Unfortunately, current countermeasures such as warning labels often fall short in effectiveness, partly because of inconsistencies driven by corporate priorities and political pressures (Martel & Rand, 2023; Bateman & Jackson, 2024).
While initiatives like the European Commission’s Code of Practice on Disinformation have improved transparency, these legal frameworks are often limited by jurisdictional boundaries and cannot fully address the global nature of synthetic media circulation (European Commission, 2022).
The Risks of AI-Driven Disinformation
The security risks presented by AI-generated disinformation are profound. The erosion of informational trust can undermine political and social stability, which is essential for any functioning democracy. The ongoing Russo-Ukrainian war illustrates the dangers of synthetic media—fabricated videos and false diplomatic communications have circulated widely, leaving policymakers, militaries, and civilians vulnerable to psychological manipulation and misinformation (Kuźnicka-Błaszkowska & Kostyuk, 2025).
Beyond military conflicts, misleading synthetic media can distort public policy and democratic processes, highlighting the urgent need for a regulatory framework addressing these risks.
Policy Proposal: A Synthetic Media Disclosure Agreement
To combat the dangers of undisclosed synthetic media, we need a groundbreaking multilateral agreement—a Synthetic Media Disclosure Agreement. This agreement would require mandatory disclosure of synthetic content and impose accountability on individuals who misuse it.
Key Pillars of the Agreement
- Mandatory Labeling: The first pillar mandates clear labeling for all synthetic content intended for public distribution. This requirement would help alleviate ambiguity and inform users about the media’s synthetic origin, much like public health warning labels.
- Individual Accountability: The second pillar would establish legal frameworks in individual countries that hold accountable those who use synthetic media for deception. This is crucial in contexts where misleading information can have immediate and severe repercussions, such as elections or emergency announcements.
- Enforcement Mechanisms: The agreement would also outline enforcement strategies similar to those seen in nuclear nonproliferation agreements. By employing diplomatic pressure and sanctions, the global community can encourage states to comply with the regulations and mitigate the risks associated with synthetic media.
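To make the first pillar concrete, consider what a machine-readable disclosure label might contain. The sketch below is purely illustrative: the field names and the `make_disclosure_label` helper are assumptions of this essay, not part of any existing standard, though industry efforts such as the C2PA content-credentials specification pursue similar provenance metadata.

```python
import json
from datetime import datetime, timezone

def make_disclosure_label(generator: str, content_type: str,
                          fully_synthetic: bool) -> dict:
    """Build a hypothetical machine-readable synthetic-media disclosure.

    All field names are illustrative assumptions, not an existing standard.
    """
    return {
        "synthetic": True,
        # False would indicate AI-edited content; True means fully AI-generated.
        "fully_synthetic": fully_synthetic,
        "generator": generator,          # e.g. the model or tool used
        "content_type": content_type,    # "image", "video", "audio", or "text"
        "disclosed_at": datetime.now(timezone.utc).isoformat(),
    }

label = make_disclosure_label("example-model-v1", "video", fully_synthetic=True)
print(json.dumps(label, indent=2))
```

A standardized record of this kind, attached to content at the point of publication, is what would allow platforms in different jurisdictions to surface a consistent disclosure notice to users.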
Feasibility and Effectiveness
Establishing a Synthetic Media Disclosure Agreement is not only feasible but essential. The EU’s Code of Practice demonstrates that transparency measures can be implemented on a large scale, while existing international security frameworks show that cooperation among nations is possible (European Commission, 2022; NATO, 2024).
The goal here isn’t to ban synthetic media or suppress creativity. Rather, it’s to create norms that protect society from deception while allowing for the legitimate use of AI technologies.
Conclusion
Generative AI is reshaping our global information environment. As Li Bicheng's vision warned, the challenge lies not in the existence of synthetic media but in the manipulation and erosion of trust that thrive in its shadows. A Synthetic Media Disclosure Agreement offers a robust way to safeguard our informational landscape and restore public confidence. By mandating transparency and accountability, we can stabilize the global information system, ensuring that society benefits from generative AI responsibly and ethically. Without such measures, the future of our information environment looks increasingly precarious.