In Brief: Privacy Concerns with AI Chatbots
A recent study by IMDEA Networks has revealed more than 13 third-party trackers embedded in major AI chatbots, including ChatGPT, Claude, Grok, and Perplexity. Grok stands out as particularly concerning because guest conversations are public by default, and TikTok's tracker even obtained verbatim message content. Standard privacy measures, like rejecting cookies, may not fully protect users, highlighting significant risks to conversation privacy and data sharing.
Surprising Truths: AI Chatbots Are Sharing Your Data
When interacting with AI chatbots, many users operate under the assumption that their conversations remain private. Unfortunately, recent research from IMDEA Networks Institute reveals a troubling reality: your chats with AI assistants like ChatGPT, Claude, Grok, and Perplexity are not as private as you might believe.
The Study’s Findings: A Web of Trackers
Released on May 4 as part of the LeakyLM project, the study identified 13+ third-party trackers embedded within these platforms, implicating companies such as Meta, Google, and TikTok. Alarmingly, none of these trackers is disclosed in user agreements or explained in plain language, raising significant privacy concerns.
Who’s Listening In?
The leaked information includes conversation URLs, which may seem harmless at first glance. However, many platforms make these URLs publicly accessible by default, so anyone with the link can read the conversation without logging in. By transmitting these URLs to ad networks, the platforms hand companies like Meta and Google everything they need to retrieve your chats.
The study articulated this risk well: “Leaking a URL is not just metadata—it can be equivalent to leaking the conversation itself.”
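To make the mechanism concrete, here is a minimal sketch of how a third-party tracking pixel typically works: the embedded script reports the full URL of the page the user is viewing as a query parameter. All names here (the tracker host, the `ev` and `dl` parameters, the example chat URL) are illustrative assumptions, not endpoints from the study.

```python
from urllib.parse import urlencode

# Hypothetical shared-conversation URL, readable by anyone without a login.
chat_url = "https://example-chatbot.test/share/abc123"

# A tracking pixel commonly fires a request like this on page load.
# Parameter names ("ev" for event, "dl" for document location) mirror
# common pixel conventions but are assumptions for illustration.
beacon = "https://tracker.example/collect?" + urlencode({
    "ev": "PageView",
    "dl": chat_url,  # the full URL of the conversation page leaks here
})

print(beacon)
```

If the conversation URL is public by default, receiving this beacon is enough for the tracker's operator to fetch and read the chat itself, which is exactly the equivalence the researchers describe.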
Grok: The Worst Offender
Among the apps scrutinized, Grok, the AI developed by Elon Musk's xAI, exposed user conversations the most. Guest interactions are public by default, allowing anyone to read them without logging in. Shockingly, TikTok's tracking system even extracted verbatim conversation content via Open Graph metadata, effectively copying the text of your chats.
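Open Graph metadata is the set of `<meta property="og:...">` tags a page exposes so that link previews can be generated. If a shared chat page puts conversation text into those tags, anyone who fetches the page can read the chat with a trivial parser. The sketch below shows the idea against a hypothetical page; the HTML is invented for illustration, not an actual Grok response.

```python
from html.parser import HTMLParser

# Hypothetical <head> of a publicly shared chat page (illustrative only).
PAGE = """
<html><head>
<meta property="og:title" content="Shared conversation" />
<meta property="og:description" content="User: here is my medical history..." />
</head><body></body></html>
"""

class OGParser(HTMLParser):
    """Collects Open Graph <meta property="og:*"> tags into a dict."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            prop = d.get("property", "")
            if prop.startswith("og:"):
                self.og[prop] = d.get("content", "")

parser = OGParser()
parser.feed(PAGE)
print(parser.og.get("og:description"))
```

Any crawler or tracker that fetches such a page gets the conversation text for free, with no authentication and no special access.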
In contrast, while Claude (from Anthropic) and ChatGPT (from OpenAI) offer better access controls (users must choose to make chats public), they still transmit URLs and identifying data to third-party platforms. For Claude, this data flows through Anthropic's own servers, making the tracking immune to ad blockers, which only intercept requests sent directly to known third-party domains.
What You Can Do
While the study did not conclusively prove that companies like Meta or Google are reading your chats, the possibility exists, and the necessary infrastructure is already in place. Researchers call the offered privacy controls misleading, suggesting that they may imply stronger protections than are actually enforced.
Here are some immediate steps users can take to protect their privacy:
- Grok: Adjust settings to restrict conversation visibility and revoke shared links.
- Claude: Reject non-essential cookies to disable the Meta Pixel.
- Perplexity: Set conversations to Private.
- ChatGPT: Opt out of cookies where possible, although this won’t completely eliminate exposure.
Looking Ahead
The researchers plan to expand their investigation to other major AI tools such as Meta AI, Microsoft Copilot, and Google Gemini. These are of special interest because their makers are both AI providers and advertising companies, which complicates the threat landscape.
The findings have been submitted to relevant Data Protection Authorities, with notifications sent to xAI. As of now, no responses have been received from the implicated companies.
In a world increasingly reliant on AI technologies, understanding how your data is handled is crucial. As users, we must remain vigilant and informed about our digital privacy, ensuring that our conversations—both with machines and with each other—remain as private as we expect them to be.