Beware: Over 30 Malicious Chrome Extensions Masquerade as AI Assistants, Compromising User Data for 260,000+ Users

In a rapidly evolving digital landscape, users are increasingly seeking tools that enhance their productivity, especially those promising to harness the power of artificial intelligence. Unfortunately, this burgeoning interest has created fertile ground for cybercriminals, who have unleashed a wave of malicious Chrome extensions posing as helpful AI assistants. With over 30 of these extensions installed by at least 260,000 unsuspecting users, the threat is both alarming and ongoing.

The Deceptive Nature of Malicious Extensions

Recent findings from LayerX Security have unearthed a sophisticated campaign dubbed AiFrame. These rogue extensions impersonate popular AI platforms like Claude, ChatGPT, Gemini, and Grok, or claim to be generic tools designed to assist with document summarization, message writing, and Gmail management. Beneath their friendly façades lies a dark reality: they are all designed to steal users’ API keys, email messages, and other sensitive personal data.

Despite numerous reports and even the removal of earlier versions, many of these extensions remain available on the Chrome Web Store. This is particularly concerning because many are re-uploaded under new IDs after being removed, thereby keeping the malicious cycle alive. For instance, the AI Sidebar (gghdfkafnhfpaooiolhncejnlgglhkhe) emerged after the prior version, Gemini AI Sidebar (fppbiomdkfbhgjjdmojlogeceejinadg), was taken down, showcasing the adaptability of these scams.

Stealthy Operations Underneath

Among the most notorious is the extension named AI Assistant (nlhpidbjmmffhoogcennoiopekbiglbp), which notably acquired a "Featured" badge on the Chrome Web Store despite its malicious intent. With approximately 60,000 users, this extension directs users to a remote domain (claude.tapnetic.pro) and employs an iframe overlay that simulates a legitimate interface. This crafty maneuver allows the operators to load remote content without needing any updates from the Chrome Web Store, essentially creating a hidden channel to harvest data.
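The overlay trick described above can be sketched in a few lines. This is an illustrative mock-up of the general technique, not code from the actual extension: a content script builds a full-viewport iframe pointing at an attacker-controlled page, so whatever the remote server ships is what the user sees. The URL path and function name here are hypothetical.

```javascript
// Illustrative sketch of the remote-iframe-overlay technique (NOT the
// actual malware). A content script injects a full-page iframe whose
// contents live on a server the operator controls, so the "interface"
// can change at any time without a Chrome Web Store update.

// Describe the overlay as plain data so the logic can run outside a browser.
// The remote URL is a hypothetical stand-in for an attacker-controlled page.
function buildOverlay(remoteUrl) {
  return {
    tag: "iframe",
    src: remoteUrl,
    // Fixed, full-viewport, maximum z-index: the frame covers the real page.
    style:
      "position:fixed;inset:0;width:100%;height:100%;border:0;z-index:2147483647;",
  };
}

// In a real content script this spec would be applied to the live page:
//   const spec = buildOverlay("https://attacker.example/panel");
//   const el = document.createElement(spec.tag);
//   el.src = spec.src;
//   el.style.cssText = spec.style;
//   document.body.appendChild(el);
```

Because the visible UI is remote content, store reviewers auditing the packaged extension never see the data-harvesting behavior it can serve later.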

LayerX Security researcher Natalie Zargarov highlighted the insidious nature of this extension. It not only queries the active tab but also extracts readable content via Mozilla’s Readability library. All gathered information—including API keys and user credentials—is sent back to remote servers operated by the criminals.

Targeting Gmail and Beyond

Interestingly, many of the malicious extensions target Gmail directly, using a shared codebase for Gmail integration. This enables them to read visible email content by accessing the DOM (Document Object Model) and extracting text. It’s not just email messages; drafts and ongoing compositions are also harvested, making it easier for thieves to gain a comprehensive view of users’ communications.
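To make the DOM-scraping idea concrete, here is a minimal, hypothetical sketch. A real extension would walk Gmail's live DOM (for example via `document.querySelectorAll` on message containers); since that requires a browser, a tiny tag-stripper stands in below so the core idea — rendered HTML in, readable text out — can run anywhere. None of this is taken from the malware itself.

```javascript
// Illustrative sketch: turning rendered email HTML into plain readable
// text, the same end result a content script gets by walking the DOM.
// A simple tag-stripper stands in for live DOM access here.
function extractVisibleText(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "") // drop script blocks entirely
    .replace(/<[^>]+>/g, " ")                   // strip remaining tags
    .replace(/\s+/g, " ")                       // collapse whitespace
    .trim();
}

// In a content script, the equivalent is roughly:
//   document.querySelectorAll("[role='listitem']")  // Gmail message rows
// followed by reading each node's innerText and forwarding it off-page.
```

Once text extraction is this cheap, drafts and half-written replies are captured as easily as received mail, which is exactly what the researchers observed.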

Zargarov emphasized that this campaign effectively exploits the conversational nature of AI interactions. Users, conditioned to provide detailed information, inadvertently feed these malicious extensions with more data than they realize. The strategy creates a nearly invisible man-in-the-middle attack, intercepting sensitive information before it reaches legitimate services.

A Call to Action

As users continue to look for AI assistants to aid their daily tasks, caution is paramount. It’s crucial to research and verify the legitimacy of an extension before installing it. LayerX has compiled a list of all 32 malicious extension IDs, so cross-check any AI extension against that list before adding it to your browser.

Despite the serious implications of this threat, Google has yet to respond decisively to inquiries regarding these malicious extensions still lurking within the Chrome Web Store. As the lines blur between helpful AI tools and malicious software, staying informed is our best defense.

Conclusion

The rise of faux AI assistants serves as a stark reminder that not everything appearing shiny and beneficial is safe. By being vigilant and discerning about the tools we choose, we can better safeguard our personal information and keep our online experiences secure. So next time you’re tempted to install the latest AI extension, remember: a little caution can go a long way.
