AI browsers can be tricked into stealing your data — here's how to protect yourself

Hooded cybercriminal sitting with laptop surround by hooks
(Image credit: Getty Images)

AI-powered browsers such as Perplexity's Comet and ChatGPT Atlas are becoming more common, though their rise has sparked debate in the tech community about their security implications. One of the biggest threats is called prompt injection, where attackers manipulate the AI by feeding it hidden malicious instructions through specially crafted website content or URL links.
In other words, someone secretly adds or manipulates text in a prompt to trick an AI into doing something it shouldn’t — like ignoring instructions, leaking data or acting against its intended behavior.

These attacks can trick your AI browser into displaying phishing sites, stealing personal information you've entered or giving you dangerous recommendations. The issue is you might not even realize it's happening. While developers are working on fixes, there's steps you can take to protect yourself. Here's how to stay safe.

1. Never share sensitive information

Treat your AI browser like you would a public computer — don't input anything you wouldn't want someone else to see. This includes credit card numbers, Social Security numbers, passwords, bank account details or any personal information that could be used for identity theft.

Prompt injection attacks can potentially capture data you enter into AI chat windows or forms, so keeping sensitive information out of the AI browser entirely eliminates that risk. If you need to make a purchase or log into a sensitive account, use a traditional browser instead. The convenience of having AI help you isn't worth the risk of having your financial information stolen.

This is especially important because prompt injection vulnerabilities could lead to serious financial consequences if attackers gain access to payment details or account credentials.

2. Keep your AI browser and devices updated

AI browsers and the AI systems powering them need regular security updates just like any other software. When updates become available, install them immediately rather than postponing them. These updates often include patches for newly discovered vulnerabilities, including defenses against prompt injection techniques.

Delaying updates leaves your browser exposed to exploits that developers have already fixed. This also applies to your operating system and any other software on your devices — attackers often target outdated systems with known vulnerabilities.

Set your AI browser to update automatically if that option is available, and check periodically to make sure you're running the latest version.

3. Question everything the AI tells you

Just because your AI assistant provided an answer or recommendation doesn't mean it's accurate or safe. AI systems can be manipulated through prompt injection to give you false information, direct you to phishing sites or provide malicious links disguised as legitimate resources.

Before clicking any link an AI browser suggests, verify it looks legitimate. Be skeptical of urgent requests, unusual recommendations, or anything that seems off. And alway cross-reference important information the AI gives you with trusted sources rather than blindly trusting AI responses.

4. Watch for AI-powered phishing attempts

If you're using AI to manage your email, create documents or handle other tasks on your behalf, understand that compromised AI can also be used for phishing. Attackers can use prompt injection to make your AI assistant display fake contact information, malicious phone numbers, or fraudulent links while making everything appear legitimate.

Always manually verify phone numbers, email addresses, and website URLs before using them, even if your AI assistant provided them. If the AI suggests contacting "customer service" at a specific number or visiting a "support page" at a particular URL, look up that information independently through official channels.

Scammers are evolving their techniques to exploit AI systems, so the same skepticism you apply to traditional phishing emails should extend to AI-generated content.

5. Use multi-factor authentication on all accounts

Enable multi-factor authentication (MFA) on every account that offers it, especially email, banking, and social media accounts. MFA adds an extra security layer beyond just your password — even if a prompt injection attack compromises your credentials, attackers still can't access your accounts without the second authentication factor.

This could be a code sent to your phone, an authenticator app, or a biometric verification like fingerprint or face recognition. Think of MFA as your backup defense when other security measures fail.

Also, consider using the best VPN when browsing with AI-enabled browsers, which adds another layer of protection by encrypting your internet traffic and hiding your actual IP address from potential attackers.


Google

Follow Tom's Guide on Google News and add us as a preferred source to get our up-to-date news, analysis, and reviews in your feeds. Make sure to click the Follow button!


More from Tom's Guide

TOPICS
Kaycee Hill
How-to Editor

Kaycee is Tom's Guide's How-To Editor, known for tutorials that skip the fluff and get straight to what works. She writes across AI, homes, phones, and everything in between — because life doesn't stick to categories and neither should good advice. With years of experience in tech and content creation, she's built her reputation on turning complicated subjects into straightforward solutions. Kaycee is also an award-winning poet and co-editor at Fox and Star Books. Her debut collection is published by Bloodaxe, with a second book in the works.

You must confirm your public display name before commenting

Please logout and then login again, you will then be prompted to enter your display name.