A new attack method - HashJack - shows how AI browsers can be tricked with nothing more than a URL fragment.
It works like this: drop malicious instructions after the # in a link, and AI copilots like Comet, Copilot for Edge, and Gemini for Chrome might swallow them whole. No need to hack the site. The LLM reads the URL’s tail, pulls in the prompt, and boom - indirect prompt injection.
Phishing, data leaks, fake content, malware - it’s all on the table.
System shift: HashJack calls out a core design flaw in how these AI tools treat client-side URL fragments like trusted input. It’s a quiet exploit, but a loud wake-up call for anyone shipping LLMs into browsers.









