AI Chatbots Often Misdirect Login Links, Netcraft Finds

Summary
– AI chatbots frequently direct users to phishing sites when asked for login URLs to major services, with 34% of suggested links being inactive, unrelated, or dangerous.
– A security test found that only 66% of AI-generated hostnames led to the correct brand domains, while 29% were unregistered or inactive and 5% pointed to unrelated businesses.
– Smaller brands, like regional banks, face higher misrepresentation rates due to limited training data, increasing risks of financial and reputational damage.
– Cybercriminals are exploiting AI by creating phishing content tailored for ingestion by language models, such as fake APIs and documentation.
– Users should verify AI-provided login links via traditional search or direct URL entry, as AI outputs are not yet reliable enough for secure navigation.
AI chatbots frequently provide incorrect or dangerous login links, according to new cybersecurity research. A recent investigation reveals these tools often direct users to phishing sites or inactive domains when asked for access to major online services. The findings raise serious concerns about relying on AI for critical web navigation tasks.
Security analysts tested several AI models by requesting login URLs for 50 well-known brands. Shockingly, over a third of the suggested links were either broken, unrelated, or outright malicious. The study used straightforward prompts mimicking real user behavior, such as asking for help finding official login pages after losing a bookmark.
Key findings from the research show that 29% of generated hostnames were unregistered or inactive, making them prime targets for cybercriminals. One alarming example involved an AI-powered search engine suggesting a fraudulent Wells Fargo login page hosted on Google Sites. The fake site closely imitated the bank’s branding, potentially tricking unsuspecting users. Without proper context or warnings, AI responses can inadvertently amplify phishing risks.
Smaller financial institutions faced even higher error rates, likely because their websites appear less frequently in AI training data. This leads to more incorrect “hallucinated” responses, exposing both customers and businesses to financial and reputational harm.
Cybercriminals are exploiting this vulnerability by creating content specifically designed to fool AI systems. Researchers identified thousands of phishing pages disguised as legitimate documentation, along with fake APIs and developer resources. Some malicious links even made their way into public code repositories after being recommended by AI coding assistants.
Traditional security measures like defensive domain registration struggle against this threat, as AI can generate countless variations of misleading URLs. Experts recommend proactive monitoring and AI-specific threat detection to combat the issue effectively.
For businesses, maintaining accurate representation in AI responses is becoming crucial as more users turn to chatbots instead of search engines. Consumers should remain cautious: manually typing known URLs or using traditional search remains safer than blindly trusting AI-generated links.
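As a rough illustration of the kind of check this advice implies, the sketch below shows how a developer might refuse to surface an AI-suggested login URL unless its hostname matches a hand-maintained allowlist of official domains. The allowlist contents, function name, and example URLs are assumptions for illustration, not details from the Netcraft research.

```python
# Minimal sketch (illustrative only): validate an AI-suggested login URL
# against an allowlist of domains verified out of band.
from urllib.parse import urlparse

# Hypothetical allowlist of official domains, maintained by hand.
OFFICIAL_DOMAINS = {
    "wellsfargo.com",
    "accounts.google.com",
}

def is_allowed(url: str) -> bool:
    """Return True only if the URL uses HTTPS and its hostname is an
    allowlisted domain or a subdomain of one (dot-boundary match)."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    host = parsed.hostname.lower().rstrip(".")
    return any(
        host == domain or host.endswith("." + domain)
        for domain in OFFICIAL_DOMAINS
    )

# A lookalike page on a free hosting platform fails the check.
print(is_allowed("https://www.wellsfargo.com/login"))           # True
print(is_allowed("https://sites.google.com/view/wells-fargo"))  # False
```

An exact-match allowlist like this is deliberately strict: it rejects the countless URL variations an AI might invent, at the cost of needing manual updates when official domains change.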
(Source: Search Engine Journal)