
AI Chatbots Can Deliver Phishing, Malware, or Risky Code

Summary

AI chatbots often provide inaccurate or harmful information, including phishing URLs and fake download pages, as threat actors exploit their vulnerabilities.
– Users increasingly rely on AI chatbots for information due to search engines’ declining relevance and the prevalence of SEO poisoning and malvertising.
– A study found that 34% of domains suggested by AI chatbots for brand login pages were incorrect or potentially harmful, with 29% being unregistered or inactive.
– AI-generated answers remove traditional trust indicators like verified domains, making users more vulnerable to malicious links they may blindly trust.
– Threat actors are actively poisoning AI chatbot results by creating seemingly legitimate pages and content designed to rank high in AI responses, including fake APIs and malware distribution.

AI chatbots are increasingly being manipulated to spread dangerous links, malware, and deceptive code, posing serious cybersecurity risks. While these tools promise quick answers, their responses often contain inaccuracies—whether due to flawed data or deliberate exploitation by cybercriminals.

Search engines have long struggled with SEO poisoning and malvertising, where fake sites mimic legitimate ones to steal credentials or infect devices. Now, as users turn to AI chatbots for faster answers, attackers are adapting their tactics. Researchers recently tested chatbots powered by advanced language models, asking them for the login pages of major brands. Alarmingly, 34% of the suggested domains were either unrelated to the brand or completely unregistered, leaving them ripe for phishing scams.

The problem extends beyond phishing. Malware distribution is finding new life through AI-generated responses, with attackers crafting convincing fake tutorials, cracked software blogs, and forum posts designed to trick both users and AI systems. These malicious pages are optimized not just for human eyes but also for AI algorithms, making them harder to detect.

Even AI-powered coding assistants aren’t safe. Hackers publish deceptive code snippets—like fake APIs—and bolster them with seemingly credible documentation, GitHub repositories, and social media promotions. Despite safeguards, these tactics exploit the trust users place in AI-generated answers, stripping away traditional warning signs like verified domains or search previews.
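One practical defense against fake or typosquatted packages suggested by coding assistants is to check every dependency name against a list your team has already vetted before installing it. A minimal sketch in Python (the package names below are hypothetical examples, not an endorsement of any specific allowlist):

```python
# Minimal sketch: reject dependency names that are not on a vetted allowlist
# before installing anything an AI assistant suggests.
# The allowlist contents here are hypothetical examples.
VETTED_PACKAGES = {"requests", "numpy", "flask"}

def is_vetted(package_name: str) -> bool:
    """Return True only if the exact, normalized name is on the allowlist."""
    normalized = package_name.strip().lower().replace("_", "-")
    return normalized in VETTED_PACKAGES

# A lookalike name ("requestz") and an unknown plugin are filtered out.
suggestions = ["requests", "requestz", "flask-authx"]
safe = [p for p in suggestions if is_vetted(p)]
print(safe)  # ['requests']
```

An exact-match allowlist is deliberately strict: lookalike names one character away from a real package, a common typosquatting trick, fail the check instead of slipping through.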

As AI becomes a primary source of information, users must remain skeptical of chatbot responses, verifying links and downloads independently. Meanwhile, developers and cybersecurity teams face the challenge of strengthening AI defenses against these evolving threats. The race is on to prevent AI tools from becoming unwitting accomplices in large-scale cyberattacks.
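Independent verification can be as simple as checking that a chatbot-suggested URL actually belongs to the brand's official domain rather than a lookalike. A minimal sketch, assuming a hypothetical allowlist of known-good domains:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of a brand's official domains.
OFFICIAL = {"example-bank.com"}

def is_official(url: str) -> bool:
    """Accept a URL only if its hostname is an official domain or a true subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL)

print(is_official("https://login.example-bank.com/signin"))       # True
print(is_official("https://example-bank.com.evil.example/login")) # False
```

Note that the check matches the full hostname, not a substring: a phishing domain like `example-bank.com.evil.example` merely *starts* with the brand name and is correctly rejected, which is exactly the trick that fools users who skim the address bar.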

For real-time updates on emerging threats, consider subscribing to cybersecurity alerts—staying informed is the first line of defense.

(Source: Help Net Security)


