Report: AI-Driven Insider Risk Is a “Critical Business Threat”

▼ Summary
– Insider cybersecurity threats from malicious or negligent employees are rising and are now considered a critical business threat.
– A significant driver of this increased risk is the mishandling or abuse of AI tools by employees in the workplace.
– Over the past year, 42% of organizations reported more threats from malicious insiders, and an equal percentage saw a rise in incidents due to employee negligence.
– Attackers exploit this insider negligence or malice to gain access, and security leaders now expect an average of six insider-driven threats per month.
– AI also amplifies the threat, as attackers use it to craft better phishing emails and malicious insiders use it to exfiltrate data at scale.

A new cybersecurity report reveals that the danger posed by insider threats has escalated dramatically, now representing a critical business threat for organizations worldwide. This surge in risk is increasingly linked to how employees interact with and misuse artificial intelligence tools in the workplace. Security leaders are growing more anxious as large language models and other AI productivity applications create new vulnerabilities that can be exploited from within.
The data indicates a sharp rise in incidents stemming from both deliberate malice and simple carelessness. Over the past year, 42% of organizations have reported an increase in threats from malicious insiders. These are individuals intent on harming their employer by stealing, manipulating, or destroying sensitive data. An identical percentage noted a rise in security breaches due to employee negligence, where easily avoidable mistakes like using weak passwords, transferring files via insecure personal cloud accounts, or clicking phishing links lead to serious incidents.
Cybercriminals are keen to capitalize on this human vulnerability, whether accidental or intentional. The report notes that IT and cybersecurity leaders now anticipate facing an average of six insider-driven threats every month. This trend underscores a fundamental shift in the security landscape: attackers increasingly view employees as a primary vector for bypassing traditional perimeter defenses entirely.
The integration of AI tools into daily work is a significant driver of this expanded risk. Employees might mishandle these powerful systems, inadvertently exposing data or creating security gaps. More alarmingly, malicious insiders can actively weaponize AI to search for and exfiltrate files on a massive scale. Simultaneously, external attackers are deploying AI to craft hyper-realistic and effective phishing campaigns, further increasing the likelihood that an employee will make a costly mistake.
Security strategies must evolve to address this human-centric challenge. As one expert noted, the ease with which AI can enable large-scale data theft means that security protocols must meet users at the point of risk. This involves implementing smarter controls and continuous education tailored to the new realities of an AI-augmented workplace. The threat from within is no longer a secondary concern but a central pillar of modern cybersecurity planning that demands immediate and focused attention.
(Source: InfoSecurity Magazine)
