
Insider Threats: Australia’s Top Cybersecurity Risk

Summary

– Traditional cybersecurity has focused on external threats, but insider threats from within organizations are now considered more significant.
– Most Australian organizations expect insider threats to increase, yet many lack adequate detection tools like user behavior analytics.
– Generative AI is creating new vulnerabilities by enabling more convincing phishing attacks and increasing the impact of insider threats.
– While AI is being used to combat insider risks, there is a gap between executive perception and the actual maturity of these defenses.
– Addressing insider threats requires cultural changes, transparent monitoring, and separating AI agent behavior from user activity to detect compromises.

For years, cybersecurity strategies have largely focused on defending against external attackers trying to break into corporate networks. A significant shift is underway, however, as insider threats are increasingly viewed as the primary cybersecurity risk for Australian organisations. This change in perspective moves the danger from outside the walls to within them, fundamentally altering how security must be approached.

New research indicates that the traditional external-first security mindset has become dangerously obsolete. A rapidly growing proportion of cyber threats no longer originate from anonymous hackers overseas but emerge from inside the organisation itself. These insiders, whether acting with malicious intent or simply making a careless mistake, are now considered by many businesses to represent a more serious risk than any external actor. The data underscores this concern, revealing that a striking 84% of Australian respondents anticipate the number of insider threats will increase over the coming year. Even more telling, 58% now rank insiders as a greater danger than external attackers. This represents a fundamental transformation of the threat environment, raising critical questions about whether companies are properly equipped to defend against a problem that begins behind their own firewall.

Despite the escalating risk, many Australian organisations remain ill-prepared to identify and halt insider activity before it results in damage. The study discovered that a mere 34% of Australian businesses currently utilise user and entity behaviour analytics (UEBA). This technology is specifically engineered to detect anomalies in user behaviour that could signal a compromised insider. Most continue to depend on more conventional security measures like identity and access management, data loss prevention, and endpoint detection and response tools. While these technologies offer value, they are primarily designed to counter known external threats rather than the subtle behavioural shifts of a trusted employee. Without the capability to detect unusual internal actions early, many companies are effectively operating blind to insider threats. Often, by the time data is stolen or systems are sabotaged, the damage is already irreversible.
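At its core, the UEBA approach described above compares each user's current activity against their own historical baseline and flags statistically unusual deviations. The following is a minimal illustrative sketch of that idea, using a simple z-score over hypothetical daily file-download counts; real UEBA products use far richer models, and the names and numbers here are invented for illustration.

```python
from statistics import mean, stdev

def anomaly_score(history, current):
    """Z-score of today's activity against the user's own baseline."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return 0.0
    return (current - mu) / sigma

def is_anomalous(history, current, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from normal."""
    return abs(anomaly_score(history, current)) >= threshold

# Hypothetical daily file-download counts for one employee
baseline = [12, 9, 15, 11, 10, 13, 8, 14, 12, 11]
print(is_anomalous(baseline, 13))   # an ordinary day -> False
print(is_anomalous(baseline, 240))  # sudden mass download -> True
```

The point is that the trigger is relative to the individual's own behaviour, not a fixed signature, which is precisely what conventional external-facing tools are not designed to do.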

The rapid adoption of generative AI tools in the workplace further complicates the situation, creating a double-edged sword for security teams. While AI promises to enhance defensive capabilities, it is simultaneously creating new vulnerabilities that can be exploited. According to the findings, 76% of Australian organisations report unauthorised use of GenAI by their employees, and an overwhelming 93% believe AI is amplifying the potential impact of insider attacks. Cybercriminals are already leveraging this technology to craft highly convincing and personalised phishing emails. This dramatically increases the chance that an employee will be deceived into clicking a malicious link or handing over their login credentials. Once attackers gain access, they can infiltrate AI-powered systems that are deeply integrated into business workflows, often operating with little human oversight. If an outsider successfully takes control of these systems via a phishing attack, the consequence can be severe operational disruption or massive data loss.

On the defensive side, many organisations are starting to leverage AI to combat insider risk. The research shows that 94% of Australian businesses are now using some form of AI as part of their insider threat strategy. These tools can help identify anomalous behaviour, automate response actions, and significantly reduce the time between detecting a threat and remediating it. A notable perception gap exists within corporate hierarchies regarding the maturity of these AI capabilities. While 55% of global executives believe AI tools are fully deployed for this purpose, that confidence plummets to 40% among analysts and just 37% among managers. This disparity suggests that in many instances, AI-based defences are less comprehensive on the ground than senior leadership assumes. Compounding the challenge is the fact that governance frameworks for AI security tools are still in their infancy. Without clear policies and continuous monitoring, the AI deployments meant to protect the business can themselves introduce new risks.

The research also identified the top organisational barriers companies face when tackling insider threats. Privacy resistance was cited as the number one challenge by 20% of global respondents, highlighting the ongoing tension between effective employee monitoring and respecting individual privacy rights. This was followed by the inherent difficulty of understanding user intent and behaviour (17%) and a general lack of visibility into internal activity (16%). These hurdles indicate that a solution requires both a cultural and a technical response. Organisations must develop security programs that are transparent, proportionate, and respectful of employee privacy, while still providing the necessary visibility to detect threats at an early stage.

The collective findings emphasise the urgent need for organisations to take decisive action to close the insider threat gap. This will require broadening their detection capabilities, embedding behavioural analytics into their security infrastructure, and making more strategic use of AI tools while simultaneously strengthening the governance frameworks that guide their use. Critically, AI agent behaviour needs to be baselined and separated from human user activity. This separation allows organisations to more easily spot subtle deviations that may indicate a system compromise, enabling them to neutralise threats before they can escalate into a full-blown incident. Insider threats have firmly established themselves as one of the most pressing cybersecurity challenges for Australian businesses. Effectively addressing them will demand not just new technologies, but a fundamental shift in mindset, from treating security as a perimeter problem to recognising it as a pervasive issue of trust.
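The recommendation above, baselining AI agent behaviour separately from human user activity, can be sketched as keeping per-identity histories keyed by identity type, so an agent's normal high-volume activity never inflates a human baseline (or vice versa). The class and the activity figures below are hypothetical, purely to illustrate the separation; production systems would model many more signals than a single hourly count.

```python
from collections import defaultdict
from statistics import mean, stdev

class BaselineTracker:
    """Tracks activity baselines per (identity_type, identity_id),
    so agent behaviour is baselined apart from human behaviour."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.history = defaultdict(list)

    def record(self, identity_type, identity_id, value):
        self.history[(identity_type, identity_id)].append(value)

    def deviates(self, identity_type, identity_id, value):
        hist = self.history[(identity_type, identity_id)]
        if len(hist) < 2:
            return False  # not enough data to form a baseline yet
        mu, sigma = mean(hist), stdev(hist)
        if sigma == 0:
            return value != mu
        return abs(value - mu) / sigma >= self.threshold

tracker = BaselineTracker()
# A reporting agent that normally makes ~500 API calls per hour
for calls in [480, 510, 495, 505, 490]:
    tracker.record("agent", "report-bot", calls)
# A human analyst who normally makes ~20 calls per hour
for calls in [18, 22, 20, 25, 19]:
    tracker.record("human", "j.smith", calls)

print(tracker.deviates("agent", "report-bot", 520))  # within agent norms -> False
print(tracker.deviates("human", "j.smith", 520))     # flagged -> True
```

With a single shared baseline, 520 calls per hour would look unremarkable; separated baselines make the same figure an immediate red flag when it comes from a human account, which is the kind of subtle deviation the article argues organisations need to catch early.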

(Source: ITWire Australia)
