Why We Can’t Quit Bad Authentication Habits

Summary
– Many organizations lack adequate cybersecurity training, with 40% of employees never receiving it and outdated policies leaving them unprepared for current risks.
– Weak authentication methods like passwords and SMS codes remain common despite being vulnerable to phishing and social engineering attacks.
– Employees often mix personal and work activities on devices, creating security gaps as personal accounts frequently lack multi-factor authentication.
– AI tools are increasing account risks by enabling realistic phishing and fake content, making it harder for users to distinguish legitimate messages.
– Younger workers are more open to newer authentication technologies, suggesting future adoption may improve, but a significant gap between awareness and action persists.
A recent study reveals that many companies continue to depend on weak authentication methods, while employees’ personal security practices introduce further vulnerabilities. This reliance on outdated systems persists despite the ready availability of more robust security solutions, creating significant and preventable risks for organizations of all sizes.
A major contributing factor is a widespread gap in training and security policies. A striking 40% of employees report they have never received any cybersecurity training. Even among staff who have undergone training, the information is frequently obsolete, as many organizations take months to revise their security protocols. This lag leaves the workforce unprepared for emerging threats, causing them to revert to familiar, often insecure, habits that cybercriminals are adept at exploiting.
Traditional authentication tools still dominate the corporate landscape. Passwords remain the universal standard, and SMS-based one-time codes are also extensively used. Both are highly vulnerable to phishing, credential theft, and social engineering attacks. Nevertheless, a large number of employees continue to perceive these methods as secure. This perception heavily influences organizational policy: when staff believe a system is strong, companies are often slow to phase it out, even when demonstrably superior alternatives are available. More secure options do exist, such as device-bound passkeys, which keep the private key on a single device (for example, a hardware security key) and resist phishing because authentication is cryptographically tied to the legitimate site, but they have not yet seen widespread adoption.
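To make the passkey mechanism concrete, the following is a minimal, illustrative browser-side sketch using the standard WebAuthn API (navigator.credentials.create). The relying-party name, domain, user details, and challenge handling are placeholder assumptions for illustration only and are not drawn from the study; in a real deployment the challenge is issued by the server, which then verifies and stores the returned public key.

```typescript
// Illustrative sketch: registering a device-bound passkey in the browser via WebAuthn.
// All identifiers and values below are placeholders, not taken from the study.

async function registerPasskey(): Promise<void> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    // Normally a random challenge generated and tracked by the server.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Corp", id: "example.com" }, // the legitimate site (relying party)
    user: {
      id: crypto.getRandomValues(new Uint8Array(16)), // opaque user handle
      name: "jane.doe@example.com",
      displayName: "Jane Doe",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: {
      authenticatorAttachment: "cross-platform", // e.g. a hardware security key
      residentKey: "required",                   // discoverable credential kept on the authenticator
      userVerification: "required",
    },
  };

  // The private key is generated on and never leaves the authenticator; only the
  // public key and metadata are returned for the server to store.
  const credential = await navigator.credentials.create({ publicKey });
  console.log("Created credential:", credential);
}
```

Because the browser binds this ceremony to the site’s origin, a look-alike phishing domain cannot trigger a usable authentication response, which is what gives passkeys their resistance to phishing.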
The line between personal and professional digital habits is increasingly blurred. It is common for staff to access personal email on company-issued laptops or to check work accounts on their personal smartphones. Frequently, multi-factor authentication is not activated on these personal accounts, creating an unprotected entry point that attackers can leverage. Many employees admit to avoiding MFA, viewing it as an inconvenient or complicated process, while others are simply unaware of the security features available to them.
Because security behaviors at home and work are often identical, a breach of personal credentials can easily spill over into the professional sphere. An attacker who compromises a personal account can use that foothold to target corporate assets without ever needing to penetrate the company’s main network defenses directly.
Data indicates that adoption rates for modern authentication may improve with time, as younger workers who are more accustomed to new technologies enter the workforce. For the present, however, a substantial chasm remains between security awareness and the implementation of safe practices.
The threat environment is being further complicated by the rise of artificial intelligence. A majority of survey participants now feel their accounts are at increased risk due to AI-powered tools. Cybercriminals can use this technology to craft highly convincing phishing emails, fabricate fraudulent websites, and even generate audio or video that convincingly impersonates a trusted coworker. This dramatically lowers the skill and time investment required to execute widespread attacks.
The study also evaluated individuals’ capacity to identify text produced by AI. Most respondents found the task challenging, frequently mistaking human-written content for machine-generated text and vice versa. This demonstrates that users can no longer depend on their intuition alone to judge the legitimacy of a digital communication.
(Source: Help Net Security)





