
UK Cybersecurity Experts Question AI’s Effectiveness

Summary

– Offensive cybersecurity experts remain skeptical about AI’s benefits, citing overstated capabilities and ethical concerns, per a government study.
– Cloud adoption has had a greater impact on offensive cybersecurity services than AI, according to the research by Prism Infosec.
– Threat actors primarily use AI for sophisticated social engineering, while experts highlight privacy risks, costs, and security issues as barriers to adoption.
– Experts hope future “more accessible models” will enable AI use in areas like attack surface monitoring and vulnerability research.
– Quantum computing was dismissed as too abstract, with focus shifting to testing high-risk environments like operational technology and automated vehicles.

UK cybersecurity specialists remain unconvinced about AI’s current value in offensive security operations, with many viewing its capabilities as exaggerated and impractical for real-world applications. A recent government-backed study reveals that while cloud computing has transformed service offerings, artificial intelligence has yet to make a meaningful impact in this high-stakes field.

The research, conducted by Prism Infosec for the Department for Science, Innovation and Technology, surveyed red team professionals, experts who simulate cyberattacks to test defenses. Findings indicate widespread skepticism toward AI, with participants criticizing its overhyped potential and raising ethical concerns. Many believe threat actors currently leverage AI primarily for advanced social engineering scams rather than groundbreaking offensive tactics.

Cost barriers, data privacy risks, and vulnerabilities in public AI models were cited as major obstacles to adoption. Despite this, respondents acknowledged future possibilities as more customizable and secure models emerge. These could eventually enhance areas like attack surface analysis and vulnerability prioritization. For now, however, human expertise remains irreplaceable in delivering sophisticated offensive security services.

The report also addressed quantum computing, dismissing it as impractical outside controlled lab environments. Instead, cybersecurity efforts are shifting toward previously untested frontiers, including operational technology systems and autonomous vehicles, spanning drones, maritime assets, and aerial platforms.

Interestingly, some experts argue AI could eventually streamline red team workflows by automating reconnaissance or evasion tactics. Others see potential in simplifying administrative tasks like policy drafting and bug reporting. Yet the consensus remains clear: until AI matures significantly, human expertise will continue to dominate this critical sector.

The findings highlight a broader industry tension between technological promise and real-world applicability. While innovation accelerates, cybersecurity professionals prioritize proven methods over unproven tools, especially when defending against ever-evolving threats.

(Source: InfoSecurity Magazine)
