Shadow AI vs. Managed AI: Kaspersky’s META Region Analysis

Summary
– 81.7% of professionals in the META region use AI tools for work, primarily for writing and editing texts, managing emails, creating media, and performing data analytics.
– Only 38% of surveyed employees have received training on cybersecurity aspects of AI, highlighting a significant gap in preparedness for AI-related risks.
– AI tools often operate as ‘shadow IT’ in organizations: 72.4% of respondents say generative AI is permitted at their workplace, yet many lack corporate guidance on using it.
– Kaspersky recommends implementing a company-wide AI policy, including training, approved tool lists, and monitoring to balance innovation and security.
– A tiered access model and specialized training for employees and IT specialists are advised to secure AI use and protect against threats like data leaks.
A recent study from Kaspersky focusing on the Middle East, Türkiye, and Africa reveals a significant gap between the widespread adoption of artificial intelligence tools in the workplace and the cybersecurity training needed to support it. The research indicates that while 81.7% of professionals across the META region use AI for work tasks, only 38% have received any formal instruction on the cybersecurity risks involved. This disparity leaves organizations with a critical vulnerability, as employees increasingly integrate AI into daily operations without adequate safeguards against threats such as data leakage or prompt injection attacks.
Survey participants from countries including South Africa, Kenya, and Egypt demonstrated strong familiarity with generative AI concepts, with 94.5% claiming to understand the term. For many, this knowledge is actively applied: AI tools are regularly used for writing and editing texts (63.2%), managing work emails (51.5%), creating neural-network-generated images or videos (45.2%), and performing data analytics (50.1%). Despite this high engagement, a third of all professionals reported receiving no AI-related training whatsoever; among those who had, nearly half of that training focused on effective tool usage and prompt creation, while cybersecurity received considerably less attention.
This situation often results in AI tools becoming part of the “shadow IT” landscape, where employees use applications without official corporate approval or oversight. The survey found that 72.4% of respondents work at organizations where generative AI is permitted, 21.3% said it is not allowed, and 6.3% were uncertain about their company’s stance. To address these security challenges, organizations are advised to develop and enforce a comprehensive company-wide AI policy. Such a policy should clearly outline prohibited uses, specify approved tools, and be formally documented alongside mandatory employee training.
Chris Norton, General Manager for Sub-Saharan Africa at Kaspersky, emphasizes that a balanced, tiered access model represents the most effective strategy for corporate AI implementation. This approach aligns AI usage levels with departmental data sensitivity, supported by thorough cybersecurity training. This method encourages innovation and efficiency without compromising security standards.
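To make the tiered model concrete, the sketch below shows one way such a policy could be expressed in code. All names here (the tiers, the department mappings, the approved-tool list) are hypothetical illustrations of the idea, not part of Kaspersky’s published guidance:

```python
from enum import Enum

class Tier(Enum):
    """Hypothetical AI-access tiers, ordered by data sensitivity."""
    OPEN = 1        # public data: any approved AI tool
    RESTRICTED = 2  # internal data: approved tools only
    BLOCKED = 3     # regulated data: no generative AI use

# Illustrative mapping of departments to tiers; a real policy would be
# driven by the organization's own data-classification scheme.
DEPARTMENT_TIERS = {
    "marketing": Tier.OPEN,
    "engineering": Tier.RESTRICTED,
    "finance": Tier.BLOCKED,
    "legal": Tier.BLOCKED,
}

APPROVED_TOOLS = {"corp-assistant", "translation-svc"}  # assumed tool names

def may_use_ai(department: str, tool: str) -> bool:
    """Return True if the department's tier permits the given tool."""
    tier = DEPARTMENT_TIERS.get(department, Tier.BLOCKED)  # default-deny
    if tier is Tier.BLOCKED:
        return False
    return tool in APPROVED_TOOLS

if __name__ == "__main__":
    print(may_use_ai("marketing", "corp-assistant"))  # True
    print(may_use_ai("finance", "corp-assistant"))    # False: regulated data
```

The key design point is the default-deny lookup: a department that has not been explicitly classified falls into the most restrictive tier, so new teams gain AI access only after a deliberate policy decision.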
Kaspersky provides several key recommendations for securing corporate AI use:
– Train employees on responsible AI practices. Specialized courses on AI security available through the Kaspersky Automated Security Awareness Platform can enhance existing educational programs.
– Give IT teams relevant knowledge of exploitation techniques and practical defense. Training such as the ‘Large Language Models Security’ course from the Kaspersky Cybersecurity Training portfolio can boost both professional development and organizational security.
– Protect all devices used to access business data, whether work-issued or personal, with a robust cybersecurity solution. Kaspersky Next products defend against a range of threats, including phishing attempts and fake AI applications that may contain hidden infostealers.
– Run regular surveys to monitor AI usage frequency and applications. Analyzing this data allows companies to assess both benefits and risks, enabling ongoing policy adjustments.
– Implement a specialized AI proxy to scrub sensitive information such as names or customer IDs from queries in real time, with role-based access control to block inappropriate use cases (a minimal sketch of this redaction step follows the list).
– Develop a comprehensive policy that addresses the full spectrum of AI-related risks. Kaspersky’s implementation guidelines provide a useful resource for establishing these protocols.
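The AI-proxy recommendation lends itself to a short illustration. The sketch below shows a redaction step that scrubs email addresses and customer IDs from a prompt before it would be forwarded to a model, plus a simple role-based gate. The regex patterns, role names, and ID format are assumptions made for demonstration, not a real product API:

```python
import re

# Illustrative patterns; a production proxy would use a dedicated
# PII-detection engine rather than a handful of regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "customer_id": re.compile(r"\bCUST-\d{6}\b"),  # assumed ID format
}

def scrub(prompt: str) -> str:
    """Replace sensitive substrings with typed placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt

def handle_query(user_role: str, prompt: str) -> str:
    """Apply role-based gating, then scrub the prompt before forwarding."""
    if user_role not in {"analyst", "marketer"}:  # assumed permitted roles
        raise PermissionError("AI use not permitted for this role")
    clean = scrub(prompt)
    # In a real proxy, the scrubbed prompt would be forwarded to the
    # approved AI service here; we simply return it for inspection.
    return clean

if __name__ == "__main__":
    print(handle_query("analyst", "Email jane.doe@example.com about CUST-123456"))
    # -> "Email [EMAIL] about [CUSTOMER_ID]"
```

A production deployment would pair such scrubbing with logging, so that blocked or redacted queries can feed back into the usage surveys mentioned above.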
The underlying survey was conducted in 2025 by the Toluna research agency, which collected 2,800 online interviews with employees and business owners who use computers for work across seven countries: Türkiye, South Africa, Kenya, Pakistan, Egypt, Saudi Arabia, and the UAE.
(Source: MEA Tech Watch)