Microsoft tackles new security threats in Windows 11 AI tools

Summary
– Microsoft is adding experimental “agentic” AI features to Windows 11, including a Copilot Actions toggle in Settings for testers.
– These AI agents are designed to perform background tasks like organizing files and scheduling meetings to boost user productivity.
– Microsoft acknowledges that these agents can make errors and present novel security risks, such as vulnerability to attacker instructions.
– To mitigate risks, agents run under separate user accounts with limited permissions and require user approval for data access.
– All agent actions are observable and logged, with users able to supervise activities and review planned multi-step tasks.

Microsoft is actively advancing the integration of artificial intelligence within Windows 11, introducing both generative and agentic AI capabilities that embed more deeply into the operating system’s core. A recent update distributed to Windows Insider Program participants includes a new toggle in Settings labeled “experimental agentic features,” designed to enable Copilot Actions. Microsoft has also released a detailed support document explaining how these experimental functions will operate, signaling a significant step in the evolution of AI-assisted computing.
The term “agentic” refers to AI systems that can autonomously perform assigned tasks in the background, freeing users to focus on other activities. According to Microsoft, these agents are intended to handle routine duties such as organizing files, scheduling appointments, or sending emails. The company describes Copilot Actions as providing an “active digital collaborator” capable of executing complex workflows to boost efficiency and productivity.
However, like other forms of artificial intelligence, these agents are not infallible. They can make mistakes or confabulate information, sometimes proceeding with confidence even when they are wrong. Microsoft has acknowledged that these systems introduce “novel security risks,” particularly the possibility that malicious actors could smuggle instructions to an agent and hijack its actions. To address these concerns, Microsoft is balancing agent capability against security: agents are granted enough access to perform their tasks while being isolated from sensitive system areas.
Currently, the experimental agentic features are optional, available only in early Windows 11 test builds, and disabled by default. To mitigate risks, each AI agent operates under a separate user account, distinct from the primary user profile. This ensures agents cannot alter system-wide settings and are confined to their own virtual desktop environment, preventing interference with the user’s active workspace. User consent is required before any agent can access personal data, and all agent activities are logged and visibly differentiated from user actions. Microsoft also emphasizes that agents must maintain activity logs and offer supervision mechanisms, such as displaying a step-by-step plan before executing multi-stage tasks.
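The supervision model described above — a visible step-by-step plan, explicit user approval before anything runs, and a log of every action — resembles a general agent-sandboxing pattern. The sketch below illustrates that pattern in Python; all names are hypothetical and this is not Microsoft's implementation, only a minimal model of the plan/approve/log flow:

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(format="%(asctime)s [agent] %(message)s")
log = logging.getLogger("supervised-agent")

@dataclass
class SupervisedAgent:
    """Illustrative wrapper: plan first, act only after approval, log everything.
    Hypothetical design, not Microsoft's Copilot Actions API."""
    plan: list = field(default_factory=list)

    def add_step(self, description, action):
        # Each step pairs a human-readable description with a callable.
        self.plan.append((description, action))

    def show_plan(self):
        # The full multi-step plan is shown to the user before execution.
        return [f"{i + 1}. {desc}" for i, (desc, _) in enumerate(self.plan)]

    def run(self, approved: bool):
        # Nothing executes without explicit user approval.
        if not approved:
            log.info("Plan rejected by user; no actions taken.")
            return []
        results = []
        for desc, action in self.plan:
            log.info("Executing: %s", desc)  # every action is logged
            results.append(action())
        return results

agent = SupervisedAgent()
agent.add_step("Collect .txt files in Downloads", lambda: ["notes.txt"])
agent.add_step("Move them to Documents/Archive", lambda: "moved 1 file")
print(agent.show_plan())          # user reviews the plan
print(agent.run(approved=True))   # only then do the steps execute
```

Running the real agent under a separate, low-privilege user account (as the article describes) adds OS-level isolation on top of this application-level supervision, so even an agent that goes wrong cannot touch system-wide settings.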
(Source: Ars Technica)





