
Critical Privilege Escalation Vulnerability Discovered in Azure ML

Summary

– A privilege escalation vulnerability in Azure Machine Learning (AML) allows attackers with Storage Account access to execute arbitrary code and potentially compromise subscriptions.
– The flaw stems from AML storing invoker scripts in a Storage Account, where modified scripts run with the compute instance’s broad permissions, enabling privilege escalation.
– Attackers could replace invoker scripts, extract secrets, escalate privileges, and assume creator-level roles, including “Owner” permissions on Azure subscriptions.
– Microsoft acknowledged the issue as “by design” but updated AML to use snapshots of component code instead of real-time script execution from storage.
– Mitigation recommendations include restricting Storage Account write access, disabling SSO, using minimal-permission identities, and enforcing script immutability and validation.

A newly uncovered security flaw in Azure Machine Learning (AML) could allow attackers to escalate privileges and gain unauthorized control over cloud environments. Cybersecurity experts warn this vulnerability enables malicious actors with basic storage access to execute harmful code within machine learning pipelines, potentially compromising entire Azure subscriptions when default settings remain unchanged.

The problem stems from how AML handles invoker scripts, Python files responsible for managing machine learning components. These scripts are stored in automatically generated Storage Accounts and execute with the same permissions as AML compute instances. Researchers found that attackers with write access to storage could manipulate these scripts to run malicious commands, extract sensitive data from Azure Key Vault, and even assume the identity of the instance creator, including “Owner” privileges on a subscription.

Single Sign-On (SSO) being enabled by default exacerbates the risk, as compute instances often inherit high-level permissions from their creators. In a proof-of-concept demonstration, security firm Orca illustrated how an attacker could exploit this weakness to gain extensive control over cloud resources.

Microsoft responded by stating this behavior was intentional, equating storage account access with compute instance access. However, the company has since updated its documentation and modified AML’s functionality: jobs now run using snapshots of component code rather than live scripts pulled from storage.

Despite Microsoft’s stance, Orca maintains that the vulnerability remains a serious concern under default configurations. Organizations using AML should take proactive steps to mitigate the risk, including restricting write permissions on AML Storage Accounts, disabling SSO on compute instances, using minimal-permission identities, and validating invoker scripts before execution.
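One of the recommended mitigations, enforcing script immutability and validation, can be approximated by pinning a cryptographic hash of each reviewed invoker script and refusing to run anything that does not match. The sketch below is a minimal illustration of that idea, not part of AML itself; the `TRUSTED_HASHES` table and `verify_script` helper are hypothetical names, and in practice the pinned hashes would be recorded at deploy time, outside the writable Storage Account.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of raw script bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_script(name: str, contents: bytes, trusted: dict) -> bool:
    """Return True only if the script's hash matches its pinned value.

    `trusted` maps script names to hex digests recorded when the
    script was last reviewed; any tampered or unknown script fails.
    """
    expected = trusted.get(name)
    return expected is not None and expected == sha256_of(contents)

# Illustrative usage: pin the hash of a reviewed script, then check
# both the original and a tampered copy before execution.
reviewed = b"print('run training component')"
TRUSTED_HASHES = {"invoker.py": sha256_of(reviewed)}

print(verify_script("invoker.py", reviewed, TRUSTED_HASHES))      # True
print(verify_script("invoker.py", b"evil payload", TRUSTED_HASHES))  # False
```

A check like this only helps if the hash table itself lives somewhere the attacker cannot write, which is why it complements, rather than replaces, restricting write access on the Storage Account.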

While AML’s security framework may be theoretically sound, real-world deployments often leave gaps that attackers can exploit. Regular security audits and strict adherence to the principle of least privilege are critical in protecting machine learning workflows from unauthorized access. Without proper safeguards, organizations risk exposing sensitive data and infrastructure to potential breaches.

(Source: InfoSecurity Magazine)


