
IBM and Arm to Bring AI to Mainframes

Summary

– IBM and Arm announced a strategic collaboration on April 2, 2026, to enable Arm-based software to run on IBM Z and LinuxONE mainframes.
– The partnership focuses on three areas: virtualisation for hosting Arm software, security/compliance for regulated industries, and long-term ecosystem interoperability.
– This aims to bring the native Arm AI software stack closer to the enterprise data that IBM Z customers cannot move to the public cloud.
– The collaboration is described as a future direction and intent, with no shipping date or existing products announced.
– For IBM, this addresses a strategic vulnerability by keeping its mainframes relevant for AI; for Arm, it extends its ecosystem into a major, previously unserved enterprise environment.

A new strategic partnership announced on April 2, 2026, aims to bridge a critical divide in enterprise technology. IBM and Arm have revealed a collaboration to enable Arm-based software to operate on IBM Z and LinuxONE mainframes. These systems form the backbone for the world’s most sensitive financial and regulated transactions, housing data that often cannot migrate to public clouds. The initiative focuses on three key areas: developing a virtualisation layer for hosting Arm environments, ensuring security and compliance for regulated workloads, and fostering long-term ecosystem interoperability. This move is designed to bring the dominant Arm-native AI software stack directly to where mission-critical enterprise data resides.

The core challenge is architectural. The modern AI ecosystem, including frameworks like PyTorch, TensorFlow, and the containerized workloads built for Kubernetes, has been developed primarily for x86 and Arm architectures. IBM’s mainframes, however, run on its proprietary s390x architecture. With an estimated 50% of compute shipped to major cloud providers in 2025 being Arm-based, a significant software gap has emerged. Enterprises relying on IBM Z as their system of record face a dilemma: their most valuable data lives on the mainframe, but the AI tools to analyze it are built for a different platform. Porting software to s390x is slow and costly, often lagging behind the rapid pace of AI innovation.
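As a rough illustration of the gap described above (not part of the announcement), packaging and container tooling select binary artifacts by CPU architecture, which is why prebuilt AI frameworks exist for x86 and Arm hosts but rarely for s390x. A minimal Python sketch, using the standard `platform` module; the labels are illustrative assumptions, not vendor statements:

```python
# Illustrative only: shows how tooling distinguishes host architectures.
# Prebuilt wheels/images for AI frameworks typically target x86_64 and
# aarch64; s390x usually requires a separate port or source build.
import platform

ARM_ARCHES = {"aarch64", "arm64"}
MAINFRAME_ARCHES = {"s390x"}

def describe_host() -> str:
    """Return a rough label for the host CPU architecture."""
    arch = platform.machine().lower()
    if arch in ARM_ARCHES:
        return f"{arch}: native AI framework builds widely published"
    if arch in MAINFRAME_ARCHES:
        return f"{arch}: most AI packages must be ported or built from source"
    return f"{arch}: typically x86_64, the other first-class AI target"

print(describe_host())
```

The same distinction drives multi-architecture container images: a registry manifest can list variants per architecture, but only for architectures someone has actually built.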

The collaboration outlines three primary workstreams to address this. First is creating the virtualisation technology to run Arm software environments on IBM hardware without application rewrites. Second is guaranteeing that these workloads meet the stringent data residency, encryption, and availability standards demanded by banking, government, and healthcare sectors. Third is establishing shared technology for long-term software flexibility. A crucial caveat accompanies this announcement: these are future plans, not shipping products. IBM has not provided a release date, framing the news as a statement of intent and strategic direction.

This software initiative builds upon recent IBM hardware advancements. The IBM z17 mainframe, featuring the Telum II processor, offers a 50% improvement in AI inference throughput over its predecessor. Complementing this is the IBM Spyre Accelerator, a low-power card adding 32 AI-optimised cores to z17 and LinuxONE 5 systems, designed to run large language models on-premises. The partnership with Arm represents the necessary software layer for this hardware, aiming to ensure the AI tools enterprises want are actually available for their mainframe investments.

For IBM, this move addresses a strategic vulnerability. As AI inference workloads grow, the inability to run the native Arm stack directly on Z systems creates friction that cloud competitors, with their Arm-optimised environments, do not face. This collaboration seeks to keep the mainframe relevant in the AI era, preventing it from being sidelined as a transactional backend. For Arm, the partnership represents an expansion into the last major enterprise stronghold it does not natively serve. Gaining a foothold in the regulated-industry environment of IBM Z, with its unparalleled security credentials, offers a pathway that cloud deployments alone cannot match.

Executives from both companies highlighted the strategic rationale. IBM’s chief product officer for Z and LinuxONE described the collaboration as a natural extension of the company’s history of anticipating enterprise needs, aimed at expanding software choice while maintaining expected reliability and security. Arm’s executive vice president noted the extension of the Arm ecosystem into mission-critical enterprise environments, providing organizations with greater deployment flexibility for AI.

The broader context is the Arm-first trajectory of cloud AI infrastructure. Major frameworks now feature optimised kernels for Arm, and cloud-native development assumes its compatibility. IBM Z customers have operated in a parallel universe, requiring separate porting efforts and facing delayed access to new tooling. This partnership is a structural attempt to collapse that gap and make the mainframe a first-class citizen in the Arm software ecosystem.

Success hinges on execution details that remain unspecified. Building a virtualisation layer that runs Arm binaries reliably at enterprise scale, with the security and availability guarantees IBM Z clients require, is a formidable engineering challenge. While IBM has a strong track record in backward compatibility, running a foreign instruction set architecture at production performance is a different order of difficulty. In a year where enterprise AI has shifted from experiment to deployment, the stakes are high. Clients are no longer asking if they will run AI on their mainframes, but when and with what software. The IBM-Arm collaboration is the proposed answer, though its timeline is still to be determined.

(Source: The Next Web)
