IBM: Enterprises Use All AI Tools, But Picking the Right LLM Is Key

Summary
– VB Transform 2025 highlighted IBM’s approach to generative AI, emphasizing multi-model strategies over single-vendor solutions for enterprise deployments.
– IBM positions itself as an AI “control tower,” offering a gateway API to switch between LLMs while maintaining governance and observability.
– IBM developed ACP, an open protocol for agent-to-agent communication, competing with Google’s A2A to standardize AI interactions.
– AI transformation should go beyond chatbots to automate entire workflows, as demonstrated by IBM’s HR agent handling complex processes autonomously.
– Enterprises should prioritize multi-model flexibility, workflow automation, and open communication protocols to avoid vendor lock-in and scale AI effectively.
Enterprises today use multiple AI tools simultaneously, and selecting the right large language model (LLM) for each task has become a critical strategic decision. IBM’s recent insights reveal that businesses are moving away from single-vendor solutions, opting instead for a tailored approach where different models address specific needs.
At VB Transform 2025, Armand Ruiz, IBM’s VP of AI Platform, highlighted how enterprises are adopting a multi-model strategy, matching LLMs to precise use cases rather than relying on a one-size-fits-all solution. While IBM offers its own Granite series of open-source models, the company positions itself as an orchestrator rather than a sole provider, helping businesses integrate the best tools for their workflows.
The rise of multi-LLM gateways reflects this shift. IBM’s newly introduced model gateway allows enterprises to seamlessly switch between different LLMs via a single API while maintaining governance and observability. This flexibility enables companies to run open-source models on private infrastructure for sensitive tasks while tapping into public cloud APIs like AWS Bedrock or Google’s Gemini for less critical applications.
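The gateway pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea (one API, multiple backends, governance rules, an audit trail); the class and route names are invented for demonstration and are not IBM’s actual gateway API.

```python
# Hypothetical sketch of a multi-LLM gateway: one entry point that
# dispatches to different backends based on data sensitivity, while
# logging every call for governance and observability.
# All names (ModelGateway, Route, endpoints) are illustrative.
from dataclasses import dataclass


@dataclass
class Route:
    model: str
    endpoint: str
    private: bool  # True -> runs on private infrastructure


class ModelGateway:
    def __init__(self):
        self.routes: dict[str, Route] = {}
        self.audit_log: list[dict] = []  # observability: record every decision

    def register(self, task: str, route: Route) -> None:
        self.routes[task] = route

    def resolve(self, task: str, sensitive: bool) -> Route:
        route = self.routes[task]
        if sensitive and not route.private:
            # Governance rule: sensitive data never leaves private infra
            route = self.routes["private-fallback"]
        self.audit_log.append(
            {"task": task, "model": route.model, "sensitive": sensitive}
        )
        return route


gateway = ModelGateway()
gateway.register(
    "private-fallback",
    Route("granite-13b", "https://internal.example/llm", private=True),
)
gateway.register(
    "summarize",
    Route("gemini-1.5-pro", "https://cloud.example/llm", private=False),
)

print(gateway.resolve("summarize", sensitive=True).model)   # granite-13b
print(gateway.resolve("summarize", sensitive=False).model)  # gemini-1.5-pro
```

Because callers only see `resolve`, swapping one backend for another (or adding a new governance rule) never touches application code, which is the lock-in-avoidance argument in miniature.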
Beyond model selection, agent orchestration protocols are emerging as essential infrastructure for AI deployments. IBM has contributed its Agent Communication Protocol (ACP) to the Linux Foundation, joining Google’s similar Agent2Agent (A2A) protocol. These standards streamline interactions between AI agents and reduce the need for custom integrations. That matters as enterprises scale: some are already testing over 100 agents in pilot programs.
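The benefit of a shared protocol is that agent N+1 needs no point-to-point glue code. The sketch below illustrates the general idea with an invented message envelope and registry; the fields shown are assumptions for demonstration, not the actual wire format of ACP or A2A.

```python
# Illustrative sketch of agent-to-agent messaging via a shared envelope.
# Envelope fields and the AgentRegistry class are invented for this
# example; they do not reproduce the ACP or A2A specifications.
import json
from typing import Callable


def make_envelope(sender: str, recipient: str, task: str, payload: dict) -> str:
    """Serialize a request into a vendor-neutral format any compliant
    agent can parse without a custom integration."""
    return json.dumps({
        "version": "0.1",
        "sender": sender,
        "recipient": recipient,
        "task": task,
        "payload": payload,
    })


class AgentRegistry:
    """Delivers envelopes to registered agents by name."""

    def __init__(self):
        self.handlers: dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, handler: Callable[[dict], dict]) -> None:
        self.handlers[name] = handler

    def deliver(self, envelope: str) -> dict:
        msg = json.loads(envelope)
        return self.handlers[msg["recipient"]](msg)


registry = AgentRegistry()
registry.register("summarizer",
                  lambda msg: {"status": "ok", "task": msg["task"]})

reply = registry.deliver(make_envelope(
    "planner", "summarizer", "summarize-report", {"doc_id": "Q3"}))
print(reply)  # {'status': 'ok', 'task': 'summarize-report'}
```

With N agents, ad hoc integration requires on the order of N² pairwise adapters; a common envelope reduces that to one parser per agent, which is why standardization becomes pressing around the 100-agent scale mentioned above.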
Ruiz emphasized that AI’s real value lies in transforming workflows, not just deploying chatbots. He shared IBM’s internal HR example, where specialized agents handle routine inquiries, such as compensation or promotions, automatically routing requests to the right systems and escalating only when human intervention is necessary. This shift from human-computer interaction to AI-driven process automation represents a fundamental change in how businesses operate.
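The routing-and-escalation pattern in the HR example can be sketched simply. The categories, keywords, and agent names below are invented for illustration; they are not IBM’s internal implementation, which presumably uses LLM-based classification rather than keyword matching.

```python
# Hypothetical sketch of the HR routing pattern: route routine
# inquiries to specialized agents, escalate to a human otherwise.
# Categories and keywords are invented for this example.
def route_hr_request(text: str) -> str:
    routes = {
        "compensation": ("salary", "pay", "compensation"),
        "promotions": ("promotion", "promote", "level"),
    }
    lowered = text.lower()
    for agent, keywords in routes.items():
        if any(k in lowered for k in keywords):
            return f"{agent}-agent"
    # Escalate only when no specialized agent applies
    return "human-review"


print(route_hr_request("When does my promotion take effect?"))  # promotions-agent
print(route_hr_request("I lost my badge"))                      # human-review
```

The structural point survives the toy implementation: the human is the fallback branch of an automated workflow, not the default entry point, which is the shift from chat interfaces to process automation that Ruiz describes.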
For enterprises investing in AI, IBM’s findings suggest key strategic takeaways:
- Move beyond chatbots: focus on end-to-end workflow automation.
- Adopt multi-model flexibility: avoid vendor lock-in by integrating diverse LLMs.
- Prioritize open protocols: support emerging standards like ACP and A2A for seamless agent communication.
As Ruiz noted, business leaders must embrace AI-first thinking, understanding not just the technology but how it reshapes entire operational processes. The future belongs to organizations that strategically align AI tools with their most critical workflows.
(Source: VentureBeat)