AMD invests up to $250M in Nutanix to build an open, full-stack AI platform optimized for enterprise inference across data centers, hybrid clouds and the edge.
Advanced Micro Devices and Nutanix announced a multi-year strategic partnership on Tuesday aimed at building an open, full-stack artificial intelligence infrastructure platform for enterprises and service providers.
The agreement aligns AMD’s silicon and AI software ecosystem with Nutanix’s cloud orchestration platforms to support what the companies describe as the next phase of enterprise AI: agentic systems and inference-heavy workloads deployed across data centers, hybrid clouds and the edge.
As part of the deal, AMD will invest $150 million in Nutanix common stock at $36.26 per share and commit up to $100 million to joint engineering and go-to-market initiatives, for combined commitments of up to $250 million. The equity investment is expected to close in the second quarter of 2026, pending regulatory approvals.
A Full-Stack Bet on Openness
The partnership centers on integrating AMD’s EPYC CPUs and Instinct GPUs with the Nutanix Cloud Platform and Nutanix Kubernetes Platform. The companies also plan to embed AMD’s ROCm software ecosystem and AMD Enterprise AI stack directly into Nutanix’s infrastructure offerings.
The goal: deliver production-ready, scalable AI platforms optimized for inference and agentic applications without locking enterprises into vertically integrated stacks.
“Enterprise customers need the freedom to run the models and workloads that matter most to their business, without compromise,” said Dan McNamara, senior vice president and general manager of Compute and Enterprise AI at AMD. “Through our partnership with Nutanix we’re building a scalable, full-stack AI platform rooted in openness.”
Tarkan Maner, president and chief commercial officer of Nutanix, described the alliance as a shared commitment to “scalable, production-ready AI infrastructure” optimized for inference across hybrid environments.
Inference Takes Center Stage
The collaboration reflects a broader industry shift. As AI adoption matures, enterprise demand is moving from model training to inference — the real-time execution of AI workloads in production environments.
Inference requires infrastructure that balances high-performance acceleration with operational efficiency and lifecycle management. The companies say their jointly engineered platform will combine AMD Instinct GPUs for acceleration, EPYC processors for high core-density compute and orchestration, and Nutanix Enterprise AI for unified lifecycle management.
The first jointly developed agentic AI platform is expected to reach the market in late 2026.
By emphasizing open standards and interoperability, AMD and Nutanix are positioning their alliance as an alternative to tightly controlled AI ecosystems. For enterprises wary of vendor lock-in, the promise of architectural choice may prove as important as raw performance.
In a market defined by rapid model innovation and intensifying infrastructure demands, the partnership signals a new phase of competition: not just over chips or cloud services, but over the foundational architecture that will run AI agents and multi-model inference systems at scale.