“By enabling our enterprise-grade Red Hat AI Inference Server, built on the innovative vLLM framework, with AWS AI chips, we’re empowering organizations to deploy and scale AI workloads with enhanced efficiency and flexibility. Building on Red Hat’s open source heritage, this collaboration aims to make generative AI more accessible and cost-effective across hybrid cloud environments,” said Joe Fernandes, vice president and general manager, AI Business Unit, Red Hat.
Red Hat’s collaboration with AWS empowers organizations with a full-stack gen AI strategy, combining Red Hat’s comprehensive platform capabilities with AWS cloud infrastructure and its purpose-built AI chips, AWS Inferentia2 and AWS Trainium3. Key aspects of the collaboration include:
Upstream community contribution: Red Hat and AWS are collaborating to optimize an AWS AI chip plugin upstreamed to vLLM. As the top commercial contributor to vLLM, Red Hat is committed to enabling vLLM on AWS silicon to help accelerate AI inference and training for users. vLLM is also the foundation of llm-d, an open source project focused on delivering inference at scale, now available as a commercially supported feature in Red Hat OpenShift AI 3.
Red Hat AI Inference Server on AWS AI chips: Red Hat AI Inference Server, powered by vLLM, will be enabled to run on AWS AI chips, including AWS Inferentia2 and AWS Trainium3, delivering a common inference layer that can support any gen AI model. This helps customers achieve higher performance, lower latency and greater cost-effectiveness for scaling production AI deployments, with up to 30-40% better price performance than comparable current GPU-based Amazon EC2 instances.
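Because Red Hat AI Inference Server is powered by vLLM, the developer workflow on AWS AI chips can reasonably be expected to mirror standard vLLM usage. The following is a minimal sketch using vLLM’s public Python API; the model name is illustrative, and the assumption that a Neuron-enabled vLLM build selects the AWS platform automatically is ours, not a confirmed detail of the developer preview.

```python
# Minimal vLLM inference sketch. Assumes a vLLM build with the AWS AI chip
# plugin installed (exact package name not yet published), running on an
# Inferentia2 or Trainium instance where vLLM detects the platform on its own.
# The model name is illustrative only.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(
    ["Summarize the benefits of purpose-built AI accelerators."],
    params,
)

for output in outputs:
    print(output.outputs[0].text)
```

The point of the common inference layer is that this code stays the same whether the backend is a GPU or an AWS AI chip; only the installed platform plugin changes.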
Enabling AI on Red Hat OpenShift: Red Hat worked with AWS to develop an AWS Neuron operator for Red Hat OpenShift, Red Hat OpenShift AI and Red Hat OpenShift Service on AWS, a comprehensive, fully managed application platform on AWS. The operator gives customers a more seamless, supported pathway to run AI workloads on AWS accelerators.
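On the OpenShift side, device plugins such as the one managed by the AWS Neuron operator typically advertise accelerators as Kubernetes extended resources that workloads request much like CPU or memory. Below is a minimal sketch using the Kubernetes Python client; the pod name, namespace and container image are placeholders, and the `aws.amazon.com/neuron` resource name follows the AWS Neuron device plugin’s convention.

```python
# Sketch: schedule a pod onto a Neuron-equipped OpenShift node by requesting
# the extended resource advertised by the Neuron device plugin.
# The image and names are placeholders, not Red Hat or AWS artifacts.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="neuron-inference-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="inference",
                image="registry.example.com/inference-server:latest",  # placeholder
                resources=client.V1ResourceRequirements(
                    limits={"aws.amazon.com/neuron": "1"},  # request one Neuron device
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```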
Ease of access and deployment: By supporting AWS AI chips, Red Hat will offer its customers on AWS easier access to high-demand, high-capacity accelerators. In addition, Red Hat recently released the amazon.ai Certified Ansible Collection for Red Hat Ansible Automation Platform, enabling customers to orchestrate AI services on AWS.
The AWS Neuron community operator is now available in the Red Hat OpenShift OperatorHub for customers using Red Hat OpenShift or Red Hat OpenShift Service on AWS. Red Hat AI Inference Server support for AWS AI chips is expected to be available in developer preview in January 2026.
Colin Brace, vice president, Annapurna Labs, AWS, said, “Our collaboration with Red Hat provides customers with a supported path to deploying generative AI at scale, combining the flexibility of open source with AWS infrastructure and purpose-built AI accelerators to accelerate time-to-value from pilot to production.”