AWS to host OpenAI’s training and inference workloads under a $38 billion contract, expanding compute supply through 2026 and beyond.
OpenAI signs a $38 billion multi-year agreement with Amazon Web Services (AWS) to secure cloud computing capacity, marking its first major cloud partnership outside Microsoft. The deal gives OpenAI access to hundreds of thousands of Nvidia GPUs hosted across AWS data centres in the US, with further expansion planned through 2026.
The agreement marks a strategic shift for OpenAI, which had previously relied on Microsoft as its exclusive cloud provider. Microsoft’s exclusivity expired this year, allowing OpenAI to work with multiple hyperscalers, including Google Cloud, Oracle, and now AWS, the global leader in cloud infrastructure. The diversification aims to reduce dependence on a single provider and ensure sufficient compute supply for training and deploying large-scale AI models.
Under the arrangement, OpenAI workloads will run on AWS’s existing data centres initially, with Amazon set to build new dedicated capacity as demand grows. The compute allocation will be used for both AI model training and inference, supporting products such as ChatGPT and other foundation models.
The deal also deepens integration with AWS’s Bedrock platform, which hosts multiple foundation models for enterprise users. Companies already running OpenAI models on AWS include Thomson Reuters, Comscore, and Triomics, using them for automation, coding, and data analysis tasks.
For Amazon, the agreement strengthens its position in the fast-expanding AI infrastructure market. AWS recently reported over 20% year-on-year revenue growth, though competition remains intense, with Microsoft Azure and Google Cloud growing at faster rates.
Amazon’s stock closed 4% higher following the announcement, reaching a record high. The company also holds a major AI investment in Anthropic, for which it is building a dedicated $11 billion data centre campus in Indiana.
The OpenAI-AWS deal highlights a continuing realignment across the AI industry as compute demands surge and companies secure long-term access to specialised hardware such as Nvidia’s Blackwell-class GPUs.