Inside Amazon’s Race to Beat OpenAI’s ‘Stargate’ Project

RAMI Sinno is crouching next to a workbench, wrestling a disc the size of a beach ball out of a box, when a dull thud resonates around his laboratory.
“I just dropped tens of thousands of dollars’ worth of equipment,” he says, laughing.
Getting back up, Sinno reveals the goods: a wafer of golden silicon, gleaming under the lab’s fluorescent lights. The circular wafer is divided into some 100 rectangular tiles, each of which contains billions of microscopic electrical switches. These are the brains of Amazon’s most advanced chip: the Trainium 2, announced in December.
For years, artificial-intelligence companies have depended on a single company, Nvidia, to design the cutting-edge chips needed to train the world’s most powerful AI models. But as the AI race heats up, cloud giants like Amazon and Google have accelerated their in-house efforts to design their own chips, seeking a bigger share of the cloud-computing industry, which was valued at $900 billion at the start of 2025.
This laboratory in Austin, Texas, is where Amazon is mounting its bid for semiconductor supremacy, and Sinno is a key player. He is the director of engineering at Annapurna Labs, the chip-design subsidiary of Amazon’s cloud-computing arm, Amazon Web Services (AWS). After donning ear protection and swiping his badge to enter a secure room, Sinno proudly shows off a set of finished Trainium 2 chips, which he helped design, running just as they normally would in a data center. He has to shout to be heard over the cacophony of whirring fans, which suck the hot air, heated by the chips’ insatiable demand for energy, into the building’s air-conditioning system. Each chip could fit easily in the palm of Sinno’s hand, but the computing infrastructure around them (motherboards, memory, data cables, fans, heat sinks, transistors, power supplies) means that this rack of just 64 chips towers over him, drowning out his voice.
As large as this unit may be, it is only a miniature simulacrum of the chips’ natural habitat. Soon, thousands of these refrigerator-sized supercomputers will be installed at several undisclosed locations across the United States and wired together to form “Project Rainier,” one of the largest data-center clusters ever built in the world, named after the giant mountain that looms over Amazon’s Seattle headquarters.
Project Rainier is Amazon’s answer to the “Stargate” project from OpenAI and Microsoft, announced by President Trump at the White House in January. Meta and Google are also currently building similar “hyperscale” data centers, costing billions of dollars each, to train their next generation of powerful AI models. Big tech companies have spent the past decade amassing enormous piles of cash; now they are all spending it in a race to build the gargantuan physical infrastructure needed to create AI systems that, they believe, will fundamentally change the world. Computing infrastructure on this scale has never been seen before in human history.
The precise number of chips involved in Project Rainier, the total cost of its data centers, and their locations are all closely guarded secrets. (Though Amazon won’t disclose the cost of Rainier itself, the company has said it plans to invest some $100 billion in 2025, with the majority going to AWS.) The sense of competition is fierce. Amazon claims that the finished Project Rainier will be “the world’s largest AI compute cluster”: bigger, the implication goes, than even Stargate. Employees here answer questions about OpenAI’s rival project with fighting words. “Stargate is easy to announce,” says Gadi Hutt, Annapurna’s director of product. “Let’s see it built first.”
Amazon is building Project Rainier specifically for a single client: the AI company Anthropic, which has agreed to a long-term lease on the massive data centers. (How long, exactly? That’s also a secret.) There, on hundreds of thousands of Trainium 2 chips, Anthropic plans to train the successors to its popular Claude family of AI models. The chips inside Rainier will collectively be five times more powerful than the systems used to train the best of those models. “It’s much, much bigger,” Anthropic co-founder Tom Brown tells TIME.
No one knows what the results of this huge leap in computing power will be. Anthropic’s CEO, Dario Amodei, has publicly predicted that “powerful AI” (the term he prefers over artificial general intelligence, meaning a technology that can perform most tasks better and faster than human experts) could arrive in 2026.
The Flywheel Effect
Anthropic is not only an Amazon customer; it is also partly owned by the tech giant. Amazon has invested $8 billion in Anthropic for a minority stake in the company. A large chunk of that money, in a strangely circular way, will eventually be spent on leasing AWS data centers. This odd relationship reveals an interesting facet of the forces driving the AI industry: Amazon is essentially using Anthropic as a proof of concept for its AI data-center business.
It is a dynamic similar to Microsoft’s relationship with OpenAI, and Google’s with its subsidiary DeepMind. “Having a frontier lab on your cloud is a way to make your cloud better,” says Brown, the Anthropic co-founder who manages the company’s relationship with Amazon. He compares it to AWS’s partnership with Netflix: in the early 2010s, the streamer was one of AWS’s first big customers. Because of the enormous infrastructure challenge of delivering fast video to users all over the world, “it meant that AWS got all the feedback they needed to make all these different systems work at that scale,” Brown says. “They paved the way for the whole cloud industry.”
Every cloud provider is now trying to replicate that model in the AI era, Brown says. “They want someone who will go through the jungle and cut a path with a machete, because nobody has ever taken that path before. But once you’ve done it, there’s a nice path, and everyone can follow you.” By investing in Anthropic, which then spends most of that money on AWS, Amazon is creating what it likes to call a flywheel: a self-reinforcing process that helps it build more advanced chips and data centers, drives down the cost of the “compute” needed to run AI systems, and shows other companies the benefits of AI, which in turn translates into long-term business for AWS. Startups like OpenAI and Anthropic may get the glory, but the real winners are the big tech companies that run the world’s major cloud platforms.
Admittedly, Amazon still depends heavily on Nvidia chips. Meanwhile, Google’s custom chips, known as TPUs, are considered by many in the industry to be ahead of Amazon’s. And Amazon is not the only big tech company with a stake in Anthropic: Google has also invested some $3 billion for a 14% stake. Anthropic uses both the Google and Amazon clouds, so as not to be dependent on either one alone. Despite all this, Project Rainier and the Trainium 2 chips that will fill its data centers are the culmination of Amazon’s efforts to spin its flywheel into pole position.
The Trainium 2 chips, Sinno says, were designed with intensive feedback from Anthropic, which shared details with AWS about how its software interacted with the Trainium 1 hardware and made suggestions for how the next generation of chips could be improved. Such close collaboration is not typical for AWS customers, Sinno says, but it is necessary for Anthropic to compete at the AI “frontier.” A model’s capabilities are essentially correlated with the amount of compute spent training and running it, so the more compute you can get for your money, the better your final AI will be. “At the scale they’re operating at, every percentage point of performance improvement is of enormous value,” Sinno says of Anthropic. “The more efficiently they can use the infrastructure, the better their return on investment as a customer.”
The more sophisticated Amazon’s in-house chips become, the less it will have to rely on industry leader Nvidia, whose chips are in such demand that supply can’t keep up, meaning Nvidia can pick and choose its customers while charging well above production cost. But there is another dynamic at play that Annapurna employees hope will give Amazon a long-term structural advantage. Nvidia sells physical chips (known as GPUs) directly to customers, which means each GPU must be optimized to work on its own. Amazon, by contrast, doesn’t sell its Trainium chips at all; it only sells access to them, running in AWS data centers. That means Amazon can pursue efficiencies that Nvidia would find difficult to replicate. “We have many more degrees of freedom,” Hutt says.
Back in the lab, Sinno returns the silicon wafer to its box and moves to another part of the room, gesturing through the different stages of the design process for chips that could, potentially very soon, help summon powerful new AIs into existence. He enthuses about the specs of the Trainium 3, expected later this year, which he says will be twice as fast and 40% more energy-efficient than its predecessor. Neural networks running on Trainium 2 chips helped design their successor, he says. It is a sign of how AI is already accelerating the pace of its own development, in a process that keeps getting faster and faster. “It’s a flywheel,” Sinno says. “Absolutely.”