Mar 11, 2026
IT News

Thinking Machines Lab Secures Gigawatt-Scale Nvidia Deal: Mira Murati's $50B Compute Bet

Mira Murati's Thinking Machines Lab announces a multi-year partnership with Nvidia for at least one gigawatt of Vera Rubin systems, estimated at $50 billion in hardware procurement.

#Thinking Machines Lab · #Nvidia · #Mira Murati · #Vera Rubin · #AI Compute

A Partnership Measured in Gigawatts

On March 10, 2026, Thinking Machines Lab and Nvidia announced a multi-year strategic partnership that will see Mira Murati's AI startup deploy at least one gigawatt of computing capacity using Nvidia's next-generation Vera Rubin systems. The deal, which includes a significant but undisclosed investment from Nvidia, represents one of the largest compute procurement agreements in the AI industry's history, with infrastructure costs estimated at approximately $50 billion.

The partnership establishes Thinking Machines Lab as a major player in the frontier AI race, just over a year after Murati founded the company upon leaving her role as Chief Technology Officer at OpenAI. With this deal, Thinking Machines joins the small group of organizations, including OpenAI, Google, Meta, and Anthropic, operating at the scale required to train the next generation of frontier AI models.

Inside the Vera Rubin Hardware

The compute deployment centers on Nvidia's Vera Rubin platform, which represents the chipmaker's next-generation architecture following the current Blackwell generation. The Rubin GPU features 336 billion transistors, a substantial increase over current designs. The platform is paired with Vera CPUs built on 88 Arm cores using the Armv9.2 instruction set, supporting 176 threads per processor.

Deployment of the Vera Rubin systems is scheduled to begin in early 2027. The one-gigawatt minimum commitment translates to data center infrastructure that draws roughly as much electricity as a mid-sized city, dedicated entirely to AI training and inference workloads; a rough sizing sketch follows the specification table below.

Specification         Details
GPU Architecture      Nvidia Rubin
Transistor Count      336 billion
CPU                   Vera (88 Arm cores, Armv9.2)
CPU Threads           176 per processor
Compute Commitment    1+ gigawatt
Estimated Cost        ~$50 billion
Deployment Start      Early 2027
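
To give a rough sense of scale, a one-gigawatt power budget can be translated into an approximate accelerator count. The sketch below is a back-of-the-envelope estimate only: the PUE and per-accelerator wattage are illustrative assumptions, not figures disclosed by Nvidia or Thinking Machines.

```python
# Rough sizing of a 1 GW AI data center. Every per-unit figure is an
# illustrative assumption; no rack counts or per-accelerator power
# figures have been disclosed for this deployment.

FACILITY_POWER_W = 1_000_000_000    # 1 gigawatt, the announced minimum commitment
PUE = 1.2                           # assumed power usage effectiveness (cooling, conversion losses)
WATTS_PER_ACCELERATOR = 2_000       # assumed all-in draw per GPU, incl. CPU/network/storage share

it_power_w = FACILITY_POWER_W / PUE                  # power available for IT equipment
accelerators = it_power_w / WATTS_PER_ACCELERATOR    # ~417,000 under these assumptions

print(f"IT power budget: {it_power_w / 1e6:,.0f} MW")
print(f"Supported accelerators: {accelerators:,.0f}")
```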

Beyond hardware procurement, the companies will collaborate on designing training and serving systems optimized for Nvidia architectures, suggesting a deeper technical integration than a standard supplier-customer relationship.

Murati's Post-OpenAI Trajectory

Mira Murati founded Thinking Machines Lab in February 2025 after serving as OpenAI's CTO, where she oversaw the development of ChatGPT, the Sora video generator, and several other flagship products. Her departure from OpenAI was part of a broader wave of senior leadership exits from the company during 2024-2025.

Since its founding, Thinking Machines Lab has raised more than $2 billion in seed funding at a $12 billion valuation, with investors including Andreessen Horowitz, Accel, AMD, ServiceNow, and Nvidia itself. The speed of fundraising and the scale of the Nvidia partnership reflect the AI industry's willingness to make multi-billion-dollar bets on teams with proven track records in frontier model development.

Murati described the partnership's significance in a statement: "This partnership accelerates our capacity to build AI that people can shape." The phrasing suggests a focus on customizable and steerable AI systems rather than monolithic foundation models, aligning with the company's product direction.

Tinker: The Open-Source Cloud Platform

Thinking Machines Lab has already launched a product called Tinker, a cloud service that enables enterprises to fine-tune and customize open-source large language models. The platform currently supports more than 13 LLMs, including Meta's Llama series, and uses LoRA (Low-Rank Adaptation) technology for efficient fine-tuning without requiring full model retraining.
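
To make the LoRA approach concrete, the sketch below shows the core idea in PyTorch: the base model's weights are frozen and only a small low-rank update is trained. This is a minimal illustration of the general technique, not Tinker's actual API or implementation; the rank and scaling values are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update:
    effective weight = W + (alpha / r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # base weights stay frozen
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at step 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable:,}")   # 65,536 vs ~16.8M in the full layer
```

Because only the small A and B matrices are trained, many customer-specific adapters can share a single frozen base model, which is what makes a multi-tenant fine-tuning service of this kind economical.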

Tinker positions Thinking Machines in the enterprise AI infrastructure market, where companies increasingly want to deploy customized AI models using their proprietary data while leveraging open-source base models. This approach is complementary to, rather than competitive with, the frontier model training that the Nvidia partnership will support.

Job postings from the company indicate active development across audio processing AI, visual reasoning models, and custom Transformer architecture implementations, suggesting ambitions well beyond the current Tinker product.

Strategic Implications for the AI Compute Market

The deal underscores several trends in the AI industry. First, the compute requirements for frontier AI training continue to escalate dramatically. A gigawatt-scale deployment represents an order-of-magnitude increase over the infrastructure that trained models like GPT-4 and Claude Opus 4, signaling that the next generation of AI systems will require proportionally more computational resources.

Second, Nvidia's investment in Thinking Machines Lab extends its strategy of taking equity positions in major AI companies, ensuring demand for its hardware while gaining exposure to the value created by the models trained on that hardware. Nvidia has made similar investments across the AI ecosystem, creating a network of hardware-dependent companies that reinforce its market position.

Third, the partnership challenges the notion that frontier AI development is limited to a handful of incumbents. With sufficient funding and access to compute, a startup founded just over a year ago can secure the infrastructure needed to compete at the highest levels. This dynamic suggests that the frontier AI landscape could become more competitive rather than more concentrated over time.

Risks and Open Questions

The $50 billion estimated infrastructure cost raises questions about Thinking Machines Lab's path to generating sufficient revenue to justify the investment. While the company's $12 billion valuation and $2 billion in raised capital are substantial, the hardware commitment alone dwarfs current funding. The company will likely need to raise significantly more capital or generate substantial revenue from Tinker and future products to sustain operations at this scale.

The early 2027 deployment timeline means the Vera Rubin systems will not be available for training runs until nearly a year after the announcement, leaving a gap between the deal and actual compute availability. Until then, Thinking Machines must rely on existing infrastructure for model development.

The partnership with Nvidia also creates hardware dependency. While Nvidia currently dominates the AI accelerator market, alternative architectures from AMD, custom chips from Google (TPUs) and Amazon (Trainium), and emerging competitors could shift the landscape by the time the Vera Rubin deployment is fully operational.

Conclusion

The Thinking Machines Lab and Nvidia partnership is significant not just for its scale but for what it represents about the current state of AI development. A year-old startup, led by a former OpenAI executive, can secure a compute commitment that rivals the infrastructure plans of the largest technology companies in the world. For Nvidia, the deal secures a massive hardware customer and validates the demand trajectory for its next-generation architecture. For the AI industry broadly, it signals that the compute arms race is accelerating rather than stabilizing, with implications for energy consumption, infrastructure investment, and the competitive dynamics of frontier model development.

Pros

  • Gigawatt-scale compute access puts Thinking Machines Lab on par with the largest AI developers for training frontier models
  • Nvidia strategic investment provides both capital and a deep technical partnership beyond standard hardware procurement
  • Tinker platform already generates revenue through enterprise open-source LLM customization services
  • Murati's track record at OpenAI (ChatGPT, Sora) lends credibility to the company's frontier AI ambitions
  • Focus on AI 'that people can shape' suggests differentiation through customizability and steerability

Cons

  • The $50 billion estimated infrastructure cost dwarfs current $2 billion in funding, raising sustainability questions
  • Vera Rubin deployment does not begin until early 2027, creating a gap in compute availability
  • Heavy Nvidia hardware dependency creates vendor lock-in risk if alternative architectures emerge
  • Undisclosed investment size and revenue figures make it difficult to assess the company's financial health

Key Features

Thinking Machines Lab, founded by former OpenAI CTO Mira Murati in February 2025, secured a multi-year strategic partnership with Nvidia on March 10, 2026. The deal includes at least one gigawatt of Nvidia Vera Rubin computing systems (GPUs with 336 billion transistors, 88-core Vera CPUs) with deployment starting in early 2027, at an estimated infrastructure cost of $50 billion. Nvidia made a significant undisclosed investment. The company has raised $2 billion at a $12 billion valuation and operates Tinker, a cloud platform for fine-tuning 13+ open-source LLMs using LoRA technology.

Key Insights

  • A gigawatt-scale compute commitment represents an order-of-magnitude increase over the infrastructure used for current frontier models like GPT-4 and Claude Opus 4
  • Thinking Machines Lab has raised $2 billion and secured a $50 billion hardware commitment in just over one year since founding, reflecting the AI industry's appetite for frontier compute bets
  • Nvidia's investment strategy of taking equity in major AI customers creates a network of hardware-dependent companies that reinforces its dominant market position
  • The Tinker platform's focus on open-source LLM customization positions the company in both frontier training and enterprise AI infrastructure markets simultaneously
  • Vera Rubin's 336-billion-transistor GPU architecture signals the next major step in AI accelerator capabilities beyond the current Blackwell generation
  • The partnership challenges assumptions that frontier AI development is limited to incumbent companies like OpenAI, Google, and Anthropic
  • Early 2027 deployment creates a significant gap between announcement and actual compute availability for training runs
  • The estimated $50 billion infrastructure cost far exceeds current funding, implying additional capital raises will be necessary
