YK Research

Transaction Note

SpaceX–Anthropic Colossus 1 Lease

Why xAI leasing Colossus 1 to Anthropic is an asset-rotation trade ahead of a possible SpaceXAI IPO.

9 May 2026 · YK Research

The Mispricing

The transaction is a single-tenant lease of Colossus 1 to Anthropic. The key analytical distinction is that Colossus 1 is mixed-generation NVIDIA capacity, while xAI reportedly retains the newer homogeneous Blackwell capacity for its own training workloads.

  • xAI capacity: 550k+ H100e
  • Colossus 1 share: ~40%
  • Lease revenue: $5–6B/yr
  • IPO angle: recurring lease

If xAI has roughly 550k+ GPUs on an H100-equivalent basis, then leasing the 220k mixed-generation Colossus 1 transfers about 40% of capacity. The retained capacity is the all-Blackwell Colossus 2 cluster, which is better suited for large-scale training. Anthropic rents the older mixed cluster for inference, where the workload is less sensitive to synchronized step time.

The financial angle is straightforward: if the reported lease economics are close, Colossus 1 becomes recurring infrastructure revenue instead of underutilized training capacity. That matters ahead of any SpaceXAI financing or IPO process.

For Anthropic, the relevant point is timing. Its April capacity announcements include AWS, Google/Broadcom TPU capacity, Google Cloud capacity, and now SpaceX. Colossus 1 is small in GW terms versus the larger commitments, but it is expected to come online within the month.

Source: Anthropic primary announcement for SpaceX/Colossus capacity; user-supplied Mirae Asset Securities excerpt dated 8 May 2026 for lease economics and IPO framing; YK Research synthesis. Treat lease rates, IPO valuation/timing and loss figures as reported estimates, not audited disclosures.

Reported Colossus 1 Mix

The reported configuration is roughly 150k H100s, 50k H200s and 20k GB200s. That is enormous nominal compute, but not one clean training fabric. For large-scale training, the slowest or most failure-prone slice sets the pace. For inference, the same heterogeneity is much less poisonous.

Source: reported Colossus 1 mix per the user-supplied note: ~150k H100, ~50k H200, ~20k GB200.

MFU Was the Problem. Lease Yield Is the Solution.

A low model FLOPs utilization number is not just an engineering footnote. It is the P&L of the data center. The trick is that an 11% MFU training cluster can still become a high-utilization inference rental if a single tenant fills the whole thing.

Source: The Information MFU figure for xAI and the Meta/Google comparison, both per the user-supplied note.
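The MFU point can be made concrete with a toy calculation. The numbers below are illustrative assumptions (the per-GPU peak is hypothetical, not a vendor spec); only the ~11% figure comes from the note.

```python
# Minimal MFU sketch (illustrative numbers, not vendor specs).
# MFU = model FLOPs actually achieved / peak hardware FLOPs, same wall clock.
def mfu(achieved_flops: float, peak_flops: float) -> float:
    return achieved_flops / peak_flops

peak = 1.0e15                 # hypothetical per-GPU peak FLOP/s
achieved = 0.11 * peak        # the reported ~11% training figure
print(f"{mfu(achieved, peak):.0%} of the silicon is earning")  # 11%
# A per-GPU-hour lease bills on occupancy, not achieved FLOPs, so the
# owner's revenue no longer depends on the tenant hitting high MFU.
```

The design point is the last comment: a lease converts an engineering metric (MFU) into a billing metric (occupancy), which is why the same cluster can be a bad training asset and a good rental asset.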

Evidence Stack: What Is Hard vs. What Is Reported

Source quality matters. The capacity figures are partly supported by Anthropic and OpenAI primary sources. The lease economics and IPO framing are Mirae-reported estimates and should not be treated as disclosed contract terms.

Primary-source support
  • SpaceX / Colossus: Anthropic says it signed with SpaceX to use all compute capacity at Colossus 1: more than 300MW, over 220k NVIDIA GPUs, within the month.
  • AWS: Anthropic says the Amazon agreement secures up to 5GW, includes more than $100B of AWS commitments over 10 years, and brings nearly 1GW online by end-2026.
  • Google + Broadcom: Anthropic says it signed for multiple GW of next-gen TPU capacity, starting in 2027.
  • Microsoft + NVIDIA: Anthropic says it committed to $30B of Azure compute and up to 1GW of additional capacity.
  • OpenAI benchmark: OpenAI says Stargate intends to invest $500B over four years, with $100B deployed immediately.
Mirae / estimate layer
  • April blitz math: 13.8GW added in April and 14.8GW of cumulative committed capacity are useful aggregations, but should be attributed to the Mirae/user note.
  • Lease economics: $2.60/GPU-hour and $5–6B annual revenue are model estimates, not disclosed contract terms.
  • xAI offset: $6B annualized net loss and the “break-even hedge” framing are Mirae-reported.
  • IPO angle: SpaceXAI at ~$1.75T and June timing are market-color claims until corroborated by filings or formal investor docs.
  • Anthropic upside case: $5B spend to $15B ARR is a forecast, not realized revenue.
Primary links: Anthropic “Higher usage limits for Claude and a compute deal with SpaceX” (May 6, 2026); Anthropic “Amazon expand collaboration for up to 5GW” (Apr 20, 2026); Anthropic “Google and Broadcom ... multiple GW” (Apr 6, 2026); Anthropic “Microsoft, NVIDIA, and Anthropic strategic partnerships” (Nov 18, 2025); OpenAI “Announcing The Stargate Project” (Jan 21, 2025).
Asset Rotation: Training Pain Becomes IPO Revenue

Colossus 1 shifts from training use to single-tenant inference capacity.

Training mode
  • All workers sync every step
  • Fast GPUs wait for slow GPUs
  • Network latency compounds at 100k+ scale
  • One straggler can stall the batch
  • Reported MFU: ~11%

Inference mode (post-lease)
  • Requests shard more flexibly
  • Mixed GPUs can serve different workloads
  • Latency matters, but not one giant sync step
  • Capacity shortage makes utilization valuable
  • Anthropic gets capacity now

Source: YK Research interpretation based on Anthropic primary disclosures, Mirae note excerpts, reported Colossus 1 GPU mix, and MFU commentary.

1. Straggler effect

In synchronous training, every GPU has to finish the step before the job advances. Faster GB200s do not create full value if H100s, H200s, network stalls, failed nodes or stack issues determine the wall-clock pace. For inference, requests can be parceled out; the straggler problem is largely neutralized.
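The contrast can be sketched with a deterministic toy model. The worker count and slowdown factor below are assumptions chosen for illustration, not cluster telemetry.

```python
# Toy straggler model: synchronous training vs. sharded inference.
N = 1000                      # toy worker count; Colossus 1 is ~220k GPUs
fast, slow = 1.0, 5.0         # assumed relative per-step / per-request times
step_times = [fast] * (N - 10) + [slow] * 10   # 1% stragglers, 5x slower

# Synchronous training: every step waits on the slowest worker.
sync_step = max(step_times)
print(sync_step)              # 5.0 -- one straggler sets the whole pace

# Inference: requests shard independently, so aggregate throughput is the
# sum of per-GPU service rates and stragglers cost almost nothing.
throughput = sum(1.0 / t for t in step_times)
print(round(throughput / N, 3))   # 0.992 of ideal
```

Under these assumptions, 1% of GPUs running 5x slow cuts synchronous training to one-fifth speed but costs less than 1% of inference throughput, which is the whole argument for renting the mixed cluster out as serving capacity.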

2. Network topology

Ring-style collectives work well at smaller scale. At 100k+ GPUs, communication latency becomes a first-order bottleneck. But a single inference tenant avoids the worst multi-tenant jitter and queueing noise. Anthropic taking all 220k GPUs is technically cleaner than slicing the cluster across many customers.

3. Blackwell software gap

GB200/Blackwell changes power and scheduling behavior. The reported split is internally consistent: keep the homogeneous all-Blackwell Colossus 2 for xAI training, and lease the older mixed Colossus 1 for inference.

Anthropic Capacity Additions: 13.8GW in April

The SpaceX lease is small on power versus the hyperscaler blocks — 300MW against multi-GW commitments — but it is near-term capacity. Anthropic reportedly added 13.8GW of cumulative capacity in April: AWS expansion, Google/Broadcom TPU capacity, Google Cloud, and Colossus 1.

Source: Mirae Asset Securities excerpt dated 8 May 2026: AWS 5GW, Google + Broadcom TPU 3.5GW, Google Cloud 5GW, SpaceXAI Colossus 1 0.3GW.
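The Mirae-reported blocks sum cleanly to the headline figure, which also implies how much capacity predates April:

```python
# Arithmetic check of the Mirae-reported April additions (GW).
april_additions = {
    "AWS": 5.0,
    "Google + Broadcom TPU": 3.5,
    "Google Cloud": 5.0,
    "SpaceXAI Colossus 1": 0.3,
}
april_total = sum(april_additions.values())
print(round(april_total, 1))              # 13.8 GW added in April

# The 14.8GW cumulative figure implies ~1.0GW of pre-April commitments.
print(round(14.8 - april_total, 1))       # 1.0
```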

Roadmap Size vs. Near-Term Capacity

OpenAI's 30GW roadmap remains larger than Anthropic's reported 14.8GW committed capacity. The difference is timing: if Colossus 1 turns on within the month, Anthropic receives usable inference capacity now rather than later.

Source: Mirae Asset excerpt/user note: OpenAI 30GW 2030 roadmap; Anthropic 14.8GW cumulative committed capacity; 13.8GW added in April.

Capacity Split: He Did Not Give Away the Crown Jewels

If xAI has roughly 550k+ H100-equivalent GPUs, Colossus 1 is not the whole capacity base. It is about 40% of reported capacity and is mixed-generation. Colossus 2 is homogeneous Blackwell and is the more relevant cluster for xAI training.

Source: Mirae Asset excerpt/user note: xAI ~550k+ H100-equivalent GPUs; Colossus 1 ~220k units.

Lease Economics: Revenue vs. Burn

At roughly $2.60 per GPU-hour, 220k GPUs imply about $5.0B of annualized gross lease revenue at full occupancy. Mirae frames the range as $5–6B, roughly matching xAI's annualized 1Q26 net loss. Anthropic takes the other side because it expects the incremental compute to support higher API and subscription capacity.

Math: 220,000 GPUs × $2.60/GPU-hour × 8,760 hours = ~$5.0B/year before downtime/discounts; Mirae/user note frames the range as $5–6B.
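The same math with an occupancy sensitivity, using the Mirae-reported rate as the assumed input; downtime and discounts cut the figure linearly.

```python
# Sanity check of the reported lease economics (estimates, not contract terms).
gpus = 220_000
rate = 2.60          # $/GPU-hour, Mirae-reported estimate
hours = 8_760        # hours per year

gross = gpus * rate * hours
print(f"${gross / 1e9:.2f}B/yr at full occupancy")   # $5.01B/yr

# Occupancy sensitivity: the $5-6B range needs near-full utilization
# (or pricing above $2.60); each 10% of downtime costs ~$0.5B.
for occ in (1.00, 0.90, 0.80):
    print(f"{occ:.0%} occupancy -> ${gross * occ / 1e9:.2f}B")
```

Note that full occupancy at $2.60 lands at the bottom of the reported $5–6B range, so the top of the range implies either a higher blended rate or revenue beyond the bare GPU-hour lease.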

Strategic Context: OpenAI Pressure

The deal also has a competitive dimension. Musk is suing OpenAI while supplying a 220k+ GPU, 300MW inference block to Anthropic. The legal track keeps pressure on OpenAI's governance; the capacity track gives Anthropic more near-term serving capacity.

1. Legal front

The Musk–Altman dispute keeps attention on OpenAI's governance structure.

2. Capacity front

Anthropic gets near-term capacity rather than only a long-dated commitment.

3. IPO front

SpaceXAI gets a single-tenant revenue contract that can be underwritten separately from model-development losses.
Source: Mirae Asset Securities excerpt dated 8 May 2026; YK Research interpretation.

Deal Implications: What This Actually Says

The transaction says three specific things: Anthropic needed near-term inference capacity enough to take the entire Colossus 1 block; SpaceXAI monetized a mixed cluster that was less efficient for xAI training; and OpenAI now faces a competitor with more near-term serving capacity.

Anthropic

The deal is about deliverability. Delivering 300MW and 220k+ GPUs within the month is more useful than a larger roadmap that arrives years later. It directly supports Claude rate limits and inference availability.

SpaceXAI / xAI

The deal turns Colossus 1 from low-MFU training capacity into contracted infrastructure revenue. That supports the financing case before a possible IPO.

OpenAI

OpenAI still has the larger Stargate roadmap, but Anthropic narrows the near-term capacity gap if Colossus 1 is delivered on schedule.
Source: Anthropic SpaceX compute announcement; Anthropic AWS/Google/Microsoft compute announcements; Mirae Asset Securities excerpt for the transaction interpretation.

What Breaks the Thesis

1. Colossus 1 underdelivers for inference

The thesis assumes the mixed H100/H200/GB200 cluster is much more usable for inference than training. If reliability, networking, or power behavior still limits serving throughput, the lease economics weaken.

2. Deliverability slips

The Anthropic thesis depends on fast capacity conversion. If Colossus 1 does not come online this month, or if 13.8GW of commitments turn into slow permitting/power delays, the practical gap with OpenAI widens again.

3. Contract economics are worse than estimated

The $2.60/GPU-hour and $5–6B annual revenue figures are estimates. If pricing is lower, utilization is not full, or SpaceXAI carries heavy operating costs, the break-even hedge is less powerful.

4. IPO assumptions are too aggressive

The ~$1.75T SpaceXAI valuation and June timing assume investors will underwrite recurring lease cash flow. Any sign that the Anthropic revenue is low-margin, short-duration, or operationally unstable weakens that case.

Positioning

Track three items after the deal: whether Colossus 1 comes online on schedule, whether Anthropic raises usage limits further, and whether SpaceXAI uses the contract as evidence of recurring infrastructure revenue.

  • Watch #1: whether Colossus 1 actually comes online within the month and whether Claude rate limits improve materially.
  • Watch #2: whether Anthropic discloses more capacity economics, usage growth, or enterprise/API demand after the SpaceX block is live.
  • Watch #3: whether SpaceXAI uses the lease as explicit pre-IPO proof of recurring infrastructure revenue.
  • Watch #4: whether OpenAI responds with nearer-term capacity announcements rather than only long-dated Stargate roadmap numbers.

The core question is whether the leased GPUs become usable inference capacity on the stated timeline. If they do, Anthropic improves near-term product capacity and SpaceXAI improves its financing case. If not, the economic impact is lower than the headline capacity implies.

Not financial advice. Research note for process and positioning only. Past performance does not imply future results; manage risk first.