Starcloud Raises $170M to Build Space Data Centers

▼ Summary
– Starcloud reached a $1.1 billion valuation after its Series A funding round, led by Benchmark and EQT Ventures, making it a fast-growing unicorn.
– The company has launched its first satellite with an Nvidia H100 GPU and plans to deploy more powerful versions, including a spacecraft designed for SpaceX’s Starship.
– Its business model aims to make orbital data centers cost-competitive with Earth-based ones, but this depends on unproven technology and lower launch costs from rockets like Starship.
– The company faces significant industry challenges, including the limited number of advanced GPUs in orbit and technical hurdles like power generation, cooling, and synchronizing multiple spacecraft.
– Starcloud competes with other space data center ventures and sees potential coexistence with SpaceX, which is pursuing a different primary use case for its own orbital compute plans.

A recent $170 million investment has propelled the space computing startup Starcloud to a $1.1 billion valuation, making it one of the fastest companies to reach unicorn status after a Y Combinator debut. The Series A round, led by Benchmark and EQT Ventures, underscores growing investor confidence in orbital data centers as a solution to the resource and political constraints hampering terrestrial expansion. The company’s ambitious vision, however, hinges on unproven technology and immense capital outlays.
With total funding now at $200 million, Starcloud has already launched its initial satellite, equipped with an Nvidia H100 GPU, in November 2025. A more advanced model, Starcloud 2, is slated for launch later this year. This upgraded satellite will feature multiple GPUs, including an Nvidia Blackwell chip and an AWS server blade, alongside a bitcoin mining computer. Looking further ahead, the company plans to develop a dedicated data center spacecraft, Starcloud 3, designed for launch aboard SpaceX’s Starship rocket.
CEO Philip Johnston describes Starcloud 3 as a three-ton, 200-kilowatt craft engineered to fit within the “pez dispenser” deployment system used for Starlink satellites. He believes this could become the first orbital data center with costs competitive against Earth-based facilities, potentially achieving rates near $0.05 per kilowatt-hour. This projection assumes commercial launch costs will fall to approximately $500 per kilogram. The central challenge is timing, as Starship is not yet operational. Johnston anticipates commercial access opening in 2028 or 2029, acknowledging that truly cost-competitive power in space awaits a new generation of high-cadence rocket launches, likely in the 2030s.
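The launch economics behind these projections can be sketched with back-of-envelope arithmetic from the figures quoted above (a three-ton, 200-kilowatt craft at a projected $500 per kilogram); this is an illustration of the cost structure, not a company calculation:

```python
# Back-of-envelope launch economics for a Starcloud 3-class craft,
# using the figures quoted in the article.
mass_kg = 3_000           # three-ton spacecraft
power_kw = 200            # 200 kilowatts of compute capacity
launch_cost_per_kg = 500  # projected Starship-era price, $/kg

launch_cost = mass_kg * launch_cost_per_kg  # total cost to orbit
cost_per_kw = launch_cost / power_kw        # launch cost per kW of capacity

print(f"launch cost: ${launch_cost:,.0f}")   # $1,500,000
print(f"per kilowatt: ${cost_per_kw:,.0f}")  # $7,500
```

At $500 per kilogram, getting the hardware to orbit adds $7,500 per kilowatt of capacity; at today’s far higher Falcon 9 prices, that figure balloons, which is why Johnston ties cost-competitiveness to a frequently flying Starship.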
“If it ends up being delayed, we’ll just carry on launching the smaller versions on Falcon 9,” Johnston stated. “We’re not going to be competitive on energy costs until Starship is flying frequently.”
The company is pursuing a dual business model. Initially, it sells processing power to other spacecraft, such as analyzing data for Capella Space’s radar satellites. The long-term vision involves using future, more powerful distributed orbital data centers to capture workloads from terrestrial counterparts once launch economics improve. This highlights the nascent state of the entire industry. For instance, when Nvidia CEO Jensen Huang recently presented the Vera Rubin Space-1 chip modules, he did not mention that none had yet been produced or distributed to partners.
The scale disparity is stark. While dozens of advanced GPUs are in orbit, Nvidia sold nearly 4 million to Earth-based hyperscalers in 2025. Furthermore, SpaceX’s vast Starlink network of 10,000 satellites generates roughly 200 megawatts of power. In contrast, over 25 gigawatts of data center capacity is under construction in the U.S. alone.
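The scale gap follows directly from the figures above; a quick calculation makes it concrete:

```python
# Scale comparison, computed from the figures in the article.
starlink_sats = 10_000   # approximate Starlink fleet size
starlink_total_mw = 200  # combined Starlink power generation, MW
us_construction_gw = 25  # U.S. data center capacity under construction, GW

# Average power per Starlink satellite (MW -> kW)
avg_kw_per_sat = starlink_total_mw * 1_000 / starlink_sats

# U.S. terrestrial buildout vs. the entire Starlink fleet
ratio = us_construction_gw * 1_000 / starlink_total_mw

print(f"average Starlink satellite: {avg_kw_per_sat:.0f} kW")  # 20 kW
print(f"terrestrial buildout vs. Starlink fleet: {ratio:.0f}x")  # 125x
```

The average Starlink satellite produces about 20 kilowatts, an order of magnitude below the 200-kilowatt Starcloud 3 design, while the U.S. terrestrial buildout alone is roughly 125 times the entire fleet’s output.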
Johnston contends Starcloud holds a significant lead, having deployed the first terrestrial-grade GPU in orbit. This milestone allowed the company to train an AI model and run a version of Gemini in space, a claimed first. Beyond performance, the mission yielded crucial data on operating high-power chips in the space environment. “An H100 is probably not the best chip for space, to be honest, but the reason we did it is we wanted to prove that we could run state of the art terrestrial chips in space,” Johnston explained. This hard-won knowledge, gained after another GPU failed during launch, will inform future designs.
Substantial technical challenges remain, including efficient power generation and thermal management for high-performance chips. The Starcloud 2 satellite will carry the largest deployable radiator ever flown on a private satellite, with at least two more iterations planned. Synchronization presents another major hurdle. The largest data center workloads, like AI training, require hundreds or thousands of GPUs working in unison. Achieving this in orbit demands either massive single spacecraft or extremely reliable laser links between formations of smaller satellites. Most industry players expect these complex training workloads will follow simpler inference tasks in the developmental timeline.
Starcloud is not alone in this frontier. Competitors like Aetherflux, Google’s Project Suncatcher, and Aethero, which launched Nvidia’s first space-based Jetson GPU in 2025, are all developing space data center ventures. The most formidable potential entrant is SpaceX itself, which has sought government permission to operate up to a million satellites for distributed space computing.
Confronting SpaceX is a daunting prospect, but Johnston sees a path for coexistence. “They are building for a slightly different use case than us,” he noted. “They’re mainly planning on serving Grok and Tesla workloads. It may be at some point that they offer a third party cloud service, but what I think they are unlikely to do is what we’re doing, an energy and infrastructure player.”
(Source: TechCrunch)