
Reflection raises $2B to become America’s open AI lab

Summary

– Reflection raised $2 billion at an $8 billion valuation, a 15x increase from seven months ago, and positions itself as an open-source alternative to closed AI labs and a Western equivalent to Chinese AI firms.
– The startup was founded in March 2024 by former Google DeepMind researchers Misha Laskin and Ioannis Antonoglou, who have backgrounds in developing advanced AI systems like AlphaGo and Gemini.
– Reflection has recruited top talent from DeepMind and OpenAI, built an advanced AI training stack, and plans to release a frontier language model next year trained on tens of trillions of tokens.
– The company defines “open” as releasing model weights for public use while keeping datasets and training pipelines proprietary, targeting revenue from enterprises and governments for sovereign AI systems.
– Investors in the latest funding round include Nvidia, Sequoia, and Eric Schmidt, with the funds to be used for compute resources to train and release models starting early next year.

Reflection has secured a massive $2 billion investment, catapulting its valuation to $8 billion and marking a dramatic 15-fold increase from just seven months ago. Founded in March 2024 by former Google DeepMind researchers Misha Laskin and Ioannis Antonoglou, the startup is now positioning itself as a major player in the global AI landscape. Originally focused on autonomous coding agents, Reflection aims to serve as an open-source alternative to closed AI labs like OpenAI and Anthropic, while also positioning itself as a Western counterpart to leading Chinese AI firms such as DeepSeek.

The founders bring impressive credentials to the table. Laskin previously led reward modeling for DeepMind’s Gemini project, while Antonoglou co-created AlphaGo, the historic AI system that defeated a world champion in the complex board game Go. Their experience building advanced AI systems forms the core of their argument: that elite AI talent can develop cutting-edge models outside the traditional tech giants.

Alongside the funding announcement, Reflection revealed it has assembled a team of top researchers from DeepMind and OpenAI. The company has also constructed an advanced AI training infrastructure that it plans to make openly available. Crucially, Reflection claims to have “identified a scalable commercial model that aligns with our open intelligence strategy.”

Reflection currently employs around 60 people, primarily AI researchers and engineers specializing in infrastructure, data, training, and algorithm development, and it has already secured a compute cluster. According to CEO Laskin, the company aims to release a frontier-scale language model next year, trained on what he describes as “tens of trillions of tokens.”
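For a sense of scale, a common rule of thumb estimates dense-transformer training compute at roughly 6 × N × D FLOPs for N active parameters and D training tokens. The sketch below is purely illustrative; Reflection has not disclosed its model size, exact token count, or hardware mix, so both figures are assumptions:

```python
# Back-of-envelope training-compute estimate using the common ~6*N*D
# FLOPs approximation. Both inputs are hypothetical placeholders;
# Reflection has not published model size or exact token counts.
active_params = 100e9   # assumption: 100B active parameters (MoE)
tokens = 20e12          # "tens of trillions of tokens" -> assume 20T

train_flops = 6 * active_params * tokens        # ~1.2e25 FLOPs
h100_flops = 0.4 * 1e15                         # H100 at ~40% sustained utilization
gpu_days = train_flops / h100_flops / 86_400    # seconds per day
print(f"~{train_flops:.1e} FLOPs, ~{gpu_days:,.0f} H100-days")
```

Even under these conservative assumptions the run lands in the hundreds of thousands of GPU-days, which is why the bulk of the new funding is earmarked for compute.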

In a public statement, Reflection announced, “We built something once thought possible only inside the world’s top labs: a large-scale LLM and reinforcement learning platform capable of training massive Mixture-of-Experts (MoEs) models at frontier scale. We saw the effectiveness of our approach first-hand when we applied it to the critical domain of autonomous coding. With this milestone unlocked, we’re now bringing these methods to general agentic reasoning.”

The Mixture-of-Experts architecture represents a sophisticated framework that powers today’s most advanced large language models. Until recently, only major, closed AI laboratories possessed the resources to train these systems at scale. Chinese firm DeepSeek achieved a breakthrough by demonstrating how to train such models openly, with others like Qwen and Kimi following suit.
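The core idea is that a small routing network sends each token to only a few of many expert sub-networks, so total parameter count grows without a proportional increase in per-token compute. The following is a minimal, illustrative top-k MoE layer in PyTorch, not Reflection's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Minimal top-k Mixture-of-Experts layer (illustrative sketch only)."""
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)  # router: scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.gate(x)                  # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):             # each token visits only k experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

moe = TinyMoE()
y = moe(torch.randn(10, 64))  # 10 tokens, each processed by only 2 of 8 experts
```

Frontier-scale versions add load-balancing losses and expert parallelism across many GPUs, which is where the engineering difficulty, and until recently the closed labs' advantage, actually lies.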

Laskin expressed concern about this development, stating, “DeepSeek and Qwen and all these models are our wake up call because if we don’t do anything about it, then effectively, the global standard of intelligence will be built by someone else. It won’t be built by America.” He further noted that this situation places the United States and its allies at a competitive disadvantage, since many enterprises and sovereign nations avoid using Chinese models due to potential legal complications.

“So you can either choose to live at a competitive disadvantage or rise to the occasion,” Laskin remarked.

The response from the American technology community has been largely positive. David Sacks, the White House AI and Crypto Czar, publicly endorsed the initiative, writing, “It’s great to see more American open source AI models. A meaningful segment of the global market will prefer the cost, customizability, and control that open source offers. We want the U.S. to win this category too.”

Clem Delangue, co-founder and CEO of collaborative AI platform Hugging Face, described the funding as “great news for American open-source AI.” He added, “Now the challenge will be to show high velocity of sharing of open AI models and datasets (similar to what we’re seeing from the labs dominating in open-source AI).”

Reflection’s interpretation of “open” appears to emphasize accessibility over complete transparency, similar to the approaches taken by Meta with Llama or by Mistral. Laskin explained that while the company will release model weights (the learned parameters that define how a model behaves), it will keep datasets and full training pipelines largely proprietary.

“In reality, the most impactful thing is the model weights, because the model weights anyone can use and start tinkering with them,” Laskin noted. “The infrastructure stack, only a select handful of companies can actually use that.”
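In practice, “open weights” means anyone can download and run the model on their own hardware. A usage sketch follows, with the caveat that Reflection has not yet released a model; the repo id below is a hypothetical placeholder, and the pattern shown is simply how published open-weight checkpoints are typically consumed:

```python
# Hypothetical sketch of consuming open weights via Hugging Face.
# "reflection-ai/frontier-base" is a placeholder id; Reflection has not
# released a model or announced where its weights will be hosted.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("reflection-ai/frontier-base")
model = AutoModelForCausalLM.from_pretrained(
    "reflection-ai/frontier-base",
    device_map="auto",    # shard across available GPUs
    torch_dtype="auto",   # use the dtype the checkpoint was saved in
)
inputs = tok("Open weights let anyone", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```

This is exactly the “tinkering” Laskin describes: fine-tuning, quantizing, or self-hosting the weights requires no access to the proprietary training stack behind them.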

This balanced approach forms the foundation of Reflection’s business strategy. While researchers will enjoy free access to the models, revenue will come from large enterprises building products on Reflection’s technology and from governments developing what’s known as “sovereign AI”: AI systems built and controlled by individual nations.

“Once you get into that territory where you’re a large enterprise, by default you want an open model,” Laskin explained. “You want something you will have ownership over. You can run it on your infrastructure. You can control its costs. You can customize it for various workloads. Because you’re paying some ungodly amount of money for AI, you want to be able to optimize it as much as possible, and really that’s the market that we’re serving.”

Reflection has not yet launched its inaugural model, which will initially focus on text-based capabilities with multimodal features planned for future versions. The newly acquired funding will support the substantial compute resources required to train these new models, with the first release targeted for early next year.

The investment round attracted participation from prominent backers including Nvidia, Disruptive, DST, 1789, B Capital, Lightspeed, GIC, Eric Yuan, Eric Schmidt, Citi, Sequoia, and CRV, among others.

(Source: TechCrunch)
