
Ex-Cohere AI Lead Bets Against the Scaling Race

Summary

– AI labs are building massive data centers costing billions and consuming city-level energy to pursue scaling as the path to superintelligent AI systems.
– Researchers argue scaling large language models is reaching its limits, requiring new breakthroughs for meaningful AI performance improvements.
– Sara Hooker’s startup Adaption Labs focuses on creating AI that continuously learns from real-world experiences efficiently, challenging scaling approaches.
– Current reinforcement learning methods do not let production AI systems learn from their mistakes in real time, leaving deployed models unable to adapt.
– Industry skepticism about scaling grows as studies show diminishing returns and experts question long-term potential without true experiential learning.

The artificial intelligence sector currently finds itself locked in an intense competition centered on constructing massive data centers, with some facilities rivaling the size of entire urban districts. These colossal projects demand investments reaching into the billions of dollars and consume electrical power on a scale comparable to small cities. This trend is fueled by a widespread conviction in the principle of scaling, the notion that continually increasing computational resources for training existing AI architectures will inevitably lead to the creation of superintelligent systems.
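For context beyond the article itself, the "scaling" conviction rests on empirical scaling laws. In one widely cited formulation, the Chinchilla law of Hoffmann et al. (2022), pre-training loss falls as a power law in model size and data, which is precisely the shape that yields diminishing returns as budgets grow:

```latex
% Chinchilla scaling law (Hoffmann et al., 2022): loss L as a function of
% parameter count N and training tokens D. E is the irreducible loss; A and B
% are fitted constants, and the fitted exponents (roughly alpha ~ 0.34,
% beta ~ 0.28) mean each doubling of N or D buys a smaller absolute
% improvement than the last.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```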

However, a growing number of voices within the research community are questioning this approach, suggesting that the strategy of simply enlarging large language models may be approaching a point of diminishing returns. Sara Hooker, the former Vice President of AI Research at Cohere and a Google Brain alumna, is placing a significant bet on this very premise. Her new venture, Adaption Labs, co-founded with fellow industry veteran Sudip Roy, operates on the belief that the relentless scaling of LLMs has become an inefficient method for extracting greater performance from artificial intelligence.

Hooker announced her departure from Cohere to focus on what she describes as the most critical challenge: constructing thinking machines capable of genuine adaptation and continuous learning. In announcing the move, she said she had begun recruiting an “incredibly talent dense” founding team, signaling a serious commitment to her new direction.

During an interview, Hooker explained that Adaption Labs is developing AI systems designed to adapt and learn continuously from their real-world interactions, doing so with remarkable efficiency. She declined to disclose the specific technical methods or underlying architectures the company is employing. She articulated a clear critique of the current paradigm, stating, “There is a turning point now where it’s very clear that the formula of just scaling these models, scaling-pilled approaches which are attractive but extremely boring, hasn’t produced intelligence that is able to navigate or interact with the world.”

For Hooker, the core of genuine learning lies in adaptation. She offers a simple analogy: if you stub your toe on a table, you learn to walk more carefully around it in the future. While AI labs have attempted to replicate this concept using reinforcement learning, which lets models learn from errors in simulated environments, these methods fall short for systems already deployed with customers. Such production models cannot learn from their mistakes in real-time; they essentially continue to stub their toe repeatedly.
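Since Adaption Labs has not disclosed its methods, the following is purely illustrative context: a minimal Python sketch of the gap Hooker describes, in which a policy frozen at deployment keeps "stubbing its toe" on the same mistake while one allowed to update from live feedback stops. All class names, actions, and numbers here are hypothetical.

```python
# Illustrative only: a frozen deployed policy vs. one that keeps learning
# from live feedback. Nothing here reflects Adaption Labs' actual methods.
import random

class FrozenPolicy:
    """Trained once, then deployed: it repeats the same mistakes."""
    def __init__(self, action_values):
        self.action_values = dict(action_values)  # fixed at deployment

    def act(self):
        return max(self.action_values, key=self.action_values.get)

    def observe(self, action, reward):
        pass  # production feedback is simply discarded

class AdaptivePolicy(FrozenPolicy):
    """Same interface, but updates its value estimates from interactions."""
    def __init__(self, action_values, lr=0.1, epsilon=0.1):
        super().__init__(action_values)
        self.lr, self.epsilon = lr, epsilon

    def act(self):
        if random.random() < self.epsilon:  # keep exploring after deployment
            return random.choice(list(self.action_values))
        return super().act()

    def observe(self, action, reward):
        # Incremental update: stub your toe once, then walk more carefully.
        self.action_values[action] += self.lr * (reward - self.action_values[action])

def true_reward(action):
    # The real world disagrees with the stale offline estimates below.
    return {"route_a": 0.2, "route_b": 0.9}[action] + random.gauss(0, 0.05)

for Policy in (FrozenPolicy, AdaptivePolicy):
    policy = Policy({"route_a": 0.8, "route_b": 0.1})  # stale offline estimates
    total = 0.0
    for _ in range(500):
        a = policy.act()
        r = true_reward(a)
        policy.observe(a, r)
        total += r
    print(Policy.__name__, round(total, 1))  # the adaptive policy earns more
```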

The current alternative for enterprises is costly consulting services from major AI labs for custom fine-tuning, with reports suggesting companies like OpenAI require commitments of over ten million dollars. Hooker argues this creates an inaccessible ecosystem. “We have a handful of frontier labs that determine this set of AI models that are served the same way to everyone, and they’re very expensive to adapt,” she noted, expressing her belief that AI systems can learn efficiently from their environment, a development that would radically alter who controls and benefits from AI technology.

This skepticism toward scaling is gaining traction. A recent study from MIT indicated that the largest AI models might soon exhibit significantly reduced performance gains. The sentiment in tech hubs like San Francisco appears to be shifting as well, with prominent figures in the field publicly expressing doubts. Turing Award winner Richard Sutton, often called the father of reinforcement learning, recently argued that LLMs cannot truly scale because they do not learn from real-world experience. Similarly, early OpenAI member Andrej Karpathy voiced reservations about the long-term potential of RL.

These concerns are not entirely new. Towards the end of 2024, researchers were already warning that scaling models through pre-training on vast datasets was yielding smaller improvements. The industry responded by pivoting to new frontiers, such as AI reasoning models. These systems, which require extra time and computation to “think” through problems before responding, have pushed AI capabilities forward in 2025.
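As illustrative context only, one published recipe for spending extra inference-time compute is self-consistency sampling (Wang et al., 2022): draw several reasoning paths and majority-vote the final answer. The sketch below uses a stubbed model call and is not a description of how o1 or any specific lab's reasoning models actually work.

```python
# A generic illustration of extra test-time compute: sample several
# reasoning paths and majority-vote the answer ("self-consistency").
import random
from collections import Counter

def sample_reasoning_path(question):
    # Hypothetical stub standing in for an LLM call that reasons step by
    # step at a nonzero temperature and returns a final answer.
    return random.choices(["42", "41"], weights=[0.8, 0.2])[0]

def answer_with_more_compute(question, num_paths=16):
    """More sampled paths cost more compute but make the vote more reliable."""
    votes = Counter(sample_reasoning_path(question) for _ in range(num_paths))
    return votes.most_common(1)[0][0]

print(answer_with_more_compute("What is 6 * 7?"))  # "42" with high probability
```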

Major labs now appear convinced that scaling up reinforcement learning and reasoning models represents the next frontier. OpenAI researchers have stated they developed their first reasoning model, o1, precisely because they believed it would scale effectively. A recent collaborative study from Meta and Periodic Labs exploring how RL could enhance performance reportedly cost more than four million dollars, highlighting the extraordinary expense of current methodologies.

In stark contrast, Adaption Labs aims to demonstrate that learning from experience can be achieved at a fraction of the cost. The startup was reportedly in discussions earlier this fall to secure a seed funding round between twenty and forty million dollars, and according to sources familiar with the matter, that round has since closed. Hooker declined to comment on the financing but affirmed that the company is “set up to be very ambitious.”

Hooker’s background lends credibility to her new endeavor. She previously led Cohere Labs, where she specialized in training compact AI models for enterprise applications. This focus aligns with a broader trend in which smaller, more efficient models increasingly outperform their larger counterparts on benchmarks for coding, mathematics, and reasoning, a trend Hooker intends to push further.

She has also built a strong reputation for promoting global diversity in AI research, actively recruiting talent from underrepresented regions, including Africa. While Adaption Labs will establish a base in San Francisco, Hooker has confirmed her plans to build a worldwide team.

If Hooker and her team are correct about the inherent limitations of scaling, the implications for the AI industry would be profound. Billions of dollars have been poured into scaling large language models under the assumption that sheer size is the direct path to artificial general intelligence. It is increasingly plausible that genuine, adaptive learning could emerge as not only a more powerful approach but a vastly more efficient one, potentially reshaping the entire technological landscape.

(Source: TechCrunch)
