
When AI Starts Building Itself: What Happens Next

Summary

– Richard Socher, founder of You.com, launched Recursive Superintelligence, a San Francisco startup with $650 million in funding, aiming to create a recursively self-improving AI model.
– The startup’s unique approach uses open-endedness to achieve recursive self-improvement, where AI autonomously identifies and fixes its own weaknesses without human involvement.
– Socher is joined by co-founders including Peter Norvig, Tim Shi, and Tim Rocktäschel, who previously led open-endedness and self-improvement research at Google DeepMind.
– Recursive Superintelligence plans to ship its first product in quarters, not years, and Socher sees the company as a viable product-focused business, not just a research lab.
– Socher believes compute will become a critical resource for deciding which problems to solve with AI, such as cancer or viruses, as recursive self-improvement accelerates.

Richard Socher has long been a recognizable name in artificial intelligence, best known for founding the early chatbot startup You.com and for his prior work on ImageNet. Now, he is stepping into the latest wave of research-driven AI ventures with Recursive Superintelligence, a San Francisco-based startup that emerged from stealth on Wednesday backed by $650 million in funding.

Socher is joined in this new endeavor by a distinguished group of AI researchers, including Peter Norvig and Cresta co-founder Tim Shi. Their collective mission is to build a recursively self-improving AI model: one that can autonomously identify its own weaknesses and redesign itself to fix them, without any human intervention. This concept has long been considered a holy grail in modern AI research.

I spoke with Socher over Zoom after the launch, exploring Recursive’s distinct technical strategy and why he doesn’t view this new project as a “neolab,” the informal term for a new breed of AI startups that prioritize research over product development.

This interview has been edited for clarity and length.

Recursive self-improvement is a hot topic these days. Many labs seem to be chasing it. What makes your approach different?

Our core differentiator is using open-endedness to achieve recursive self-improvement, something no one has accomplished yet. It’s a notoriously elusive goal. Many people assume it happens automatically when you ask an AI to improve something, like a machine learning system or a piece of writing. But that’s just improvement, not recursive self-improvement.

Our primary focus is building truly recursive, self-improving superintelligence at scale. That means the entire cycle of ideation, implementation, and validation of research ideas would be automated. Initially, that applies to AI research ideas, and eventually to any kind of research, even in physical domains. It becomes especially powerful when AI works on itself, developing a new kind of self-awareness of its own shortcomings.
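The ideation–implementation–validation cycle Socher describes can be pictured as a loop that only keeps changes which measurably improve the system. The following is a minimal, purely illustrative sketch, not Recursive’s actual method: the “model” is just a number, the `ideate`, `implement`, and `validate` functions are invented stand-ins, and benchmark performance is faked as distance to a fixed target.

```python
import random

def ideate(rng):
    # Hypothetical "research idea": a random tweak to the model's parameter.
    return rng.uniform(-1.0, 1.0)

def implement(model, idea):
    # Apply the proposed change, producing a candidate model.
    return model + idea

def validate(candidate, target=10.0):
    # Score the candidate; closeness to a fixed target stands in for
    # benchmark performance (higher is better, 0 is perfect).
    return -abs(target - candidate)

def self_improvement_loop(steps=200, seed=0):
    rng = random.Random(seed)
    model, score = 0.0, validate(0.0)
    for _ in range(steps):
        idea = ideate(rng)                  # ideation
        candidate = implement(model, idea)  # implementation
        new_score = validate(candidate)     # validation
        if new_score > score:               # keep only verified improvements
            model, score = candidate, new_score
    return model, score
```

The point of the toy is the structure, not the math: each pass through the loop is automated, and the system’s current state is itself the thing being proposed against and scored, which is the distinction Socher draws between mere improvement and recursive self-improvement.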

You mentioned open-endedness. Does that have a specific technical meaning?

Yes, absolutely. Tim Rocktäschel, one of our co-founders, previously led the open-endedness and self-improvement teams at Google DeepMind. He worked on Genie 3, a world model that’s a great example of open-endedness. You can describe any concept, any world, any agent, and it generates an interactive version.

Think about biological evolution. Animals adapt to their environment, and then others counter-adapt. This process can continue for billions of years, and interesting developments keep emerging. That’s how we developed eyes.

Another example is rainbow teaming, from another paper by Tim. You’re probably familiar with red teaming in cybersecurity.

Right, in cybersecurity, red teaming involves testing for vulnerabilities.

Exactly. In the LLM context, red teaming means trying to get the model to do something harmful, like explain how to build a bomb, and ensuring it refuses. Humans can spend a long time crafting these examples. But what if you pitted a second AI against the first, tasking it with finding every possible way to make the first AI break its rules? They can iterate millions of times.

You essentially allow two AIs to co-evolve. One keeps attacking, exploring multiple angles, hence the “rainbow” analogy. The first AI gets inoculated, becoming safer over time. This idea from Tim Rocktäschel is now used in all major labs.
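The co-evolution dynamic described above can be sketched in a few lines. This is an illustrative toy only, assuming invented stand-ins: the attack “angles” are placeholder strings rather than real jailbreak prompts, and “inoculation” is modeled as the defender memorizing each attack that ever succeeded.

```python
import random

def coevolve(rounds=50, seed=0):
    rng = random.Random(seed)
    # Hypothetical attack "angles": codes standing in for jailbreak styles.
    angles = [f"angle-{i}" for i in range(20)]
    refused = set()  # attacks the defender has been inoculated against
    breaks = 0
    for _ in range(rounds):
        attack = rng.choice(angles)  # attacker explores many angles ("rainbow")
        if attack in refused:
            continue                 # defender safely refuses this one now
        breaks += 1                  # the attack succeeded once...
        refused.add(attack)          # ...and the defender is inoculated
    return breaks, len(refused)
```

Because each angle can succeed at most once before the defender learns to refuse it, the number of successful breaks is capped by the attacker’s repertoire, which is the sense in which the attacked model “gets inoculated, becoming safer over time.”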

How do you know when it’s done? I suppose it’s never truly finished.

Some of these processes are never finished. You can always become more intelligent, better at programming and math, and so on. There are theoretical bounds on intelligence (I’m working on formalizing those now), but they’re astronomical. We’re very far from those limits.

As a neolab, you’re expected to do something the major labs aren’t. Does that mean you don’t think they’ll reach recursive self-improvement with their current approaches?

I can’t really comment on what they’re doing, but I do believe we’re approaching it differently. We fully embrace open-endedness, and our entire team is laser-focused on that vision. Our researchers have been studying this and publishing papers in this space for the past decade. They have a track record of pushing the field forward and shipping real products. Tim Shi built Cresta into a unicorn. Josh Tobin was one of the first people at OpenAI and eventually led their Codex and deep research teams.

I actually struggle a bit with the “neolab” label. I don’t see us as just a lab. I want us to become a viable company with amazing products that people love and that have a positive impact on humanity.

So when can we expect your first product?

I’ve thought about that a lot. The team has made so much progress that we might actually pull our timelines forward from what we initially assumed. But yes, there will be products. You’ll have to wait quarters, not years.

One idea about recursive self-improvement is that once such a system exists, compute becomes the only limiting resource. The faster you run it, the faster it improves, and human input becomes irrelevant. Do you think that’s where we’re headed?

Compute is not to be underestimated. In the future, a critical question will be: how much compute does humanity want to allocate to which problems? Here’s this cancer, here’s that virus: which do you solve first? How much processing power do you give it? It becomes a matter of resource allocation. That will be one of the biggest questions in the world.

(Source: TechCrunch)
