
LMArena Hits $1.7B Valuation Just Four Months After Launch

Originally published on: January 6, 2026
Summary

– LMArena, a startup originating from a UC Berkeley research project, raised a $150 million Series A at a $1.7 billion valuation, led by Felicis and UC Investments.
– The company is known for its crowdsourced AI model performance leaderboards, which are fueled by user comparisons from over 5 million monthly users across 150 countries.
– Its platform tests and ranks a wide variety of AI models, including those from OpenAI, Google, Anthropic, and others, across tasks like text, vision, and reasoning.
– LMArena launched a commercial AI Evaluations service in September, achieving an annualized revenue run rate of $30 million by December.
– The startup’s rapid growth and popularity attracted significant venture capital, with its Series A including participation from firms like Andreessen Horowitz and Kleiner Perkins.

The rapid ascent of LMArena to a $1.7 billion valuation just months after its commercial debut underscores the intense market demand for independent AI benchmarking. The startup, which originated as a UC Berkeley research project called Chatbot Arena, announced a $150 million Series A funding round led by Felicis and UC Investments. This massive injection of capital arrives a mere four months after the company secured a $100 million seed round at a $600 million valuation, bringing its total funding to $250 million in under seven months.

LMArena has carved out a critical niche by operating crowdsourced AI model performance leaderboards. Its consumer-facing platform allows users to submit a prompt, which is then processed by two different AI models. The user then selects which model provided the superior response. These human preferences, gathered from over 5 million monthly users across 150 countries engaging in roughly 60 million conversations each month, form the backbone of its influential rankings. The leaderboards evaluate a wide array of models, including various iterations of OpenAI’s GPT, Google’s Gemini, Anthropic’s Claude, and xAI’s Grok, on tasks ranging from text generation and coding to vision and image creation.
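Arena-style leaderboards are typically derived from head-to-head votes using Elo or Bradley-Terry statistics. The sketch below is purely illustrative of that general idea, not LMArena's actual methodology; the model names, K-factor, and vote list are hypothetical.

```python
# Illustrative sketch: turning pairwise user votes into Elo-style ratings.
# NOT LMArena's implementation; model names and parameters are hypothetical.
from collections import defaultdict

def elo_update(ratings, winner, loser, k=32):
    """Adjust two models' ratings after one head-to-head vote."""
    expected_win = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1 - expected_win)
    ratings[loser] -= k * (1 - expected_win)

# Every model starts from the same baseline rating.
ratings = defaultdict(lambda: 1000.0)

# Hypothetical stream of (winner, loser) outcomes from user comparisons.
votes = [("model-a", "model-b"), ("model-a", "model-c"), ("model-b", "model-c")]
for winner, loser in votes:
    elo_update(ratings, winner, loser)

# Rank models by rating, highest first.
leaderboard = sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)
```

Each vote nudges the winner up and the loser down, with upsets against higher-rated models moving ratings more; aggregated over millions of votes, this yields a stable ordering.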

The company’s roots trace back to an open research initiative built by UC Berkeley researchers Anastasios Angelopoulos and Wei-Lin Chiang, initially supported by grants and donations. Its leaderboards quickly became an essential reference point, even an obsession, for AI developers. As LMArena transitioned to a revenue-generating business, it formed partnerships with major model creators like OpenAI, Google, and Anthropic to feature their flagship models for community evaluation. This move, however, sparked controversy; a group of competitors published a paper in April alleging these partnerships allowed model makers to manipulate the benchmarks, a claim the startup has strongly rejected.

A significant step in its commercial evolution came in September with the launch of AI Evaluations, a service through which enterprises, AI labs, and developers can commission the company to perform model testing using its vast community. The offering has proven remarkably successful, generating an "annualized consumption rate" (the company's term for annual recurring revenue) of $30 million as of December. This financial trajectory and the platform's widespread popularity attracted a prestigious roster of venture capital firms for the Series A round. Participants included Andreessen Horowitz, The House Fund, LDVP, Kleiner Perkins, Lightspeed Venture Partners, and Laude Ventures.

(Source: TechCrunch)

Topics

startup funding, AI evaluation, company valuation, AI leaderboards, venture capital, AI models, company history, revenue model, user engagement, benchmark controversy