This Startup’s Racing to Build Self-Driving Car Software

Summary
– HyprLabs, a startup with a small team, is testing modified Tesla Model 3s in San Francisco to determine how quickly autonomous vehicle software can be built today.
– The company, led by Zoox cofounder Tim Kentley-Klay, has raised $5.5 million and aims to eventually build its own novel robots, described as a new category.
– HyprLabs is launching Hyprdrive, software that represents an advance in training autonomous vehicles, leveraging machine learning to reduce costs and human labor.
– The autonomous vehicle industry is emerging from a period of disappointment, with robotaxis expanding and new promises being made for self-driving personal cars.
– HyprLabs’ training approach sidesteps the industry’s historical debate between camera-only systems (like Tesla’s) and multi-sensor systems, instead using an “end-to-end” model that learns via reinforcement, a process likened to training a dog.

For the past eighteen months, a pair of modified white Tesla Model 3 sedans have been navigating the streets of San Francisco. These vehicles, each equipped with five additional cameras and a compact supercomputer, represent the quiet testing phase of a new startup. In an industry fixated on the potential and pitfalls of artificial intelligence, this company is tackling a fundamental challenge: determining the speed at which viable autonomous vehicle software can be developed today.
The startup, HyprLabs, is emerging from stealth with a lean team of seventeen people split between Paris and San Francisco, only eight of whom are full-time employees. Leading the venture is Tim Kentley-Klay, a co-founder of Zoox who departed the Amazon-owned company in 2018. With a modest $5.5 million in funding raised since 2022, HyprLabs harbors expansive goals, ultimately aiming to design and operate its own proprietary robots. Kentley-Klay describes the vision as “the love child of R2-D2 and Sonic the Hedgehog,” intended to pioneer a completely new product category.
Currently, the focus is on launching its software platform, Hyprdrive. The company positions this product as a significant advancement in how engineers train vehicles to drive autonomously. This progress mirrors broader shifts in robotics, fueled by machine learning innovations that promise to reduce both the financial cost and manual effort required for software training. These developments are injecting new energy into a field that once languished in a “trough of disillusionment,” after repeated failures to deploy robots in public spaces as promised. Today, robotaxis are operational in a growing number of cities, and automakers are making fresh commitments to bring self-driving features to consumer vehicles.
However, the journey from a system that drives competently to one that demonstrably exceeds human safety standards remains a formidable challenge, even for a small, agile team. “I can’t say to you, hand on heart, that this will work,” Kentley-Klay admits. “But what we’ve built is a really solid signal. It just needs to be scaled up.”
HyprLabs’ methodology represents a distinct departure from conventional industry practices for teaching vehicles to navigate. To understand this, some context is helpful. For years, a prominent debate divided the autonomous vehicle sector: the clash between camera-only systems, championed by Tesla, and multi-sensor approaches that incorporate lidar and radar, used by companies like Waymo and Cruise. Beneath this surface-level technical disagreement lay deeper philosophical divides.
Proponents of camera-only systems, such as Tesla, prioritized cost savings while planning for massive, scalable fleets. For over a decade, CEO Elon Musk’s strategy has centered on activating full self-driving capabilities across the vehicle fleet via a simple software update. A key advantage of this approach is the vast amount of visual data collected by customer cars during ordinary driving. This data feeds into what is known as an “end-to-end” machine learning model, refined through reinforcement learning. In this process, the system ingests images, like that of a bicycle, and directly outputs driving commands, such as steering left and easing off the accelerator to avoid a collision. As Carnegie Mellon University researcher Philip Koopman explains, “It’s like training a dog. At the end, you say, ‘Bad dog,’ or ‘Good dog.’”
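The "good dog / bad dog" idea can be made concrete with a deliberately tiny sketch. This is not HyprLabs' or Tesla's actual system: everything here is a simplifying assumption, including reducing the "camera feed" to a single number (an obstacle's left/right position) and substituting crude hill-climbing for the gradient-based reinforcement learning real systems use. What it preserves is the end-to-end shape: perception maps directly to a control command, and the only training signal is a scalar reward.

```python
import random

random.seed(42)  # reproducibility for this toy example

def policy(weights, x):
    """End-to-end mapping: perception in, steering command out.
    No hand-coded driving rules -- just learned weights.
    x is the (hypothetical) obstacle position from the camera,
    in [-1, 1]; negative = left, positive = right."""
    w, b = weights
    return w * x + b  # steering: negative = steer left, positive = right

def episode_reward(weights, n_frames=100, seed=0):
    """Scalar feedback only -- the 'good dog / bad dog' signal.
    +1 per frame whenever the car steers away from the obstacle."""
    rng = random.Random(seed)
    reward = 0
    for _ in range(n_frames):
        x = rng.uniform(-1, 1)      # obstacle position seen by the "camera"
        steer = policy(weights, x)
        if x * steer < 0:           # steered to the opposite side: good dog
            reward += 1
    return reward

# Training loop: perturb the weights at random and keep a change only
# if the episode's total reward improves. The policy is never told WHY
# a run was good or bad -- only the scalar score, like the dog analogy.
weights = [0.0, 0.0]
best = episode_reward(weights)
for step in range(200):
    candidate = [w + random.gauss(0, 0.5) for w in weights]
    r = episode_reward(candidate)
    if r > best:
        weights, best = candidate, r
```

After a couple hundred iterations the policy reliably learns a negative steering weight (steer left when the obstacle is right, and vice versa) without anyone ever encoding that rule, which is the point of the end-to-end approach: the behavior emerges from reward alone.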
(Source: Wired)


