Speedata Secures $44M to Challenge Nvidia in AI Chips

Summary
– Speedata, a Tel Aviv-based startup, raised $44M in Series B funding, bringing its total funding to $114M for its analytics processing unit (APU) designed for big data and AI workloads.
– The APU is purpose-built for data processing, aiming to replace racks of servers with a single chip for faster performance, unlike GPUs, which were repurposed for analytics.
– Founded in 2019 by experts in Coarse-Grained Reconfigurable Architecture (CGRA), Speedata targets inefficiencies in general-purpose processors handling complex data analytics workloads.
– Speedata’s APU currently supports Apache Spark; the company plans to expand to all major data analytics platforms and will publicly showcase the chip at Databricks’ Data & AI Summit in June.
– The startup claims its APU completed a pharmaceutical workload 280x faster than non-specialized processors; it finalized its first APU design for manufacturing in late 2024.
Speedata, an emerging player in the semiconductor space, has secured $44 million in Series B funding to advance its specialized analytics processing unit (APU) designed for big data and AI workloads. The latest investment brings the Tel Aviv-based startup’s total funding to $114 million, signaling strong investor confidence in its mission to challenge industry giants like Nvidia.
The funding round was spearheaded by existing backers, including Walden Catalyst Ventures, 83North, Koch Disruptive Technologies, Pitango First, and Viola Ventures. Notable strategic investors also joined, among them Intel CEO and Walden Catalyst managing partner Lip-Bu Tan and Mellanox Technologies co-founder Eyal Waldman.
Unlike traditional GPUs, which were originally built for graphics and later adapted for AI tasks, Speedata’s APU is engineered specifically for data analytics. The company argues that general-purpose processors, and even repurposed GPUs, fail to address the unique demands of modern data workloads. Adi Gelvan, Speedata’s CEO, emphasizes that the APU can outperform entire server racks while consuming far less energy.
Founded in 2019 by a team of six experts, including pioneers in Coarse-Grained Reconfigurable Architecture (CGRA), Speedata set out to solve a critical inefficiency: complex data analytics workloads often require sprawling server farms, which its APU consolidates into a single, high-performance chip. Gelvan describes this as the culmination of decades of research, now poised to redefine how enterprises process data.
Currently, the APU is optimized for Apache Spark, but Speedata intends to expand compatibility to all major analytics platforms. The company is already engaging with undisclosed enterprise clients and plans to unveil its technology at Databricks’ Data & AI Summit in June. Early benchmarks are promising: in one pharmaceutical use case, the APU completed a task in 19 minutes, a staggering 280x faster than conventional processors.
With its first APU design finalized and manufacturing underway, Speedata is transitioning from development to commercialization. Gelvan highlights growing demand from enterprises eager to adopt the technology, positioning the startup for rapid market expansion. As the race for AI-optimized hardware heats up, Speedata aims to establish its APU as the new standard for data analytics, mirroring the dominance GPUs hold in AI training.
(Source: TechCrunch)