Laude Institute Unveils First ‘Slingshots’ AI Grant Recipients

▼ Summary
– The Laude Institute launched its first Slingshots grants to advance AI science and practice through an accelerator program for researchers.
– The program provides resources like funding, compute power, and engineering support in exchange for a final work product such as a startup or open-source code.
– The initial cohort includes 15 projects, with a strong focus on addressing the challenging problem of AI evaluation.
– Several projects introduce new approaches to AI evaluation, including benchmarks for code optimization and white-collar AI agents.
– John Boda Yang leads the CodeClash project, which uses a dynamic, competition-based framework to assess coding ability and emphasizes the importance of third-party benchmarks.

The Laude Institute has revealed the inaugural recipients of its Slingshots AI grant program, an initiative designed to accelerate both the science and the practical application of artificial intelligence. The program functions as a specialized accelerator, giving researchers access to resources that are often scarce in traditional academic settings: funding, substantial computational power, and dedicated product and engineering support. In return, awardees commit to delivering a final work product, such as a startup, an open-source code repository, or another research artifact.
The first cohort comprises fifteen projects, with a pronounced emphasis on the difficult problem of AI evaluation. Several of the initiatives will be familiar to those who follow the tech industry, including the command-line coding benchmark Terminal Bench and the latest iteration of the long-running ARC-AGI project.
Other projects in the cohort bring new approaches to long-standing evaluation challenges. Formula Code, developed by researchers at Caltech and the University of Texas at Austin, aims to evaluate how effectively AI agents can optimize existing code, while the Columbia University-based BizBench proposes a comprehensive benchmark for “white-collar AI agents.” Additional grants support work on new reinforcement learning architectures and model compression techniques.
John Boda Yang, a co-founder of the well-known SWE-Bench, is also part of the inaugural group with a new project called CodeClash. Building on SWE-Bench’s success, CodeClash will evaluate coding ability through a dynamic, competition-based framework. Yang believes that consistent evaluation against independent, third-party benchmarks is a powerful driver of technological progress, and he warned of a future in which such benchmarks become proprietary and siloed within individual companies, a shift he views as limiting for the field’s overall progress.
(Source: TechCrunch)
