
Distributed Storage Startup Challenges Cloud Giants

Summary

– AI companies’ demand for computing power has boosted specialized providers like CoreWeave, Together AI, and Lambda Labs, which offer distributed compute capacity.
– Most companies still store data with AWS, Google Cloud, and Microsoft Azure, whose systems keep data close to their own compute, not distributed across clouds.
– Tigris Data is building a distributed, AI-native storage platform to support modern AI workloads by replicating data to where GPUs are and providing low-latency access.
– Tigris addresses issues with big cloud providers, including high egress fees and latency, which can slow AI model performance and increase costs.
– The startup raised $25 million in Series A funding to expand its data centers globally, supporting growth driven by demand from AI startups and regulated industries.

The soaring demand for artificial intelligence has created an unprecedented need for computational resources, fueling the rise of specialized providers like CoreWeave, Together AI, and Lambda Labs. These companies have attracted significant investment by offering distributed computing power tailored for intensive AI applications. Despite this shift toward decentralized computing, the vast majority of businesses continue to rely on the three dominant cloud providers, AWS, Google Cloud, and Microsoft Azure, for their data storage needs. These legacy storage systems were originally designed to keep data tightly coupled with their own compute resources, not to support workloads spread across multiple clouds or geographic regions.

Ovais Tariq, co-founder and CEO of Tigris Data, observes a clear trend. “Modern AI workloads and the infrastructure supporting them are increasingly opting for distributed computing over the large, centralized cloud model,” he explained. “Our goal is to deliver that same choice for storage, recognizing that without flexible storage, compute resources are essentially useless.”

Tigris, established by the engineering team behind Uber’s internal storage platform, is constructing a network of localized data centers. The startup claims its platform is built specifically for AI, dynamically moving with a company’s compute resources. Tariq detailed its capabilities: “Our system automatically replicates data to wherever GPUs are located, efficiently handles billions of small files, and delivers the low-latency access required for model training, inference, and agentic workloads.”

To accelerate this vision, Tigris has secured a $25 million Series A funding round. The investment was led by Spark Capital, with continued participation from existing backers, including Andreessen Horowitz. This financial backing empowers the startup to challenge the established players, whom Tariq collectively refers to as “Big Cloud.”

Tariq argues that the incumbent providers not only charge more but also deliver a less efficient service. A significant point of contention is the egress fee, often called a “cloud tax,” which customers must pay to migrate their data to a different provider or to download it for use elsewhere. This creates a financial barrier for companies seeking to leverage cheaper GPUs or run simultaneous training jobs in different parts of the world. Batuhan Taskaya, head of engineering at Fal.ai, a Tigris customer, confirmed the impact, noting that these fees once constituted the bulk of his company’s cloud expenditure.
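To make the scale of these fees concrete, here is a back-of-the-envelope sketch. The per-GB rate and dataset size below are hypothetical figures chosen for illustration, not any provider's actual pricing:

```python
# Illustrative only: how egress fees compound when a training dataset
# repeatedly leaves a cloud. The $0.09/GB rate is a hypothetical figure.

def egress_cost(dataset_gb: float, rate_per_gb: float) -> float:
    """Cost in dollars to move `dataset_gb` out of a cloud at `rate_per_gb`."""
    return dataset_gb * rate_per_gb

# A 50 TB training set moved out once at a hypothetical $0.09/GB:
one_transfer = egress_cost(50_000, 0.09)

# Re-training in three regions each month for a year multiplies the bill:
annual = one_transfer * 3 * 12
print(f"one transfer: ${one_transfer:,.0f}, annual: ${annual:,.0f}")
```

Even at modest rates, repeated cross-cloud transfers can dominate a storage bill, which is consistent with Taskaya's observation that egress once made up the bulk of Fal.ai's cloud spend.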

However, Tariq believes egress fees are merely a symptom of a more fundamental issue. “The deeper problem is a centralized storage architecture that cannot keep pace with a decentralized, high-speed AI ecosystem,” he stated. This often results in problematic latency, especially for the generative AI startups that form the core of Tigris’s 4,000-plus customer base. These companies work with massive, latency-sensitive datasets for image, video, and voice models.

“Consider interacting with a local AI agent processing audio,” Tariq illustrated. “You need the absolute lowest latency possible. That requires both your compute and your storage to be local.” He added that large cloud platforms are not optimized for modern AI tasks, where streaming huge datasets for training or performing real-time inference across regions can create performance bottlenecks. Localized storage ensures data is retrieved faster, enabling developers to run AI workloads more reliably and cost-effectively on decentralized clouds.
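The latency argument can be sketched with a simple model (assumed numbers, not measurements): when a training job reads many small files from object storage, per-request round-trip time, not bandwidth, tends to dominate, so moving storage next to the GPUs matters more than raw link speed:

```python
# Back-of-the-envelope sketch: sequential fetch time for many small files.
# RTTs, file count, and link speed are assumptions for illustration only.

def fetch_seconds(num_files: int, file_kb: float, rtt_ms: float, gbps: float) -> float:
    """Total time to fetch `num_files` sequentially: one round trip per file
    plus transfer time over a `gbps` gigabit/second link."""
    transfer = num_files * (file_kb * 8 / 1e6) / gbps  # KB -> gigabits
    round_trips = num_files * rtt_ms / 1000
    return round_trips + transfer

# One million 64 KB files with storage in the same data center (~1 ms RTT):
local = fetch_seconds(1_000_000, 64, rtt_ms=1, gbps=10)
# The same reads across regions (~80 ms RTT): latency swamps transfer time.
remote = fetch_seconds(1_000_000, 64, rtt_ms=80, gbps=10)
print(f"local: {local:.0f} s, cross-region: {remote:.0f} s")
```

In this toy model the transfer time is identical in both cases; the entire gap comes from round-trip latency, which is the bottleneck localized storage removes.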

Taskaya from Fal.ai confirmed the benefit: “Tigris allows us to scale workloads across any cloud by providing a consistent filesystem view of our data from all locations, all without any egress charges.”

Beyond performance and cost, there are other compelling reasons for companies to keep data near their distributed compute options. In heavily regulated sectors such as finance and healthcare, a major obstacle to AI adoption is the stringent requirement for data security and governance. Furthermore, Tariq points to a growing corporate desire for data ownership, citing Salesforce’s recent move to block AI competitors from using Slack data as a prime example. “Businesses are becoming acutely aware of how valuable their data is; it’s the fuel for large language models and AI systems,” he said. “They want greater control and are increasingly reluctant to let another entity manage it.”

With its new capital, Tigris plans to continue expanding its network of data storage centers to meet rising demand. The company has experienced remarkable growth, scaling eightfold annually since its founding in late 2021. Currently operating data centers in Virginia, Chicago, and San Jose, Tigris aims to extend its footprint further across the United States and into key international markets, including London, Frankfurt, and Singapore.

(Source: TechCrunch)
