Signal for AI Secures $4.2M to Boost Confident Security

Summary
– AI adoption in regulated industries like healthcare and finance is slowed by concerns over data privacy, as tech companies often retain user data for model improvement or security.
– Confident Security, a startup, offers CONFSEC, an end-to-end encryption tool that prevents AI providers from storing or accessing user prompts and metadata.
– The company emerged from stealth with $4.2M in seed funding, aiming to act as a privacy intermediary between AI vendors and enterprise clients.
– CONFSEC is modeled after Apple’s Private Cloud Compute, using encryption and anonymization to ensure data isn’t logged, used for training, or accessed by third parties.
– Confident Security’s solution is production-ready and in talks with banks, browsers, and search engines to integrate privacy guarantees into AI infrastructure.

Businesses embracing AI face growing concerns about data privacy, especially in regulated industries where sensitive information must remain secure. While major tech companies collect user data to refine their models, this practice creates hesitation among healthcare, finance, and government organizations wary of unauthorized access or misuse. A new startup aims to bridge this trust gap with encryption technology designed specifically for AI interactions.
San Francisco’s Confident Security has emerged from stealth with $4.2 million in seed funding to develop CONFSEC, an end-to-end encryption tool that prevents AI providers from storing, viewing, or using prompts and metadata for training. The company positions itself as “the Signal for AI,” ensuring enterprises can adopt AI without compromising confidentiality.
Founder and CEO Jonathan Mortensen emphasizes that once data leaves a user’s control, privacy diminishes; CONFSEC is designed to eliminate that risk. The system acts as an intermediary between AI vendors and clients, offering hyperscalers, governments, and enterprises a way to integrate AI tools securely. Even AI companies themselves could adopt the technology to reassure enterprise customers hesitant about data exposure.
Modeled after Apple’s Private Cloud Compute (PCC), CONFSEC anonymizes data by encrypting it before routing through third-party services like Cloudflare. Servers never access raw information, and decryption only occurs under strict conditions, preventing logging, training, or unauthorized access. The system’s AI inference software is publicly auditable, allowing experts to verify its security claims.
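To make the encrypt-then-relay idea concrete, here is a minimal illustrative sketch in Python. It shows only the *shape* of a PCC-style flow, not Confident Security's actual protocol: the client encrypts its prompt and attaches a random, unlinkable request ID, the relay forwards ciphertext it cannot read, and only the holder of the key can decrypt. The one-time pad, the `client_prepare` and `relay_forward` helpers, and the request format are all assumptions for the demo; real systems use authenticated key-exchange schemes such as HPKE.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy one-time pad: XOR the plaintext with a random key of equal length.
    # Demo only; production systems use authenticated encryption (e.g. HPKE).
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

def client_prepare(prompt: str):
    """Client-side step (hypothetical): encrypt the prompt and strip identity."""
    plaintext = prompt.encode()
    key = secrets.token_bytes(len(plaintext))   # shared only with the inference node
    request = {
        "request_id": secrets.token_hex(8),     # random, not linkable to the user
        "ciphertext": encrypt(key, plaintext),  # the relay sees only this
    }
    return key, request

def relay_forward(request: dict) -> dict:
    # A third-party relay (e.g. a CDN) forwards opaque ciphertext. It never
    # holds the key, and the inference node never learns the client's identity.
    return request

key, req = client_prepare("Summarize this patient record...")
served = relay_forward(req)
print(decrypt(key, served["ciphertext"]).decode())
```

The design point this illustrates is the split of trust: the relay knows who is asking but not what, while the inference side knows what is asked but not by whom, so no single party can log both the user and the prompt.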
Investors like Decibel’s Jess Leão believe Confident Security addresses a critical barrier to AI adoption. Without robust privacy safeguards, enterprises may delay implementation, particularly in sectors handling sensitive data. Though still in its early stages, the startup has already conducted external audits and engaged with potential clients, including banks, browsers, and search engines.
Mortensen sums up the mission simply: “You bring the AI, we bring the privacy.” As industries weigh AI’s potential against security risks, solutions like CONFSEC could become essential for unlocking widespread adoption.
(Source: TechCrunch)
