
Can Large Language Models Create AI Agents?

Summary

– Bilt uses Letta’s technology to deploy AI agents that learn from conversations and share memories through a “sleeptime compute” process.
– Letta’s system allows AI agents to decide what information to store in long-term memory versus what to keep for faster recall, improving control over context.
– Current AI models struggle with limited context windows, leading to hallucinations or confusion, unlike the human brain’s efficient memory storage and retrieval.
– Companies like Letta and LangChain are developing transparent memory systems to make AI agents smarter, less error-prone, and capable of personalized experiences.
– Letta’s CEO suggests that AI models may need the ability to forget or retroactively rewrite memories based on user commands, similar to human cognitive processes.

The human brain performs a remarkable feat each night, sorting through daily experiences to strengthen vital memories while letting go of the insignificant. What if artificial intelligence could mirror this ability? A growing number of companies are now exploring how large language models can be equipped with memory, a feature that could fundamentally reshape how AI agents operate, learn, and interact.

Bilt, a platform offering shopping and dining benefits to renters, recently deployed millions of AI agents designed to learn from past interactions. These agents use technology from a startup named Letta, which enables them to share memories and decide, through a method called “sleeptime compute,” what information to store for long-term use and what to keep readily accessible.

Andrew Fitz, an AI engineer at Bilt, explains the advantage: “We can make a single update to a memory block and have the behavior of hundreds of thousands of agents change. This is useful in any scenario where you want fine-grained control over agents’ context.” He’s referring to the text prompt delivered to the model during inference, a critical element in guiding AI responses.
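To make Fitz’s point concrete, here is a minimal Python sketch of the shared-memory-block idea. The names (MemoryBlock, Agent, build_context) are invented for illustration and do not reflect Letta’s actual API:

```python
# Illustrative sketch only; not Letta's actual API. It shows how a single
# shared "memory block" can sit inside many agents' prompts, so that one
# edit to the block changes the behavior of every agent that references it.

class MemoryBlock:
    """A named, mutable chunk of text that agents splice into their context."""
    def __init__(self, label: str, text: str):
        self.label = label
        self.text = text

class Agent:
    def __init__(self, system_prompt: str, shared_blocks: list[MemoryBlock]):
        self.system_prompt = system_prompt
        self.shared_blocks = shared_blocks  # referenced, not copied

    def build_context(self, user_message: str) -> str:
        # Blocks are re-read at inference time, so edits propagate instantly.
        memory = "\n".join(f"[{b.label}] {b.text}" for b in self.shared_blocks)
        return f"{self.system_prompt}\n{memory}\nUser: {user_message}"

# One block shared by a large fleet of agents:
policy = MemoryBlock("rewards_policy", "Points expire after 12 months.")
fleet = [Agent("You are a rewards assistant.", [policy]) for _ in range(100_000)]

# A single update to the block changes every agent's context at once:
policy.text = "Points never expire."
print(fleet[42].build_context("Do my points expire?"))
```

Because every agent holds a reference to the same block rather than a copy, the update is a single write, which is what makes fleet-wide behavior changes cheap.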

Typically, large language models only “remember” what’s placed directly in their context window. To recall a prior conversation, a user must manually re-enter the text. This limitation becomes problematic when the volume of information grows, often causing models to hallucinate or lose coherence. In contrast, the human brain efficiently files knowledge for later use.

Charles Packer, CEO of Letta, draws a clear distinction: “Your brain is continuously improving, adding more information like a sponge. With language models, it’s the exact opposite. Run them in a loop long enough and the context becomes poisoned; they get derailed and you just want to reset.”

Packer and his cofounder Sarah Wooders previously created MemGPT, an open-source initiative aimed at helping language models differentiate between short-term and long-term memory. With Letta, they’ve scaled this concept, allowing AI agents to learn continuously in the background.
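A simplified sketch of that short-term/long-term split might look like the following. This illustrates the concept behind MemGPT, not the project’s real implementation; the consolidate method stands in for the background “sleeptime” pass that decides what stays in context:

```python
from collections import deque

# A simplified sketch of the short-term/long-term split popularized by MemGPT.
# "Core" memory lives in the context window; the archive holds everything
# else. consolidate() stands in for the background "sleeptime" pass.

class TieredMemory:
    def __init__(self, core_capacity: int = 8):
        self.core = deque(maxlen=core_capacity)  # in-context, instant recall
        self.archive: list[str] = []             # long-term, searched on demand

    def observe(self, message: str) -> None:
        if len(self.core) == self.core.maxlen:
            # The oldest in-context item is evicted to the archive, not lost.
            self.archive.append(self.core[0])
        self.core.append(message)

    def consolidate(self, is_important) -> None:
        """Background pass: keep important items in core, archive the rest."""
        keep = [m for m in self.core if is_important(m)]
        self.archive.extend(m for m in self.core if not is_important(m))
        self.core = deque(keep, maxlen=self.core.maxlen)

    def recall(self, query: str) -> list[str]:
        # Stand-in for semantic search over long-term memory.
        return [m for m in self.archive if query.lower() in m.lower()]
```

The key design choice is that eviction from the context window archives information instead of discarding it, which is exactly what plain context-window truncation fails to do.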

This effort is part of a wider movement to enhance AI with reliable memory, which could lead to smarter chatbots and fewer errors. Experts agree that memory is an underdeveloped aspect of modern AI, limiting both its intelligence and dependability.

Harrison Chase, cofounder and CEO of LangChain, another firm focused on agent memory, describes memory as a crucial element of context engineering. “Memory, I would argue, is a form of context,” he says. “A big portion of an AI engineer’s job is basically getting the model the right context information.” LangChain offers various memory storage options, from enduring user facts to recent experiential data.
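The distinction Chase draws, enduring user facts versus recent experiential data, can be sketched as two separate stores merged into the prompt at inference time. Again, this is an invented illustration, not LangChain’s API:

```python
import time

# Invented illustration (not LangChain's API) of two memory stores:
# enduring user facts and recent, timestamped experiential data.

class AgentMemory:
    def __init__(self, episode_ttl_seconds: float = 3600.0):
        self.facts: dict[str, str] = {}              # enduring user facts
        self.episodes: list[tuple[float, str]] = []  # recent experiences
        self.episode_ttl = episode_ttl_seconds

    def remember_fact(self, key: str, value: str) -> None:
        self.facts[key] = value

    def log_episode(self, event: str) -> None:
        self.episodes.append((time.time(), event))

    def context_snippet(self) -> str:
        # Merge both stores into text an engineer would place in the prompt.
        now = time.time()
        recent = [e for t, e in self.episodes if now - t < self.episode_ttl]
        facts = "; ".join(f"{k}: {v}" for k, v in self.facts.items())
        return f"Known facts: {facts}\nRecent events: {'; '.join(recent)}"
```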

Consumer AI tools are also becoming less forgetful. Earlier this year, OpenAI revealed that ChatGPT would begin storing relevant user information to offer a more personalized experience, though the company hasn’t detailed the underlying mechanics.

Companies like Letta and LangChain aim to make memory recall more transparent for engineers developing AI systems. Clem Delangue, CEO of Hugging Face and an investor in Letta, emphasizes openness: “I think it’s super important not only for the models to be open but also for the memory systems to be open.”

Perhaps most intriguing is the idea that AI may need to learn what to forget. Packer suggests, “If a user says, ‘that one project we were working on, wipe it out from your memory,’ then the agent should be able to go back and retroactively rewrite every single memory.”
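In its simplest form, retroactive forgetting could amount to filtering or rewriting stored memories that touch a topic. The sketch below is a deliberately minimal, hypothetical version; a production system would need semantic matching rather than substring search, plus an audit trail:

```python
# A deliberately minimal, hypothetical take on user-commanded forgetting:
# drop every stored memory that mentions the named topic. A real system
# would need semantic matching and might rewrite rather than delete.

def forget_topic(memories: list[str], topic: str) -> list[str]:
    """Retroactively remove memories that mention the topic."""
    return [m for m in memories if topic.lower() not in m.lower()]

memories = [
    "User is building Project Falcon with us.",
    "User prefers concise answers.",
    "Project Falcon deadline is in March.",
]
memories = forget_topic(memories, "project falcon")
print(memories)  # ['User prefers concise answers.']
```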

This notion of artificial memory, and even forgetting, evokes themes from science fiction, like Philip K. Dick’s Do Androids Dream of Electric Sheep?, which inspired Blade Runner. Today’s language models may not yet rival the story’s replicants, but their memories appear just as delicate, and just as full of potential.

(Source: Wired)
