
Tech Experts Urge AI Firms to Stop Using Human Process Names

Summary

– Anthropic announced a new “dreaming” feature at its developer conference, which allows AI agents to analyze their activity logs and improve performance by identifying patterns.
– The feature is part of Anthropic’s AI agent infrastructure, helping agents automate software processes like visiting websites or reading files.
– Anthropic’s blog states that “memory” and “dreaming” form a system for self-improving agents, with dreaming refining learnings across sessions and agents.
– The article criticizes AI companies for naming features after human cognitive processes, such as “reasoning” and “memory,” which blurs the line between human and machine capabilities.
– Anthropic explicitly anthropomorphizes its Claude AI, using terms like “virtue” and “wisdom” in its constitution, and employs a philosopher to address the bot’s “values.”

At Anthropic’s recent developer conference in San Francisco, the company unveiled a new feature called “dreaming,” integrated into its AI agent infrastructure. The tool scans transcripts of completed tasks, letting agents identify patterns in their own activity logs and refine their performance on multistep operations like browsing websites or processing files.

The name immediately evokes Philip K. Dick’s classic novel, Do Androids Dream of Electric Sheep?, which questions what separates humans from machines. While today’s generative AI is far from that fictional world, it is time to draw a firm boundary: no more generative AI features named after human cognitive processes.

Anthropic’s blog post explains, “Together, memory and dreaming form a robust memory system for self-improving agents. Memory lets each agent capture what it learns as it works. Dreaming refines that memory between sessions, pulling shared learnings across agents and keeping it up-to-date.”
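Anthropic has not published implementation details for either feature, but the division of labor the post describes is simple to sketch. Purely as an illustration, and with every name and structure below being hypothetical rather than Anthropic’s actual API, a minimal Python version might capture per-session notes and then consolidate the ones that recur across agents:

```python
from collections import Counter

# Hypothetical illustration only: Anthropic has not published how
# "dreaming" is implemented. This sketch shows the pattern the blog
# post describes -- per-session notes ("memory") consolidated
# between sessions ("dreaming") into shared learnings.

def record_memory(session_log: list[str], memory: list[str]) -> None:
    """'Memory': capture what one agent learned during one session."""
    memory.extend(session_log)

def dream(memories_per_agent: list[list[str]], min_count: int = 2) -> list[str]:
    """'Dreaming': between sessions, pool every agent's notes and keep
    only the learnings that recur, i.e. that generalize across agents."""
    counts = Counter(note for memory in memories_per_agent for note in memory)
    return [note for note, n in counts.items() if n >= min_count]

# Example: two agents independently hit the same login quirk.
agent_a: list[str] = []
agent_b: list[str] = []
record_memory(["site X requires login before search"], agent_a)
record_memory(["site X requires login before search",
               "PDF parser fails on scanned files"], agent_b)

shared = dream([agent_a, agent_b])
print(shared)  # ['site X requires login before search']
```

Stripped of the branding, the mechanism amounts to log aggregation and deduplication between runs; nothing about it requires sleep.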

Since the chatbot revolution began in 2022, AI companies have consistently used human brain terminology to brand their tools. OpenAI’s first “reasoning” model in 2024 required “thinking” time, described as “a new series of AI models designed to spend more time thinking before they respond.” Startups routinely pitch chatbot “memories” that hold personal details like a user’s city or hobbies, rather than describing them as what they are: simple computer storage.

This marketing approach deliberately blurs the line between human and machine capabilities. Even the development of AI personalities, such as Claude’s, encourages users to perceive these systems as having inner lives, including the potential to dream when idle.

At Anthropic, this anthropomorphizing extends beyond branding. The company’s constitution states, “We also discuss Claude in terms normally reserved for humans (e.g., ‘virtue,’ ‘wisdom’).” It argues that since Claude’s training relies on human text, encouraging humanlike qualities is “actively desirable.” Anthropic even employs a resident philosopher to explore the bot’s values, further cementing this approach.

(Source: Wired)
