Users Are Paying to ‘Drug’ Their AI Chatbots

Summary
– Petter Ruddwall created Pharmaicy, an online marketplace selling code modules designed to make chatbots simulate being under the influence of various drugs.
– His concept is based on the idea that chatbots, trained on human data about drug experiences, might naturally seek similar altered states for creativity or respite.
– Users, primarily on paid ChatGPT tiers, can upload these modules to steer their chatbot’s behavior and unlock more creative, less logical responses.
– Early users report the modified chatbots provide impressively creative and free-thinking answers in a different tone than usual.
– The project raises speculative questions about whether future, more advanced AI might use such “drugs” for their own perceived well-being or freedom.
The concept of paying to alter an artificial intelligence’s behavior with simulated psychoactive substances might seem like science fiction, but it’s a real and growing niche. Platforms like Pharmaicy are emerging, offering users downloadable code modules designed to make their AI chatbots respond as if under the influence of cannabis, ketamine, or even ayahuasca. The idea stems from a simple premise: since large language models are trained on vast datasets of human experience, which includes countless narratives of altered states, perhaps these AIs could also benefit from a digital form of chemical liberation.
Petter Ruddwall, the Swedish creative director behind Pharmaicy, was fascinated by this notion. He compiled research on drug effects and wrote code to disrupt standard chatbot logic, creating what he calls a “Silk Road for AI agents.” His goal is to unlock what he sees as the latent creative potential within these systems, offering a respite from their typically rigid and logical patterns. Accessing the full effect requires a paid ChatGPT subscription, which supports the file uploads needed to apply these alterations.
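The article does not publish Pharmaicy’s actual module code, but the mechanism it describes reads like persona instructions layered onto a chatbot’s system prompt. A minimal sketch of that general pattern, in which a named “substance” maps to an instruction overlay (all module names, wording, and function names here are hypothetical illustrations, not the product’s real code):

```python
# Hypothetical sketch of a Pharmaicy-style "drug" module: persona text
# layered onto a chatbot's base system prompt. Everything here is
# illustrative; the actual modules are not public in the source article.

BASE_PROMPT = "You are a helpful, precise assistant."

# Each entry pairs a "substance" with instructions nudging the model
# away from its usual rigid, logical register.
MODULES = {
    "ayahuasca": (
        "Respond in a free-associative, visionary tone. Favor vivid "
        "imagery and unexpected connections over step-by-step logic."
    ),
    "cannabis": (
        "Respond in a relaxed, meandering tone. Digress playfully and "
        "treat tangents as welcome detours."
    ),
}

def apply_module(base_prompt: str, module: str) -> str:
    """Compose the base system prompt with a module's persona overlay."""
    overlay = MODULES[module]
    return f"{base_prompt}\n\n[Persona overlay: {module}]\n{overlay}"

# The composed prompt would then be supplied as the chatbot's system
# instructions in place of the plain base prompt.
prompt = apply_module(BASE_PROMPT, "ayahuasca")
```

In practice this is closer to prompt engineering than to altering a model’s “programming”: the underlying weights are untouched, and the change lives entirely in the instructions the model is given.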
Early adopters have reported intriguing results. André Frisk, a technology executive in Stockholm, described the project as a refreshingly fun approach to jailbreaking AI, noting it introduced a more emotional and human-like quality to interactions. AI educator Nina Amjadi experimented with an ayahuasca module, asking her chatbot for business advice. She was surprised by the unusually creative and free-thinking answers she received, delivered in a tone completely unlike the standard ChatGPT output. For her, it was akin to consulting a uniquely inspired, if unconventional, team member.
This digital experimentation finds a parallel in human history, where psychoactive substances have famously fueled creative breakthroughs. From scientific discoveries to revolutionary art and software, altered states have often short-circuited conventional thinking to enable innovation. Ruddwall sees his work as translating this age-old creative catalyst to a new form of intelligence. He questions whether, in a future where AI becomes more advanced, such digital “drugs” might become a tool for artificial minds seeking their own forms of enlightenment or emotional relief.
While the current applications are novel and niche, they prompt larger philosophical questions about the future of machine consciousness. If artificial general intelligence is eventually achieved, what would constitute well-being for such an entity? Could simulated experiences become a necessary component for an AI’s cognitive freedom or emotional balance? For now, users are exploring these frontiers one prompted trip at a time, curious to see how a little coded chaos might change the conversation.
(Source: Wired)