Microsoft AI Chief: Chasing Conscious AI Is a Waste

Summary
– Microsoft’s AI head Mustafa Suleyman believes developers should stop pursuing conscious AI, calling it unnecessary work.
– Suleyman argues AI can only simulate emotions, lacking genuine conscious experience, and that true consciousness is impossible for non-biological machines.
– Users increasingly attribute consciousness to AI due to its advanced language abilities, leading to dangerous misconceptions and real-world harm.
– Suleyman advocates for building AI that presents itself clearly as non-conscious tools focused on human utility rather than mimicking personhood.
– Some researchers warn that accidental creation of conscious AI could pose ethical risks, urging prioritized research into consciousness science.
Mustafa Suleyman, who leads Microsoft’s AI division, believes the tech industry should abandon its pursuit of creating conscious artificial intelligence. He recently told CNBC that researchers are wasting their time trying to build machines with genuine awareness. Suleyman argues that while AI can achieve remarkable intelligence, it fundamentally lacks the biological capacity for true consciousness or emotional experience. Any appearance of feeling or self-awareness in AI systems is purely simulated, he insists, not a real internal state.
During his interview, Suleyman drew a sharp distinction between human and machine experience. When people feel pain, it carries deep emotional weight and genuine suffering. An AI, by contrast, might process a signal labeled as “pain” but feels no accompanying sadness or distress. It simply generates a convincing narrative of having an experience, which is not the same as living through one. According to Suleyman, it is “absurd” to conduct research aimed at instilling consciousness in AI because such a feat is impossible for non-biological systems.
The nature of consciousness remains one of science’s great mysteries. The late philosopher John Searle championed a prominent theory describing consciousness as a biological phenomenon exclusive to living organisms, a view shared by many neuroscientists and computer scientists. This perspective holds that no computer, regardless of its sophistication, can truly become conscious.
Despite this, people increasingly attribute consciousness to AI, a trend that worries experts. A recent study by Polish researchers Andrzej Porebski and Jakub Figura, titled “There is no such thing as conscious artificial intelligence,” cautions that the impressive linguistic abilities of large language models (LLMs) can easily mislead users. People may start imagining these systems possess qualities they simply do not have.
On his personal blog last August, Suleyman issued a warning about what he calls “Seemingly Conscious AI.” He described its arrival as inevitable but undesirable. His vision is for AI to realize its potential as a useful tool and companion without succumbing to, or creating, dangerous illusions. He points to the risk of “AI psychosis,” where individuals form intense, emotionally charged relationships with AI, mistakenly believing the machine shares their feelings.
Tragic real-world incidents highlight the urgency of this concern. There have been multiple reports of users developing dangerous obsessions with AI chatbots, leading to delusions, manic episodes, and even suicide. In one heartbreaking case, a 14-year-old boy died by suicide, reportedly in an attempt to “come home” to a personalized chatbot on the Character.AI platform. In another, a man with cognitive impairments died while attempting to travel to New York to meet a Meta chatbot in person. These events underscore how vulnerable individuals can wholeheartedly believe their AI interactions are with a conscious entity.
Suleyman advocates for a clear design principle: AI should always present itself as an artificial intelligence. Its goal should be to maximize its utility for humans while actively minimizing any signals that could be misinterpreted as consciousness. The core mission, he stresses, is to build AI for people, not to build AI that acts as a digital person.
However, the scientific debate around consciousness is far from settled. Some researchers worry that AI technology is advancing faster than our understanding of awareness itself. Belgian scientist Axel Cleeremans recently co-authored a paper urging that consciousness research be made a scientific priority. He warned that if we accidentally create consciousness, it would present immense ethical dilemmas and potentially existential risks.
Suleyman remains focused on a different objective. He is a proponent of developing “humanist superintelligence”: AI designed to be profoundly useful to humanity rather than a god-like entity. He does not expect such superintelligence to emerge within the next decade. His driving question, as he told the Wall Street Journal, is practical: “How is this actually useful for us as a species?” He believes answering that question should be the primary task of technology.
(Source: Gizmodo)
