AI Pioneer Reveals Key Safety Difference Between OpenAI and Google

Summary
– Geoffrey Hinton, the “Godfather of AI,” believes OpenAI moved faster in AI development because it had no reputation to lose, unlike Google, which hesitated due to reputational risks.
– Google delayed releasing its AI chatbot, Bard (later rebranded Gemini), to avoid damaging its reputation, while OpenAI launched ChatGPT earlier with fewer reservations.
– Google’s AI leadership expressed concerns about long-term AI risks and advocated for regulatory oversight, while its chatbot Gemini faced criticism for biases and errors post-launch.
– Hinton stated Google encouraged him to work on AI safety without censorship, but he felt self-censorship was inevitable when working for a large company.
– OpenAI has faced scrutiny for shifting its safety approach, with CEO Sam Altman defending its framework while acknowledging loosening some restrictions based on user feedback.

The race to dominate artificial intelligence has revealed striking differences in how major tech companies approach safety and risk management. Geoffrey Hinton, often called the “Godfather of AI” for his foundational work in neural networks, recently highlighted why OpenAI and Google took divergent paths when launching their chatbots.
During a podcast appearance, Hinton explained that Google initially hesitated to release its AI chatbot out of concern about reputational damage. The tech giant, known for its cautious approach, waited until March 2023 to introduce Bard, later rebranded as Gemini, while OpenAI had already launched ChatGPT months earlier, in November 2022. According to Hinton, OpenAI’s lack of an established reputation allowed it to take bolder risks and move faster, free of the same corporate caution.
Internal discussions at Google reportedly reflected this conservative stance. Former AI leaders at the company acknowledged avoiding immediate chatbot releases to mitigate potential backlash. Demis Hassabis, CEO of Google DeepMind, later emphasized the need for regulatory oversight, warning that advanced AI systems could spiral beyond human control. Despite these precautions, Google’s Gemini faced criticism for biases in responses and image generation, prompting CEO Sundar Pichai to publicly admit mistakes and promise improvements.
Hinton, who left Google in 2023 to speak openly about AI risks, noted that the company never pressured him to stay silent. Instead, he suggested that employees often self-censor to protect corporate interests. While praising Google’s responsible behavior overall, he remained cautious about OpenAI’s long-term commitment to safety. When questioned about CEO Sam Altman’s ethical judgment, Hinton offered a measured response, stating only, “We’ll see.”
OpenAI has recently revised its safety framework, focusing on cybersecurity, chemical threats, and the ability of AI systems to improve themselves autonomously. Altman defended the changes, arguing that user feedback informed the relaxation of some restrictions while core safeguards remain in place. Yet skepticism persists as competition intensifies; documents show Google even used ChatGPT internally while developing its rival models.
The contrasting strategies of these AI leaders underscore a critical debate: whether speed or caution better serves technological progress and public trust. As development accelerates, the industry’s approach to balancing innovation with accountability will shape the future of artificial intelligence.
(Source: Business Insider)