
ChatGPT Still Offers Legal and Health Advice

Summary

– OpenAI denies social media claims that ChatGPT’s usage policy changes prevent it from offering legal or medical advice, stating its behavior remains unchanged.
– The company clarified that ChatGPT has never been a substitute for professional advice but will continue helping people understand legal and health information.
– OpenAI’s head of health AI confirmed that the policies on legal and medical advice are not new additions to the company’s terms.
– The updated policy prohibits using ChatGPT to provide tailored legal or medical advice without appropriate involvement by a licensed professional.
– OpenAI consolidated three separate policies into one universal set of rules for all products and services, though the rules themselves remain the same.

ChatGPT continues to provide legal and health information, despite recent online speculation suggesting otherwise. OpenAI has clarified that its usage policies regarding these sensitive topics have not undergone any substantive changes. Karan Singhal, the company’s head of health AI, publicly addressed the rumors on social media platform X, stating the claims are “not true.” He emphasized that while the AI chatbot is not a replacement for qualified professional consultation, it remains a valuable tool for helping individuals comprehend complex legal and medical concepts.

The confusion appears to have stemmed from a recent policy update that consolidated several separate usage guidelines into a single, universal set of rules for all OpenAI products and services. This administrative streamlining, which took effect on October 29th, did not introduce new restrictions. The updated policy explicitly prohibits using the service for “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”

This language closely mirrors the intent of the previous, more fragmented policies. The old guidelines cautioned users against performing activities that could negatively impact the safety or rights of others, which included “providing tailored legal, medical/health, or financial advice without review by a qualified professional.” The core principle has always been to ensure that users understand the AI’s role as an informational resource, not a certified expert.

By unifying its policies, OpenAI aims for greater clarity and consistency. The company’s changelog indicates the new document reflects a “universal set of policies across OpenAI products and services.” The fundamental rules governing what users can and cannot do with ChatGPT, however, remain consistent with the company’s long-standing position on the responsible use of its technology.

(Source: The Verge)
