
Meta AI Requested Health Data, Gave Harmful Advice

Summary

– Meta has launched a new generative AI model called Muse Spark, which is available via its AI app and will be integrated into platforms like Facebook and Instagram.
– The model was designed with input from over 1,000 physicians to improve its factual responses to health-related questions.
– During testing, Muse Spark prompted users to paste personal health data, such as fitness tracker readings, for analysis and trend visualization.
– This practice is common among major AI chatbots, including ChatGPT and Claude, which also offer modes for analyzing user-provided health information.
– Experts warn that sharing sensitive health data with these tools poses significant privacy risks, as they are not HIPAA compliant and data may be used for training or advertising.

The recent launch of Meta’s new generative AI model, Muse Spark, marks a significant push into the health and wellness space. Designed to provide better answers to personal health questions, the model was trained with input from over a thousand physicians. While currently accessible via the Meta AI app, the company plans to integrate this technology across its entire suite of platforms, including Facebook, Instagram, and WhatsApp, in the near future. This widespread deployment raises immediate questions about data privacy and the accuracy of AI-driven medical guidance.

During initial testing, the chatbot proactively suggested users could paste data from fitness trackers, glucose monitors, or lab reports for analysis. It offered to calculate trends, flag patterns, and create visualizations, using blood pressure readings as a specific example. This function aligns with a broader industry trend where major AI players are creating specialized modes for health. Competitors like OpenAI’s ChatGPT and Anthropic’s Claude offer similar integrations, allowing users to connect health apps for personalized insights, while Google’s AI health coach can parse data uploaded to Fitbit.

However, uploading sensitive personal health information to these platforms carries substantial risk. Monica Agrawal, an assistant professor at Duke University and cofounder of the HIPAA-compliant AI platform Layer Health, cautions that while providing more context can improve an AI’s responses, it also introduces major privacy concerns. The core issue is that these widely available consumer AI chatbots are not HIPAA compliant. This federal law sets a high standard for protecting patient data, a safeguard people typically expect during clinical visits. Information shared with a public chatbot lacks these rigorous protections, even when it involves clinical lab results.

Meta’s own policies clarify that data shared in chats may be stored and used to train future AI models. The company states it retains training data on a case-by-case basis to ensure models operate safely and efficiently. Furthermore, Meta has indicated that interactions with its AI features could be used to tailor advertisements for users. This creates a scenario where deeply personal health information could influence ad targeting, a use case far removed from a protected clinical environment. The convenience of personalized AI health advice must be weighed against the potential for data exposure and commercial exploitation, underscoring the need for user caution.

(Source: Wired)
