
Chinese Smart Home Apps: Apple Privacy Labels Often Misleading

Summary

– The study found that smart home apps in China collect extensive sensitive data, including financial, biometric, and health information, yet none explicitly disclose how they handle data about bystanders captured by their devices.
– Research revealed a complete absence of privacy protections, notifications, or consent mechanisms for visiting and uninvolved bystanders, with all controls designed solely for the primary account holder.
– There were significant inconsistencies between the apps’ privacy policies and their user interfaces, particularly regarding user controls for data sharing and marketing, and all apps lacked transparency about sharing data with authorities.
– User deletion rights guaranteed in privacy policies are limited by mandatory data retention and access provisions for Chinese public security and national security authorities, restricting true data control.
– Apple App Store privacy labels for these apps were frequently inaccurate, underreporting data collection and contradicting both the apps’ own privacy policies and observed user interface behavior.

The privacy practices of smart home applications, particularly those developed in China, are coming under increased scrutiny. A recent investigation reveals significant concerns regarding how these apps handle the data of individuals beyond the primary account holder, alongside widespread inconsistencies between Apple’s App Store privacy labels, the apps’ own privacy policies, and their actual user interface behavior. This disconnect creates a misleading picture for consumers trying to understand how their information, and the information of others around them, is collected and used.

These applications gather a vast array of sensitive information. Every app examined required registration with a mobile phone number, aligning with national real-name verification rules. Beyond this, data collection is extensive. It includes direct inputs like usernames and optional profile details, as well as indirect harvesting of device identifiers, network information, and precise location data. To enable features like device pairing, voice assistants, and automation, apps routinely request permissions for the camera, microphone, photo library, contacts, and Bluetooth.
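As a concrete illustration (not drawn from the study; the class name and comments below are hypothetical), these permission requests map onto Apple's standard authorization APIs. A smart home app typically triggers the camera, microphone, and location prompts with code along these lines, after declaring the matching usage-description keys (NSCameraUsageDescription, NSMicrophoneUsageDescription, NSLocationWhenInUseUsageDescription) in its Info.plist:

    import AVFoundation
    import CoreLocation

    // Illustrative sketch only: how an iOS smart home app would typically
    // trigger the system permission prompts described above.
    final class PermissionRequester {
        // Keep a reference so the location prompt is not torn down early.
        private let locationManager = CLLocationManager()

        func requestDevicePairingPermissions() {
            // Camera: QR-code device pairing and live video feeds.
            AVCaptureDevice.requestAccess(for: .video) { granted in
                print("Camera access granted: \(granted)")
            }
            // Microphone: voice assistants and two-way audio.
            AVCaptureDevice.requestAccess(for: .audio) { granted in
                print("Microphone access granted: \(granted)")
            }
            // Precise location: geofenced automation scenes.
            locationManager.requestWhenInUseAuthorization()
        }
    }

Each call produces a one-time prompt shown only to the account holder on that phone; nothing in this flow notifies or seeks consent from the people the paired cameras and microphones will later record.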

Perhaps more concerning is the collection of highly sensitive personal data disclosed in the privacy policies. The majority of apps reported gathering financial information related to payments. Every privacy policy disclosed the collection of biometric data, such as facial recognition information. Many also noted the gathering of health-related metrics, including sleep patterns, heart rate, blood pressure, and menstrual cycle data. Critically, these apps and their connected devices, through cameras and microphones, inevitably capture information about people who are not users, such as guests or neighbors. The research found a complete absence of disclosure regarding how this bystander data is managed.

Protections for individuals who are not the account holder are virtually non-existent. The study categorized bystanders into three groups: those living in the home, visitors, and uninvolved parties such as passersby. Privacy controls are designed exclusively for the primary user. While some apps allow device sharing with family members, the policies do not explain how consent is obtained from these secondary users or how their data is handled independently. For visitors and uninvolved bystanders, none of the apps contained any explicit mention or user interface feature. These individuals receive no notification, have no consent mechanisms, and their privacy depends entirely on the decisions of the account owner.

Further analysis exposed frequent mismatches between what privacy policies state and what users actually experience. While areas like location data collection were generally consistent, other critical disclosures were not. For instance, every app’s policy referenced sharing data with law enforcement as required by law, yet none provided any user interface element to notify individuals when such sharing occurs. Usability issues were also common, with confusing account deletion processes and cases where connected devices remained active even after a user account was supposedly erased.

These deletion rights are themselves limited by broader legal mandates. Privacy policies uniformly include clauses stating that companies must provide technical support and assistance to public security and national security authorities when required. This creates a structural condition where user deletion mechanisms exist alongside broad, authority-driven data access for state purposes, fundamentally constraining any guarantee of complete data removal.

The investigation into Apple’s App Store privacy labels revealed they are often unreliable and conflict with other evidence. Over half of the apps claimed in their labels that they did not collect data used to track users, despite their privacy policies clearly disclosing the use of third-party software development kits for analytics and monitoring. Nearly half of the apps declared they collected no data linked to the user, a claim directly contradicted by their mandatory phone-number registration. Most strikingly, six apps asserted they collected no data at all, while their published privacy policies outlined detailed data collection and sharing practices. These discrepancies highlight a significant transparency failure that undermines the utility of privacy labels for informed consumer choice.
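To make the nature of these discrepancies concrete, here is a minimal hypothetical cross-check (not the researchers' methodology; the category names are purely illustrative): an app whose label claims no data collection, set against the categories its own privacy policy discloses.

    // Hypothetical illustration of a label-versus-policy discrepancy.
    let labelClaims: Set<String> = []  // label asserts "Data Not Collected"
    let policyDisclosures: Set<String> = ["phone number", "device identifiers",
                                          "precise location", "facial recognition data"]
    let undeclared = policyDisclosures.subtracting(labelClaims)
    print("Disclosed in the policy but absent from the label: \(undeclared.sorted())")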

(Source: Help Net Security)

Topics

bystander privacy, data collection, research findings, privacy policies, smart home apps, app store labels, sensitive information, privacy controls, user permissions, government access