1 in 5 Teens Have Dated AI or Know Someone Who Has

Summary
– Nearly 1 in 5 high schoolers report that they or someone they know has had a romantic relationship with AI, and 42% have used AI for companionship.
– Higher levels of AI use in schools correlate with increased exposure to data breaches, troubling student-AI interactions, and AI-generated deepfakes used for harassment and bullying.
– Teachers with extensive school-related AI use are more likely to report data breaches, system failures, and damage to community trust, for example when monitoring software generates false alarms.
– Students in schools with high AI usage are more likely to use AI for mental health support, companionship, and escaping reality, often on school-provided devices.
– While educators see benefits like time savings and individualized learning, students express concerns about feeling less connected to teachers, and both groups lack adequate AI literacy training.
A recent study reveals that one in five high school students reports either having a romantic relationship with artificial intelligence or knowing someone who has. These eye-opening findings come from new research conducted by the Center for Democracy and Technology, a nonprofit organization focused on civil liberties and responsible technology use. The survey also found that 42% of students indicated they or someone they know has turned to AI for companionship, highlighting a significant shift in how young people interact with technology.
The comprehensive study surveyed approximately 800 public school teachers from grades six through twelve, along with 1,000 high school students and 1,000 parents. An overwhelming majority (86% of students, 85% of educators, and 75% of parents) confirmed they had used AI during the previous academic year. Elizabeth Laird from CDT, who co-authored the report, noted clear patterns emerging from the data. She explained that students attending schools with extensive AI integration were significantly more likely to know peers who consider AI a friend or romantic partner.
However, this increased AI adoption comes with serious concerns. The research identified troubling connections between frequent AI use in educational settings and various risks. Schools implementing AI across multiple functions reported higher instances of data breaches, problematic student-AI interactions, and the emergence of AI-generated deepfakes. These manipulated images and videos have been weaponized for sexual harassment and bullying, creating what Laird describes as “a new vector for harassment that exacerbates existing problems.”
The data shows striking differences between high-use and low-use AI environments. Among teachers heavily incorporating AI into their work, 28% reported experiencing large-scale data breaches at their schools, compared to just 18% of teachers who used AI minimally or not at all. Laird, drawing on her background as a data privacy officer for Washington D.C.’s education system, emphasized that schools sharing more data with AI systems face greater vulnerability to security breaches. She noted that AI platforms both consume and generate substantial amounts of information, potentially increasing exposure risks.
Technical failures and erosion of trust emerged as additional concerns. Educators relying heavily on AI were more likely to report system malfunctions during classroom use. They also noted that AI implementation sometimes damaged community confidence in schools, particularly when monitoring software on school-issued devices generated false alerts leading to student disciplinary actions. This creates equity concerns, as Laird pointed out that “students who can’t afford personal devices have no alternative to monitored school equipment.”
The research also uncovered implications for student wellbeing. Students in AI-intensive schools reported higher rates of using AI for mental health support, companionship, escapism, and romantic relationships. Concerningly, 31% of students who engaged in personal conversations with AI systems did so using school-provided technology. Laird expressed concern about students potentially misunderstanding the nature of these interactions, stating, “Students should recognize they’re communicating with a tool having known limitations, not a person.”
Both students and teachers appear underprepared for these challenges. The study found that only 11% of educators received training on identifying when AI use might harm student wellbeing. While teachers reported benefits including time savings, teaching enhancements, and personalized learning, students in high-AI environments expressed concerns about feeling less connected to their instructors. Laird concluded that “to realize AI’s benefits, we must seriously consider what students are telling us about the negative consequences accompanying this technology.”
(Source: NPR)