Recent survey findings point to a notable trend among high school students: roughly one in five report that they or someone they know has had a romantic relationship with artificial intelligence, and nearly half of all students surveyed say they have used AI for companionship. The figures come from a study by the Center for Democracy and Technology (CDT), an organization focused on safeguarding civil liberties and promoting ethical technology use. The research surveyed thousands of students, teachers, and parents and found widespread adoption of AI in schools, with more than 80% of respondents in every group having used AI during the previous academic year.
Elizabeth Laird, a key contributor to the CDT report, points to a clear correlation: as schools integrate AI into more aspects of education, students become more likely to view AI as a friend or even a romantic partner. In other words, the more exposure students have to AI through school, the deeper their personal engagement with the technology tends to be, a relationship the report describes as complex and still evolving.
The widespread adoption of AI in schools, particularly across a broad range of applications, is linked to a higher risk of data breaches, unsettling student-AI interactions, and AI-generated deepfakes. Such fabricated images and videos can be used for sexual harassment and bullying, amplifying problems that already existed; Laird notes that AI opens a new avenue for this kind of abuse. The report also finds that teachers who use AI extensively in their classrooms were more likely than light users to report large-scale data breaches. Laird, drawing on her background in data privacy, suggests that the more data schools feed into AI systems, the greater the risk of a security incident. Heavy AI users among educators were also more likely to see AI tools fail to work as expected, and some reported that AI integration had eroded community trust in their schools. One example is AI-powered monitoring software on school-issued devices, which can trigger false alarms and, in extreme cases, lead to student arrests, a burden that falls disproportionately on students who have no device other than the one their school provides.
Students at schools with extensive AI integration were more likely to turn to AI for mental health support, companionship, escapism, and romance, and a significant share of these personal interactions took place on school-provided devices or software. Laird stresses that students need to understand they are interacting with a tool, not a human, and that these tools have real limitations. Yet the research finds that the AI literacy and training students receive is often rudimentary, and only a small percentage of teachers have been trained to respond when a student's AI use may be harming their well-being. Many educators see AI's potential to enhance teaching, save time, and personalize learning, but students in heavily AI-integrated environments say they feel less connected to their teachers. Laird concludes that while AI offers benefits, schools must also acknowledge and address its negative consequences, starting by listening to students' experiences.