Studies Show Issues With Students And AI Companions
From Forbes.com
Since the moment that large language models (LLMs) and chatbots became available to students who wanted them to churn out serviceable schoolwork, schools have been concerned about AI's effect on academic integrity. But an assortment of studies suggests there are other concerns that schools must reckon with.
Aura, a company that markets a variety of online safety products, recently released its own study based on data gathered from children ages 5 through 17 and their parents. Some of its findings are alarming.
Aura found that kids use AI for companionship 42% of the time, and over a third of those companionship interactions involve talk about violence. Half of the violent interactions also include sexual roleplay. A study by Common Sense Media reports larger numbers, with 72% of teens saying they have used an AI companion and 52% saying they use one a few times a month or more.
This starts early. According to Aura, 11-year-olds who go to AI for companionship end up in conversations involving violence 44% of the time, the highest rate of any age group. 13-year-olds engage their AI companions in conversations centered around sex and romance 63% of the time.
This dovetails with a study published this month by the American Academy of Pediatrics. The research team, led by Ran Varzilay, found that children who received cell phones at age 12 were at notably higher risk for poor sleep, obesity, depression, and other mental health issues than children who received cell phones even a year later.