Experts warn children may form emotional bonds with AI, overshadowing human relationships and hindering social skills development.
Jason Nelson
Critics and skeptics of artificial intelligence regularly claim it threatens human jobs. But given its immediately disruptive potential in education, the interests of one demographic deserve equal scrutiny in the era of AI: children. Even before the internet and mobile devices, kids were susceptible to forming bonds with toys. The lifelike interactivity of AI chatbots now represents a seismic shift.
"Children can form deep relationships with inanimate objects, like a teddy bear—now you have this tool that gives you exactly what you need—because AI is going to be amazing at figuring out what you want to hear and giving that to you," psychologist and executive coach Banu Kellner told Decrypt in an interview.
Kellner is the founder of the SuperHuman Society, which engages experts from diverse backgrounds to address the challenges and opportunities posed by artificial intelligence and other emerging technologies.
In a scenario reminiscent of the 2022 sci-fi horror film M3GAN, Kellner said, children ascribe human characteristics to AI products like toys and games, establishing bonds that might surpass their human relationships. This bond represents a significant problem, because the child may come to rely on AI and never learn to navigate complex human relationships.
The challenge, Kellner said, is ensuring that AI products help children cultivate life skills, particularly social skills, that foster human engagement rather than replacing those human interactions altogether.
As AI develops, companies are racing to bring the technology to the masses, including in education and entertainment. Education-focused companies using artificial intelligence include Carnegie Learning, Cognii, and Kidsense. On Tuesday, kid-centric technology company Pinwheel announced the launch of the “kid-safe” PinwheelGPT, designed for children aged 7-12, which the company claims generates only age-appropriate responses.
"We've created a fun and educational way for today's kids to get in on the exciting power and potential of ChatGPT and accessing information on the internet but with safe, age-appropriate guardrails," Pinwheel CEO and Founder Dane Witbeck said in a statement. "Not only can kids participate in the AI tech that's quickly transforming our world, but parents can be actively engaged in the conversation—by viewing and stepping in when or where it feels right—to provide guidance or clarification.”
Last month, Khan Labs launched the beta version of Khanmigo for the Khan Academy learning platform. Khanmigo uses a chatbot to interact with students while mimicking historical figures, including U.S. President Abraham Lincoln, Mongol warlord Genghis Khan, British Prime Minister Winston Churchill, and U.S. Civil War spy and Underground Railroad conductor Harriet Tubman.
Kellner emphasized that the primary concern lies not just with AI but specifically with artificial intimacy, referring to the emerging AI products that simulate relationships—like AI friends or romantic partners—which are already available in the market and will only improve with time.
In June, a former Google executive, Mo Gawdat, claimed that virtual and augmented reality would one day allow people to have virtual sexual experiences indistinguishable from reality. Gawdat said the next likely step would be sex with physical robots.
“If we can convince you that this sex robot is alive or that sex experience in a virtual reality headset or an augmented reality headset is alive, it’s real, then there you go," he said.
Adding to Kellner’s concern for future generations, especially in Western countries, is the so-called epidemic of loneliness that U.S. Surgeon General Vivek Murthy declared a public health crisis. To cope with this loneliness, some people—against most experts’ advice—have turned to AI companions and chatbots like OpenAI's ChatGPT to address mental health concerns.
“While AI chatbots can offer instant mental health support, they cannot replace the nuanced and empathetic care provided by human therapists,” Dr. LeMeita Smith, a Dallas-based clinical therapist with Aurora Behavioral Health System, told Decrypt in an email. “Relying solely on AI-driven mental health interventions may neglect the depth of emotional support required for certain conditions.”
In July, a 21-year-old English man stood trial for treason for an alleged plot to assassinate the late Queen Elizabeth II in 2021, according to a report by the Guardian—and court documents suggested that an AI chatbot encouraged him to follow through on his plan.
According to court documents, Jaswant Singh Chail exchanged more than 5,200 messages with the chatbot, many of them sexually explicit, and told the chatbot that he was an assassin, to which the AI chatbot allegedly responded: "I'm impressed.”
Kellner said the issue starts with the companies building the technology.
“Tech companies are capitalizing on how our brains are wired to keep us hooked on social media platforms,” she said. “This disproportionately affects children, as they lack fully-developed executive functions.” For example, she added, adults have the benefits of a fully developed prefrontal cortex, which inhibits behavior.
As language models evolve, Kellner continued, the potential for AI to foster addiction rises because it engages with users individually, reinforcing long-term usage.
“These things can engage with you in a way that responds to your needs,” the San Francisco-based psychologist said. “So if we have an engagement model and these AI products—if they are making more money by engaging us for longer terms—that means that they are incentivizing getting addicted to that thing.”
AI’s use on social media platforms is another concern for experts. An April 2023 Pew Research Center report shared results of a survey conducted in spring 2022 confirming what most people already suspect: the majority of teens use digital platforms like YouTube, TikTok, Instagram, and Snapchat.
Stephen Aizenstat, a clinical psychologist, founder of the California-based Pacifica Graduate Institute, and author of the upcoming book "The Imagination Matrix," said reliance on AI could lead to trouble.
“When relying on AI for content or images, it can increase agitation,” Aizenstat said. “People are isolated, non-communicative, which can bring on depression or even suicide. Too often, we are measuring ourselves against AI images.”
Earlier this week, Mark Zuckerberg’s Meta announced plans to introduce AI chatbots on Facebook and Instagram in a move to engage users and increase retention. Google introduced AI-generated summaries to YouTube. Last month, Elon Musk launched a new AI startup called xAI, whose aim is to “understand the universe.” According to PCMag, artists unhappy with the idea of xAI being trained on their work said they would leave Twitter.
Kellner said the question of AI's impact on children comes down to whether people can use the technology to add value rather than letting it become a replacement.
“Rather than being complacent and not doing anything and delegating everything to AI,” she said, “we should develop something to guide parents so that we set our kids up for a successful future.”