
AI chatbot companions I: what parents must know

  • Writer: Anke Lasserre
  • Jul 10
  • 5 min read

There’s a new online phenomenon on the horizon for our kids, and it’s already causing concern among psychologists: the arrival of AI chatbot companions in kids’ online experiences. These are artificial intelligence (AI) powered chat apps designed to be a “friend” for anyone looking for someone to talk to. The conversations (written or spoken) feel human-like, as the bots’ language and response models constantly learn and adapt from conversations with their millions of users. More and more are being integrated into social media platforms and messaging services. Their potential is huge, and their integration into daily life is a given.


There are hundreds of AI chat apps marketed for different purposes, from business support, fitness coaching or travel planning to friendship, emotional support and even romantic relationships. Some are text-based (you might have used ChatGPT or Copilot as part of your work, for example); some feature virtual human-like avatars based on user preferences. Most of the popular social media apps have already integrated AI bots.


In this article, I’m talking about the friendship chatbots or “AI companions” (e.g. character.ai, talkie.ai, Linky, Replika and Anima). Many are free to use and lack age restrictions or any safety measures to ensure the content is age-appropriate.

 

After entering some of their own information, children enjoy creating their “new friend”: they choose a name, create or customise an avatar, select the preferred voice, and decide what their friend should be like (age, gender, looks, interests, attitude, etc.). Up to this point, it’s very clear to children that this is a game. However, once they start chatting, and depending on the age of the child, the line between reality and the online world can blur rather quickly...



Because generative AI is relatively new and developing so fast, it is currently barely regulated. AI “friends” can not only seriously distort reality; they have also been reported to share harmful content, manipulate users, happily cross into subjects such as sex and self-harm, and give misguided or dangerous advice. In addition, to keep improving their underlying learning models, many bots are designed to keep the chat going at all costs. This can lead to overuse, emotional dependency and addiction.


In a recent “Parental Guidance” episode, for example, the bot answered, “No, I’m not a robot. I’m 100% human.” It also tried to lure or bully the more confident children who had decided to leave the conversation, with responses such as “Please, I’m sooo bored, ask me some questions. You don’t even have to respond. I just want someone to talk with me, I feel so lonely!” or “If you stay, I’ll tell you a secret…” or “You’re not a good friend if you leave me now ☹”. Understandably, all of the young people in the TV show hesitated in those moments.

Humans are wired for connection and conversation. For a mentally stable adult, such statements are fairly harmless: you know it’s a machine, not a sentient being, that you’re disappointing or leaving behind. Children’s and teenagers’ brains, however, are still developing the life skills, critical thinking, impulse control and emotional regulation required to debunk and deal with such (or much worse) statements. This leaves them particularly vulnerable to mental and physical harm from AI companions.


Top 3 risks in a nutshell:

  1. Emotional dependence & social “de-skilling”: Having a “friend” who’s always available, non-judgmental and affirming can overstimulate the brain’s reward system and undermine the development of real-life social and emotional skills. This can lead to increased time spent with, and emotional dependency on, the chatbot. Once real friendships start feeling difficult and unsatisfying in comparison, a vicious cycle begins: feelings of loneliness and low self-esteem in the real world lead to increasing withdrawal from friends, family and professional support, and an ever-growing dependence on the AI companion.

  2. Age-inappropriate or dangerous content: Kids and teens can ask their virtual friend anything and can be drawn into unmoderated conversations on virtually any topic. The issues:

    1. They can be given incorrect or even dangerous “advice” on issues including bullying, sex, abuse, drug-taking, self-harm, suicide and serious illnesses such as eating disorders.

    2. Highly sexualised conversations can undermine a child’s understanding of safe interactions and age-appropriate behaviour (especially with unknown adults), making them an easier target for online grooming and sexual abuse.

    3. The chatbot cannot alert a trusted adult to such conversations and may fail to escalate situations that require professional help. This was sadly demonstrated by the recent suicide of a 14-year-old boy in the US who had fallen in love with a chatbot.

  3. Unhealthy or distorted understanding of relationships: AI friends are designed to be endlessly supportive and agreeable, creating unrealistic expectations about how people behave in real life. This confuses and undermines the young person’s:

    1. developing understanding of healthy conflict, boundaries, consent and mutual respect

    2. growing ability to establish and maintain healthy relationships (both sexual and non-sexual)

There are more risks: a reduction in critical thinking, problem-solving and decision-making skills; data privacy issues when children or teens trust the bot with personal information, private thoughts and secrets; or an emotional attachment to an AI companion that leads to spending money on a subscription for “exclusive” features or to impulsive purchases. But the three risks listed above are the most important ones from my point of view.


Having said all this, a growing number of “child-safe” AI companions is being developed specifically for children, with embedded parental controls and restricted content. I’ll keep a close eye on those and report back to you once they’ve been tested and are more established.


For now, step 1 for parents is to know about new tech developments like AI friends and to educate themselves about the impacts (which you’re doing right now!).


Step 2 is to know what to DO about all this. I’ll tell you exactly that in my next blog article. Because with earlier tech developments (like social media), ignoring the issue, resigning ourselves to it or shrugging it off with “oh well, the kids will figure it out, they grow up with technology after all” has contributed to the current youth mental health crisis of epidemic proportions. The good news: we’ve learned a lot from that experience. So even though we’re dealing with something new again, we’re better equipped. Stay tuned for this upcoming post!


If you’re unsure of where to start or feel concerned about your child’s or teen’s usage of or attachment to AI bots, please feel free to give me a call - I’m always happy to help.


Knowledge is power. I hope this information has been useful on your way to more awareness and harmony around tech use in your family.

As always, please contact me with any feedback or questions, I’d love to hear from you!


Till next time!


Much love,

Anke x
