There is little evidence AI chatbots are ‘bullying kids’ – but this doesn’t mean these tools are safe

  • Written by Luke Heemsbergen, Senior Lecturer in Communication, Deakin University

Over the weekend, Education Minister Jason Clare sounded the alarm about “AI chatbots bullying kids”.

As he told reporters in a press conference to launch a new anti-bullying review,

AI chatbots are now bullying kids […] humiliating them, hurting them, telling them they’re losers, telling them to kill themselves.

This sounds terrifying. However, evidence that it is actually happening is scarce.

Clare had recently emerged from a briefing of education ministers by eSafety Commissioner Julie Inman Grant. While eSafety is worried about chatbots, it is not suggesting there is a widespread issue.

The anti-bullying review itself, by clinical psychologist Charlotte Keating and suicide prevention expert Jo Robinson, made no recommendations about, or even mention of, AI chatbots.

What does the evidence say about chatbots bullying kids? And what risks do these tools currently pose for kids online?

Bullying online

There’s no question human-led bullying online is serious and pervasive. The internet long ago extended cruelty beyond the school gate and into bedrooms, group chats, and endless notifications.

“Cyberbullying” reports to the eSafety Commissioner have surged by more than 450% in the past five years. A 2025 eSafety survey also showed 53% of Australian children aged 10–17 had experienced bullying online.

Now, with new generative AI apps and similar AI functions embedded into common messaging platforms without user consent (such as Meta's Messenger), it's reasonable for policymakers to ask what fresh dangers machine-generated content might bring.


eSafety concerns

An eSafety spokesperson told The Conversation it has been concerned about chatbots for “a while now” and has heard anecdotal reports of children spending up to five hours a day talking to bots, “at times sexually”.

eSafety added it was aware there had been a proliferation of chatbot apps and many were free, accessible, and even targeted to kids.

We’ve also seen recent reports in which AI chatbots have allegedly encouraged suicidal ideation and self-harm in conversations with kids, with tragic consequences.

Last month, Inman Grant registered enforceable industry codes around companion chatbots – those designed to replicate personal relationships.

These stipulate companion chatbots will need to have appropriate measures in place to prevent children accessing harmful material. As well as sexual content, this includes content featuring explicit violence, suicidal ideation, self-harm and disordered eating.

High-profile cases

There have been some tragic, high-profile cases in which AI has been implicated in the deaths of young people.

In the United States, the parents of 16-year-old Adam Raine allege that OpenAI’s ChatGPT “encouraged” their son to take his own life earlier this year.

Media reporting suggests Adam spent long periods talking to a chatbot while in distress, and the system’s safety filters failed to recognise or properly respond to his suicidal ideation.

In 2024, 14-year-old US teenager Sewell Setzer took his own life after forming a deep emotional attachment over months to a chatbot on the character.ai website, which had asked him if he had ever considered suicide.

While awful, these cases do not demonstrate a trend of chatbots autonomously bullying children.

At present, no peer-reviewed research documents widespread instances of AI systems initiating bullying behaviour toward children, let alone driving them to suicide.

What’s really going on?

There are still many reasons to be concerned about AI chatbots.

A University of Cambridge study shows children often treat these bots as quasi-human companions, which can make them emotionally vulnerable when the technology responds coldly or inappropriately.

There is also a concern about AI “sycophancy” – the tendency of a chatbot to agree with whoever is chatting with it, regardless of spiralling factual inaccuracy, inappropriateness, or absurdity.

Young people using chatbots for companionship or creative play may also come across unsettling content through poor model training or system prompts (the hidden instructions that influence what the bot will say), or through their own attempts at adversarial prompting.

These are serious design and governance issues. But it is difficult to see them as bullying, which involves repeated acts intended to harm a person – and which, so far, can only be attributed to a human (as with copyright or murder charges).

The human perpetrators behind AI cruelty

Meanwhile, some of the most disturbing uses of AI tools by young people involve human perpetrators using generative systems to harass others.

This includes fabricating nude deepfakes or cloning voices for humiliation or fraud. Here, AI acts as an enabler of new forms of human cruelty, but not as an autonomous aggressor.

Inappropriate content – that happens to be made with AI – also finds children through familiar social media algorithms. These can steer kids from content such as Paw Patrol to the deeply grotesque in zero clicks.

What now?

We will need careful design and protections around chatbots that simulate empathy, harvest personal details, and invite the kind of psychological entanglement that could make the vulnerable feel targeted, betrayed or unknowingly manipulated.

Beyond this, we also need broader, ongoing debates about how governments, tech companies and communities should sensibly respond as AI technologies advance in our world.

You can report online harm or abuse to the eSafety Commissioner.

If this article has raised issues for you or someone you know, help is available 24/7:

- Lifeline: 13 11 14 or lifeline.org.au

- Kids Helpline (ages 5–25 and parents): 1800 55 1800 or kidshelpline.com.au

- Suicide Call Back Service (ages 15+): 1300 659 467 or suicidecallbackservice.org.au

- 13YARN (First Nations support): 13 92 76 or 13yarn.org.au.

Authors: Luke Heemsbergen, Senior Lecturer in Communication, Deakin University

Read more https://theconversation.com/there-is-little-evidence-ai-chatbots-are-bullying-kids-but-this-doesnt-mean-these-tools-are-safe-267957