Published: 12:16, September 9, 2025
AI chatbots are ‘clear danger’ to kids, Australian watchdog says
By Bloomberg

Artificial intelligence-powered chatbots that encourage suicide or hold sexually explicit conversations pose a “clear and present danger” to children, Australia’s online safety regulator said, as it rolled out new rules governing the services.

The measures are the latest in a series of stringent digital restrictions in Australia, including a world-first social media ban for under-16s. That ban takes effect in December and covers a range of services, including Facebook and Instagram, both owned by Meta Platforms Inc., and YouTube.


In a statement on Tuesday, Australia’s eSafety Commissioner Julie Inman Grant said children are being exposed to “awful” age-inappropriate content at an increasingly young age. Inman Grant said she had heard of 10-year-olds engaging in sexualized interactions with AI companions, which remain mostly unregulated.

Under new rules, sites that display or distribute pornography, or other “high-impact content,” will have to apply age-checking technology to stop children accessing the material. App stores must ensure that apps are appropriately rated, and that age-assurance measures are in place for any downloads.


The rules apply to a range of online services, including social media platforms, gaming sites and AI services. Breaches can attract penalties of as much as A$50 million ($33 million).

“I do not want Australian children and young people serving as casualties of powerful technologies thrust onto the market without guardrails,” Inman Grant said.