From 10 December 2025, social media companies in Australia must ensure that children under 16 cannot create accounts on their platforms. Existing accounts for under-16s will also need to be deactivated or removed.
The government calls this a world-first policy, designed to protect children from online pressures and harmful content. Research cited by the government shows that 96% of Australian children aged 10–15 use social media, and 70% have been exposed to harmful material, including misogyny, violent videos, and content promoting eating disorders or suicide. One in seven has experienced grooming attempts, and over half have faced cyberbullying.
The ban currently targets Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, Kick, and Twitch. Authorities are also considering extending restrictions to online gaming platforms, prompting Roblox and Discord to introduce stricter age checks.
The government evaluates platforms based on three factors:
- Does the platform’s main purpose involve online social interaction?
- Can users interact with others?
- Can users post content?
Platforms like YouTube Kids, Google Classroom, and WhatsApp are excluded. Kids will still be able to watch content on platforms like YouTube without an account.
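Read as a checklist, the three factors amount to a conjunctive test: a platform falls within scope only if the answer to all three questions is yes. A minimal sketch of that logic (the function and parameter names are illustrative, not drawn from the legislation, and the Act's actual wording includes exemptions this ignores):

```python
def is_age_restricted(main_purpose_is_social: bool,
                      users_can_interact: bool,
                      users_can_post: bool) -> bool:
    """Hypothetical sketch of the three-factor test:
    a platform is in scope only if all three criteria hold."""
    return (main_purpose_is_social
            and users_can_interact
            and users_can_post)

# A typical social network meets all three criteria:
print(is_age_restricted(True, True, True))   # True
# A service whose main purpose is not social interaction
# (e.g. a classroom tool) falls outside the test:
print(is_age_restricted(False, True, True))  # False
```

This framing helps explain the exclusions above: a service can allow interaction and posting yet still escape the ban if social interaction is not its main purpose.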
Children and parents won’t face penalties. The responsibility lies with the social media companies themselves, which can be fined up to A$49.5 million for serious or repeated violations.
Companies must take “reasonable steps” using age verification technologies, which may include:
- Government-issued IDs
- Face or voice recognition
- Age inference based on online behavior
Self-declared ages and parental consent will not count as verification. Meta (Facebook, Instagram, Threads) has announced it will begin closing teen accounts on 4 December; users whose accounts are removed by mistake will be able to restore them by providing ID or a video selfie. Other platforms have yet to announce their compliance strategies.
#SocialMediaBan #AustraliaNews #KidsOnlineSafety #DigitalWellbeing #TechNews #CyberSafety #AgeVerification
