
Analysis
Why the world is rethinking kids’ access to social media

Bans, age verification, and growing concerns over mental health risks spark a global reckoning.

A growing number of countries and jurisdictions are moving to restrict children's use of social media, and of TikTok in particular, in response to studies highlighting the platforms' dangers for children and documented cases of violence linked to them. Australia has enacted the world's first nationwide ban on social media use by anyone under 16, with technology companies expected to introduce an age-verification code in February.
This week, Albania announced a one-year ban on TikTok for children under 16. In Florida, legislation passed in March will take effect on January 1, barring children under 14 from holding social media accounts, while Quebec is considering legislation similar to Australia's. These jurisdictions join France, which enacted a law in 2023 requiring parental consent before children under 15 can sign up for social media accounts, though enforcement has yet to begin. In October, Norway announced plans to raise the minimum age for social media use from 13 to 15.
Media personality Jojo Siwa at a TikTok event in Australia
(Photo: Getty Images)
These restrictions come as tech giants continue to claim their platforms are safe. For instance, in a 2022 congressional hearing, Facebook founder Mark Zuckerberg argued that social networks are "generally positive" for teenagers’ mental health. However, this assertion contradicts recent studies, which have linked social media use among children to increased anxiety and depression, body image issues, disrupted sleep, and eating disorders.
Ironically, many social media executives limit their own children's exposure to these platforms: Zuckerberg monitors his children's screen time, TikTok CEO Shou Zi Chew doesn't allow his children on TikTok, Bill Gates didn't let his children have phones until age 14, Snap CEO Evan Spiegel caps his child's tech use at 90 minutes a week, and Google CEO Sundar Pichai avoided giving his child a phone altogether.
Governments worldwide have argued that the vulnerabilities revealed by research, combined with risks like cyberbullying, scams, and exposure to inappropriate content, warrant intervention. While many parents welcome these legislative steps, opposition comes from various quarters.
Technology companies, led by Meta (owner of Facebook and Instagram), are among the most vocal critics. Meta is not especially troubled by Australia's move in itself, but worries it could inspire similar legislation globally, limiting the influx of younger users and threatening the companies' growth. These companies advocate a more flexible approach: letting young users join platforms with parental consent and providing enhanced parental-control tools.
However, even with tools to enforce age limits, their effectiveness remains questionable. A 2022 report by Ofcom, the UK's communications regulator, found that 60% of 8–11-year-olds maintain social media profiles despite the platforms' minimum age requirement of 13, and many children use their parents' profiles to bypass restrictions.
Governments expect technology companies to build age-verification mechanisms into the sign-up process. Current proposals include facial recognition, credit card verification, and third-party services that confirm identity against government documents, though Australia's legislation explicitly prohibits reliance on government documents. These solutions also raise privacy concerns, as they involve collecting additional personal data, whether through biometric scans or AI-based age estimation.
While researchers agree on the potential dangers social media poses to children, blanket bans and strict age restrictions fail to address the deeper problems inherent to the platforms. Instead, technology companies should be pressured to adopt transparent standards and strengthen their reporting systems. For example, platforms should be designed to prevent advertisers from manipulating users’ emotions and selling harmful products. Social media should also promote healthy expression and idea exchange, rather than relying on algorithms that prioritize attention-grabbing and polarizing content.
Legislation requiring child-friendly versions of social media could help. Such versions might include bans on targeted advertising for children and default feeds consisting of posts from accounts the user follows, rather than algorithmic suggestions. Additionally, platforms could be required to reduce risks such as cyberbullying and sexual exploitation.
Despite these recommendations, social media companies have long resisted such reforms. In 2023, an industry lobby group argued against child-focused legislation in the U.S. House of Representatives, claiming it could “reduce the availability of free services for children, disproportionately impacting low-income families.”