The regulatory landscape has changed faster than anyone expected

Two years ago, age assurance was a niche topic discussed mostly by child safety advocates and a handful of regulators. Today it is the subject of binding legislation across four continents, with fines that would make even the largest platforms pay attention.

So what happened? And more importantly, are platforms actually ready for what is coming?

What the new laws actually require

The UK’s Online Safety Act is now in force, and it does not mince words. Platforms that host content harmful to children must take steps to prevent children from encountering it. That means age assurance: not a tick box, not a terms-of-service clause, but a meaningful, demonstrable mechanism for knowing whether a user is old enough. Ofcom has the power to enforce this, and the fines run up to 10% of global turnover.

The EU’s Digital Services Act takes a different but complementary approach. It focuses on systemic risk, and age-inappropriate content reaching minors is explicitly named as one of those risks. Very large online platforms are required to assess and mitigate it. The practical implication is the same: you need to know how old your users are.

In the US, the picture is fragmented but accelerating. State-level laws in Utah, Texas, Louisiana, and others are mandating age verification for specific categories of content. Federal proposals are moving through Congress. The direction of travel is unmistakable—even if the specifics vary by jurisdiction.

Australia has gone furthest. The Online Safety Amendment (Social Media Minimum Age) Act sets a minimum age of 16 for social media, with platforms required to take reasonable steps to prevent under-16s from holding accounts. The penalty regime is substantial, and the political will behind enforcement is strong.

The pattern across all of these is the same. Governments are no longer asking platforms to self-regulate. They are telling them what to do, and attaching real consequences for failure.

Why most platforms are not ready

Most platforms have relied on self-declaration: “Are you over the age of X? Yes or no.” That was never age assurance. It was always a legal fig leaf, and regulators have now said so explicitly.

The platforms that have moved beyond self-declaration have mostly turned to one of two approaches: ID upload or facial age estimation. Both work, in the narrow sense that they can determine a user’s age with reasonable accuracy. But both create new problems.

ID upload means collecting and storing government-issued identity documents. That is a data protection risk, a honeypot for attackers, and a significant source of user friction. It also excludes people who do not have ID, which in many countries is a substantial portion of the population. We saw this play out when Discord suffered a data breach in which the ID photos of 70,000 users were exposed (see BBC coverage).

Facial age estimation means capturing biometric data. Even if the image is processed and deleted immediately, the act of scanning a face to access a website creates a trust barrier that many users are not willing to cross. I found this during my PhD, where I carried out sentiment analysis of UK public opinion on mandatory age verification.

The platforms know this. They are caught between a legal obligation to verify age and the practical reality that every tool currently available to them creates problems of its own.

The gap that regulation has exposed

What the new laws have done, perhaps unintentionally, is expose a gap in the market. Regulation says you must know how old your users are. Privacy law says you must not collect more data than necessary. Users say they will not have their face scanned to watch a video.

Those three requirements are in tension with each other, and the current generation of age assurance tools does not resolve that tension. They satisfy one requirement at the expense of another.

This is why the conversation is shifting towards privacy-native approaches: methods that can assess age without collecting identity, ways of maintaining confidence in a user’s age over time without identity-grade record-keeping, and technology that treats privacy as an architectural principle rather than a compliance checkbox.

The platforms that move first on this will have a significant advantage, not just in compliance, but in user trust. The ones that wait will find themselves scrambling to retrofit solutions under regulatory pressure, which is never the best time to make good technology decisions.

What comes next

The regulatory direction is set. There is no scenario in which governments decide age assurance does not matter after all. The only questions are how quickly enforcement ramps up and how high the bar gets set.

For platforms, the time to act is now, not when the first major fine lands. The technology exists to do this in a way that respects privacy, reduces friction, and satisfies regulators. The question is whether platforms will adopt it proactively or be forced into it reactively.

If your compliance strategy still relies on asking users to confirm their own age, that strategy has an expiry date. And depending on which jurisdiction you operate in, it may already have expired.

Are you interested in experiencing the next generation of age assurance?

We provide continuous, privacy-native age intelligence that runs in the background of your existing platform. No ID checks. No facial scans.

Talk to us →