The problem of age assurance
30 May, 2023
4 minute read

The ongoing development of the Online Safety Bill is raising important questions about age verification for the online world.

As it stands, the Bill will place tighter age restrictions on some platforms. However, these restrictions do not currently extend to some of the most harmful content online, such as pornography, which will remain open to anyone.

This oversight is cause for concern, as explained in our response to the draft Bill. In theory, it could mean a child finds it more difficult to watch cat videos on social media than to access hardcore, violent pornography.

However, mandating widespread age assurance is no simple solution – and neither is it necessarily the right one. While there are undoubtedly sites and content that are inappropriate for children to access, is age verification the answer?

What is age assurance and why is it important?

“Age assurance” is an umbrella term for both age verification and age estimation. “Age verification” refers to the process of determining someone’s age by checking against trusted data, such as a driving licence or passport. 

The Information Commissioner’s Office (ICO) distinguishes between this and “age estimation”, where a user’s age is estimated by an algorithm. The result can range from a simple determination of whether a user is an adult (18 or over) to placing them in a particular age category.

Establishing a user’s age can protect children from accessing harmful or inappropriate adult content, as well as from intrusive activities such as data profiling and targeted marketing strategies. It can also be used to ensure online services can be tailored appropriately to children’s age and needs.

The problem of age assurance

Age assurance is relatively simple to implement in offline contexts. You wouldn’t be surprised if a cashier asked for identification to prove you are over 18 before selling you alcohol. However, things become trickier when it comes to verifying a user’s age online.

Age estimation techniques are still emerging and developing – and can be inaccurate. Children mature at different rates and can’t necessarily be categorised in such a binary way. Similarly, some users may end up excluded from services they are old enough to access due to physical or cognitive difficulties that make it harder for them to satisfy age estimation criteria. Concerns have also been raised around biases based on profiling or facial analysis. 

Age verification carries its own risks. It can involve processing sensitive data by requiring access to official documentation, such as a birth certificate or passport. This can conflict with the recently introduced Children’s Code, which sets out new standards for protecting children’s data online. Requiring users to provide official documentation could also discriminate against certain socio-economic groups, who are more likely to lack these documents.

Is it the right approach?

While children need protection from harmful content, mandating age assurance could result in them being effectively locked out of some parts of the internet. This is not an effective way to build digital resilience, whereby children develop the skills and experience they need to recognise and manage the risks they may come across online. Children need space to explore independently and, if something does go wrong or they encounter something upsetting, they need to know how to take steps to resolve things by speaking to a trusted adult – experiences that may be limited by age assurance.

Age ratings can also have the opposite of the intended effect, encouraging risky behaviour as children attempt to get around restrictions to reach whatever is being blocked.

Parental consent technology

In the offline world, we impose barriers based on age – for example, being 18 or over to buy alcohol. We also impose age-related guidelines, such as some film classifications, which allow an element of adult discretion. For example, children must be 12 or over to see a 12A-rated film unless they are with an adult who can decide whether the film is suitable. This approach acknowledges that a parent or carer is often the expert on what their child is “ready” to experience.

As explained in our response to the draft Bill, we’d like to see more consideration of parental consent technology, including the option to connect accounts and a requirement to seek parental consent for younger users.

What happens next?

It is still unclear how effective age assurance technology will be in distinguishing levels of maturity and age. If the instruments are too blunt, children may be age-gated out of services and will respond by migrating elsewhere, pushing young people’s digital risk-taking further from parental oversight.

However, without adequate age assurance in place to protect children from some of the internet’s most harmful content, it’s difficult to see how the Bill will meet its aim of making the UK the safest place in the world to go online.
