It has become a rite of passage for young people: their first access to a smartphone that opens up the world of whizzy apps and social media. But this can also be a portal to pernicious material, from violence and pornography to content promoting suicide, self-harm or eating disorders. Britain is using a new law to press tech companies to take steps the government claims will “make the UK the safest place in the world to be online”. This is a bold effort that many other countries are watching closely. But it has taken a long time to get this far — and its success is far from assured.
The UK’s groundbreaking Online Safety Act became law last October, five years after being initiated, and goes further than most efforts in global democracies to date — including the EU’s Digital Services Act — to safeguard under-18s. It handed greater powers to the communications regulator, Ofcom, to hold tech companies accountable for legal violations, including criminal liability for named executives if they fail to take steps demanded by the regulator to protect children. Offending companies can, in theory, be fined up to 10 per cent of global revenues.
Ofcom has now published more than 40 draft measures that online platforms and search services should follow to protect the young. All services that do not ban harmful content are expected to implement “highly effective” age checks — using ID documents or facial verification — to block children from seeing it. Companies must “tame” any algorithms that push content into children’s personal feeds, so that deleterious material is filtered out. Services must have efficient content moderation and move fast to remove offending matter.
The government says time has been taken on crafting the rules to guard against loopholes. But some will be hard to close. Mandatory checks of documents such as passports to verify age would be preferable to facial verification software, since teenagers might game the latter by getting older friends or siblings to stand in. Teens are already adept at using mechanisms such as virtual private networks to evade existing controls and may find similar ways to circumvent age checks altogether.
The online world, meanwhile, is constantly evolving; since the UK bill was first proposed, TikTok has gone from fringe player to social media behemoth. The act creates some flexibility to use secondary legislation to address evolving risks. But legislators and regulators will have to do a better job not just of keeping up with tech innovation such as artificial intelligence, but getting ahead of it.
Adequate staff and resources will be vital. By January this year, Ofcom had hired nearly 350 people for online safety, including some — in poacher-turned-gamekeeper fashion — from senior roles at Meta, Google and Microsoft; 100 more are to be added in 2024. It plans to levy fees on companies, with the cost of implementing the act estimated at £166mn by April 2025. Yet as well as monitoring and enforcement, it will also have to be ready to square up in legal cases against some of the world’s wealthiest companies.
Beyond the EU, countries including Australia and New Zealand have already introduced online safety regulations aimed at protecting the young. In the absence of congressional US legislation, several states have attempted to follow UK-style measures — but are running up against the US First Amendment right to free speech. A federal judge last September blocked California from enforcing a 2022 law based on UK rules after a suit by a trade group whose members include Amazon, Google and Meta. Under-18s have a right, like adults, to enjoy the fruits of the online world. But safeguarding that right must be balanced against the need to protect vulnerable young minds from sometimes tragic harm.