Regulating content won't make the internet safer
- Written by Julia Hörnle, Professor of Internet Law, Queen Mary University of London
An upheaval of the law governing what can be published online is taking place in the shape of the online safety bill[1]. The bill[2], which is currently making its way through parliament, has the hyperbolic ambition “to make the UK the safest place in the world to be online”, and proposes to do this through a complex system of regulation.
It calls for platforms, search engines and social media to regularly assess the risks of harms stemming from their services and take measures to mitigate them. The regulator, Ofcom, will carry out its own risk assessments, establish risk profiles for different platforms (such as YouTube, Instagram or Tinder), and publish guidance in the form of “codes of practice”.
The act will apply even where the platform provider is abroad[3]. This means that platforms whose services carry potentially harmful activity are caught, no matter where in the world they are based.
Ofcom has, of course, longstanding experience in regulating audio-visual content on television. But online safety is about so much more than taking down or blocking harmful content. It is really about changing the business models of companies like Meta (the parent company of Facebook, Instagram and WhatsApp). These companies’ profits depend on keeping users engaged no matter how harmful the content they are engaging with may be.
Harmful business model
Harm online is caused by business models that rely on exploiting the value of users’ data trails, mainly through targeted advertising. The best example of this is social media platforms, which make money by making it easy to share content, and selling information gleaned from that content-sharing to advertisers.
While these platforms are mostly free to use, they are paid for with the sale of user data – the platforms act essentially as brokers, monitoring our online behaviour and selling information about what makes us engage. This has created an incentive for companies to keep users active so they can be further subjected to advertising.
The goal of continuous engagement is reached through platform design features like endless scrolling and autoplay videos. Another aspect is the algorithms that decide what content users see – this is where harm often comes in, as keeping users engaged means presenting more extreme content[4] or encouraging[5] users down a rabbit hole.
Social media platforms continuously nudge us to react to content. This speed of response makes us reactive and unreflective, leading to problems like pile-on harassment. It also leads to the vast amplification of disinformation and hate posts, which can be shared by millions within minutes, with destructive effects. Consider the riot at the US Capitol in January 2021, or the attacks on Rohingya people that followed hate speech on Facebook and have now led to lawsuits[6] in the UK and the US.
Can it be changed?
The parliamentary committee scrutinising the bill rightly, in my view, raised the importance of safe platform design. However, it is difficult to reinvent an established model, especially one that has been hugely profitable for tech companies. Platforms are likely to resist changes to their business operations, and such changes will be difficult for Ofcom to enforce.
ARVD73 / Shutterstock[7]
Less dependency on engagement, sharing and advertising revenue might be the best way to actually reduce online harms, but it could also spell the end of “free” social media as we know it.
Whether the eventual act lives up to the promise of “online safety” will largely depend on how it deals with platforms that fail to make their services safe. Content regulation of the kind proposed in the online safety bill needs to be complemented by regulation of the technology itself.
A step in this direction is the EU’s Artificial Intelligence Act[8], which applies different levels of regulation, depending on the severity of the risks posed by the technology.
Competition law must also be employed to curb the risky business practices of dominant operators in order to prevent harm. The EU’s proposal for a Digital Markets Act[9] is an example of a law which, once enacted, would curb abusive business practices by requiring large platforms to comply with a series of obligations – for example, restrictions on targeted advertising without consent.
While making the internet safer is a formidable challenge, efforts are helped by growing anger about online abuse and awareness about the exploitation of our data. This anger may well lead to a political will to effectively regulate powerful tech companies, not just in the UK, but globally.
References
- ^ online safety bill (www.gov.uk)
- ^ bill (bills.parliament.uk)
- ^ platform provider is abroad (papers.ssrn.com)
- ^ extreme content (www.rev.com)
- ^ encouraging (www.sciencedirect.com)
- ^ lawsuits (www.theguardian.com)
- ^ ARVD73 / Shutterstock (www.shutterstock.com)
- ^ Artificial Intelligence Act (digital-strategy.ec.europa.eu)
- ^ Digital Markets Act (oeil.secure.europarl.europa.eu)