These Are Some Of Our Major Concerns With The New Online Safety Bill

It doesn't address anonymous online harm and there's still a long way to go, say Seyi Akiwowo, Founder & Executive Director, Glitch, and Danny Stone, Chief Executive of Antisemitism Policy Trust.


by Seyi Akiwowo and Danny Stone

We’ve shouted, pushed, cajoled, consulted and encouraged political leaders for some five years, but finally on 11 May, the Online Safety Bill was announced in the Queen’s Speech. The question now is: was it worth the wait?

That question is particularly pertinent in the wake of increased antisemitic and anti-Muslim attacks over the last few days, following an escalation of violence in the Middle East. While the online space can positively facilitate discussion, debate and diplomacy, it has also coarsened and toxified debate - to the point of real-world harm.

Social media posts stating 'the world today needs a Hitler', and the online co-ordination of convoys through areas with large Jewish populations, complete with calls for the rape of Jewish women, are but two shocking examples of a wider hateful trend. Violence online is an extension of violence offline. Yet to date there has been no effective response to online harms.

This is also part of a long-term trend: in the years since the Government's 'Digital Charter' was published, hate online has become far, far worse. The organisations we run have reported on appalling abuse, not least during the pandemic. Glitch's report into online abuse during the Covid pandemic found that 50% of Black and minoritised women and non-binary people had experienced online abuse.

Meanwhile, the Antisemitism Policy Trust has detailed specific online antisemitic attacks, including frightening calls to attack Jewish people in a 'Holocough'. Only a fortnight ago, a YouGov survey for BT found that 1.8 million people had suffered threatening behaviour online in the past year.

The rationale for action is clear. Our job now is to ensure the legislative response is effective in keeping everyone, especially women and minoritised communities, safe online.

Certainly, there were some welcome elements of the Online Safety Bill. Ofcom, the communications regulator, will oversee a Duty of Care, underpinned by Codes of Practice, on those services hosting user-generated content. It will be able to fine those breaching the Duty of Care and to demand transparency reports about their services - and the companies will pay for the privilege. So far, so good.

Furthermore, Ofcom will consult those of us representing people suffering harm from online sources when producing its Codes, and impacts on people with a certain characteristic (or combination of characteristics) will also be taken into account. This recognition of intersectional harms was not a given, and we consider it a significant win for the cause we have championed with the government.

However, there is still a long way to go. For example, whilst regulated services are to 'mitigate' or 'prevent' children accessing illegal content, exposure for adults is only to be 'minimised'. For us grown-ups, most of the risks on these services are to be managed through Terms and Conditions set by companies, for which there is no minimum standard, and which in some cases are so minimalist that they have enabled hate to flourish.

There are also areas in which the appropriate protections for free speech may produce perverse results. Under the proposed draft, a user could feasibly complain that legal racist or sexist content has been removed and demand that it be re-uploaded to a platform. In addition, newspaper comment boards are exempt from the duty of care - despite unregulated comment boards driving social media abuse, and despite the fact that failed self-regulation across social media is one of the main drivers of this very legislation.

Another major concern is that when services are divided into categories, as is proposed, it will only be the larger companies, like Facebook, which have additional duties to look after their users. Meanwhile, services known to inspire hatred and lead to offline harm, such as 4Chan, 8Chan, Bitchute or Gab, escape these necessary safeguards.

However, the most obvious and disappointing gap, besides the lack of any proper definition of categories of harm in the primary legislation, is the complete absence of any measures to address anonymous online harm. Despite parliamentarians, the footballing world, NGOs across the globe and many more clearly setting out, in excruciating detail, the ways in which anonymous online abuse has begun to unravel our civil discourse, this has been put in the 'too hard' box.

We are confident that MPs from all sides of the Commons will not accept this omission, and we look forward to seeing appropriate measures introduced to safeguard those who want or need their identities protected, whilst protecting us all from anonymous hate.

The wait for this Bill has not exhausted us. We are hungry for change and indefatigable when it comes to challenging online hate. This Bill is but the start of a process, and one we intend to ensure fixes the glitch.

Seyi Akiwowo, Founder & Executive Director, Glitch; and Danny Stone, Chief Executive, Antisemitism Policy Trust
