Many users of social media have their own views about what the rules should be, and their own criticisms of what the major platforms could do differently. But Elon Musk is the world’s richest man. So when he thought that Twitter had got its rules wrong, his solution was to buy the platform. He has had his offer of $44 billion accepted by the Twitter board.
Musk has pushed back at critics of the move. He tweeted that “sunlight is the best disinfectant” to encourage greater scrutiny of those who had protested against his bid for the platform. Musk may also need to apply that principle to his own side: to cast more light on where the line falls between the champions of free speech he welcomes as allies and the vocal support for his agenda from some of the worst racist trolls on the internet, which he may wish to disown and discourage.
Musk will not own Twitter for several months. But his intended bid has already changed behaviour on the platform among some of its most dangerous and persistent trolls. There is a new sense of excitement among those banned most often from Twitter for hateful conduct but who persistently create new ‘respawn’ accounts to try to evade these bans.
I have been working with a dedicated group of volunteers who track the most egregious ‘racist respawners’ on the internet. They report that the days after the takeover saw a major spike in activity among 120 of the worst racist respawners.
This group of persistent ban violators – some of whom go by the group name ‘The Shed’ or ‘The Rape Room’ – tends to generate 25-30 new ‘respawn’ accounts every day. That rose to 44 respawn accounts on the day after the announcement, and then to 86 and 100 respawn accounts on the following days, as the Press Association reports today.
The main theme of the messages was “we’re back” with promises that “full-on racist mode” could be resumed.
The common hope among these users is that the Elon Musk era will be one in which anything goes. Some express the hope that, under new ownership, all of their previously banned accounts will be restored.
Musk’s takeover bid has also been celebrated by those who feel that the only meaningful litmus test of whether free speech exists is being able to use the n-word openly. This has led to a surge in the use of the n-word on Twitter, with several users explicitly arguing that they now have Musk’s permission and protection to be as racist as they like. Musk will one day have to disappoint those people to protect Twitter’s reputation and licence to operate.
My own involvement with monitoring and reporting online hatred on Twitter arose from the experience of being targeted by networks of racist users in the summer of 2019. I found a very patchy response when trying to report vicious racist harassment through the reporting system.
To Twitter’s credit, it has continued to enforce its rules. Indeed, it has begun to police this network much more effectively in the last three months than it had done over the previous three years, thanks to the tracking activities of volunteers.
A good sense of the nature of racist respawners’ activities can be captured by the response of one of the network’s ring-leaders after I wrote about the failure to keep banned users off the platform. This user, ‘Shlomo’, was a committed antisemite, whose user names were a series of variants on ‘Smelly Jew’. The voluntary anti-hate network had got this user banned 20 times over the previous six months, from accounts including ‘Noxious Jew’, ‘Fetid Jew’, ‘Pungent Jew’, ‘Malodorous Jew’ and so on.
Yet his response to my article was to tweet: ‘I would like to thank the Shed and the Rape Room for all of their support over the years. I couldn’t have done it without you guys’.
It is Twitter’s policy to remove accounts that evade suspension. The problem was that it was doing so much too slowly. It was taking an average of 15 days to remove such egregious reoffenders after their evasions of previous bans were reported. This gave ample opportunity for the network to harass other users and to rebuild itself. And there was a significant deterioration in enforcement during the Covid pandemic. The trough came in the autumn of 2020 when, on average, it was taking Twitter 63 days from a report to suspend one of the Shed/Rape Room accounts.
Twitter agreed to improve its enforcement against this group after it was handed a dossier on the scale of impunity enjoyed by the Shed and Rape Room networks, six months after the Euro 2020 final. That constructive challenge to the platform’s performance with banned users appears to have had an effect.
Since then, the average time from report to suspension has fallen from a month to around a day. The average time to suspension was 0.8 days in March. The surge in activity since the Elon Musk news has created capacity pressure – the average has risen to 1.5 days – but the continued enforcement appears to be taking ‘respawns’ back towards where they were before the announcement.
While Twitter’s improved action this year has certainly improved my experience of the platform, it ought to be the basis for wider change. There are important ‘what works’ lessons for how effective scrutiny and pressure could be scaled up. And the platforms can do much better if they have the will and capacity to act. Creating the right dashboard of indicators, such as the time it takes to suspend banned users, would help. So would transparent reporting and scrutiny mechanisms by regulatory forums. Parliamentary oversight committee hearings could institutionalise the pressure for improved performance.
Another example is a user called Dean from Plymouth, who uses the online identity “British Dean”. He was one of the primary instigators of the online racism against the England footballers at Euro 2020. He has left Twitter for Gab since Twitter committed to suspending his accounts on the day that they appear. He used the Gab platform to declare: “If Elon takes over twitter and hit the reset button on all suspended accounts, there will be no more Mr Nice Guy. I will choose violence”. Despite this user having breached earlier sanctions for online harassment, the Devon and Cornwall police show no signs of acting.
If and when the Elon Musk Twitter sale goes through, it may take the debate about free speech and hate speech back to first principles.
“If people want less free speech, they will ask government to pass laws to that effect”, says Musk, though this rather ducks the primary question of what should be legal.
Opposing threats of violence should be common ground – but what about racist speech that dehumanises whole groups? Not many people are aware that it has only been against the Twitter rules to tweet something as repellent as “the Jews are maggots who have no place in our country” since the summer of 2019 – and this prohibition on dehumanising speech was only extended to ethnic groups in December 2020.
If Musk says that this type of extremely racist speech will only be banned if law-makers pass a law, then it will be an invitation for stronger regulation in Britain and Europe, and lead to different Twitter rules here than in the United States.
Elon Musk is currently the champion not just of advocates of free speech but of the worst racist trolls on the internet. If he wants to buy the platform and to retain its legitimacy to operate – with users, advertisers and law-makers – he may need to think more clearly about where exactly to draw the line between the free speech we want to defend and the toxic hate speech that almost everybody would agree to exclude.