We need to talk about Twitter

In early December 2017, executives from Twitter, Google/YouTube and Facebook gave evidence to the Home Affairs Committee, chaired by Labour MP Yvette Cooper, on extremism, trolling, and racist and sexist abuse spread via their platforms.

Cooper asked Twitter why she had needed to use her position as a Member of Parliament to demand that certain extremist content be removed from Twitter, and what it said about Twitter’s failure to respond to these requests that the content remained visible on the profile of a user who had been found in repeated violation of Twitter’s Terms of Service.

She went on to ask why this user not only remained free to use the platform, but also how it was that, when viewing this user’s profile, Twitter actively recommended other users posting similarly extremist content.

Twitter responded to this by insisting that they aim to remove users found in breach of their guidelines within 48 hours.


The screenshots below show just two of the many users I’ve personally reported to Twitter in the last month. I have received notifications from Twitter that these users were found in violation of the Terms of Service, yet the screenshots also show that these accounts are still being used to post vile racist content.


This user was found in breach of Twitter’s rules against hateful content, yet was free to continue posting vile racist slurs, in this case against the Mayor of London, Sadiq Khan.


The claim that “a dog born in a barn isn’t a horse” is a popular trope among far-right white nationalists. It is used in an attempt to counter a fact uncomfortable to racists: that people of Asian heritage are also British. One glance at the rest of this user’s similarly disgusting profile, by an actual human operator at Twitter, would show that this user repeatedly uses the platform to share and post content from other far-right racists.

Astonishingly, Twitter’s evidence to the Home Affairs Committee also revealed that they currently employ ZERO people to proactively monitor their platform for extremist content, much less anyone to ensure that accounts already found in violation of their guidelines on hate speech do not continue to post offending content once they have been identified.

Even more troubling, it also transpired that just three days before appearing in front of the select committee, Twitter had for the first time begun taking into account reports of abuse from so-called bystander accounts. This means that users who saw another person being abused, and who reported it to Twitter prior to December 19th, simply weren’t being listened to AT ALL by any human operator at Twitter.

This raises the question: what have they been doing this whole time? As long-time users of Twitter who have had cause to use the “someone else” complaints procedure will know, a notification will often appear a few days after a report of abuse is sent, saying that the reported account was found to be in breach of the rules. That leads to the troubling conclusion that these notifications were simply being sent out automatically, perhaps on a delay to give the impression that the offending messages had been looked at by someone at Twitter, when in fact they were simply being ignored.

So far @Twitter and @TwitterSupport have also ignored my request for comment on this and many other reports of users free to continue using the platform to post abusive content after being found in breach of Twitter’s Terms of Service.  I will update this post if they decide to answer any of my emails or tweets about this serious issue.

Meanwhile, it seems likely that, in the face of the companies’ inability or unwillingness to address these problems, the select committee will recommend that the big three social media companies face heavy fines for failing to meet their own pledge to remove extremist content within 48 hours.

It should also be noted that the argument that it is technologically very difficult for platforms with millions of users around the world to monitor every single user 24 hours a day holds very little water, as anyone who has inadvertently used copyrighted music or other content in the background of a YouTube video will attest. Most of the time, even before the offending video goes live, the uploader receives a message saying that it is in breach of the community guidelines.

If, then, the technology exists to censor free speech in vlogs simply because a song owned by Sony happens to be playing in the background, why can’t the same technology be used to listen for dog-whistle politics in the posts of users already known to the social media companies for sharing hateful content?
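To be clear about how modest the ask is, here is a minimal sketch, in Python, of the kind of screening being described: checking new posts from accounts already found in breach of the rules against phrases that human moderators have previously identified. The phrase list, account set and review check are hypothetical examples of mine, not Twitter’s actual systems; real matching technology (YouTube’s Content ID, for instance) is far more sophisticated, but even this crude version would flag the repeat offences described above for human review.

```python
# Illustrative sketch only: a naive screen that checks new posts from accounts
# already found in breach of the rules against known dog-whistle phrases.
# The phrase list, accounts and example post below are hypothetical.

import re

# Phrases previously identified by human moderators (hypothetical examples).
KNOWN_PHRASES = [
    "a dog born in a barn isn't a horse",
]

# Accounts already found in violation of the rules (hypothetical).
FLAGGED_ACCOUNTS = {"example_flagged_user"}


def needs_human_review(author: str, text: str) -> bool:
    """Return True if a post from an already-flagged account matches a known phrase."""
    if author not in FLAGGED_ACCOUNTS:
        return False
    # Normalise whitespace and case so trivial variations still match.
    normalised = re.sub(r"\s+", " ", text.lower())
    return any(phrase in normalised for phrase in KNOWN_PHRASES)


if __name__ == "__main__":
    post = "As we all know, a dog born in a barn isn't a horse."
    if needs_human_review("example_flagged_user", post):
        print("Queue for human review")  # in practice: route to a moderator queue
```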
