DIVERSIFY NOW

Twitter doesn’t need more policies, it needs diverse moderators

Twitter CEO Dick Costolo (R) celebrates the Twitter IPO with Twitter founders Jack Dorsey (L), Biz Stone (2nd L) and Evan Williams on the floor of the New York Stock Exchange in New York, November 7, 2013. REUTERS/Brendan McDermid

Last week, when actress Rose McGowan’s Twitter account was suspended, users called for a boycott of the social media platform. Thousands of people refrained from using Twitter for one day in protest of her suspension. But the boycott, which was backed by prominent celebrities, was criticized by some for being selective in its outrage: where was the boycott for the countless women of color who were harassed? In any case, Twitter seemingly took the criticism to heart and responded by announcing changes to its terms of service. These changes include new policies to prevent “hate symbols and imagery, violent groups, and tweets that glorify violence.” Twitter plans to roll out these changes gradually over the next few months and has released a calendar of when people can expect them. On the surface, new policies may seem like a good idea, but in reality, they might make the situation worse.

Harassment from bigots and trolls isn’t the only thing that marginalized people have to face on Twitter. The report button is a tool to push back against harassment, but it is also weaponized against the very people it’s designed to protect. TV writer Bess Bell found herself suspended for tweeting a joke about “killing all men,” academic Anthony Oliveira had his account suspended for speaking out against a #HeterosexualPrideDay hashtag, and many others have reported being temporarily suspended for pushing back against those who harass them.

It is easy to see where these policies can fail. Will Twitter suspend the account of Wolfenstein II for promoting Nazi-punching? Will it go after a black teenager who tweets “stop white people” after yet another display of appropriation goes viral? The problem with these policy changes is that, on paper, they ignore power dynamics. They act as if there are two sides that are equally problematic toward each other and therefore deserve an equal response. That is not how hate and bigotry work, however; it’s not a two-way street. Hate is always directed from the oppressor toward the oppressed.

So, how can Twitter make these policies better and give them some teeth? Simple. Give them a contextual framework from which to operate: introduce anti-oppression training for moderators, for example, so they know the difference between harassment and a person defending themselves. But that’s not all; for these policies to truly work, Twitter needs a moderating team that reflects the diversity of the people who use its platform. This is important because of how bias operates.

Women and people of color have always been subject to biases that cause their behavior to be interpreted differently by others. Women have long been seen as more ‘emotional’ and ‘unstable’ than their male counterparts. People of color, black people in particular, have had their behavior assessed as more “aggressive” than it actually is. A study published by the American Psychological Association found that people are more likely to perceive black men as larger and more threatening than white men of the same size. Another study (paywall) found that adults see black girls as “less innocent than white girls.” These biases and judgments can come into play when moderating behavior on Twitter; they are conceivably how a black woman defending herself against a white supremacist can come across to a moderator as aggressive and threatening.

A diverse moderating team, with more women of color in particular, will not solve all of Twitter’s problems. What it will do is give the team the perspective of those who have these lived experiences and are, as a result, better equipped to navigate the difference between what is and isn’t harassment. As of 2016, women accounted for 37% of Twitter’s workforce and underrepresented minorities for 11%: 4% of its employees were Hispanic and 3% were African American. Of course, increasing diversity should not be a priority for the safety team alone; it should be coupled with an increase company-wide.

There are those who will cry crocodile tears and claim that such a change would disproportionately silence one side, and that Twitter would no longer be a platform for free and open discourse. That is exactly what needs to happen: Nazis and bigots do not deserve a platform from which to harass others. They may be entitled to their bigoted views, but Twitter does not have to host them on its servers. Reddit banned several racist subreddits in 2015, and a study published earlier this year found that doing so was effective at reducing hate speech across the entire platform.

Hiring a team of diverse moderators will be a costly investment; their work will be emotionally taxing and exhausting, and they will need to be properly compensated for that labor. But if Twitter is serious about making its platform a safer place for all, it’s a necessary step, and probably one that will be integral to its survival.