Karen Fratti
March 26, 2018 9:43 am

Online bullying and harassment are problems that all social media platforms have to grapple with. But since the 2016 election and the #MeToo movement, bullying and harassment have become particularly toxic on Twitter, and it’s been harder for higher-ups at the company to simply ignore it, as they largely seem to have done for years. If you’re anything other than a white supremacist male, famous or not, you already know that Twitter has a troll problem. So what are some things Twitter can do to stop harassment of women and other marginalized groups of people on its platform?

This week, Amnesty International, a global human rights organization of all things, released a well-researched report about how Twitter is a “toxic” place for women and an actual danger to our human rights. Well, duh, right? We’ve been saying this since forever, and the company has made little headway in fixing the issue (although the company’s CEO, Jack Dorsey, has attempted to address the problem since Rose McGowan’s account was suspended late last year and people clamored for a response from the company).

In a series of October tweets, Dorsey promised that the company was taking bullying and harassment seriously and working diligently to make Twitter a safer place for people. But his responses to bullying and harassment always seem to come after the fact, and only after someone famous is at the center of it. What about the rest of us “normal” people who are attacked by our ex-boyfriends and random avatars who seem to get off on sending racist, sexist, violent, and scary messages to strangers? Surely a tech giant with tons of talented coders and community managers working in its storied San Francisco headquarters could come up with some out-of-the-box ways to tackle the problem over a game of beer pong, no?

The problem is so consistently ignored that it feels like the company doesn’t want to clean up its harassment problem, or that it thinks it’s not that big of a deal. Maybe it doesn’t. According to The Guardian, one of Twitter’s own diversity reports states, “70 percent of Twitter’s employees are male. That number moves to 90 percent when you just count Twitter’s tech employees, and 79 percent if you’re looking only at its leadership. Fifty-nine percent of Twitter’s employees are white [and] 29 percent are Asian.” Only 2 percent are black and 3 percent Hispanic or Latino, which is not representative of its user base at all. More like a “lack of diversity report,” huh?

Research shows that men tend to think that women overreact to online harassment in general, so reports like Amnesty International’s might seem obvious, but maybe we need more of them to convince people (especially the very white and male Twitter employees) that this is a Real Problem. The platform is a great place for them, but a professional and personal, albeit toxic, necessity for the rest of us.

Listen, we’re not the ones creating the algorithms over there or saying that creating even more intense filters and ways to report people is an easy fix. But there are some missions that Twitter could set out to accomplish as best it can, instead of giving us a collective shrug yet again the next time a famous white woman or man is attacked on its platform. Sinead McSweeney, Twitter’s Vice President of Public Policy and Communications, responded to this week’s report in a statement, saying:

“While we deeply respect Amnesty International’s mission and work around the world, we cannot help but feel disappointed at the tenor of the findings you have shared with us. Twitter is an open platform and often holds a mirror up to human behaviors — both the good and the bad. Twitter cannot delete hatred and prejudice from society, however we do remain committed every day to building on the major steps we have already taken to make Twitter safer.”

But bad takes are different from actual threats and racist, violent messages targeted at human beings who use the platform. If someone were following you home, threatening you with violence, you’d be able to make a report at a local police station. By not addressing the problem aggressively, Twitter is complicit in the harassment and bullying that go on on its platform. This stuff happens all the time on our timelines, and some serious work under the hood of how Twitter operates, even if it pisses off some of its users, is necessary at this point.

Here are just a few ideas of what Twitter could do:

1. Do its own research and reporting.

A little transparency can go a long way. Back in 2016, Twitter set up a “Trust and Safety Council” that “provides input on our safety products, policies, and programs. Twitter works with safety advocates, academics, and researchers; grassroots advocacy organizations that rely on Twitter to build movements; and community groups working to prevent abuse.”

Well, fine. What are these groups doing? Amnesty International (which provided some solutions in its report, but isn’t listed as a member of the council) just threw together a report that went further in admitting that Twitter has a problem than Twitter’s own little council ever has. The first step to fixing a problem is admitting it exists.

There’s absolutely no reason Twitter couldn’t provide an annual (or biannual; there are enough terrible tweets to analyze at any cadence, TBH) report, just like the one released this week, about how many problem accounts it has handled, what it did with them, a breakdown of what it knew about the demographics of those accounts and what triggered the violent behavior… literally anything conveying to its users that it cares about the hate and violence that’s spewed at them, sometimes relentlessly. Think of all the data visualizations the company could make about harassment on its own site.

It doesn’t have to be a meaningless act of pretending to care about abuse and harassment — the more Twitter educates its users about abuse and harassment and how to handle it, the more people might be deterred from engaging in bad behavior. It would also provide people with a point of reference when they want to see how far Twitter’s come in handling this problem, which could also work to its own benefit if it wants to sell itself one day.
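To be clear, we’re not pretending to know what Twitter’s internal tooling looks like. But the bones of a report like this aren’t rocket science: tally moderation reports by category and by what action the company took. Here’s a rough Python sketch of that tallying step; the categories, field names, and sample records are all made up for illustration.

```python
# A hypothetical sketch of tallying moderation reports for a
# transparency report. The data model is invented for illustration;
# a real report would pull from internal moderation logs.
from collections import Counter

# Each record: what was reported, and what action the platform took.
reports = [
    {"category": "targeted_harassment", "action": "suspended"},
    {"category": "violent_threat", "action": "suspended"},
    {"category": "racist_abuse", "action": "no_action"},
    {"category": "targeted_harassment", "action": "warned"},
    {"category": "unsolicited_nudes", "action": "no_action"},
]

def summarize(records):
    """Print a breakdown of reports by category and by action taken."""
    total = len(records)
    print(f"Total reports handled: {total}")
    for category, count in Counter(r["category"] for r in records).most_common():
        print(f"  {category}: {count} ({count / total:.0%})")
    print("Actions taken:")
    for action, count in Counter(r["action"] for r in records).most_common():
        print(f"  {action}: {count} ({count / total:.0%})")

summarize(reports)
```

Run over real data at a regular cadence, a summary like this is exactly the kind of thing that could feed those data visualizations we mentioned.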

2. Change how users can report abuse or harassment.

Right now, there just aren’t enough ways to report people on Twitter. Like, there’s not even a box to check for “unsolicited nudes.” In fact, Twitter often returns a little message saying that the person who just messaged you didn’t violate the terms of service, and that you should just block or mute them. That’s like a police officer telling you to walk on the other side of the street if someone is following you home at night. It changes nothing, since people can create multiple handles to attack you after you block them. And it allows them to abuse other people even after you’ve managed to weed them out of your timeline and DMs.

The internet knows which pair of shoes we just bid on at StockX and shows us targeted ads across our mobile and desktop browsers for months afterwards, yet Twitter’s telling us it just can’t figure out who’s making more than one account or calling us names? Come on, bros. Put down the ping-pong paddles. Where there’s a will, there’s a way. The longer the company takes to overhaul its reporting and filtering processes, the more we start to think it just doesn’t care that much about what its platform does to our mental and physical health.
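And honestly, a first pass at catching ban evasion doesn’t require magic. One common, admittedly crude, heuristic is to flag new accounts that share identifying signals with previously suspended ones, like a device fingerprint plus an IP range. Here’s a toy Python sketch of that idea; every signal, field name, and handle below is invented for illustration.

```python
# A toy heuristic for spotting possible ban evasion: flag new accounts
# that share identifying signals with previously suspended accounts.
# All signals and sample data here are invented for illustration.

def fingerprint(account):
    """Combine weak signals into a single comparable identity key."""
    return (account["ip_prefix"], account["device_id"])

def possible_evasion(new_accounts, suspended_accounts):
    """Return new accounts whose fingerprint matches a suspended one."""
    suspended_keys = {fingerprint(a) for a in suspended_accounts}
    return [a for a in new_accounts if fingerprint(a) in suspended_keys]

suspended = [{"handle": "@troll_1", "ip_prefix": "203.0.113", "device_id": "abc123"}]
new = [
    {"handle": "@troll_reborn", "ip_prefix": "203.0.113", "device_id": "abc123"},
    {"handle": "@normal_user", "ip_prefix": "198.51.100", "device_id": "xyz789"},
]

for account in possible_evasion(new, suspended):
    print(f"Review {account['handle']}: matches a suspended account's fingerprint")
```

Anything a sketch like this catches should land in a human review queue rather than trigger an automatic ban, since shared households and public Wi-Fi would trip it constantly.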

3. Switch up what “verified” means.

Right now, a blue check next to someone’s name means that they are famous, work in media, went viral once, or have a cousin who works at Twitter. A pretty radical way to change how Twitter fights abuse would be to scrap that system, rethink what a verified user could mean, and only let verified people tweet, as tech columnist Farhad Manjoo suggested in the New York Times. It’s complicated, especially since a lot of us don’t have tons of followers but are authentic, well-meaning users of the system. Maybe being verified doesn’t have to mean you’re “important”; it could just mean you’re a real person.

But a lot of trolls and bots would be unable to pass some sort of two-step verification to prove they’re actually human, with real followers and actual things they want to tweet, so that plan could get rid of a lot of useless accounts. Twitter is reportedly working on revamping its verification process, but its method still seems to be rooted in old thinking about what being a “verified user” means.

Really changing the verification process would take a complete cartwheel of the brain to imagine and implement, since you don’t want to leave good people out, and our current social media belief system boils down to “followers = worthwhile,” even though we know that’s not true. But maybe it’s time to do away with the idea that just because someone worked for a certain media company once, wrote a book, has a bunch of followers, or is the president of a country, they deserve some sort of badge.

Twitter could also start flagging accounts that have been suspended and then reinstated. Or accounts that use white supremacist symbols in their bios or header pictures. Or even accounts that have racked up a certain number of reports without the company banning them, so we could know whether to let them follow us. There are a lot of ways this could go.
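For what it’s worth, rules like those are simple enough to sketch. Here’s a rough Python version of the flags we just listed; the field names and the report threshold are our assumptions, not Twitter’s, and the symbol list would have to come from a vetted, maintained source rather than a hardcoded guess.

```python
# A back-of-the-napkin sketch of the flagging rules suggested above.
# Field names and the report threshold are assumptions; the symbol
# list would need to come from a vetted, maintained source.
HATE_SYMBOLS = set()  # placeholder: populated from a curated hate-symbol list
REPORT_THRESHOLD = 10

def flag_reasons(account):
    """Return the reasons an account would carry a visible flag."""
    reasons = []
    if account.get("was_suspended_and_reinstated"):
        reasons.append("previously suspended, then reinstated")
    if HATE_SYMBOLS & set(account.get("profile_symbols", [])):
        reasons.append("hate symbols in bio or header")
    if account.get("unresolved_reports", 0) >= REPORT_THRESHOLD:
        reasons.append(f"{account['unresolved_reports']} unresolved reports")
    return reasons

account = {
    "handle": "@example",
    "was_suspended_and_reinstated": True,
    "profile_symbols": [],
    "unresolved_reports": 14,
}
for reason in flag_reasons(account):
    print(f"Flag {account['handle']}: {reason}")
```

However the thresholds shake out, surfacing the reasons alongside the flag matters more than the flag itself; context is what would let the rest of us decide who gets to follow us.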

It’s a start, at least. Right now, the company and a lot of its very white, male employees and users don’t seem to think Twitter is all that dangerous. The company should start calling out toxic behavior itself, instead of making the rest of us feel like censors for reporting it.
