Tinder Asks ‘Does This Bother You?’

The dating app says its new machine-learning tool can help flag potentially offensive messages and encourage more users to report inappropriate behavior. 

On Tinder, an opening line can go south pretty quickly. Conversations can easily devolve into negging, harassment, cruelty—or worse. And while there are plenty of Instagram accounts dedicated to exposing these “Tinder nightmares,” when the company looked at its numbers, it found that users reported only a fraction of behavior that violated its community standards.

Now, Tinder is turning to artificial intelligence to help people dealing with grossness in the DMs. The popular online dating app will use machine learning to automatically screen for potentially offensive messages. If a message gets flagged in the system, Tinder will ask its recipient: “Does this bother you?” If the answer is yes, Tinder will direct them to its report form. The feature is currently available in 11 countries and nine languages, with plans to eventually expand to every language and country where the app is used.

Major social media platforms like Facebook and Google have enlisted AI for years to help flag and remove violating content. It’s a necessary tactic to moderate the millions of things posted every day. Lately, companies have also started using AI to stage more direct interventions with potentially toxic users. Instagram, for example, recently introduced a feature that detects bullying language and asks users, “Are you sure you want to post this?”

Tinder’s approach to trust and safety differs slightly because of the nature of the platform. Language that might seem vulgar or offensive in another context can be welcome in a dating one. “One person’s flirtation can very easily become another person’s offense, and context matters a lot,” says Rory Kozoll, Tinder’s head of trust and safety products.

That can make it difficult for an algorithm (or a human) to detect when someone crosses a line. Tinder approached the challenge by training its machine-learning model on a trove of messages that users had already reported as inappropriate. Based on that initial data set, the algorithm works to find keywords and patterns that suggest a new message might also be offensive. As it’s exposed to more DMs, in theory, it gets better at predicting which ones are harmful—and which ones are not.
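Tinder hasn’t published details of its model, but the process described here (learning from messages users already reported) resembles standard supervised text classification. Below is a minimal sketch in Python; the data, the TF-IDF-plus-logistic-regression baseline, and the threshold are all assumptions for illustration, not Tinder’s actual pipeline.

```python
# A minimal sketch of supervised text classification in the spirit of what
# Tinder describes: train on messages users already reported, then score new
# DMs. Data, model, and threshold are invented; Tinder has not published its
# actual architecture.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: message text plus a label derived from whether
# the recipient reported it (1 = reported, 0 = not reported).
messages = [
    "hey, how was your weekend?",
    "send me pics or I'm unmatching",
    "you must be freezing your butt off in chicago",
    "nice profile, want to grab coffee?",
]
reported = [0, 1, 0, 0]

# TF-IDF features plus logistic regression: a common, simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, reported)

# Score a new DM; above some tuned threshold, prompt the recipient.
score = model.predict_proba(["send me pics"])[0][1]
if score > 0.5:  # the threshold would be tuned for the recall/precision tradeoff
    print("prompt recipient: Does this bother you?")
```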

The success of machine-learning models like this can be measured in two ways: recall, or how much of the offending content the algorithm catches; and precision, or how accurate it is at catching the right things. In Tinder’s case, where context matters a lot, Kozoll says the algorithm has struggled with precision. Tinder tried coming up with a list of keywords to flag potentially inappropriate messages but found that it didn’t account for the ways certain words can mean different things—like the difference between a message that says, “You must be freezing your butt off in Chicago,” and another message that contains the phrase “your butt.”
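The two measures are easy to pin down concretely. Here is a quick worked example, with invented counts, of how precision and recall trade off when a model errs on the side of flagging:

```python
# Precision and recall computed from a hypothetical batch of flagged
# messages. These counts are invented for illustration only.
true_positives = 40   # flagged messages that really were offensive
false_positives = 25  # harmless messages flagged anyway (the "your butt" problem)
false_negatives = 10  # offensive messages the model missed

precision = true_positives / (true_positives + false_positives)  # 40/65 ~= 0.62
recall = true_positives / (true_positives + false_negatives)     # 40/50  = 0.80

# Erring "on the side of asking," as Tinder says it does, means lowering the
# flagging threshold: recall goes up, precision goes down.
print(f"precision={precision:.2f} recall={recall:.2f}")
```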


Still, Tinder hopes to err on the side of asking if a message is bothersome, even if the answer is no. Kozoll says that the same message might be offensive to one person but totally innocuous to another—so it would rather surface anything that’s potentially problematic. (Plus, the algorithm can learn over time which messages are universally harmless from repeated no's.) Ultimately, Kozoll says, Tinder’s goal is to be able to personalize the algorithm, so that each Tinder user will have “a model that is custom built to her tolerances and her preferences.”
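Kozoll doesn’t say how that personalization would work. One plausible, and entirely hypothetical, mechanism is a per-user threshold that drifts with each answer to the prompt; the sketch below assumes that design, and none of its names or numbers come from Tinder.

```python
# A hypothetical sketch of per-user personalization: nudge an individual
# recipient's flagging threshold based on their answers to "Does this bother
# you?" The class, the update rule, and the step size are all assumptions.
class UserPreferences:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold  # model score above which we prompt this user

    def record_answer(self, bothered: bool) -> None:
        # Repeated "no" answers raise the bar, so similar messages stop
        # triggering prompts; "yes" answers lower it, so borderline messages
        # get surfaced for this user.
        step = 0.02
        if bothered:
            self.threshold = max(0.1, self.threshold - step)
        else:
            self.threshold = min(0.9, self.threshold + step)

prefs = UserPreferences()
prefs.record_answer(bothered=False)  # user said the flagged message was fine
```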

Online dating in general—not just Tinder—can come with a lot of creepiness, especially for women. In a 2016 Consumers’ Research survey of dating app users, more than half of women reported experiencing harassment, compared to 20 percent of men. And studies have consistently found that women are more likely than men to face sexual harassment on any online platform. In a 2017 Pew survey, 21 percent of women aged 18 to 29 reported being sexually harassed online, versus 9 percent of men in the same age group.

It’s enough of an issue that newer dating apps like Bumble have found success in part by marketing themselves as friendlier platforms for women, with features like a messaging system where women have to make the first move. (Bumble’s CEO is a former Tinder executive who sued the company for sexual harassment in 2014. The lawsuit was settled without any admission of wrongdoing.) A report by Bloomberg earlier this month, however, questioned whether Bumble’s features actually make online dating any better for women.

If women are more commonly the targets of sexual harassment and other unwanted behavior online, they’re also often the ones tasked with cleaning up the problem. Even with AI assistance, social media companies like Twitter and Facebook still struggle with harassment campaigns, hate speech, and other behavior that’s against the rules but perhaps trickier to flag with an algorithm. Critics of these systems argue that the onus falls on victims—of any gender—to report and contain abuse, when the companies ought to take a more active approach to enforcing community standards.

Tinder has also followed that pattern. The company provides tools for users to report inappropriate interactions, whether they happen in messages on the app or offline. (A team of human moderators handles each report on a case-by-case basis. If the same user is reported multiple times, Tinder may ban them from the platform.) At the same time, Tinder does not screen for sex offenders, although its parent company, Match Group, does for Match.com. A report from Columbia Journalism Investigations in December found that the “lack of a uniform policy allows convicted and accused perpetrators to access Match Group apps and leaves users vulnerable to sexual assault.”

Tinder has rolled out other tools to help women, albeit with mixed results. In 2017 the app launched Reactions, which allowed users to respond to DMs with animated emojis; an offensive message might garner an eye roll or a virtual martini glass thrown at the screen. It was announced by “the women of Tinder” as part of its “Menprovement Initiative,” aimed at minimizing harassment. “In our fast-paced world, what woman has time to respond to every act of douchery she encounters?” they wrote. “With Reactions, you can call it out with a single tap. It’s simple. It’s sassy. It’s satisfying.” TechCrunch called this framing “a bit lackluster” at the time. The initiative didn’t move the needle much—and worse, it seemed to send the message that it was women’s responsibility to teach men not to harass them.

Tinder’s latest feature would at first seem to continue the trend by focusing on message recipients again. But the company is now working on a second anti-harassment feature, called Undo, which is meant to discourage people from sending gross messages in the first place. It also uses machine learning to detect potentially offensive messages and then gives users a chance to undo them before sending. “If ‘Does This Bother You’ is about making sure you’re OK, Undo is about asking, ‘Are you sure?’” says Kozoll. Tinder hopes to roll out Undo later this year.
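Tinder hasn’t described Undo’s mechanics beyond that pre-send check, but the flow it outlines (score a draft, then ask the sender to confirm) is simple to sketch. Everything below, from the function name to the threshold, is an assumption rather than Tinder’s actual implementation.

```python
# A hypothetical sketch of the pre-send "Undo" flow described above: run a
# classifier on an outgoing draft and give the sender a chance to reconsider
# before it goes out. Names and threshold are assumptions.
def confirm_send(draft: str, score_message) -> bool:
    """Return True if the draft should be sent."""
    if score_message(draft) > 0.5:  # same tuned-threshold idea as earlier
        answer = input("Are you sure you want to send this? (y/n) ")
        return answer.strip().lower() == "y"
    return True

# Usage with the sketch model from earlier:
# if confirm_send(draft, lambda m: model.predict_proba([m])[0][1]):
#     send(draft)
```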

Tinder maintains that very few of the interactions on the platform are unsavory, but the company wouldn’t specify how many reports it sees. Kozoll says that so far, prompting people with the “Does this bother you?” message has increased the number of reports by 37 percent. “The volume of inappropriate messages hasn’t changed,” he says. “The goal is that as people become familiar with the fact that we care about this, we hope that it makes the messages go away.”

These features arrive alongside a number of other safety-focused tools. Last week, Tinder announced a new in-app Safety Center that provides educational resources about dating and consent; a more robust photo-verification process to cut down on bots and catfishing; and an integration with Noonlight, a service that provides real-time tracking and emergency services in the case of a date gone wrong. Users who connect their Tinder profile to Noonlight will have the option to press an emergency button while on a date and will have a security badge that appears in their profile. Elie Seidman, Tinder’s CEO, has compared it to a lawn sign from a security system.

None of those initiatives, nor the new AI tools, will be a silver bullet. And it will be difficult to measure whether the new reporting prompts actually change behavior on the platform, beyond simply increasing the number of reported messages. Kozoll believes that knowing they can be reported will encourage people to think twice about what they type. For now, he says, the point is simply to raise the standard of respect and consent on the platform. “Make sure the person you’re talking to wants to be spoken to that way,” he says. “As long as two consenting adults are talking in a way that’s respectful between the two of them, we’re good with it.”

