Why Robots Could Soon Be Sexist

Google the word “doctor,” and the pictures you see will be overwhelmingly of men. If you’re a woman looking for a job, you’re less likely to see targeted ads for high-paying roles than your male counterparts are. And if you had asked Siri when it first launched, “Where can I find emergency contraception?” she wouldn’t have known what to tell you.

All of these results are powered, in one form or another, by what we call artificial intelligence—complex algorithms that learn from huge data sets, then produce their own conclusions. An aura of objectivity and neutrality has traditionally surrounded AI. But the reality is that it’s built and programmed by humans, who definitely aren’t perfect, and it “learns” from human behavior. When that community of human programmers is predominantly male (and, more specifically, predominantly white and male), we can wind up, whether intentionally or not, with a system that can replicate unconscious bias.
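A small illustration of what “learning from human behavior” means in practice: word vectors trained on ordinary human-written text pick up whatever associations that text contains, gendered ones included. The sketch below is hypothetical and assumes the gensim library and one of its downloadable GloVe models; the exact numbers it prints will vary, but they come from patterns in the text, not from any rule a programmer wrote.

```python
# Word vectors trained on human-written text (a small GloVe model fetched via
# gensim's downloader, an assumed dependency) encode the associations found in
# that text, including gendered ones. Nothing below is a hand-coded rule.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # downloads the model on first run

# Compare how strongly each job title associates with "he" versus "she".
# The scores simply reflect how these words co-occur in the training text.
for word in ["doctor", "nurse", "engineer", "receptionist"]:
    he = float(vectors.similarity(word, "he"))
    she = float(vectors.similarity(word, "she"))
    print(f"{word:14s} he: {he:.3f}  she: {she:.3f}")
```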

There are countless examples of how bias has infected AI, with unfortunate results: chatbots that turned anti-Semitic within 24 hours of launching; crime-prediction software that turned out to be biased against African-Americans; Nikon cameras whose blink-detection feature misread the faces of Asian subjects.

And this is just the beginning. In the years ahead, AI will grow in sophistication and expand across industries, becoming nearly ubiquitous. The tech industry has slowly begun to recognize the impact of the lack of diversity inside its offices; it’s time to acknowledge that those same influences shape our software.

So how can we begin to correct course? It can start with a name. It may seem innocuous, but how we name AI, or whether we name it at all, matters. It has become standard to give virtual assistants female names or voices (just look at Alexa, Cortana, Bixby, or Siri in North America), but there’s no practical reason to do so, and it perpetuates the stereotype of women as chipper, helpful assistants. Fortunately, the tide may be starting to turn: Google declined to give its “OK Google” virtual assistant a human name at all.

Equally important is ensuring a diverse data set from day one of programming. An AI system learns its behavior from a training set: the batch of photos, database, or collection of relevant numbers that lays the groundwork for everything it does. If that training set is skewed in some way, the skew is what the AI learns to treat as normal: what it spits out is a reflection of the data that was put in. One real-world example we’re already struggling with is health care AI that misdiagnoses patients because it treats white male symptoms as the default.
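To make that concrete, here is a minimal synthetic sketch of the problem, assuming NumPy and scikit-learn are available; the “symptom” score, the two groups, and the 95/5 split are invented purely for illustration. A model trained mostly on one group fits that group well and misreads the other, even though the algorithm itself never mentions groups at all.

```python
# A synthetic sketch: the model only "knows" what its training data showed it.
# The "symptom" score, the two groups, and the 95/5 split are all invented
# for illustration; nothing here is real medical or demographic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, shift):
    """Simulate n patients with one symptom score each. The score that signals
    the condition sits at a different baseline (shift) for each group."""
    has_condition = rng.integers(0, 2, size=n)
    symptom = rng.normal(loc=has_condition * 2.0 + shift, scale=1.0, size=n)
    return symptom.reshape(-1, 1), has_condition

# Training set: 95% of examples come from group A, only 5% from group B.
X_a, y_a = make_patients(950, shift=0.0)   # group A
X_b, y_b = make_patients(50, shift=1.5)    # group B, underrepresented
X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])

model = LogisticRegression().fit(X, y)

# Evaluate on balanced test sets: the model fits the group it mostly saw
# and misreads the group it barely saw.
for name, shift in [("A", 0.0), ("B", 1.5)]:
    X_test, y_test = make_patients(500, shift)
    print(f"group {name}: accuracy {model.score(X_test, y_test):.2f}")
```

The point isn’t the specific numbers; it’s that nothing in the code decided to treat group B differently. The imbalance in the training data did.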

Vigilance by consumers is also critical. A number of “watchdog” organizations like AI Now are already popping up to start the fight. In the future, a community policing model could make a difference on a grassroots level—giving users creative ways to find problems and report them—as could internal auditing; a simple version of such an audit is sketched below. In fact, special positions like bias detector and algorithm analyst might one day be standard at every company.
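What might that internal auditing look like in practice? One simple, hypothetical version is to compare how often a system’s decisions favor each group and flag large gaps, along the lines of the “four-fifths rule” used in U.S. hiring analysis. The decision log, group names, and 0.8 threshold below are assumptions made for illustration, not a prescription.

```python
# A rough sketch of a routine internal bias audit: compare how often a system's
# decisions favor each group and flag big gaps. The decision log, group names,
# and the 0.8 threshold (the "four-fifths rule" from U.S. hiring analysis) are
# assumptions made for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """decisions is a list of (group, approved) pairs; return each group's
    approval rate."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def audit(decisions, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` times the
    best-treated group's rate (a simple disparate-impact check)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate / best >= threshold) for g, rate in rates.items()}

# Hypothetical loan decisions logged as (group, approved) pairs.
log = ([("men", True)] * 80 + [("men", False)] * 20
       + [("women", True)] * 55 + [("women", False)] * 45)

for group, (rate, passes) in audit(log).items():
    print(f"{group}: approval rate {rate:.2f} -> {'OK' if passes else 'FLAG'}")
```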

Ultimately, however, reducing bias in AI comes down to something as obvious as it is hard to achieve: having a diverse team building AI. Yes, there’s currently an underrepresentation of women in AI (and in STEM and IT in general), but it’s certainly possible to cultivate diverse teams, provided the right strategies are put in place.

While this may require more upfront energy during recruiting, the payoff is enormous (culturally, financially, and otherwise). With a diverse representation of gender (and, ideally, education, age, race, and other factors), it’s possible to naturally neutralize biases that you might not even know to look for, and bring a critical eye to the rest.

Michael Litt is cofounder and CEO of the video marketing platform Vidyard. Follow him on Twitter at @michaellitt.