As algorithms play an increasingly widespread role in society, automating—or at least influencing—decisions that impact whether someone gets a job or how someone perceives her identity, some researchers and product developers are raising alarms that data-powered products are not nearly as neutral as scientific rhetoric leads us to believe. And this is a problem of optics, too: when the myth that science is objective combines with the much-hyped advances that AI portends, public perception threatens to swing to the opposite extreme, leading some to believe that the AI technology itself is acting maliciously.
AI
As Google AI researcher accused of harassment, female data scientists speak of ‘broken system’
Sam Levin /
The Guardian
Some said misconduct was common – especially at conferences that blend professional work with socializing – and that serial harassers rarely face consequences. In some cases, sexual misconduct has pushed women out of the field altogether. Beyond the personal devastation, there is long-term damage for machine learning and AI, a sector that is dramatically reshaping society, sometimes with powerful technology plagued by harmful biases.
Ikea Asked 12,000 People What They Want From AI
Katharine Schwab /
Fast Company
Of the approximately 8,000 men who responded, 27% think AI should be female and 36% think it should be male – that's 63% who believe AI should have a gender. Meanwhile, 36% think AI should be gender neutral. In contrast, 62% of women think AI should be gender neutral, while 11% think it should be male and 27% think it should be female.
Robots With Artificial Intelligence Become Racist and Sexist—Scientists Think They’ve Found a Way to Change Their Minds
Anthony Cuthbertson /
Newsweek
Earlier this year, a separate team of researchers from Princeton University and Bath University in the UK warned of artificial intelligence replicating the racial and gender prejudices of humans. “Don’t think that AI is some fairy godmother,” said study co-author Joanna Bryson. “AI is just an extension of our existing culture.”
The Queer Latina Trying to Build Bias-Free AI
Zara Stone /
OZY
“AI tech is a direct reflection of the people who are engineering it, so any bias by these individuals will be reflected in the products they create,” Montoya tells OZY — something she’s seen many times with “tech bros” in Silicon Valley. Looking for examples? In 2009, HP’s face-tracking webcam software couldn’t detect Black faces, and Harvard’s Project Implicit discovered that people automatically assign positive or negative behavior to different skin tones. That’s the impetus behind Accel.AI: to make sure that diverse people have a say in tech of the future.
Why Robots Could Soon Be Sexist
Michael Litt /
Fortune
Ultimately, however, reducing bias in AI comes down to something as obvious as it is hard to achieve: having a diverse team building AI. Yes, there’s currently an underrepresentation of women in AI (and in STEM and IT in general), but it’s certainly possible to cultivate diverse teams, provided the right strategies are put in place.
The Data Scientist Putting Ethics into AI
Poornima Apte /
OZY
“I truly believe that data science, AI, all this technology, especially with education, is intended to close gaps and be a great equalizer,” she says.
They found that both data sets reinforced gender stereotypes in their depiction of activities such as cooking and sports. Pictures of shopping and washing were correlated to women, for example, while coaching and shooting were linked to men.