As algorithms play an increasingly widespread role in society, automating (or at least influencing) decisions that affect whether someone gets a job or how someone perceives her identity, some researchers and product developers are raising alarms that data-powered products are not nearly as neutral as scientific rhetoric leads us to believe. This is a problem of perception, too: when the myth that science is objective combines with the much-hyped advances that AI portends, opinion threatens to swing the other way, leading some to believe that the AI technology itself is acting maliciously.
Some said misconduct was common – especially at conferences that blend professional work with socializing – and that serial harassers rarely face consequences. In some cases, sexual misconduct has pushed women out of the field altogether. Beyond the personal devastation, there is long-term damage for machine learning and AI, a sector that is dramatically reshaping society, sometimes with powerful technology plagued by harmful biases.
Of the approximately 8,000 men who responded, 27% think AI should be female and 36% think it should be male; that's 63% who believe AI should have a gender. Meanwhile, 36% think AI should be gender neutral. In contrast, 62% of women think AI should be gender neutral, while 11% think it should be male and 27% think it should be female.
Earlier this year, a separate team of researchers from Princeton University and Bath University in the UK warned of artificial intelligence replicating the racial and gender prejudices of humans. “Don’t think that AI is some fairy godmother,” said study co-author Joanna Bryson. “AI is just an extension of our existing culture.”
“AI tech is a direct reflection of the people who are engineering it, so any bias by these individuals will be reflected in the products they create,” Montoya tells OZY — something she’s seen many times with “tech bros” in Silicon Valley. Looking for examples? In 2009, HP’s webcam face-tracking software failed to follow Black faces, and Harvard’s Project Implicit found that people automatically associate positive or negative traits with different skin tones. That’s the impetus behind Accel.AI: to make sure that diverse people have a say in the tech of the future.
Ultimately, however, reducing bias in AI comes down to something as obvious as it is hard to achieve: having a diverse team building AI. Yes, women are currently underrepresented in AI (and in STEM and IT in general), but it is certainly possible to cultivate diverse teams, provided the right strategies are put in place.