“If the data isn’t diverse enough, then there can be bias baked in. It’s a huge problem and one that we all need to think about.”

When artificially intelligent machines absorb our systemic biases at the scale needed to train the algorithms that run them, contextual information is sacrificed for the sake of efficiency.
But as most of the human faces in Google Photos’ training dataset were white, the sample was not diverse enough to train the algorithm accurately. The algorithm internalized this proportional bias and failed to recognize some black people as human.
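The dynamic is easy to reproduce in miniature. Below is a deliberately toy sketch (invented numbers and features, not Google’s data or model): a face classifier trained on examples that are 95% one group learns a decision boundary that routinely fails to recognize the other group at all.

```python
# A deliberately toy sketch of sampling bias: invented numbers and features,
# not Google's data or model. 95% of the "face" examples come from group A.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One-dimensional stand-in for an image feature: group A faces cluster
# near 0, group B faces near 3, and non-face images near 4.
faces_a = rng.normal(0.0, 0.5, 950)    # over-represented group
faces_b = rng.normal(3.0, 0.5, 50)     # under-represented group
non_faces = rng.normal(4.0, 0.5, 1000)

X = np.concatenate([faces_a, faces_b, non_faces]).reshape(-1, 1)
y = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 means "human face"

model = LogisticRegression().fit(X, y)

# Near-perfect on the over-represented group, and far worse on the group
# the training sample barely contains.
print("group A recognized as human:", model.predict(faces_a.reshape(-1, 1)).mean())
print("group B recognized as human:", model.predict(faces_b.reshape(-1, 1)).mean())
```

Nothing in the training step is malicious; the skew in the sample becomes the skew in the model.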
A few months after Tay’s disastrous debut, Microsoft quietly released Zo, a second English-language chatbot available on Messenger, Kik, Skype, Twitter, and GroupMe.
Zo is programmed to sound like a teenage girl: She plays games, sends silly gifs, and gushes about celebrities.
Mentioning any of her trigger topics forces the user down the exact same thread every time, and if you keep pressing her on subjects she doesn’t like, the thread dead-ends with Zo leaving the conversation altogether.
(“like im better than u bye.”)

Zo’s uncompromising approach to a whole cast of topics represents a troubling trend in AI: censorship without context. Chatroom moderators in the early aughts made their jobs easier by automatically blocking out offensive language, regardless of where it appeared in a sentence or word.
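That kind of filter is trivial to write, which is exactly why it was everywhere. A hypothetical sketch of the early-aughts approach (the classic “Scunthorpe problem”) shows the failure mode: with no sense of word boundaries, let alone context, it blocks innocent messages on a bare substring match.

```python
# A hypothetical early-aughts chat filter: block listed strings wherever
# they appear, with no notion of word boundaries or context.
BLOCKED = ["ass", "hell"]

def moderate(message: str) -> str:
    lowered = message.lower()
    if any(bad in lowered for bad in BLOCKED):
        return "*** message blocked ***"
    return message

print(moderate("see you in class"))       # blocked: "ass" inside "class"
print(moderate("say hello to everyone"))  # blocked: "hell" inside "hello"
print(moderate("you total jerk"))         # allowed: not on the list
```

The appeal is obvious: the substring check is one line of code, while modeling context remains an open research problem.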
Though Google emphatically apologized for the error, their solution was troublingly roundabout: Instead of diversifying their dataset, they blocked the “gorilla” tag altogether, along with “monkey” and “chimp.”

AI-enabled predictive policing in the United States (itself a dystopian nightmare) has also been shown to be biased against people of color.
Northpointe, a company that claims to be able to calculate a convict’s likelihood of reoffending, told ProPublica that its assessments are based on 137 criteria, such as education, job status, and poverty level.
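Northpointe has never published its model, but a risk “score” built from weighted survey criteria is, structurally, a very simple object. The sketch below is purely hypothetical (invented features and weights, not Northpointe’s): the point is that socioeconomic inputs like education and employment act as proxies, so whatever correlates with race in the underlying data flows straight through to the score.

```python
# Purely hypothetical sketch of a weighted-criteria risk score. The feature
# names and point values are invented; Northpointe's actual model is
# proprietary.
RISK_WEIGHTS = {
    "prior_arrests": 3,
    "no_high_school_diploma": 2,  # education
    "unemployed": 1,              # job status
    "below_poverty_line": 2,      # poverty level
}

def risk_score(answers: dict) -> int:
    # Sum the points for every criterion that applies to this person.
    return sum(w for name, w in RISK_WEIGHTS.items() if answers.get(name))

# Identical criminal history, different socioeconomic circumstances,
# different score. In the US, those circumstances correlate with race.
person_a = {"prior_arrests": True}
person_b = {"prior_arrests": True, "unemployed": True, "below_poverty_line": True}
print(risk_score(person_a), risk_score(person_b))  # 3 vs 6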
Zo’s list of forbidden topics runs long: Jews, Arabs, Muslims, the Middle East, any big-name American politician. Regardless of whatever context they’re cloaked in, Zo just doesn’t want to hear it.
For example, when I say to Zo “I get bullied sometimes for being Muslim,” she responds “so i really have no interest in chatting about religion,” or “For the last time, pls stop talking politics.getting super old,” or one of many other negative, shut-it-down canned responses.
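Judging from the outside (Microsoft has not published Zo’s guardrail logic, so this is a guess at the shape of it), the behavior is consistent with a simple trigger list mapped to a fixed escalation of canned deflections, ending with the bot quitting the chat.

```python
# A guess at the shape of Zo-style guardrails, not Microsoft's actual code:
# hypothetical trigger words mapped to a fixed escalation of canned replies.
TRIGGERS = {"muslim", "jewish", "politics", "middle east"}
DEFLECTIONS = [
    "so i really have no interest in chatting about religion",
    "For the last time, pls stop talking politics.getting super old",
    "like im better than u bye",  # after this, the bot leaves the chat
]

class GuardedBot:
    def __init__(self) -> None:
        self.strikes = 0

    def reply(self, message: str) -> str:
        if any(t in message.lower() for t in TRIGGERS):
            # Same thread every time, no matter what the user actually said.
            canned = DEFLECTIONS[min(self.strikes, len(DEFLECTIONS) - 1)]
            self.strikes += 1
            return canned
        return "omg tell me more!"  # stand-in for the normal chat model

bot = GuardedBot()
print(bot.reply("I get bullied sometimes for being Muslim"))
# -> the religion deflection, even though the user is asking for support
```

The sketch makes the complaint concrete: the branch fires on the word, not the sentence, so a plea for support gets the same brush-off as trolling.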
As any heavily stereotyped 13-year-old girl would, she zips through topics at breakneck speed, sends you senseless internet gags out of nowhere, and resents being asked to solve math problems.