AI & Automation

Artificial Intelligence, Human Bias

Quite often, errors in AI systems that lead to discrimination can be traced back to human shortcomings. It's a complex problem, and addressing it is imperative. That's all the more true when you're working for public institutions dedicated to fairness and civility.

Imagine this: You're in town for a tech talk–and you're arrested by the police. They say you robbed somebody close to the conference center last night. It was all caught on camera, and you've been identified by an AI. Of course, you're actually innocent. They have the wrong person–because of a poorly designed, all too "white" computer vision system. But it takes a while to prove that. And you (or one of your friends of color) might not be so lucky next time.

Another pretty drastic example of flawed AI is the discriminatory automated application process. That's right: A machine may screen you out just because you're a candidate with a foreign-sounding name, from a poor neighborhood–or female. And we should all be alarmed by this.

Now what is causing AI–i.e. sophisticated, self-learning algorithms–to go off the rails? In some cases, it's a complicated flaw buried somewhere deep down in a black box. But in a lot of cases, the answer is much scarier–and much more obvious: The bias comes from the data sets the neural networks are fed with, the input for the machine learning process. This input is designed and selected by humans, who are biased in a lot of ways and will thus create biased machines. Like "smart" security cams that only work for white people, because nobody bothered to train the computer vision system with pictures and footage of POC (people of color). Or application AIs that have come to the conclusion that women don't make good engineers and tech managers–because the image collections they learned from mostly showed women doing housework. The funny thing is: Even if you consider yourself pretty "woke" and rational, you're likely to be really biased–because there are so many subjective realities.

So in order not to pass on our shortcomings to the machines, we need to be very, very careful. We need to become aware of all the bias issues, and–of course–create better data sets: Sets with carefully collected data, sets with data that doesn't over- or underrepresent a specific group, sets with properly labeled data, sets with carefully considered feature selection–to give only a few examples. Understanding and reducing bias in machine learning is vitally important.
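To make that a bit more concrete, here is a minimal Python sketch of the kind of sanity check we have in mind: before training anything, look at how the demographic groups in your data are actually represented. The column names and numbers are invented for illustration, not taken from a real project.

```python
# Hypothetical sanity check: how well is each demographic group
# represented in a training set? Column names and data are invented.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Return each group's share of the dataset, smallest first,
    so underrepresented groups show up at the top."""
    return df[group_col].value_counts(normalize=True).sort_values()

# Toy data standing in for a real hiring or image dataset.
train = pd.DataFrame({
    "gender": ["male"] * 80 + ["female"] * 20,
    "hired":  [1] * 60 + [0] * 20 + [1] * 5 + [0] * 15,
})

print(representation_report(train, "gender"))
# female    0.2
# male      0.8
# An 80/20 split like this is a red flag to fix before training starts.
```

If the report looks like the one above, you already know the model will learn far more about one group than the other, no matter how clever the algorithm is.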

Last year, a paper dealing with machine translation and gender bias showed that automated translation from gender-neutral languages into English "exhibits a strong tendency towards male defaults". This can have all sorts of consequences and needs to be addressed immediately, especially when your team takes part in AI-related research funded by the EU (in our case, that would be projects like CPN, GoURMET, or WeVerify).

It's our responsibility to help create a technology ecosystem that is decidedly non-discriminatory. Researchers at IBM, for instance, try to raise the scientific and ethical bar by working on software that can both detect and mitigate AI bias and confront humans with their clouded judgement. Which brings us to the most important insight:

In order to create better AI, we need to be better humans. It's all about understanding and measuring fairness and applying the results to your system, whether it's based on hardware/software–or wetware.
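One way to make "measuring fairness" tangible is a simple metric like statistical parity difference: the gap between two groups' rates of positive decisions. The sketch below is a hedged, self-contained illustration with invented group labels and numbers; dedicated toolkits such as IBM's AI Fairness 360 provide this metric and many more out of the box.

```python
# A sketch of one common fairness measure: statistical parity
# difference, i.e. how much the rate of positive decisions differs
# between two groups. Group names and decisions are invented examples.
from typing import Sequence

def selection_rate(decisions: Sequence[int]) -> float:
    """Share of positive (e.g. 'invite to interview') decisions."""
    return sum(decisions) / len(decisions)

def statistical_parity_difference(group_a: Sequence[int],
                                  group_b: Sequence[int]) -> float:
    """0.0 means both groups are treated the same; the further from
    zero, the stronger the disparity."""
    return selection_rate(group_a) - selection_rate(group_b)

# Hypothetical screening decisions (1 = shortlisted, 0 = rejected).
male_applicants   = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% shortlisted
female_applicants = [1, 0, 0, 0, 1, 0, 0, 0]   # 25% shortlisted

print(statistical_parity_difference(male_applicants, female_applicants))
# 0.5 -- a gap that large should trigger a closer look at data and model.
```

A number like this doesn't tell you why the system behaves the way it does, but it does tell you when it's time to go back and question your data, your features and your own assumptions.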


Photo: Chris Yang via Unsplash

Authors
Alexander Plaum
Ksenia Skriptchenko