AI can also be racist

After yet another tragic racist episode in our country, we are back to discussing what is apparently one of the biggest problems of our culture, even as a certain political party keeps denying its existence or trying to shrink the conversation around it. Racism is now so pervasive that we humans are not the only ones to have internalized it to the point of no longer noticing it when it shows up, sometimes right before our eyes. Even artificial intelligence can be racist or sexist: but how?

Let’s start by defining algorithms as the “formulas” that allow a large part of the internet, social networks above all, to put us in contact not only with the people we like most (whether we really know them or not), but also with the things and topics that attract us most. This happens through very complex formulas that also take into account our habits and choices, which are recorded and studied through our behavior every time we have a smartphone or a computer keyboard in our hands.
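To make the idea a little more concrete, here is a deliberately tiny sketch in Python of how such a “formula” might score a post for a given user based on recorded behavior. Every field name and weight is invented for illustration; real platforms use far more complex models.

```python
# A toy illustration of a recommendation "formula": every field name and weight
# here is invented for the example, not taken from any real platform.
def score_post(user_history: dict, post: dict) -> float:
    """Give a post a relevance score based on a user's recorded behavior."""
    score = 0.0
    # Reward topics the user has clicked on before.
    for topic in post["topics"]:
        score += 2.0 * user_history["clicks_per_topic"].get(topic, 0)
    # Reward content from accounts the user already interacts with.
    score += 5.0 * user_history["interactions_per_author"].get(post["author"], 0)
    # Slightly favor recent posts.
    score += 1.0 / (1.0 + post["age_in_hours"])
    return score

# The user's past behavior silently shapes what is shown next.
history = {
    "clicks_per_topic": {"football": 12, "cooking": 3},
    "interactions_per_author": {"friend_42": 7},
}
post = {"topics": ["football"], "author": "friend_42", "age_in_hours": 2}
print(score_post(history, post))  # higher score -> shown more prominently
```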

This, of course, is simplified as much as possible so as not to get lost in complicated technical details. Now imagine a group of girls and boys who grow up learning new things as if they were a blank slate, that is, without experience and therefore without the ability to recognize (and thus avoid) the biases of the people who train them and give them examples. These biases, this “already experienced” information carried by the adults, which in technical terms is called “bias”, inevitably end up influencing their choices, precisely because those choices rest on judgments distorted by the baggage of previous experience that everyone brings along.

A cognitive bias, here, is a bit like a flaw that compromises how the algorithm works when it processes the data it collects to perform a specific function. But let’s take a step back and simplify even further…

An artificial intelligence is created through “training”: the software is shown many examples of the work it will have to perform (many faces, for instance, if the program has to learn to recognize faces) and the machine, based on the data provided during training, writes its own algorithm to carry out that specific task as correctly as possible (in our example, to say with sufficient accuracy whether or not a certain person appears in a given image, drawing on what it has “seen” and remembered).
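Purely as an illustration of that training loop, here is a minimal sketch using scikit-learn and synthetic numbers in place of real face images; it is not the pipeline of any actual face-recognition product, just the bare mechanics of learning a rule from labeled examples.

```python
# A minimal, hedged sketch of "training": synthetic numbers stand in for real
# face images, and a simple classifier stands in for a modern neural network.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each "image" has already been reduced to 64 numeric features.
faces = rng.normal(1.0, 1.0, (500, 64))       # examples labeled "face"
non_faces = rng.normal(-1.0, 1.0, (500, 64))  # examples labeled "not a face"

X = np.vstack([faces, non_faces])
y = np.array([1] * 500 + [0] * 500)

# The machine "writes its own algorithm": the fitted weights are the rule
# it will use from now on.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Asking about a new, never-seen "image".
new_image = rng.normal(1.0, 1.0, (1, 64))
print("face" if model.predict(new_image)[0] == 1 else "not a face")
```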

It is precisely during this training phase that biases slip in, that is, distortions in individuals’ thinking caused by broad cultural factors (trivially, even the socio-economic environment in which one is born and raised can affect a person’s future decisions), which are then passed on to the machines. Returning to the faces example: in a politically and culturally white-dominated society such as the West, the idea of “normalcy” associated with light skin ends up installed in the minds of people who are not actively and directly racist, including those responsible for training AI!

So one could end up showing the machine (we repeat: perhaps unwittingly and without malice) almost exclusively the faces of white people, at the risk of creating a system that cannot recognize Black people’s faces as “faces”. This is by no means a remote scenario: even Google only recently started fixing its image search algorithm, which struggled even to distinguish the skin from the hair of Black people.
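One concrete way to catch exactly this failure mode, sketched below in Python with entirely synthetic data (the group names, sizes, and numbers are invented for the example), is to measure a trained model’s accuracy separately for each group instead of trusting a single overall score: a large gap between the two numbers is the warning sign.

```python
# A hedged sketch: auditing a trained "face detector" group by group.
# All data is synthetic; group names, sizes and feature values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(center, n):
    """Synthetic 'face' and 'non-face' feature vectors for one group."""
    X = np.vstack([rng.normal(center, 1.0, (n, 16)),    # labeled "face"
                   rng.normal(-center, 1.0, (n, 16))])  # labeled "not a face"
    y = np.array([1] * n + [0] * n)
    return X, y

# Training set heavily skewed toward group A: the "almost only white faces" case.
Xa, ya = make_group(center=1.0, n=950)   # well represented, easy for the model
Xb, yb = make_group(center=0.3, n=50)    # barely represented, harder for the model
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on fresh, balanced data for each group separately,
# instead of looking only at one overall accuracy number.
for name, center in [("group A", 1.0), ("group B", 0.3)]:
    X_test, y_test = make_group(center, n=200)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```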

Many human activities already depend on artificial intelligence and will depend on it even more in the coming years, and not only to unlock our smartphone or pay the bill at the restaurant with a simple glance or a touch of a finger… For this reason too, a huge cultural project is more urgent than ever: educating everyone, including those working in this 2.0 field, to deconstruct their own prejudices. Especially the most hidden ones, the ones buried in the unconscious.
