Racist by default: the algorithmic discrimination that Silicon Valley does not solve

Facebook has had to apologize publicly for the racist behavior of its artificial intelligence tools.

Facebook has had to apologize publicly for the racist behavior of its artificial intelligence tools. Last Friday it deactivated one of its automatic labeling and video recommendation routines after detecting that it had associated several black men with the "primates" label and then suggested that users watch more videos of primates.

The failure is serious in itself, but it becomes even more worrying when one considers that it is not the first time Silicon Valley's large technology companies have had to face this problem.

In 2015, for example, Google's image recognition software classified several photos of black people as "gorillas". The only solution the company, considered to be at the forefront of artificial intelligence, managed to find was to remove the labels associated with primates (gorilla, chimpanzee, monkey...) entirely, to prevent its algorithms from continuing to attach them to photos of human beings.

Last year, Twitter ran into a related problem. A programmer, Toni Arcieri, discovered that the algorithm that automatically crops images when they are too large ignored black faces and focused on those of white people, regardless of their position in the image.

Zoom, the well-known video calling application and involuntary star of the pandemic, has also had to modify its code after detecting that the virtual background feature could erase the heads of black people, especially those with very dark skin, no matter how well lit they were.

Why do these cases keep happening? Although we usually talk about "artificial intelligence", the algorithms used to detect and label people, animals and objects in a video or a photo are not really as smart as they might seem at first.

They are built using machine learning techniques. Developers feed the routine thousands or millions of example images or videos together with the result they expect to obtain. From these examples, the algorithm infers a model that it then applies to any new situation.

If the examples are not carefully selected during that phase, it is easy to introduce biases into the model. If all or most of the videos show white people, for example, the algorithm may have trouble correctly identifying a black person later on.
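To make that mechanism concrete, the sketch below, written in Python with synthetic data and scikit-learn, trains a simple classifier on a set in which one group supplies 95% of the examples. The features, group sizes and numbers are invented purely for illustration; this is not any company's real pipeline.

```python
# Minimal sketch of how an unbalanced training set produces unequal error
# rates between groups. Synthetic data and scikit-learn; not a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, offset):
    # Toy "image features" for one demographic group: the two classes mean
    # the same thing for both groups, but this group's raw features are
    # shifted by `offset`, so a boundary fitted mostly to the other group
    # lands in the wrong place here.
    labels = rng.integers(0, 2, size=n)
    features = rng.normal(loc=labels[:, None] * 2.0 + offset,
                          scale=1.0, size=(n, 2))
    return features, labels

# The majority group supplies 95% of the training examples.
X_maj, y_maj = make_group(9_500, offset=0.0)
X_min, y_min = make_group(500, offset=1.5)
model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# Balanced, held-out test sets reveal the gap the skewed training set created.
X_maj_test, y_maj_test = make_group(2_000, offset=0.0)
X_min_test, y_min_test = make_group(2_000, offset=1.5)
print("accuracy, majority group:", model.score(X_maj_test, y_maj_test))
print("accuracy, minority group:", model.score(X_min_test, y_min_test))
```

On this toy data, the accuracy printed for the underrepresented group comes out noticeably lower, which is the same pattern the article describes at a much larger scale.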

In addition, computers run into other obstacles when recognizing or correctly labeling black people in videos and photos. One of the ways a computer analyzes a photo or a frame is by studying the contrast between different areas of the image. Lighter-skinned faces tend to show more contrast between their features and are therefore usually evaluated much more accurately.
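As a rough illustration of what "contrast between areas" means in practice, the sketch below computes RMS contrast, the standard deviation of normalized grayscale intensities, for two tiny synthetic image patches. The patches, their size and the comparison itself are assumptions made for the example; real detection pipelines are far more elaborate.

```python
# Rough illustration of a contrast measure: RMS contrast over a small
# grayscale patch. The patches are invented for the example.
import numpy as np

def rms_contrast(gray_patch):
    # Normalize 8-bit pixel values to [0, 1] and take their standard deviation.
    pixels = gray_patch.astype(np.float64) / 255.0
    return float(pixels.std())

# An 8x8 patch with strong light/dark alternation vs. an almost flat dark patch.
high_contrast_patch = np.tile(np.array([30, 220], dtype=np.uint8), (8, 4))
low_contrast_patch = (np.full((8, 8), 55) + np.arange(8)).astype(np.uint8)

print("high-contrast patch:", rms_contrast(high_contrast_patch))  # ~0.37
print("low-contrast patch: ", rms_contrast(low_contrast_patch))   # ~0.01
```

A detector that leans on this kind of signal simply has less information to work with in low-contrast regions, which is part of why poorly lit, dark-skinned faces are missed more often.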

This can have very serious consequences. Even the best facial identification algorithms, for example, tend to confuse black people between five and ten times more often than white people.

Several security agencies have started to use these systems to solve crimes or investigate potential offenses, and many civil rights organizations believe that this higher error rate will end up harming the black population, which already tends to face greater pressure during investigations and trials.

As artificial intelligence tools based on machine learning spread to other tasks, biases in these systems can generate unexpected problems in all kinds of situations.

Amazon, for example, tried to create an automated system to screen job candidates a few years ago. During the training process it used more résumés from men than from women. The company discovered that the resulting algorithm tended to reject female candidates more often even without knowing the applicant's sex, based on markers it had learned from the thousands of examples, such as the university or school they had attended.
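The Amazon anecdote is an instance of what is often called proxy leakage: removing the sensitive attribute from the inputs does not remove the bias if another feature encodes it. The sketch below reproduces the effect on invented data, with a made-up "school" feature and a deliberately biased hiring history; it is a toy illustration, not Amazon's system or data.

```python
# Illustrative sketch of proxy leakage: the protected attribute (sex) is never
# shown to the model, yet a correlated feature (school) carries the bias.
# Entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 10_000

# Protected attribute: 0 = man, 1 = woman. Excluded from the model's inputs.
sex = rng.integers(0, 2, size=n)

# Proxy feature: school 1 is attended mostly by women, school 0 mostly by men.
school = (rng.random(n) < np.where(sex == 1, 0.8, 0.2)).astype(int)

# A feature with genuine predictive value, independent of sex.
skill = rng.normal(size=n)

# Historical hiring decisions that were biased against women.
hired = ((skill > 0.0) & ~((sex == 1) & (rng.random(n) < 0.5))).astype(int)

# Train only on (school, skill) -- sex is excluded, yet the bias survives.
X = np.column_stack([school, skill])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
print("predicted hire rate, men:  ", pred[sex == 0].mean())
print("predicted hire rate, women:", pred[sex == 1].mean())
```

Because the model learns that the mostly female school correlates with past rejections, it keeps recommending women less often even though it never sees the sex field, which mirrors the behavior Amazon reportedly found.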
