01. Racial bias in health-care algorithms
02. Tay, Microsoft's AI chatbot
03. Joy Buolamwini TED Talk 2017

A narrow vision
The context in which an algorithm was built is essential to understanding how it operates. The BC20 algorithm was used by a well-known French music production company focused mainly on pop and urban music. The videos the algorithm now produces reuse the music clips stored in its database from that period, which explains the presence of so many French-language songs, but also the very limited diversity of viewpoints: in the video clips it draws on, the stories are clearly told from a Western perspective. There is no room for emerging artists or low-budget videos, because the production company worked only with successful artists, so a capitalist dimension is tangible as well.

This echoes a major problem with artificial intelligence: the exclusion and racism it can reproduce. An artificial intelligence knows only what it is shown, that is, its database, and does not question it. There are many documented cases of discriminatory AI. Joy Buolamwini, a Ghanaian-American computer scientist, founded the Algorithmic Justice League in 2016, an organisation that challenges bias in decision-making software. She found that many facial recognition systems fail to recognise Black faces because the people who built them had not trained them on a broad range of skin tones and facial structures. This can lead to serious injustices in fields such as medicine or criminal justice.

Redundancies can also be found in the narration of the videos: classic, even clichéd narrative patterns, such as love stories between heterosexual couples. This shows how quickly an algorithm can reflect the stereotypes of our society. Today, the complicated but essential task is to develop these algorithms with a more inclusive vision.
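The point that an AI "knows only its database" can be sketched with a toy experiment. Everything below is a synthetic assumption for illustration (invented data, nothing from BC20 or any real facial recognition system): a simple threshold classifier is tuned on a dataset dominated by one group, and its accuracy is then measured separately on the majority group and on an underrepresented group whose data looks slightly different.

```python
import numpy as np

rng = np.random.default_rng(0)

# Majority group (950 samples): the true label is 1 when the feature exceeds 1.0.
maj_x = rng.uniform(0.0, 2.0, 950)
maj_y = (maj_x > 1.0).astype(int)

# Minority group (50 samples): same task, but the features are shifted,
# so the true boundary sits at 2.5 instead of 1.0.
min_x = rng.uniform(1.5, 3.5, 50)
min_y = (min_x > 2.5).astype(int)

# The training set mixes both groups, but is dominated by the majority.
train_x = np.concatenate([maj_x, min_x])
train_y = np.concatenate([maj_y, min_y])

# "Training": pick the single threshold with the best overall accuracy.
candidates = np.linspace(train_x.min(), train_x.max(), 200)
accuracies = [((train_x > t).astype(int) == train_y).mean() for t in candidates]
threshold = candidates[int(np.argmax(accuracies))]

# Evaluate the learned threshold on each group separately.
maj_acc = ((maj_x > threshold).astype(int) == maj_y).mean()
min_acc = ((min_x > threshold).astype(int) == min_y).mean()
print(f"majority accuracy: {maj_acc:.2f}, minority accuracy: {min_acc:.2f}")
```

Because overall accuracy is dominated by the 950 majority samples, the learned threshold settles near the majority boundary, and the underrepresented group is misclassified far more often, even though the classifier is "optimal" on its training data. Nothing in the procedure questions the composition of the dataset, which is exactly the failure described above.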