Over a decade ago, Amazon launched an artificial intelligence tool to assist in hiring software developers. The model could process a large number of resumes, examine them, and then select the five best candidates for hiring. However, the company discovered that the machine was biased and discriminated against women.
When vetting candidates, the tool favored men for technical positions. "The personnel-selection case is well known because it was one of the first cases reported as open discrimination against women," says Claudia López, researcher at the National Center for Artificial Intelligence (CENIA), in conversation with Radio Pauta.
The expert explains that the model discarded any resume that contained a reference to women. The case led Amazon to dissolve the team that created the artificial intelligence in 2017.
With mass access to artificial intelligence models such as the chatbots ChatGPT and Google Bard, and even the incorporation of these tools into different sectors of industry, López warns about the biases these models can contain, which can become "damage to the opportunities that people can obtain."
López, who is also an academic at the Federico Santa María Technical University in Valparaíso, states that "biases are somehow built into the data sets that artificial intelligence uses today to learn and to generate the models we will eventually use, whether in generative AI or in predictions of things that could happen."
The expert clarifies that "our behaviors as a society have been biased," so "sometimes these types of biases lead us to make decisions within the development process that could end up replicating them."
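The mechanism López describes can be illustrated with a minimal sketch. All data and token names below are hypothetical, invented for illustration: a naive resume scorer learns token weights from past hiring decisions, and because the historical labels favor male-coded resumes, a female-coded token inherits a negative weight even though nothing in the code mentions gender.

```python
# Illustrative sketch only: hypothetical resumes as token sets, with
# historical "hired" labels that reflect a biased past in which resumes
# containing a female-coded token were rarely accepted.
history = [
    ({"python", "backend"}, True),
    ({"python", "chess_club"}, True),
    ({"java", "backend"}, True),
    ({"python", "womens_chess_club"}, False),
    ({"java", "womens_coding_group"}, False),
    ({"python", "backend", "womens_chess_club"}, False),
]

def token_weights(history):
    """Weight per token: hire rate with the token minus hire rate without it."""
    def rate(labels):
        return sum(labels) / len(labels) if labels else 0.0

    tokens = set().union(*(toks for toks, _ in history))
    weights = {}
    for t in tokens:
        with_t = [hired for toks, hired in history if t in toks]
        without_t = [hired for toks, hired in history if t not in toks]
        weights[t] = rate(with_t) - rate(without_t)
    return weights

w = token_weights(history)
# The female-coded token ends up penalized purely because of the biased labels:
print(w["womens_chess_club"] < 0)  # True
print(w["backend"] > 0)            # True
```

The point of the sketch is that the learning rule is gender-blind; the bias enters entirely through the labels, which is exactly the sense in which biases are "built into the data sets."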
Check the full interview here: