New York is a testing ground for pioneering laws in the US, which in some cases amounts to saying the rest of the world as well. Among those recently approved, alongside one setting a minimum wage for delivery workers and another banning employment discrimination on the basis of weight, a third, in force since this Wednesday, reins in the unchecked use of artificial intelligence (AI). The new rule restricts the use of automated employment decision tools (AEDTs) to prevent recruitment processes from being skewed by gender and racial bias. Such automated decision systems are popularly known as algorithms.
The law, the first of its kind in the world according to some experts, stipulates that recruitment software based on machine learning or AI must pass an audit by an external firm to show that it is free of racist or sexist bias. An automated employment decision tool is a computer program that uses machine learning, statistical modeling, data analysis or artificial intelligence to substantially assist hiring, that is, to make it easier or faster to choose a candidate based on the algorithm’s output.
Under the New York law, employers or employment agencies wishing to use an AEDT must ensure that a bias audit has been conducted before deploying the tool; post a summary of the audit results on their website; and notify candidates and employees that the program will be used to evaluate them, including instructions for requesting a reasonable accommodation based on job requirements and personal abilities. In addition, the company must publish on its website a notice about the type and source of the data used by the tool and its data retention policy. Companies that use third-party AEDT software are no longer legally entitled to do so if the tools have not been audited.
AI-based recruitment programs had long been under scrutiny for perpetuating racist, sexist and other prejudices, but it took the spread of applications such as ChatGPT and Midjourney for members of Congress, and even many technology company executives, to start considering regulation. So far, Congress has given little indication of what those limits might be.
Cities use algorithmic and automated technologies to make all kinds of decisions, from determining the distribution of the school census to deciding whether someone should be released on bail before trial. But until now there have been few safeguards to ensure that these technologies make fair decisions. Early experience with AI tools has shown that they can be categorically unfair: at the end of the last decade, for example, it emerged that the algorithms used by law enforcement to assess and score the risk posed by minors and booked suspects can negatively and disproportionately affect African Americans, the population group most likely, along with Latinos, to be arrested or questioned by the police.
Experts say that while the new New York law is important for workers, it remains very limited. Julia Stoyanovich, a professor of computer science at New York University and a founding member of the city’s Automated Decision Systems Task Force, the first of its kind in the US, established in 2018 to review the use of algorithms in city programs, sees it as an important but still very limited start. “I am very glad that the law is in place, that there are now rules and that we are going to start applying them,” the researcher told NBC News. “But there are also many gaps. For example, the bias audit is very limited in terms of categories. We do not take into account age discrimination, for example, which is very important in hiring, or disability,” she added. The expert is also unsure how, or how strictly, the law will be enforced, but the New York initiative is certainly fueling the debate over the unchecked development and use of AI.