The last few months may be remembered as the moment when predictive artificial intelligence (AI) went mainstream. While prediction algorithms have been in use for decades, the release of apps like OpenAI’s ChatGPT, and its rapid integration into Microsoft’s Bing search engine, may have opened the floodgates when it comes to easy-to-use AI. Within weeks of its launch, ChatGPT had already attracted 100 million monthly users, many of whom have no doubt already experienced its dark side: from insults and threats to misinformation and a demonstrated ability to write malicious code.
Headline-generating chatbots are just the tip of the iceberg. Artificial intelligence to create text, voice, art, and video is advancing rapidly, with far-reaching implications for governance, commerce, and civic life. It is not surprising that capital is flooding the sector, with governments and companies alike investing in startups to develop and deploy the latest machine learning tools. These new applications will combine historical data with machine learning, natural language processing, and deep learning to determine the probability of future events.
The bottom line: adoption of new natural language processing and generative AI tools won’t be limited to the rich countries and companies, like Google, Meta, and Microsoft, that led their creation. These technologies are already spreading far and wide across low- and middle-income settings, where predictive analytics applied to all sorts of challenges, from reducing urban inequality to tackling food insecurity, holds great promise for cash-strapped governments, businesses, and NGOs looking to improve efficiency and unlock social and economic benefits.
The problem, however, is that not enough attention has been paid to the potential negative externalities and unwanted effects of these technologies. The most obvious risk is that these unprecedentedly powerful predictive tools will strengthen the surveillance capacity of authoritarian regimes.
A widely cited example is China’s “social credit system,” which uses credit histories, criminal convictions, online behavior, and other data to assign a score to every person in the country. Those scores can determine whether someone can get a loan, get into a good school, travel by train or plane, and so on. Although the system is advertised as a tool to improve transparency, it also functions as an instrument of social control.
However, even when used by well-intentioned democratic governments, socially minded companies, and progressive non-profits, predictive tools can produce suboptimal results. Design flaws in the underlying algorithms and skewed data sets can lead to privacy violations and identity-based discrimination.
This has already become a glaring problem in criminal justice, where predictive analytics routinely perpetuates racial and socioeconomic disparities. For example, an AI system created to help US judges assess the probability of recidivism erroneously determined that Black defendants posed a far higher risk of reoffending than white defendants.
Concerns are also growing about how AI could deepen inequalities in the workplace. So far, predictive algorithms have increased efficiency and profits in ways that benefit managers and shareholders at the expense of rank-and-file workers (especially in the gig economy).
In all these examples, AI systems hold up a distorting mirror to society, reflecting and magnifying our prejudices and inequalities. As technology researcher Nanjira Sambuli points out, digitization tends to exacerbate, rather than ameliorate, pre-existing political, social, and economic problems.
Enthusiasm for adopting predictive tools must be balanced with informed, ethical consideration of their intended and unintended effects. Where the effects of these powerful algorithms are unknown or contested, the precautionary principle would counsel against deploying them.
We must not allow AI to become another arena where decision-makers ask for forgiveness rather than permission. That is why the United Nations High Commissioner for Human Rights and others have called for moratoriums on the adoption of AI systems until ethical and human rights frameworks are updated to account for their potential harms.
Developing the appropriate frameworks will require forging a consensus on the basic principles that should shape the design and use of predictive AI tools. Fortunately, the AI race has been accompanied by an avalanche of research, initiatives, institutes, and networks devoted to ethics. And while civil society has taken the lead, intergovernmental bodies such as the OECD and UNESCO have also engaged with these issues.
Since at least 2021, the UN has been working on universal standards for ethical artificial intelligence. In addition, the European Union has proposed an Artificial Intelligence Act (the first such effort by a major regulator) that would ban certain uses (such as those resembling China’s social credit system) and subject other high-risk applications to specific requirements and oversight.
To date, this debate has been overwhelmingly concentrated in North America and Western Europe. But low- and middle-income countries need to consider their own baseline needs, concerns, and social inequalities. Numerous studies have shown that technologies developed by and for markets in advanced economies are often ill-suited to less developed economies.
If new AI tools are simply imported and widely used before the necessary governance structures are put in place, they could very easily do more harm than good. All of these issues need to be considered if we are to design truly universal principles for AI governance.
Recognizing these gaps, the Igarapé Institute and New America have just launched a new Global Task Force on Predictive Analytics for Security and Development. The task force will bring together digital rights advocates, public sector partners, technology entrepreneurs, and social scientists from the Americas, Africa, Asia, and Europe, with the aim of defining first principles for the use of predictive technologies in public safety and sustainable development in the Global South.
The formulation of these principles and standards is only the first step. The greatest challenge will be to achieve the international, national and subnational collaboration and coordination necessary to apply these principles and standards in law and in practice. In the global rush to develop and deploy new predictive AI tools, harm prevention frameworks are essential to ensure a safe, prosperous, sustainable, and human-centered future.