Last week I wrote an article about the session held at the Capitol on generative artificial intelligence (AI), at which Sam Altman, CEO of OpenAI (the company behind ChatGPT), appeared. It was an informative piece that synthesized the dialogue of that session, in which senators and experts also participated, and, to my surprise, it immediately ranked among the most read in this publication.
However, after publishing it, I was left with the feeling that it lacked analysis; it satisfied the immediate need to recount that meeting at the Capitol, but it lacked a broader context in which to situate and understand it. To try to remedy those shortcomings, I am now writing this supplementary article.
Surveillance capitalism
The first thing is to establish the historical coordinates in which this AI was born and how the new version differs from previous ones. "Surveillance capitalism," explains Shoshana Zuboff, researcher and professor emerita at Harvard, is the system that emerged around 2002, based on the extraction of personal data produced by our interaction with now-ubiquitous digital spaces (websites, credit cards, apps, car computers, the Roomba, or anything connected to the Internet), from which predictions about our behavior are generated and then sold, without consent and for more or less spurious purposes.
Within an insufficient legal framework (however much Meta has been fined in Ireland for violating the privacy of its users, who are also citizens), there is exacerbated business competition aimed at accumulating increasingly intimate traces of our experience, and not only at predicting behavior but also at modifying it. It is a question, says Zuboff, of "transforming the market into a project of total certainty": once it is known how we think and feel, what hurts us and what excites us, it is very easy to manipulate us into buying certain products, or even into voting one way or another, which is exactly what happened in the Cambridge Analytica scandal. The non-generative artificial intelligence of algorithms, which analyze the data and learn from it, intervenes in these processes.
However, as Gary Marcus, professor emeritus of Psychology and Neural Science at New York University, warns: for Russia to interfere in the 2016 US elections, it had to spend millions of dollars producing disinformation (mainly creating and disseminating malicious ads on social networks); with a tool like generative AI, the cost would have been practically zero. This is because tools like ChatGPT and others like it do not reproduce messages but manufacture them, fueled by data whose provenance is unknown to us, with a high level of credibility, though not of veracity.
Devastating consequences
The consequences could be absolutely devastating in a climate already plagued by algorithmic post-truth, where fake news abounds, many governments and political parties have adopted Trumpist strategies of institutionalizing the lie, and the profit of elites has eroded the foundations of democracy. We are talking about massive campaigns aimed at altering electoral processes, but also about identity theft through photographs that look real, voice-cloning programs, and a more than likely increase in cyberattacks, to which would be added the falsification of judicial evidence and, consequently, an almost total human inability to distinguish what is true from what is not, with the predictable loss of confidence in institutions. If we already inhabit a world of fanaticism, assaults on the Capitol, and disenchantment with and discredit of journalism and politics, the new AI has enormous potential to amplify these problems.
There has also been talk of the possibility of integrating it into weapons that would operate autonomously; of the degradation of educational practice (millions of students already use ChatGPT for their assignments, despite the fact that it incorporates false information) and the loss of collective intelligence that this entails; of the difficulty of producing reliable scientific research if, suddenly, a tide of fictitious papers floods everything; and of an unbearable social instability that could lead to conflict and would also respond to the massive elimination of jobs replaced by "intelligent" machines.
To calm things down, some point to the benefits of AI, for example in the healthcare sector. Dr. Isaac Kohane, a professor at Harvard, stated in an interview that, given the shortage of primary care physicians in the US, AI could be used to answer patient questions, provide diagnoses, and recommend treatments. The curious thing is that at no point does he suggest hiring more doctors, which invites us to imagine an automated healthcare system, deprived of human contact and dependent on unregulated gadgets, programmed without any kind of transparency, which (according to their own creators) can suffer "hallucinations," that is, generate falsehoods, while perpetrating what Naomi Klein has called "the greatest theft in history": the seizure of all human knowledge by a handful of very powerful companies.
Nuances of regulation
The moguls of these corporations are fully aware of this; hence Altman first approached the congressmen of his own country, moved by a desire to collaborate in favor of regulating his creature, and is now touring Europe pursuing the same purpose. The question of which bodies, existing or future, will be in charge of supervising a monster capable of destroying the pillars of life as we have known it dominated Altman's conversation with Pedro Sánchez, just as it did the US Senate session, where IBM executive Christina Montgomery repeatedly emphasized the need to regulate the contexts in which AI is applied, but not technological innovation itself, making it clear that the monster will continue to grow while slow-moving legal frameworks are updated, if they ever are.
This would replicate the situation of the first steps of surveillance capitalism (the unpunished extraction of privacy), which today is unstoppable. From all this, it could be inferred that Altman's request for regulation owes more to the search for government collusion with the advancement of AI, and to the creation of certain permits or licenses that would serve to avoid the worst consequences (for example, in the development of war scenarios; the constant comparison with nuclear weapons is no coincidence), but also to legally shielding oligopolistic companies against the damage inflicted in other areas.
Given the background of the phenomenon, the legal cracks into which it has been inserted, and the threats it poses to a democracy that, as it stands, guarantees fewer and fewer rights, the question should perhaps be: does AI offer any horizon of social improvement? Backing down, rather than thriving on disasters, is an option that is not yet on the table.