In his famous scientific article of 1960, titled Some Moral and Technical Consequences of Automation, Norbert Wiener warns that if, to achieve our purposes, we use a mechanical agent in whose operation we cannot effectively intervene once it has been set in motion, we had better be quite sure that the purpose programmed into the machine is the purpose we really want. This alignment between human intention and the machine's program is one of the fundamental problems of Artificial Intelligence. It is also the concept Sam Altman invoked last week when he introduced GPT-4 as "our most capable and aligned model to date."
The astute CEO of OpenAI invokes the father of cybernetics to project an image of responsibility and security in the launch of the product that has sent the market for generative models into a frenzy. And yet the release of GPT-4 comes with a 98-page paper of academic pretensions but promotional intent, in which one can read the following: "Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar." In other words, it is impossible to know how and with what material the model has been trained, which rules out determining the interests and objectives with which the most "aligned" model of Altman's company has been programmed. Ironically, it is still called OpenAI.
Large language models (LLMs) consist of deep learning algorithms trained on vast amounts of text, organized in datasets. These elements already pose a challenge for alignment: deep neural networks are opaque even to their own developers, and the datasets are often a poor or biased representation of the world. But there are strategies that can bring us closer to the goal. In his essential The Alignment Problem, Brian Christian offers three points of support: representation, fairness and transparency.
Representation refers to training: the datasets must reflect the world in which the AI is meant to operate. Fairness refers to biases: an unjust judicial system or a sexist company culture produces databases that encode those values, and if we train an AI on their examples, we will be automating that pattern. Finally, transparency is the only guarantee that the first two points have been addressed by the company developing the model, and that the necessary time and effort have been invested in making it safe for users, systems and companies before it starts to mediate every aspect of our lives.
GPT-4 is a black box. In an industry that aspires to be ever more indistinguishable from magic, it is more important than ever to point out the distance between what would-be visionaries say and what they actually do. In this case, between founding a research laboratory with the express intention of ensuring the transparent development of a technology with enormous transformative potential, and doing exactly the opposite.