Nowadays, no one doubts ChatGPT's ability to generate coherent, natural-sounding text. So much so that what was at first an unattainable goal (generating human-like text) has become a challenge: is it possible to distinguish text generated by ChatGPT from text written by a human?
Some of the tools published so far are based, for example, on analyzing the randomness of a text, on the assumption that human-generated text will be more chaotic. Even OpenAI, the organization that created ChatGPT, is developing ways to recognize text generated by its own language model.
In addition, because the model has been trained on texts, it will adopt the stereotypes present in them, introducing biases into the language it generates. It is therefore necessary to work on equity (fairness): developing algorithms that detect these biases so that a fair model can be learned.
Its training
ChatGPT has been trained on millions of texts from the internet, including Wikipedia articles, news stories, books…
It is estimated that around 300 billion words were used for its training. Being a language model, its operation is based on calculating the possible words that can follow a given one and returning the one with the highest probability. This is learned through a prior supervised process in which the model is taught which words should come next: a sentence is entered and, if the model gives an incorrect answer, it is shown the valid one. In this way it learns what to say.
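ChatGPT's own model is not publicly available, but the next-word mechanism can be illustrated with a small open model such as GPT-2. A minimal sketch (the prompt and the choice of model are only for illustration):

```python
# Minimal sketch of next-word prediction, using the small open model GPT-2
# as a stand-in (ChatGPT's own model is not publicly available).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocabulary_size)

# Probability distribution over the next word, given the prompt.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p:.3f}")
```

The model assigns a probability to every word in its vocabulary and, in its simplest mode of operation, returns the most probable one.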
What if the texts are biased?
The fact that the model has been trained on such texts inevitably raises a question: what happens if the texts are biased? It can occur, and indeed it does, that ideas or stereotypes present in society are carried over into the training data, since they were present in the texts. As in any artificial intelligence model, the biases in the training set will be transmitted to the behavior of the model once trained.
One would expect such a powerful tool to be intended solely for AI professionals. However, anyone with an internet connection can use it through the platform in which it is integrated.
An artificial intelligence to detect another artificial intelligence
One of the biggest challenges posed by models like this is fraudulent use.
Is it possible to detect a text written by ChatGPT? Tools have already been developed to detect whether or not a text was written by a language model, although they are not always right. One approach currently in use is the analysis of perplexity, that is, of the randomness present in the text. This measure indicates the degree of disorder in a text: high perplexity indicates a higher probability that the text was generated by a real person. Consequently, the longer the text, the more reliable the detection tool (greater length provides more information about the degree of disorder).
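As an illustration of the measure itself (detection tools use their own scoring models, which are not public), perplexity can be estimated with an open model such as GPT-2:

```python
# Hedged sketch: estimating a text's perplexity with GPT-2 as the scoring
# model. Real detection tools use their own models; this only illustrates
# the measure itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids, the model returns the average next-token
        # cross-entropy loss; perplexity is its exponential.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# A higher score suggests more "disorder", hence more likely human-written.
print(perplexity("The cat sat quietly on the warm windowsill."))
```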
Another method, recently published in an article by researchers at Stanford University, is based on a statistical measure of probability curvature. As the article indicates, text generated by an artificial intelligence tends to lie in a region of negative curvature of the model's log-probability function, in contrast to human-generated text, which tends not to.
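The core of the idea can be sketched as follows. This is a heavily simplified illustration: the published method perturbs the text with a mask-filling model, whereas here words are simply dropped at random, and GPT-2 stands in for the scoring model.

```python
# Heavily simplified sketch of the probability-curvature idea. The published
# method rewrites spans with a mask-filling model (T5); as a crude stand-in,
# we perturb the text by randomly dropping words. GPT-2 scores the texts.
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def log_likelihood(text: str) -> float:
    """Average log-probability per token under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return -model(ids, labels=ids).loss.item()

def perturb(text: str, drop_rate: float = 0.15) -> str:
    """Crude perturbation: randomly drop a fraction of the words."""
    words = text.split()
    kept = [w for w in words if random.random() > drop_rate]
    return " ".join(kept) if kept else text

def perturbation_discrepancy(text: str, n: int = 20) -> float:
    # Original log-likelihood minus the average over perturbed neighbours.
    # A large positive value (negative curvature: the original sits on a
    # local peak) suggests machine-generated text.
    base = log_likelihood(text)
    perturbed = [log_likelihood(perturb(text)) for _ in range(n)]
    return base - sum(perturbed) / n

print(perturbation_discrepancy("Some passage whose origin we want to test."))
```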
In other words: on many occasions, fraud detection tools are one artificial intelligence used to detect another artificial intelligence.
OpenAI, the organization that created ChatGPT, is already working on developing watermarks for the generated texts: signals imperceptible to a person but detectable with computer tools, indicating whether the author of the text is a human or an artificial intelligence.
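OpenAI has not published its scheme, but one watermarking idea from the research literature, the "green list" watermark of Kirchenbauer et al. (2023), can be sketched with a toy vocabulary. Everything below (vocabulary, parameters) is illustrative, not OpenAI's actual method.

```python
# Toy version of a "green list" watermark (Kirchenbauer et al., 2023) --
# not OpenAI's actual, unpublished scheme. The previous word pseudo-randomly
# selects half the vocabulary as "green"; generation favours green words,
# and the detector simply counts them.
import random
import zlib

VOCAB = ["the", "a", "cat", "dog", "runs", "sleeps", "quietly", "fast",
         "house", "garden", "under", "over", "sun", "moon", "old", "new"]

def green_list(prev_word: str) -> set:
    # Seed derived from the previous word (a real scheme also mixes in a
    # secret key, so only the provider can run the detector).
    rng = random.Random(zlib.crc32(prev_word.encode()))
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate(start: str, length: int) -> list:
    # A real language model would sample from its own distribution, merely
    # boosted towards green words; this toy always picks a green word.
    words = [start]
    for _ in range(length):
        words.append(random.choice(sorted(green_list(words[-1]))))
    return words

def green_fraction(words: list) -> float:
    # Near 1.0 suggests watermarked text; unmarked text sits around 0.5.
    hits = sum(w in green_list(prev) for prev, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

print(green_fraction(generate("the", 50)))  # close to 1.0
```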
When we wanted to ban calculators
The current debate is somewhat similar to the one about the use of calculators. Instead of prohibiting them, they were integrated into learning, and students were taught to use them to get the most out of them. The situation with ChatGPT is similar: the challenge is to integrate this technology into teaching and exploit its capabilities, being aware of its great potential. To that end, ways will need to be devised to assess learning that makes use of tools like this.
The model presents some dangers, which mainly consist of academic fraud through plagiarism, and of bias. As with other artificial intelligence models, ChatGPT takes data (in this case, written text) for its training. Consequently, its behavior is conditioned by what it has learned from those texts. For example, in the hypothetical case that all the training texts dealing with judicial sentences stated that a person is guilty or not depending on their race, the model would learn this rule. And if it were asked how to determine whether a person is guilty, it would answer based on race.
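The mechanism can be seen in miniature with entirely synthetic data: if the labels a model is trained on depend on a sensitive attribute, the trained model reproduces that dependence.

```python
# Toy illustration with entirely synthetic data: a model trained on biased
# examples reproduces the bias. The label is made to depend heavily on a
# sensitive attribute, and the classifier dutifully learns that dependence.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
sensitive = rng.integers(0, 2, n)      # e.g. a protected group attribute
evidence = rng.normal(size=n)          # a legitimate, relevant feature
# Biased "ground truth": the outcome leans mostly on the sensitive attribute.
label = (0.3 * evidence + 2.0 * sensitive + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([evidence, sensitive])
clf = LogisticRegression().fit(X, label)

print("weight on evidence: ", clf.coef_[0][0])
print("weight on sensitive:", clf.coef_[0][1])  # much larger: bias learned
```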
The bias in the data
For all these reasons, research is now focused on how to learn from data while taking into account the biases present, tipping the scales in favor of equity (known as fairness).
At DATAI, the Institute of Data Science and Artificial Intelligence of the University of Navarra, research is being carried out on fairness in decisions made by algorithms. Frequently the bias lies more in the data than in the algorithm, which is why the aim is to detect this bias in the data and repair it automatically. In addition, a group of researchers is working on the development of an artificial intelligence engine that extracts a personality model from a text using natural language processing.
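As a minimal illustration of what "detecting bias in the data" can mean (this is a generic textbook measure, not DATAI's actual method), one can compute the demographic parity gap of a dataset:

```python
# Generic sketch of one textbook bias measure on a dataset: the demographic
# parity gap (difference in positive-outcome rates between two groups).
# The data are synthetic; this is not DATAI's actual method.
import numpy as np

rng = np.random.default_rng(1)
sensitive = rng.integers(0, 2, 1000)   # group membership (0 or 1)
# Synthetic outcomes deliberately skewed in favour of group 1.
outcome = (rng.random(1000) < np.where(sensitive == 1, 0.7, 0.4)).astype(int)

def demographic_parity_gap(outcome: np.ndarray, sensitive: np.ndarray) -> float:
    """0.0 means both groups receive positive outcomes at the same rate."""
    return abs(outcome[sensitive == 0].mean() - outcome[sensitive == 1].mean())

print(demographic_parity_gap(outcome, sensitive))  # roughly 0.3
```

A gap well above zero flags a disparity worth investigating; automatic repair methods then adjust the data (for example, by reweighting examples) before a model is trained on it.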
Despite the risks it may entail, ChatGPT represents a great advance in the scientific field. The key lies in knowing its capabilities, benefiting from them and combating the dangers it implies. In this way, it will stop being a threat and become an aid with great potential.