EFE.- The World Health Organization (WHO) on Tuesday called for “caution” in the use of artificial intelligence in the health field and demanded greater supervision from governments in relation to this type of technology.
In a statement, the organization referred in particular to language modeling tools, such as ChatGPT, which have exploded in popularity in recent months and are capable of mimicking human communication processes.
The organization warned that these tools can be trained on false data and misused to “generate and disseminate highly convincing disinformation” in the form of text, video and audio.
“It is imperative that the risks of using these tools as a support method for medical decision-making be carefully examined,” insisted the WHO, which nonetheless said it was “enthusiastic” about technological advances in this area.
The WHO warned that a “precipitous” use of this type of technology could lead health professionals to make mistakes, cause harm to patients and erode general confidence in artificial intelligence.
What most worries WHO officials is the lack of oversight of these technologies and the biases these systems may exhibit.
In addition, the WHO expressed concern about the protection of sensitive patient data that users themselves may provide to these models.
To address these concerns, the WHO proposes that national authorities study the benefits of artificial intelligence for health purposes before generalizing its use.
To that end, the organization has identified six fundamental principles that must govern its use: protecting the autonomy of professionals, promoting human well-being, guaranteeing transparency, fostering responsibility, ensuring inclusion, and promoting sustainable artificial intelligence.