The growing popularity of artificial intelligence around the world has raised concern among business leaders and experts, who are debating the risks and negative consequences this technology could have in the future.
Brad Smith, president of Microsoft, the American technology company founded in 1975, recently joined the list of executives who have spoken openly about the issue.
In a speech last Thursday, the executive said that his biggest fear regarding artificial intelligence is deepfakes, which contribute to the spread of misinformation.
“We’re going to have to address the issues around deepfakes. We’re going to have to address in particular what we’re concerned about most: foreign cyber influence operations, the kind of activities that are already being carried out by the Russian government, the Chinese, the Iranians,” Smith said.
He added: “We need to take steps to protect against altering legitimate content with the intent to mislead or defraud people through the use of AI.”
A deepfake uses a form of artificial intelligence called deep learning to create photos or videos that appear real but are not. The technology raises serious security concerns, since not only images but also voices can be manipulated.
“A deepfake would be footage generated by a computer that has been trained through countless existing images,” Cristina López, a senior analyst at Graphika, told Business Insider.
On the question of regulating artificial intelligence, Smith argued for licensing requirements that would protect physical security, cybersecurity, and national security.
“We will need a new generation of export controls, at least the evolution of export controls that we have, to ensure that these models are not stolen or used in a way that violates the country’s export control requirements,” said the president of Microsoft.
In an article published on May 25, Smith, a lawyer by training, also laid out five guidelines for governing AI:
- Implement and take advantage of new government-led AI security frameworks.
- Require effective safety brakes for AI systems that control critical infrastructure.
- Develop a comprehensive legal and regulatory framework based on the technology architecture for AI.
- Promote transparency and ensure academic and non-profit access to AI.
- Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that arise with new technology.
Smith is not the only one to have warned about the risks of artificial intelligence. Just a few days earlier, Sam Altman, CEO of OpenAI, appeared before a Senate subcommittee on privacy and technology and called for regulation of the emerging industry.
“The US government should consider a combination of licensing or registration requirements for the development and release of AI models above a crucial threshold of capabilities, along with incentives for full compliance with these requirements,” Altman said during his appearance.