There is no doubt that artificial intelligence is part of the future, and 2023 is shaping up to be the “year of AI”, marked by one of its periods of greatest expansion and controversy.
This technology has already demonstrated its potential to revolutionize areas such as healthcare, finance, transportation, and cybersecurity. It can automate tedious tasks, increase efficiency, and provide insight, and it can also help us solve complex problems, make better decisions, and reduce human error. Unfortunately, at the same time we are already seeing widespread use of this technology for the development of new and more complex cyber threats.
This misuse of AI has been extensively covered in the media, with numerous reports on how cybercriminals are taking advantage of ChatGPT to help create malware.
We often hear concerns about whether AI will approach or even surpass human capabilities. Although it is difficult to predict how advanced it will become, several categories already exist today: current artificial intelligence is known as “narrow” or weak AI (ANI); artificial general intelligence (AGI) would reach a level of functioning equivalent to that of the human brain, thinking, learning, and solving tasks autonomously; and artificial superintelligence (ASI) would encompass machines capable of surpassing human intelligence.
While we remain at the first level for now, one of the most recurring concerns is whether artificial intelligence will reach the AGI level, with the consequent risk that it will act on its own and become a potential threat to humanity. We must therefore work to align the objectives and values of AI with those of humans and mitigate the risks associated with its most advanced versions. It is important that governments, companies, and regulators work together to develop strong security mechanisms, legislate, establish ethical principles, and promote transparency and accountability in AI development.
At present there is only a minimal body of rules and regulations, together with a proposed AI law that has yet to be approved. Depending on the type of AI, companies that develop and launch AI-based systems must ensure at least minimum standards of privacy, fairness, explainability, and accessibility. Various bans and restrictions have been discussed for ChatGPT, where concerns center on privacy, after data leaks were detected, and on the lack of any age limitation for user access.
We have already witnessed how cybercriminals use AI to refine their attacks, automatically identify vulnerabilities, create targeted phishing campaigns, carry out social engineering, or even create advanced malware that can change its own code to better evade detection systems.
This technology can also be used to generate convincing audio and video deepfakes, which can serve purposes such as political manipulation, false evidence in criminal trials, or tricking users into handing over money.
The applications of AI in cybersecurity are not exclusively negative, however; it is also an important aid in defending against cyberattacks. For example, of the more than 70 tools we currently use at Check Point Software to analyze threats and protect against attacks, more than 40 are AI-based. These tools help with behavioral analysis and analyze large amounts of threat data from various sources, including the Dark Web, making it easier to detect zero-day vulnerabilities or automate the patching of security flaws.
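As a rough illustration of what AI-based behavioral analysis can look like in practice, the sketch below trains an anomaly detector on normal session activity and flags outliers. It is only a toy example with hypothetical features and thresholds, not a description of Check Point's tooling.

```python
# Illustrative sketch only: flagging anomalous user sessions with an Isolation Forest.
# The feature set and contamination value are hypothetical assumptions for this example.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [login hour, MB downloaded, failed login attempts]
baseline = np.array([
    [9, 120, 0], [10, 95, 1], [14, 200, 0], [11, 80, 0], [16, 150, 1],
    [9, 110, 0], [13, 175, 0], [15, 90, 1], [10, 130, 0], [12, 160, 0],
])

# Fit the model on what normal behavior looks like
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# Score new sessions: -1 means anomalous, 1 means consistent with the baseline
new_sessions = np.array([
    [11, 140, 0],    # in line with the baseline
    [3, 5000, 12],   # 3 a.m. login, huge download, many failed attempts
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"session {session.tolist()} -> {status}")
```

In a real deployment the model would be trained on far richer telemetry and combined with threat intelligence feeds, but the principle is the same: learn a baseline of normal behavior and surface deviations for analysts to review.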
The biggest problem we face is the careless use of these tools. Most users are unaware that sensitive information entered into ChatGPT, whether at a company or personal level, can be very valuable if leaked and could be used for targeted marketing purposes.
Now more than ever we must be aware that the future impact of AI on our society will depend on how we choose to develop and use this technology today. We must weigh the potential benefits and risks as we strive to ensure that artificial intelligence is developed in a responsible, ethical, and socially beneficial manner.