The interest that ChatGPT, OpenAI's Artificial Intelligence chatbot, is generating also extends to cybercriminals. Alongside the possibilities it offers, it carries risks that deserve attention, especially in the business world.
Since ChatGPT was announced as publicly available, its possibilities across many areas have become widely known. But, like any technology, misuse of it can have serious consequences.
This is confirmed by fibratel, a telecommunications service provider, which identifies phishing and the spread of malware among employees as the main risks ChatGPT can pose in the business environment.
The chatbot's ability to generate messages that appear legitimate, sometimes impersonating trusted senders, can win users' confidence and lead them to click the malicious links or open the malicious files those messages contain.
This is one of the most widespread methods cybercriminals use to steal personal information and employee credentials, which can seriously compromise the security of the organization.
This requires companies to step up their efforts to raise employees' awareness of the risks they are exposed to, and to teach them to identify those risks so they do not put their own safety, or the company's, in jeopardy.
Measures against the risks of ChatGPT
Along with these cyber risks, companies must also consider privacy problems and business fraud involving employees. Malicious actors can use ChatGPT to generate fake messages that push employees into fraudulent actions without their being aware of it.
In addition to deploying cybersecurity solutions that help protect the company, implementing email filters that keep malicious messages out of the inbox can be very useful.
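As a rough illustration of the kind of rule such a filter might apply, the sketch below flags emails containing links to domains outside an allow-list. The domains and the single-rule approach are hypothetical simplifications; real email filters combine many signals (sender reputation, attachments, content analysis).

```python
import re

# Hypothetical allow-list for illustration only; a real filter would use
# reputation services and far richer rules.
ALLOWED_DOMAINS = {"example.com", "intranet.example.com"}

URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def is_suspicious(email_body: str) -> bool:
    """Return True if the email links to a domain outside the allow-list."""
    for match in URL_RE.finditer(email_body):
        domain = match.group(1).lower()
        if domain not in ALLOWED_DOMAINS:
            return True
    return False

print(is_suspicious("Please review https://example.com/report"))   # False
print(is_suspicious("Urgent: verify at https://evil.test/login"))  # True
```

Even a crude check like this illustrates why AI-generated phishing is dangerous: the prose may be flawless, so defenses must key on technical indicators such as the links themselves rather than on clumsy wording.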
More advanced proposals, such as Secure Web Gateways, safeguard users' Internet browsing by blocking potentially dangerous websites and detecting unauthorized access.
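At its core, the blocking behavior described above amounts to checking each outbound request against a list of known-bad domains. The sketch below assumes a tiny hard-coded blocklist purely for illustration; production gateways consume live threat-intelligence feeds and inspect traffic far more deeply.

```python
from urllib.parse import urlparse

# Hypothetical blocklist; a real Secure Web Gateway would pull these
# from continuously updated threat feeds.
BLOCKED_DOMAINS = {"malware.test", "phishing.test"}

def allow_request(url: str) -> bool:
    """Return False if the request targets a blocked domain or subdomain."""
    host = (urlparse(url).hostname or "").lower()
    return not any(
        host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS
    )

print(allow_request("https://docs.python.org/3/"))       # True
print(allow_request("http://login.phishing.test/auth"))  # False
```

Matching subdomains as well as exact hosts matters here, since attackers routinely stand up fresh subdomains under a compromised or malicious parent domain.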
In addition, endpoint protection is also essential, not only to combat threats that may arise from the use of ChatGPT, but to address any potential risk.
In this sense, it is important that protection measures can identify attack behaviors during an intrusion attempt in order to anticipate them.
However, Artificial Intelligence, and ChatGPT in particular, still has a long way to go, so it will be necessary to find the most appropriate solutions to secure it and to protect organizations against its malicious use.