Although many AI services claim not to cache submitted information, several incidents have been reported, especially involving ChatGPT, in which sensitive data was exposed through vulnerabilities in the way these systems are built.
To address this problem, Check Point Research has reported, and provided a fix for, a new vulnerability in the Large Language Models (LLMs) underlying ChatGPT, Google Bard, Microsoft Bing Chat, and other generative AI services.
“If companies do not have effective security for these kinds of AI applications and services, their private data could be compromised and become part of the models’ responses,” explains Eusebio Nieva, technical director of Check Point Software for Spain and Portugal.
The use of AI in software development is increasingly common because of the ease and speed it offers. However, these tools can also be an inadvertent source of data breaches.
By addressing a flaw present in the leading generative AI applications from OpenAI, Google, and Microsoft, Check Point Research has helped prevent sensitive data leaks for the tens of millions of users of these tools.
The combination of careless users and the vast amount of information shared creates opportunities for cybercriminals seeking sensitive data such as credit card numbers and logs of the queries made in these chats.
As part of its solution, Check Point Software offers companies a URL filtering tool to identify these generative AI websites, with a new category added to its set of traffic management controls.
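Check Point’s categorization engine is proprietary, but the general idea of category-based URL filtering can be illustrated with a minimal, hypothetical sketch in Python (the domain-to-category table and the `is_blocked` helper below are illustrative assumptions, not the vendor’s actual API):

```python
from urllib.parse import urlparse

# Hypothetical category database: domain -> category label.
# A real product would ship a continuously updated categorization feed.
URL_CATEGORIES = {
    "chat.openai.com": "generative-ai",
    "bard.google.com": "generative-ai",
    "www.bing.com": "generative-ai",
}

# Categories the administrator has chosen to block or restrict.
BLOCKED_CATEGORIES = {"generative-ai"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host falls into a blocked category."""
    host = urlparse(url).hostname or ""
    return URL_CATEGORIES.get(host) in BLOCKED_CATEGORIES
```

The design point is that policy is expressed against a category label ("generative AI") rather than individual sites, so newly added domains in the category are covered without changing the rule.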
In addition, Check Point Software’s firewall and firewall-as-a-service solutions include data loss prevention (DLP), allowing network security administrators to block the upload of specific types of data (software code, personally identifiable information, sensitive information, etc.) to generative AI applications.
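Check Point’s DLP engine is proprietary; as an illustration of the underlying technique, a pattern-based check for credit card numbers (a regex candidate match validated with the Luhn checksum) might look like this hypothetical sketch, which a gateway could run on outbound text before it reaches an AI service:

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces/hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: doubles every second digit from the right."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def contains_card_number(text: str) -> bool:
    """True if the text contains a Luhn-valid candidate card number."""
    return any(luhn_valid(m.group()) for m in CARD_RE.finditer(text))
```

A real DLP policy combines many such detectors (source code signatures, PII patterns, document fingerprints) and blocks or logs the request when one fires; this sketch shows only the single credit-card case.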
This makes it possible to activate security measures that protect against misuse of ChatGPT, Bard or Bing Chat; configuration takes just a few clicks in Check Point Software’s unified security policy.