In the past few months, more than 100,000 hacked ChatGPT accounts have been put up for sale on the dark web. The cybersecurity firm Group-IB has delved into this corner of the internet underworld and discovered usernames and passwords for multiple web services. Among them were credentials for OpenAI's chatbot, which is often used for work and can therefore contain sensitive information belonging to the companies that use it.
Since ChatGPT became popular at the end of last year, its adoption has been massive. It reached 100 million users in just two months and continues to grow at a meteoric pace. Companies like Microsoft have encouraged their workers to use it to automate tasks, albeit with caution.
But not everyone is so enthusiastic. Some giants, such as Apple and Samsung, have banned the use of this and other AI apps for fear that internal information could leak outside the company. In this context, a poll conducted by the Fishbowl app, which promotes group discussion in business settings, found that 68% of those who use ChatGPT or other AI tools do so without the knowledge of their superiors.
The dizzying growth of ChatGPT suggests that some companies have rushed to adopt the application without protocols or user guides. And this carries risks, because the tool stores the chat history, including every question the user asks and every answer the AI gives. “Many companies have started using ChatGPT in their day-to-day processes. Some senior managers or sales managers may use it to improve their emails, which are then sent externally. Obviously, this correspondence can contain sensitive data, such as prices that are handled internally, figures, information about products, about innovations, invoices and other critical information,” says Dmitry Shestakov, Threat Intelligence product manager at Group-IB.
In all, the cybersecurity firm found 101,134 ChatGPT account credentials exposed on the black market. Cybercriminals use malicious programs known as information stealers, a type of Trojan, to harvest the data. They then sell it in packages called ‘stealer logs’: compressed files containing folders and text documents with the usernames and passwords stolen from a device. The average price of one of these files is $10, although Group-IB notes that it is not known how many of them have actually been purchased.
ChatGPT histories may contain information meant for internal use, which companies do not want circulating freely. But the data can also be mined to mount targeted attacks against a company’s own employees. Attackers could insert an employee’s name, or details about projects the company is working on, into a malicious email. This makes the message more credible, and a manager would be more likely to click on a link or download a file.
Another major risk associated with leaked ChatGPT accounts relates to the tool’s use in programming. Shestakov explains the problems this can lead to: “Code from products developed within the company is sometimes shared with ChatGPT, creating the risk that malicious actors could intercept, replicate, and sell this code to competitors. In addition, this code can be scanned for vulnerabilities in the company’s products, leading to potential security breaches.”
Armando Martínez-Polo, the partner in charge of Technology Consulting at PwC, encourages companies to explore generative artificial intelligence, but with certain safeguards. First, usage policies are needed that clearly define what is off-limits. “The first thing is to establish that personal data and companies’ confidential or intellectual property data are not shared with generative artificial intelligences,” Martínez-Polo points out.
“The big problem with OpenAI is that everything you do with it is uploaded to the cloud and, in addition, OpenAI uses it to train its own models,” explains Martínez-Polo, who advises using AI within a private cloud service. “It is important to create a secure work environment with ChatGPT, so that when you provide information about your company for training, you know that everything remains within your protected environment.”
For now, there is no sign that data leaks will diminish. Quite the opposite: Group-IB has observed that the number of files for sale containing ChatGPT credentials has risen steadily over the last year, and significantly over the last six months. In December 2022, 2,766 hacked accounts of the artificial intelligence tool were found; by last May, the figure had reached 26,802. “We anticipate that more ChatGPT credentials will be included in the stealer logs, given the increasing number of users registering with the chatbot,” says Shestakov.