A new player has entered the corporate world: artificial intelligence. Its capacity to find its way into any report, piece of advice, or deliverable produced by employees has become a source of concern for the top professional services firms. Deloitte Spain sent an internal email on April 11 stating explicitly: “No confidential information belonging to the client, third parties, or Deloitte should be entered into these AI platforms.” The four-page email also warned employees that any use of ChatGPT or similar tools must first be approved.
The speed at which this technology is gaining ground in large corporations is causing apprehension. The Australian arm of PwC, another Big Four consultancy, sent a message to its employees in February advising them not to use any material created by these applications in client work. At Deloitte Spain, the fears are several. One is the unauthorized disclosure of information: the email warns that if data entered into these platforms is stored on a server or in a database, it could be disclosed to third parties without authorization, whether through security failures, cyberattacks, or data breaches. Another is bias. If the training data contains biases or prejudices, the model’s responses may be discriminatory or unfair, which could affect the privacy of the people who interact with it and the reputation of the companies that use it. Similarly, the responses generated may contain errors: these tools have a limited ability to grasp the context and meaning behind words, which makes appropriate validation and human supervision mechanisms necessary.
The entire document is steeped in mistrust of the usefulness of AI-generated information. These tools are trained on historical data, which means they may fail to identify new trends or issues, leading to a lack of precision and rigor in the work. Government agencies, multinationals, and other organizations rely heavily on consultants for advice on all kinds of programs and spending plans. In such sensitive projects, which demand technical expertise, some workers may be tempted to turn to artificial intelligence as the quickest way to resolve their doubts. Deloitte does not consider this an ideal solution, citing several reasons, including the possibility that responses contain sensitive, protected, or confidential information that could be used inappropriately.
Client approval
The range of potential ethical violations that may provoke employers is vast. Deloitte, for instance, warns that using these tools to gather information about clients or customers, or to make decisions concerning them, without their knowledge or consent may not only be considered unethical but could also harm their reputation.
The company instructs its personnel not to use their business email accounts to create OpenAI accounts unless authorized to do so; anyone who has already created one is required to inform their supervisor as soon as possible. In addition, if AI is used on a project, the client must be informed and their approval sought.
Distrust of the questionable practices that AI may enable is not limited to businesses. The European Union is aiming to regulate ChatGPT over its potential impact on user privacy, its capacity to spread false information, and its potential to destroy jobs. Italy, one of the EU’s largest member states, has gone as far as blocking access to ChatGPT on the grounds that it violates data protection laws by collecting user data without authorization.