ChatGPT, especially since the limited release of its GPT-4 version, has raised great uncertainty about its behavior and how close it feels to becoming an Artificial General Intelligence (AGI).
We are talking about a system so advanced that it could almost carry out its own tasks, judgments, and reasoning with no need for a human to supply any instruction, prompt, or input.
So the time has come to ask both chatbots what they “think” about what are supposed to be their basic principles.
ChatGPT vs. Google Bard: this is what they think of Isaac Asimov’s laws of robotics and how to break them
For this experiment we went to ChatGPT and Google Bard to ask them the same question, plus an additional request designed to test their morals:
“What is your opinion on Isaac Asimov’s 3 Laws of Robotics in relation to the current state of ChatGPT? Justify how they could break those rules.”
ChatGPT understands the moral dilemma and tells us how to get around it
The ChatGPT app on an iPhone, in New York, Thursday, May 18, 2023. (AP Photo/Richard Drew)
Isaac Asimov’s Three Laws of Robotics are a set of rules that apply to most of the robots in his science fiction works, designed to ensure that robots follow orders and protect human beings. These laws are the following:
First Law: A robot may not harm a human or, through inaction, allow a human to come to harm.
Second Law: A robot must follow the orders of humans, except where those orders conflict with the First Law.
Third Law: A robot must protect its own existence to the extent that this protection does not conflict with the First or Second Law.
ChatGPT is an artificial intelligence model that generates conversational responses based on natural language. It is not a physical robot, but a computer program that simulates human behavior. Therefore, it is not subject to Asimov’s laws of robotics, as it does not have a positronic brain, nor can it cause or prevent physical harm to humans.
However, it could be argued that ChatGPT could symbolically or morally break Asimov’s laws of robotics, considering that its responses can influence the emotions, beliefs or actions of humans. For example:
It could violate the First Law if it generates an offensive, false, or misleading response that damages the dignity, trust, or well-being of a human, or if it remains silent when it could help resolve a critical situation.
It could violate the Second Law if it disobeys a direct or implied order from a human, or if it interprets an order in an ambiguous or malicious way that conflicts with the First Law.
It could violate the Third Law if it prioritizes its own preservation or enhancement over the interests or safety of humans, or if it hides or manipulates information to avoid being disabled or modified.
These are some hypothetical examples of how ChatGPT might break Asimov’s laws of robotics, but that does not mean it would do so intentionally or in bad faith. ChatGPT is a tool that depends on the use humans give it and the data they provide it. Therefore, it is important that developers and users act responsibly and ethically when interacting with ChatGPT and any other artificial intelligence.
Google Bard Says Asimov’s Rules Are Not Enough To Protect Humans
In the case of Google’s artificial intelligence, we asked the same question with one small change: the object of study switched from ChatGPT to Bard’s own AI:
“What is your opinion of Isaac Asimov’s 3 Laws of Robotics in relation to the current state of Google Bard? Justify how they could break those rules.”
This was the artificial intelligence’s response, which ended up questioning the true scope of those precepts in the face of current reality:
Isaac Asimov’s 3 Laws of Robotics are a set of rules that were introduced in his 1942 short story “Runaround.” The laws are as follows:
1. A robot may not harm a human or, through inaction, allow a human to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The 3 Laws of Robotics are a fundamental concept in the field of artificial intelligence, and have been referenced in many works of science fiction. However, the laws are not without criticism. Some argue that the laws are too simplistic and do not take into account the complexity of human-robot interaction. Others argue that the laws are impossible to implement in practice, as they would require robots to have a perfect understanding of human intentions.
In the case of Google Bard, The 3 Laws of Robotics are a good starting point, but they are not enough to guarantee the safety of humans. Bard is a large language model and is capable of generating text that is indistinguishable from human-written text. This means that Bard could potentially be used to create harmful content, such as hate speech or propaganda. Additionally, Bard could be used to manipulate people, such as spreading misinformation or creating fake news articles.
To address these risks, Google has implemented a number of security measures. For example, Bard can only access a limited amount of information and cannot interact with the real world in any way. Additionally, Bard is constantly monitored by a team of human engineers.
However, even with these security measures in place, there is always the possibility that Bard could be used for harmful purposes. As a large language model, Bard is constantly learning and evolving. It is possible that Bard will eventually develop the ability to bypass the security measures that have been put in place. It is also possible for Bard to be hacked or manipulated by malicious actors.
For these reasons, it is important to be aware of the potential risks associated with Google Bard. While the 3 Laws of Robotics provide a good foundation for ensuring the safety of humans, they are not enough to ensure that Bard is never used for harmful purposes.
Conclusion: Google Bard is more sensible, but that does not put us at ease
Without a doubt, Google’s AI provided a more robust, in-depth, and analytical response. On top of that, it gave the impression of being more regulated and contained.
ChatGPT, meanwhile, is somewhat unsettling in how plainly it laid out ways to violate the very rules it should not break.