Aristotle once said, “all knowledge and every pursuit aims at some good.” But then we ask, “What does it mean to be good?” This in turn brings about the ethical dilemma philosophers have sought to answer for centuries.
In today’s society, we face a similar dilemma as we enter the “cognitive era” — the era of thinking machines, or artificial intelligence. But how do we give artificial intelligence a sense of what is right or wrong?
Image Source: TED
In the beginning, artificial intelligence was taught through machine learning on large corpora of examples. For instance, if you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats alongside pictures of everything else. Eventually, the algorithm learns to distinguish cats from everything else. Trained this way on basic corpora, machines are limited to tasks that involve no ethical reasoning. To broaden the scope of artificial intelligence, researchers need to decide which moral principles they want to instill in a robot. One way to approach this is to survey the moral principles proposed by various philosophers.
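The training loop described above can be sketched in miniature. This is a toy illustration, not a real vision pipeline: the two-number “features” and the nearest-centroid rule are assumptions chosen to keep the example self-contained.

```python
# Toy sketch of learning from a labeled corpus: the classifier is shown
# examples labeled "cat" or "other" and learns a centroid for each label.
# Feature vectors here are hypothetical (e.g. ear pointiness, whisker density).

def train(examples):
    """examples: list of (feature_vector, label); returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

data = [((0.9, 0.8), "cat"), ((0.8, 0.9), "cat"),
        ((0.1, 0.2), "other"), ((0.2, 0.1), "other")]
model = train(data)
print(classify(model, (0.85, 0.75)))  # a cat-like input → "cat"
```

Note what this sketch makes obvious: nothing in the training data carries any notion of right or wrong, which is exactly the limitation the paragraph above describes.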
The German philosopher Immanuel Kant developed the idea of the categorical imperative: a moral principle that binds unconditionally, regardless of one’s desires. Kant argued that we should act only on principles we could will everyone to follow (as a universal law) — an idea similar to, but not the same as, the Golden Rule. If the right categorical imperatives could be programmed into a robot, this would be one way of instilling ethics into artificial intelligence.
Asimov’s first law is another ethical principle that could be applied to artificial intelligence. It states that “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” Another alternative would be to adopt a utilitarian principle and simply do whatever results in the most good or the least harm. Thus, researchers in the field of machine ethics aim to simulate ethical reasoning that any artificial intelligence could use.
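The two rules above can be combined in a small sketch: Asimov’s first law acts as a hard filter, and a utilitarian score ranks whatever survives. The candidate actions and their numeric scores are illustrative assumptions, and the “through inaction” clause of the first law is not modeled.

```python
# Minimal sketch: filter out actions that harm a human (a stand-in for
# Asimov's first law), then pick the remaining action with the most good
# (a utilitarian tie-breaker). Action names and scores are hypothetical.

def choose_action(actions):
    """actions: list of (name, good, harm_to_humans); returns the chosen name."""
    # First law as a hard constraint: reject any action that injures a human.
    permitted = [a for a in actions if a[2] == 0]
    if not permitted:
        return None  # refuse: every available option harms a human
    # Utilitarian choice among permitted actions: maximize the good done.
    return max(permitted, key=lambda a: a[1])[0]

candidates = [
    ("shove bystander aside", 5, 1),  # effective, but injures someone
    ("warn bystander loudly", 4, 0),
    ("do nothing", 0, 0),
]
print(choose_action(candidates))  # → "warn bystander loudly"
```

Even this toy shows why the design choice matters: a pure utilitarian would pick the shove, while the first-law filter forbids it.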
Image Source: Daily Mail
To give robots a sense of ethical understanding, it is important to understand how algorithms can be made to behave ethically. The diagram below demonstrates how an ethical robot might operate in various ethical and non-ethical situations.
The idea of a robot refusing a command on ethical grounds raises questions about whether ethical robots are safe. How dangerous would a robot be if it carried out an unethical command and could not be persuaded otherwise? How convincing would a robotic display of moral distress be? Would such a display be sufficient to discourage someone from performing a task they would otherwise have performed?
To address these questions, researchers concluded that for social engagement to succeed, the human interacting with the robot must find the robot’s actions “believable.” Robot believability means the robot can display behavior that is both morally and socially appropriate to the human it interacts with. Researchers distinguish several senses of believability in robotic behavior; these four senses are most effective when the human interacting with the robot shares the same moral and ethical reasoning. Applying the four senses of believability allows robots to engage in more realistic ethical interactions.
The first sense of believability occurs when the human responds to the robot’s behavior much as they would respond to a more cognitively sophisticated individual. The second occurs when the robot’s behavior evokes an internal response in the human, similar to the response another human being would evoke: a crying child, for example, provokes a strong internal response in people, and a believable robot should be able to do the same. The third sense occurs when the human can recognize the behavioral display in the robot for what it is. The fourth is similar to the second, with a slight difference: the robot’s behavior produces a mental response in the human that again mirrors the response another human would produce.
Image Source: iStock Photo
The rise of artificial intelligence forces us to confront ethical dilemmas more than ever before. We need to ensure that the robots of the future share our moral compass, so that the actions they take benefit us. Robots and humans will face ethical dilemmas together, and for that, robots need not only the right set of morals but believable behavior as well.