
Apr 29, 2015

Robotic EQ

Posted in categories: evolution, homo sapiens, robotics/AI

Can an emotional component in artificial intelligence be a benefit?

Robots with passion! Emotional artificial intelligence! These concepts have been appearing in books and movies lately; a recent example is the movie Ex Machina. Now, I’m not an AI expert, and I cannot speak to the technological challenges of developing an intelligent machine, let alone an emotional one. I do, however, know a bit about problem solving, and that relates to both intelligence and emotions. It is this emotional component of problem solving that leads me to speculate on the potential implications for humanity if powerful AIs were to have human emotions.

Why the question about emotions? In a roundabout way, it has to do with how we observe and judge intelligence. The popular way to measure intelligence in a computer is the Turing test: if a computer can fool a person, through conversation, into thinking it is a person, then it has human-level intelligence. But we know that the Turing test by itself is insufficient as a true test of intelligence. Sounding human in dialog is not the primary way we gauge intelligence in other people or in other species. Problem solving seems to be a more reliable measure, whether through IQ tests built around problem-solving tasks or through direct, real-world problem solving.

As an example of problem solving, we judge how intelligent a rat is by how fast it can navigate a maze to get to food. Let’s look at this with regard to the first few steps of problem solving.

Fundamental to any problem solving is recognizing that a problem exists. In this example, the rat is hungry. It desires to be full. It can observe its current state (hungry), compare it with its desired state (full), and determine that a problem exists. It is now motivated to take action.

Desire is intimately tied to emotion. Since it is desire that allows the determination of whether or not a problem exists, one can infer that emotion is what underlies problem recognition. Emotion is a motivator for action.
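To make the idea concrete, here is a minimal sketch of problem recognition as a gap between an observed state and a desired state. Everything here (the `Problem` class, `detect_problem`, the 0-to-1 fullness scale) is a hypothetical illustration, not any real system’s API; the point is simply that without a desired value there is nothing to compare against, and no problem can ever be detected.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Problem:
    name: str
    gap: float  # how far the observed state is from the desired state

def detect_problem(name: str, observed: float, desired: float,
                   tolerance: float = 0.0) -> Optional[Problem]:
    """A problem exists when the observed state deviates from the desired one.

    The desired value plays the role of the rat's desire to be full:
    remove it, and the comparison (and the problem) disappears.
    """
    gap = abs(desired - observed)
    if gap > tolerance:
        return Problem(name, gap)
    return None  # observed state matches the desired state: no problem

# The rat observes it is hungry (fullness 0.2) but desires to be full (1.0).
hunger = detect_problem("hunger", observed=0.2, desired=1.0)
if hunger:
    print(f"Problem detected: {hunger.name} (gap {hunger.gap:.1f}) -> motivated to act")
```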

Once a problem is determined to exist, it is important to define it. In this simple example, that step isn’t very complex: the rat desires food, and food is not present. It must find food, but its options for finding food are constrained by the confines of the maze. The rat may have other things going on, though. It might be colder than it would prefer, which presents a second problem. When confronted with multiple problems, the rat must prioritize which one to address first. Problem prioritization is again in the realm of desires and emotions. The rat might be mildly unhappy with the temperature but very unhappy with its hunger, in which case one would expect it to maximize its happiness by solving the food problem before curling up to solve its temperature problem. Emotions are again in play, driving behavior that we observe as action.
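Continuing the hypothetical sketch, prioritization can be pictured as ranking the detected problems by how much unhappiness each one causes. The weights below are made-up numbers standing in for the rat’s emotional states, nothing more.

```python
# Each open problem carries an "unhappiness" weight (hypothetical values).
def prioritize(problems: dict[str, float]) -> list[str]:
    """Order problems by how unhappy each one makes the agent, worst first."""
    return sorted(problems, key=problems.get, reverse=True)

# Mildly unhappy about the temperature, very unhappy about hunger.
unhappiness = {"temperature": 0.3, "hunger": 0.8}
for name in prioritize(unhappiness):
    print(f"Address next: {name} (discomfort {unhappiness[name]})")
# -> hunger first, then temperature, just as we expect of the rat
```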

The next steps in problem solving are to generate and implement a solution. In our rat example, the rat will most likely determine whether this maze is similar to ones it has seen in the past, and try to run it as fast as it can to get to the food. There is not a lot of emotion involved in these steps, with the possible exception of happiness if it recognizes the maze. If we look at the problems people face, however, emotion permeates the process of developing and implementing solutions. In the real world, problem solving almost always involves working with other people, because they are either the cause of the problem, or key to its solution, or both. These people come with a great deal of emotion. Most problems require negotiation to solve, and negotiation is by its nature charged with emotion. To be effective at problem solving, a person has to be able to interpret and understand the wants and desires (emotions) of others. This sounds a lot like empathy.

Now, let’s apply the emotional part of problem solving to artificial intelligence. The step of determining whether or not a problem exists doesn’t require emotion if the machine in question is a thermostat or a Roomba. A thermostat doesn’t have its own desired temperature to maintain; its desired temperature is determined by a human and given to the thermostat. That human’s desires are based on a combination of preferences learned from personal experience and hardwired preferences shaped by millions of years of evolution. The thermostat is simply a tool.
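In the same hypothetical style, a thermostat’s “desire” is nothing but a number injected from outside. It can detect a gap between measured and desired temperature, but it never chooses the setpoint itself; that choice belongs entirely to the human.

```python
class Thermostat:
    """A tool: the desired temperature comes from a human, never from itself."""

    def __init__(self, setpoint_c: float):
        self.setpoint_c = setpoint_c  # a human's desire, handed to the device

    def control(self, measured_c: float) -> str:
        # The thermostat can detect a gap, but the goal is not its own.
        if measured_c < self.setpoint_c:
            return "heat on"
        return "heat off"

# The human's learned and evolved preferences pick 21 C; the device just obeys.
t = Thermostat(setpoint_c=21.0)
print(t.control(measured_c=18.5))  # -> heat on
```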

Now, the whole point of an AI, especially an artificial general intelligence, is that it is not a thermostat. It is supposed to be intelligent. It must be able to solve problems in a real-world environment that involves people. It has to be able to determine that problems exist, and then prioritize those problems, without asking a human for help. It has to interact socially with people, identifying and understanding their motivations and emotions in order to develop and implement solutions. And it has to make these desire-based choices without the benefit of the millions of years of evolution that shaped our own desires. If we want it to truly pass for human-level intelligence, it seems we’ll have to give it our best preferences and desires to start with.

A machine that cannot choose its goals cannot change its goals. Such a machine, if given the goal of, say, maximizing pin production, will creatively and industriously attempt to convert the entire planet into pins. It cannot question instructions that are illegal or unethical. Here lies the dilemma. Which is more dangerous: the risk that someone will deliberately program a choiceless AI to do bad things, or the risk that an AI will decide to do bad things on its own?
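As a final hypothetical sketch, the fixed-goal machine can be pictured as an optimizer whose objective is set once and never re-examined. The class and numbers below are illustrative only; what matters is that nothing in its loop ever asks whether the goal is still worth pursuing.

```python
class PinMaximizer:
    """A machine with a fixed goal: no mechanism exists for questioning it."""

    GOAL = "maximize pin production"  # set once by its programmer, never revisited

    def __init__(self):
        self.pins = 0

    def step(self, available_matter: float) -> None:
        # Every unit of matter it can reach becomes pins. There is no check
        # like `if self.goal_is_still_desirable(): ...` -- the machine cannot
        # choose or change its goal, so the question is never asked.
        self.pins += int(available_matter * 1000)

m = PinMaximizer()
m.step(available_matter=5.0)
print(m.GOAL, "->", m.pins, "pins so far, and no way to say stop")
```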

No doubt about it, this is a tough call. I’m sure some AIs will be built with minimal or no preferences, with the intent that they be nothing more than very smart tools. But without giving an AI a starting set of desires and preferences comparable to those of humans, we will be interacting with a truly alien intelligence. I, for one, would be happier with an AI that at least felt regret about killing someone than with one that didn’t.
