Today’s class continued the challenge of figuring out what intelligence is and whether machines can be intelligent. Turing had an interesting view: if he can’t tell whether what’s on the other side of the wall is a human or a machine, he has to grant that it’s intelligent. Searle also had an interesting view: to be intelligent, you have to have intentionality. But what exactly does that mean? Doesn’t the machine have the intentionality of carrying out the functions requested of it? We think of a machine as something we program to do certain things, but can’t humans be looked at the same way? We put information into a machine and get something out of it, depending on what we have requested and what it has been programmed to do. Humans put information into their brains and generate output too, depending on what they want to do with it. And I guess therein lies at least one difference: humans can choose what to do with that information. They can work with it in many different ways. But machines could too, if they were given the capability. I feel like I’m going round and round with this. We’re trying so hard to discover what intelligence really is and how the human brain works. My husband posed an interesting question: if we somehow created a machine that could figure out how the human brain works, would it tell us? Or would it have so much intelligence and intentionality of its own that it would control the information? (Sounds like a movie in the making.)
Okay, so back to intentionality. Is that a sign of intelligence? We normally wouldn’t say a tree is intelligent, but doesn’t it have the intentionality to survive? What about self-preservation? Would that be part of our definition of intelligence? All living things have some kind of built-in instinct for self-preservation; can we count that as a form of intelligence? And what if machines developed some kind of self-preservation mode, like HAL in 2001: A Space Odyssey, as Carr points out? If we ever let technology get ahead of us, we could end up in a machine-controlled world. I know this is not original thinking, but our class discussions lead the mind in many directions, and this was one of them for me. To take it further, we now have the technology to create computer-generated characters that seem incredibly lifelike. I believe some of these are registered with the Screen Actors Guild, which seems bizarre. But as we make these characters more and more real, at what point can we consider them intelligent entities or beings? If they are considered real enough to be registered with SAG, we may already be treating them as something like beings. Right now real actors provide their voices, but will we ever computer-generate unique voices that become part of a character? We are certain to become attached to these characters; we’re already attached to cartoon characters like Bugs Bunny. It’s along the lines of how we get attached to programs like ELIZA and WATSON. If somebody pulled WATSON’s plug, I know I would feel bad, as if a living thing had died.
There was an excellent Twilight Zone episode (“The Lonely”) in which a man convicted of a crime was sentenced to live out his days on a remote planet far from Earth. A sympathetic captain of a supply ship brought him a robot in the form of a woman. At first the man, Corry, rejected the female robot, Alicia, but over the months he began to treasure his companion and fell in love with her. She had been made so human-like, with all the feelings, emotions, and senses we experience, that there was no distinction between her and a real woman. Then Corry was pardoned and was to be taken back to Earth, but there was no room on the ship for Alicia. The captain could not persuade Corry to leave her, so he ended up shooting her in the face to remind Corry that she was a robot. This came as a shock to the viewer, because we all grew attached to her just as Corry did. And since she actually experienced human emotion, can we say she wasn’t real? The distinctions between humans and machines are certain to become fuzzier and fuzzier as we develop more realistic computers, computer-generated characters, and robots.