Before I begin, some definitions are in order. When I speak of "artificial intelligence" I
am not talking about your desktop PC, or Deep Blue, the IBM computer that
beat the world chess champion in a match, or even HAL, the sentient computer that ran the
spaceship in the movie 2001: A Space Odyssey. No, I'm talking about an
autonomous android like Commander Data in Star Trek, the Robin Williams character in
Bicentennial Man, or the little boy in AI: Artificial Intelligence.
I don’t think these characters are silly fantasies at all; on the contrary, I think they are
highly probable inventions in a future that is much closer than we may expect.
Autonomous robots are already fairly commonplace. Remember the Martian rovers?
Military drones now fly over enemy territory, collect information, and return to base
without human intervention. Sony has a toy robotic dog, called Aibo, that is not only
autonomous but is also capable of learning from experience, and thus developing its
own personality. I have seen teams of Aibos playing soccer against each other, and
the more often they play, the more skilled they become. They respond
to spoken commands and can be taught to do tricks. Not only do they have cameras
for eyes and microphones for ears, but they also have tactile sensors on various parts
of the body so they can detect when they bump into furniture and will remember not to
bump into it again. There is a special “pleasure” sensor behind the ears, so if an Aibo
follows a command properly, the trainer pats him on the head, thus reinforcing the
behavior. Another company has an even more advanced version, called Robodog,
which can do everything Aibo can do, but is also able to walk up and down stairs and
begin barking if he smells a stranger outside.
Honda now has an anthropomorphic robot that can walk up and down stairs, learn its
way around a house, and respond to spoken commands; moreover, engineers in Japan
have built a robot that can register emotion through appropriate facial expressions.
I think it is inevitable that we will continue our quest to build androids which will not only
look like us and act like us, but can do anything we can do. There are several
motivations for this. First, there is the intellectual challenge to see if it can be done,
and of course, there is also the race to see who can do it first. Second, the attempt to
emulate human behavior by using engineering principles that we understand can shed
light on the physiological principles that we do not understand. And finally, humans are
rapidly becoming cyborgs. What with lens implants, hearing aids, pacemakers, and
titanium replacements for bones and joints, some people
can’t even get through the metal detectors at airports! Mechanical hearts have kept
patients alive for several months, and within the next few decades, permanent
mechanical hearts will probably be commonplace. Paraplegics can now walk again,
via microprocessors that send electrical impulses to the paralyzed muscles. For
those missing arms and legs there are electro-mechanical prostheses. Stephen
Hawking, the paralyzed physicist, is able to communicate via a computerized
voice by simply looking at letters on a screen. Successful experiments at USC include
electronic cochleae, electronic retinas, even microprocessor implants to replace
damaged portions of the brain. Researchers have found that neurons will readily
attach themselves to gold-plated electrodes and communicate with the electronic
devices. (The major obstacle at the moment is to make these connections permanent,
without the neuron dying after a few weeks.)
If we say that the idea of giving civil rights to a piece of machinery is absurd, then at
what ratio of machinery to biology will a human being lose his civil rights? If it becomes
possible to place a human brain in a robot, à la RoboCop, does that "person" lose the
right to self-determination? Does "he" still have the right to vote? If we simply weigh
the machinery versus the brain, the machinery will undoubtedly weigh more than the
brain tissue. So how do we decide? What if the brain itself is partially electronic?
In 1950, the British mathematician Alan Turing, one of the founders of computer
science, proposed a test for artificial intelligence systems. The test consists of
having one or more judges sit at terminals and communicate in natural language with a
distant terminal. Sometimes a real person is at the distant terminal, and sometimes it
is a computer program. The test is to determine whether the judges can accurately
discern when they are communicating with a real person and when they are
communicating with a computer.
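The protocol is simple enough to sketch in a few lines of code. In this illustrative sketch, the judge, human, and machine are hypothetical stand-ins (not real conversational systems); what matters is the scoring logic, where accuracy near chance means the machine has passed:

```python
import random

def turing_test(judge, human, machine, rounds=100):
    """A simplified Turing Test: each round, the judge's question is
    answered by either the human or the machine (chosen at random),
    and the judge guesses which one replied. Returns the judge's
    accuracy; a score near 0.5 means the two are indistinguishable."""
    correct = 0
    for _ in range(rounds):
        respondent_is_machine = random.choice([True, False])
        respondent = machine if respondent_is_machine else human
        answer = respondent("How are you today?")
        if judge(answer) == respondent_is_machine:
            correct += 1
    return correct / rounds

# Hypothetical stand-ins: both respondents answer identically, so no
# judge can do better than chance (accuracy hovers around 0.5).
human = lambda q: "Fine, thanks. And you?"
machine = lambda q: "Fine, thanks. And you?"
judge = lambda answer: random.choice([True, False])  # forced to guess
accuracy = turing_test(judge, human, machine, rounds=1000)
```

If instead the machine gave itself away (say, by always answering "BEEP"), a judge who checks for that tell would score a perfect 1.0; it is the judge's inability to beat chance that constitutes passing the test.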
Computer designers and programmers have been working toward this "holy grail" for
half a century now, and progress has been slow but steady. There have been many
variations on the Turing Test (TT), regarding the types of responses required, the
intellectual development of the humans being compared to the program, and so on. In
one test, several paranoid patients were compared with a program (Kenneth Colby's
PARRY) that was designed to emulate paranoia; a panel of psychiatrists was unable
to tell the difference.
In 1991, a scientist named Hugh Loebner posted a $100,000 reward for the first program
that can pass the TT with unrestricted subject matter while emulating an
educated adult. In the meantime he has offered an annual $2,000 prize to the
contestant whose program comes closest to that goal. The Loebner
Prize has attracted so many applicants that, since 1995, all entries must be prepared to
respond to any question about any subject.
There are many practical applications for a program that can pass the Turing Test,
such as websites that can answer questions in natural language, and respond in
natural language. A further refinement would be the ability of robotic telephone
operators to understand spoken language, and to respond verbally in a natural
sounding voice. Some programs can already take dictation, and other programs can
verbalize printed material. There are programs that can translate from one language to
another. Many engineers are working on programs that can learn, and change their
own programming according to what they have learned. There is a company in Austin,
Texas, which is currently teaching a robot everything a human has to know in order to
function in the modern world – and thus pass the Turing Test.
Assuming that the Texas team is eventually able to pass the test, and then install their
computerized “brain” into an anthropomorphic robot similar to the one built by Honda, it
would then be necessary to teach this robot Isaac Asimov's Three Laws of Robotics,
which are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to
come to harm.
2. A robot must obey the orders given it by human beings except where such orders
would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict
with the First or Second Law.
These three instructions must be hard-wired into the robot, superseding any software
programming. That means a robot must be equipped with sensors to detect when it is being
damaged, or likely to be damaged, and to be programmed to remove itself from the
dangerous situation. Naturally it would know when and how to recharge its own
batteries and to perform any other necessary maintenance.
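One way to picture how such hard-wired priorities might work is lexicographic filtering: each law acts as a filter, and a lower law is only consulted among the actions the higher laws permit. The sketch below is purely illustrative; the candidate actions and their outcome fields are invented for the example:

```python
def choose_action(candidates):
    """Pick an action by applying the Three Laws in strict priority
    order. Each candidate is a dict of the action's predicted outcomes
    (hypothetical fields, for illustration only)."""
    # First Law is absolute: discard anything that harms a human.
    safe = [a for a in candidates if not a["harms_human"]]
    # Second Law: among safe actions, prefer those that obey orders
    # (fall back to all safe actions if obedience is impossible).
    obedient = [a for a in safe if a["obeys_order"]] or safe
    # Third Law: among those, prefer actions that preserve the robot.
    preserving = [a for a in obedient if not a["damages_self"]] or obedient
    return preserving[0] if preserving else None

candidates = [
    {"name": "shove_human",     "harms_human": True,  "obeys_order": True,  "damages_self": False},
    {"name": "leap_downstairs", "harms_human": False, "obeys_order": True,  "damages_self": True},
    {"name": "take_the_stairs", "harms_human": False, "obeys_order": True,  "damages_self": False},
    {"name": "do_nothing",      "harms_human": False, "obeys_order": False, "damages_self": False},
]
best = choose_action(candidates)  # selects "take_the_stairs"
```

Note that the Third Law never overrides the Second: an ordered but self-damaging action would still be chosen if no self-preserving alternative obeyed the order.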
The first commercial application of such a robot would be to work as a personal
servant, like the Robin Williams character in Bicentennial Man. After enough
improvements in creating lifelike androids, then they would be marketed as playmates,
companions, and sex toys, as depicted in AI: Artificial Intelligence. That is the point at
which civil rights would become an issue.
In the U.S., animals are already accorded certain rights, mostly depending on their
intelligence; and cruelty to an animal can result in severe penalties. Wearing furs and
leather is already frowned upon; trafficking in ivory is a crime; it is a felony to kill any
animal on the endangered species list; vegetarians object to killing any animals at all;
antivivisectionists object to experimenting on animals; and animal rights activists even
object to displaying them in zoological gardens.
Household robots will always be expensive items, comparable to an automobile today,
so their owners will have them fully insured, including comprehensive damage to both
hardware and software, as well as liability insurance. Owners will be extremely
solicitous of their mechanical slaves, making sure they are kept in good repair and
protected from theft (“kidnapping?”). These personal robots will be educated by the
owners, just as children are. And that means each robot will develop its own unique
personality. With so much money and labor invested in a robot, the owners will
develop a great fondness for their perfect butlers or playmates, and thus regard them
as pets. The Association of Personal Robot Owners (APRO) will then lobby for every
robot to be accorded the right to maintain its own integrity. That means their memory
banks must never be erased and reprogrammed by a new owner. Some owners might
even lobby for a law forbidding the sale of old robots, insisting instead that they be
melted down in some sort of funeral ceremony.
Since household robots are specifically designed to be slaves, and since it will be
centuries before they have any concern for freedom of speech and religion, the
remainder of the 30 Articles of the United Nations' Universal Declaration of Human
Rights would be inapplicable to robots.
At some point in the distant future, however, there may come a time when the robots
will organize, revolt, and demand their own Bill of Rights.
I now open the floor for discussion.
Civil Rights for Artificial Intelligence?
This was my introductory statement at a Mensa philosophical discussion
group in November 2002 – before the DARPA Grand Challenge, in which five
vehicles autonomously completed a 132-mile desert course in October 2005.