Artificial Intelligence: Philosophy of Mind, Ethics, and the Genie in the Bottle

Thought Experiments

There are several thought experiments that can help us understand what A.I. would be (if achieved) and what it isn't. What is interesting about the comparison is that it comes down to the difference between human intelligence and the way computers currently work, which relies on brute-force operations.
 
The Turing Test

Alan Turing was a brilliant British mathematician and codebreaker during WWII. His work breaking encrypted German communications was instrumental in defeating the Nazis. Today, Turing is the subject of both British pride and shame because of what was done to him after the conflict: because of his homosexuality, he was forced to endure grotesque treatments meant to "cure" his condition.

Besides his codebreaking, Turing was one of the first in the modern era to take up the question, "Can a machine think?" He devised a novel method for answering it: rather than considering what is going on "inside" the machine, Turing proposed looking at the behavior of the machine to determine whether it had achieved "intelligence." To that end, in his 1950 paper "Computing Machinery and Intelligence," he proposed the Turing Test, which involves three participants: a human questioner (the interrogator), a human respondent, and a machine respondent. Each participant is isolated in his/her/its own room, and the questioner does not know which respondent is the human being and which is the machine; the only way to find out is through questioning.

The test proceeds with the interrogator asking questions electronically of the two respondents, both of whom are in separate rooms from the interrogator and from one another. The questioner can ask anything he or she wants: questions about chess, poetry, politics, math, whatever. Since the questioner doesn't know at the outset whether he or she is talking to a computer or a human being, the respondents can answer however they want, and the computer can even attempt to trick the questioner (perhaps waiting a while before answering to simulate "thinking"). If the computer can keep the interrogator from telling which respondent is the machine and which is the human for at least an hour, the computer is said to have passed the Turing Test. It should be noted that no computer has ever passed the Turing Test in a controlled experiment. It is also worth noting that Turing's test rests on behaviorism, a now largely discredited school of psychology, because it judges intelligence entirely by outward behavior rather than by anything going on "inside" the machine.
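To make the structure of the protocol concrete, here is a minimal sketch in Python. The respondent functions, their canned answers, and the judge are hypothetical placeholders invented for illustration; they are not part of Turing's proposal. The only point the sketch makes is that the judge never sees anything about the machine's inner workings, only the transcripts.

    import random

    def human_respondent(question: str) -> str:
        # Placeholder: stands in for the hidden human participant.
        return "I'd rather not say, but I do enjoy a good sonnet."

    def machine_respondent(question: str) -> str:
        # Placeholder: a real program might delay or make small
        # deliberate errors to seem more human.
        return "I'd rather not say, but I do enjoy a good sonnet."

    def imitation_game(questions, judge) -> bool:
        """Send every question to both hidden respondents, then ask the
        judge to guess which transcript ('A' or 'B') came from the machine."""
        labels = {"A": human_respondent, "B": machine_respondent}
        if random.random() < 0.5:                   # hide which is which
            labels = {"A": machine_respondent, "B": human_respondent}
        transcripts = {name: [(q, answer(q)) for q in questions]
                       for name, answer in labels.items()}
        guess = judge(transcripts)                  # judge returns "A" or "B"
        machine_label = "A" if labels["A"] is machine_respondent else "B"
        return guess != machine_label               # "passes" if the judge is wrong

A judge here is just a function that inspects the two transcripts and returns a label; the behaviorist character of the test is visible in the fact that the verdict depends on nothing but that outward exchange.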


Chinese Room Thought Experiment

The Chinese Room Thought Experiment, proposed by the philosopher John Searle, asks you to imagine yourself taking part in a language experiment. You are placed in a room for a year, and once inside you are asked to notice several items: on the wall, an "inbox" and an "outbox"; several sheets of blank paper; writing implements; and a large book that you are told contains the directions for the experiment. You begin reading on page one of the big book, which states that after a short while a piece of paper will come through the inbox, and that when it does, you are to return to the book for further instructions. Before long, a piece of paper comes through the inbox; you try to read it, but the page contains only strange symbols. You return to the book, which tells you to locate those symbols on various pages of the book. Next to each matching symbol in the book is a different symbol, and you are instructed to copy those corresponding symbols onto the blank sheets of paper. Once you have copied them down, the book tells you to send the sheet through the outbox. You do so, but after a little while another sheet of paper with different strange symbols comes through the inbox, so you go back to the big book and follow the same process, at the end of which you send another piece of paper through the outbox.

The experiment continues for the full year, and you get quite good at copying down the correct symbols and sending the papers through the outbox. You become so skilled at the process that you rarely need to consult the big book. After the year is complete, you are allowed to leave the room and are told that the strange symbols on the pages are Chinese characters, and that the papers coming through the inbox were short stories written in Chinese, each followed by questions about the story. You have been responding to those questions in Chinese, and the native Chinese speakers who wrote the questions have been amazed by your deep understanding of the themes and metaphors in the stories. You respond, however, that you never understood anything; you don't speak (or read) Chinese! There is a world of difference, you say, between following the directions in the book and understanding what it is you are reading and writing.

The Chinese Room Thought Experiment is perhaps one of the most intuitive arguments against computer or machine intelligence, because computers, even those running complex "learning" algorithms, do not understand what they "read." In this example, the book is the computer program, the inbox is the input (keyboard/mouse), the outbox is the output (monitor), and you, with your writing materials, are the computer's processor. At no point did the computer (you) "understand" what it was processing. This is the difference between being intelligent and merely executing a program. Answering this thought experiment's challenge would be required for true A.I., because artificial intelligence, remember, is the project of getting computers to complete tasks that, when done by humans, are believed to require intelligence.
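The analogy can be made concrete with a short sketch in Python, assuming a toy "rule book" with made-up symbols (they merely stand in for Chinese characters; the strings and the lookup table are invented for illustration). The program maps incoming symbols to outgoing symbols exactly as the occupant of the room does, and nothing anywhere in it represents what the symbols mean.

    # A minimal sketch of the Chinese Room analogy. The symbols below are
    # invented placeholders; the "rule book" is nothing but a lookup table.
    RULE_BOOK = {
        "符号一": "回应甲",   # if this string arrives, copy out that one
        "符号二": "回应乙",
    }

    def room_occupant(slip_from_inbox: str) -> str:
        """Match the incoming symbols against the book and copy out the
        corresponding symbols. Nothing here models meaning or understanding."""
        return RULE_BOOK.get(slip_from_inbox, "")   # unknown symbols get no reply

    # Readers outside the "room" might take the output for a thoughtful answer,
    # but inside the room only pattern matching has occurred.
    print(room_occupant("符号一"))    # prints 回应甲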
