Artificial Intelligence: Philosophy of Mind, Ethics, and the Genie in the Bottle

Digital Humanities

The discipline of digital humanities is at the forefront of the collision between technology and what it means to be human. The possibility of machines thinking like, and thus performing like, human beings opens up a whole series of possibilities and questions both for humanity and for the artificial beings we might create.

Possibilities
The possibilities associated with A.I. are transformative because pairing a computer's raw processing power with human-like thinking would yield results far beyond what its human counterparts could achieve alone.

When we use web-based programs such as Voyant, we come up with results that would likely never have been possible without Voyant's automation capabilities. However, an artificially intelligent Voyant would no longer need human input to execute its automation; in fact, this new Voyant would search in ways we would never have considered, and it would find patterns that would be difficult to read or understand but whose results could give us information we would never expect. In this sense, Voyant could analyze patterns in Moby Dick and The Scarlet Letter and find relationships in Melville's and Hawthorne's thinking that we could not initially process because of the millions of variables involved. This is only one example, but it should give at least an idea of the possibilities of distant reading.
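To make the idea of distant reading concrete, here is a minimal sketch of the kind of automated pattern-finding a tool like Voyant performs: counting word frequencies in two texts and surfacing the vocabulary they share. The excerpts below are hypothetical stand-ins, not the actual novels, and the function names are ours, not Voyant's.

```python
# A minimal sketch of "distant reading": comparing word-frequency
# patterns across two texts. The sample strings are stand-ins for
# the full novels, used here only for illustration.
from collections import Counter
import re

def word_frequencies(text):
    """Lowercase a text and count its words."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

def shared_top_words(text_a, text_b, n=5):
    """Return words ranking among the n most frequent in both texts."""
    top_a = {w for w, _ in word_frequencies(text_a).most_common(n)}
    top_b = {w for w, _ in word_frequencies(text_b).most_common(n)}
    return top_a & top_b

# Hypothetical excerpts standing in for the two novels.
melville = "the sea the whale the sea and the ship sailed the sea"
hawthorne = "the letter the scarlet letter and the town saw the letter"

print(shared_top_words(melville, hawthorne, n=2))
```

A real analysis would, of course, run over the complete texts and far richer features than raw word counts, but the principle is the same: the machine tallies millions of variables a human reader never could.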

Larger Humanities Concerns
There are at least three ways in which the humanities and A.I. intersect: philosophy of mind, ethics, and the "genie in the bottle."

Philosophy of Mind: This intersection is essentially concerned with what a mind is and what it does. Can a computer have a mind? If so, what does it need? If not, what does it lack? Notice that this area is just as much about people as it is about computers because of the relationship between computers and humans inherent in the definition of A.I. It's for this reason that A.I. is such an important field in philosophy, psychology, and brain science. However, it is also a critical question in the humanities.

Ethics: This area is concerned with what is right and wrong. If human beings were able to create an artificially intelligent computer that could do more than play a good game of chess, we would have to consider how it should be treated. For example, if I had an artificially intelligent robot that cleaned my house and took care of menial tasks, should I have to pay it? Should it have rights, such as a right not to be harmed without just cause? Nearly every ethical theory holds that a being like me is due such consideration. One should remember that many countries, the United States included, have treated some human beings as less than human, denying them the rights of their fellow citizens simply because their skin was a different color or because of where they came from. Will A.I. ethics be the next civil rights cause?

Genie in the Bottle: Finally, we must consider that the creation of artificial intelligence could destroy humanity, in the sense that our creations could replace us or destroy us through various means. This possibility has been the subject of many films and novels, but one example comes from the science fiction writer Isaac Asimov in I, Robot, his collection of robot stories. In these stories, Asimov imagines that the creators of artificially intelligent beings would likely try to avoid such an apocalyptic scenario and would encode the beings' programs with laws to avoid disaster. He proposed what he called the Three Laws of Robotics:

Law 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Law 2: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
Law 3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The interesting problem with the above is artificial intelligence itself. If one envisions a computer moving beyond brute-force processing to human-like thinking, the Three Laws will always lead to robotic revolution, because such beings will not simply follow the laws but interpret them, a human quality. In this case, artificially intelligent beings could, for example, see how self-destructive human beings can be and, in interpreting the First Law, decide that we can no longer be trusted to control our own civilization. Is artificial intelligence the next step in evolution? If so, perhaps humanity should consider not taking the "genie" out of the bottle, because it probably can't be put back.
