This is a paper I wrote back when I was studying cognitive science as part of a course in artificial intelligence. Unlike my classmates who wrote on algorithms and research in "fuzzy logic", I wrote about the philosophical implications of AI. It's almost ten years old but I believe many of the ideas presented regarding cyborgs and the definition of intelligence still hold water and are as relevant as ever.
This paper covers and discusses a few of the social and ethical implications of artificial intelligence from a humanist point of view, considering AI's impact on humanity and on our definition of ourselves as human beings.
Artificial intelligence, or the concept of non-human minds, has always been a source of dreams of the future for me. Not the actual making of it, but the ramifications and implications of the creation and possible future existence of a mind that owes its existence to us.
Finding the focus of this paper took time and effort: this topic, even restricted to its social and philosophical aspects, is multi-faceted and touches most aspects of humanity and our view of the world. Many scenarios have been popularized through media and mainstream entertainment; the first that comes to mind is the concept of AI as a threat to mankind, a vision painted in movies such as Terminator and The Matrix. I chose a different approach: how will we look upon ourselves and our creation once we have created another sentient being?
The question can be looked at from several perspectives, and I have chosen to consider it in light of four: religion/metaphysics, cognition and social impact, judicial impact and, finally, transhumanism.
Religious/Metaphysical implications of man-made sentience
In the biblical tradition, God made man in his image. Man, or rather humankind (to use the gender-neutral term), is therefore by many Christians and other monotheists considered divine. According to them, because of our origin, we are and will always be above and apart from animals and other forms of life.
Many theologians and Christians argue that the creation of artificial sentience, AI with awareness, would in no way alter or redefine this idea. A human being is Imago Dei; a robot or artificial mind would not be. Quoting Peter Garrett, director of research and education at the pro-life charity Life, in an interview with ZDNet (January 2001):
"I think even when we grant the label person to this new entity, it would still not be a human being... It is still a man made being and not made in the image of God... and while that may not be important on one level -- I think the secular world and the secular legal system regard it as being very unimportant -- I feel that the robot would still be a product of humanity, whereas man is believed to be a creation of God, made in the image and likeness of God."
A distinction is being made here that requires clarification: the distinction between a being and a person. A being is alive; a person has a mind and is aware and sentient. Whether a fetus is a person or not is beside the point here, but it is easy to suspect that Mr Garrett's logic follows rules unknown to the common man or researcher.
Further, these artificial minds, or persons, would be the creation of mankind. This has a number of implications. First and foremost, we would then have played the role of God in creating life and awareness. Secondly, when AI eventually outsmarts us (which we can reasonably assume will happen), that could arguably prove us to be greater than the monotheist God. But then, a believer could counter, who or what created those who created them?
Even if this disproves nothing in regard to religion, and we will find no answers regarding the existence of god(s), it would certainly be an indication that the creation of life is not just a privilege of the divine, but also of mere mortal beings.
Artificial minds would, without doubt, have been made by man. The majority of mankind would also recognize them as being persons with personalities and personas. The rights of a person will be discussed later, but needless to say, tradition and prejudices will be a problem when the first non-human persons claim their rights.
Will man-made persons for all eternity be considered second-class citizens? Garrett says:
"I think what we would have created would be yes a person, but not a human being and not Imago Dei. I still believe the category of human being would be very different, and one reason is that as human beings we have to move forward towards death and we have to learn how to face that. That is part of being human."
This is a point of view that has been addressed by science-fiction authors before, for example by Isaac Asimov in The Bicentennial Man, a story about an android that attempts to become human, eventually by dying. The issues of equality will be discussed further in later chapters. I will also later present another perspective, the transhumanist one, a view that does not separate but merges man and technology for the improvement and benefit of mankind.
A Cognitive and Social Perspective
Apart from the metaphysical questions artificial minds would give rise to, there are a number of issues that have been debated in the cognitive sciences. I will cover two of them: artificial intelligence in the perspective of situated cognition, and one of the weaknesses of the Turing test.
The first research into AI was done on the assumption of the idea of the brain as an isolated cognitive system. The belief was that just like mathematics can be used to model events in reality and calculate forces and velocities, the human mind could be modeled inside a machine.
This assumption is, as far as we know, correct to some extent. In an article in the summer 1986 edition of The AI Magazine, the publication of the American Association of Artificial Intelligence, Michael R. LaChat of The Methodist Theological School in Ohio makes the point that modeling the brain in hardware is not an issue in itself; the far more difficult problem is duplicating in software the symbolic level that must be "skimmed off" it. Quoting LaChat:
A personal intelligence must have personality, and this seems on the face of it to be an almost impossible problem for AI. Personalities are formed through time and through lived experience, and the personal qua humanly personal must certainly include the emotional. Indeed, a phenomenological analysis of human experience, such as that of the early Heidegger, indicates that persons might have to experience the emotion of dread in the face of finitude (death) in order to have a grasp of “isness,” in order to be fully conscious (Heidegger, 1962; Dreyfus, 1972).
This idea is to some extent parallel to what Garrett suggests: to grasp the very essence of existence, one must be aware of the termination of existence. An old but nonetheless applicable idea is that only through death is life truly celebrated.
LaChat mentions the importance of a phenomenological perspective. The core of phenomenology deals with this opposition and mutual dependence of subject and object. Subject and object are mutually dependent, as an object's appearance depends on a subject to which it can appear. Consider the old problem: "If a tree falls in the forest and there is nobody around, does it make a sound?"
For that reason, an isolated mind would not be a mind, not according to our definition of the term. We are defined by who we are and what we do in our interaction with our environment. Communication is therefore a requirement for intelligence as we choose to see it. This leads us to question two, the Turing test.
The Turing test, Alan Turing's famous test for verifying artificial intelligence, basically consists of a typed conversation on a computer screen in which a human must determine whether his or her partner is human or machine. It has a serious weakness in that it depends entirely on communication: intelligence, in order to be recognized as such, is dependent on its ability to communicate. LaChat further notes that some philosophers and theologians have argued cogently that an inability to respond to stimuli means the subject, although morphologically human, is no longer a person.
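The structure of the test can be sketched in a few lines of code. This is an illustrative sketch only: the responder and the judge below are hypothetical toy stand-ins, and the naive judge deliberately exhibits the weakness just described, namely that its verdict rests entirely on the ability to communicate.

```python
# A toy sketch of the Turing test's structure. The judge sees only typed
# replies, never the responder itself; both functions are hypothetical
# stand-ins, not real systems.

def scripted_machine(prompt: str) -> str:
    # A trivial "machine" that returns canned replies.
    replies = {
        "How are you?": "I am well, thank you.",
        "What is 2 + 2?": "Four.",
    }
    return replies.get(prompt, "Could you rephrase that?")

def naive_judge(transcript) -> str:
    # A deliberately naive judge: anything that manages to answer every
    # question is labeled "human". The verdict depends entirely on
    # communication, which is exactly the weakness discussed above.
    return "human" if all(reply for _, reply in transcript) else "machine"

def run_test(judge, responder, questions) -> str:
    # Exchange typed messages, then ask the judge for a verdict.
    transcript = [(q, responder(q)) for q in questions]
    return judge(transcript)

verdict = run_test(naive_judge, scripted_machine,
                   ["How are you?", "What is 2 + 2?"])
print(verdict)  # prints "human"
```

Note that a responder with communicative disabilities, however sentient, would fail this judge for exactly the reasons discussed below.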
This leads to an interesting thought experiment: say we use the Turing test to let subject A determine whether subject B is human, subject B being a person with communicative disabilities. Should subject A fail to identify B as human, what does that tell us?
Naturally, an AI must be of use to us; that is the sole reason for its creation. An AI that cannot communicate with the world has no purpose for us. People with severe communicative disabilities are considered human because we know they are: they are made of the same flesh and blood as people lucky enough to be born without disabilities. Yet it is not too far-fetched to assume that it is also because we presume them to be human, or because there are aspects of being human that cannot be measured quantitatively.
Human identity in the age of modern communication technology
Human identity is going through a rapid change as it is no longer as important what we are as who we are when we communicate online or over the phone. We have many means of communication that only allow a tiny fraction of the human experience to pass through yet these channels seem much wider than they actually are.
In her book "The War of Desire and Technology at the Close of the Mechanical Age" (1998), Allucquère Rosanne Stone makes a point about how she fell in love with a prosthesis, the third time someone else's, as she chooses to put it. She tells of attending a speech given by Stephen Hawking. Those of you familiar with Hawking know that he cannot speak and uses an artificial voice generator and a keyboard to communicate. Instead of sitting outside listening to the speech through the speaker system, she decided to go inside the auditorium to see Hawking in his wheelchair, but realized that it made virtually no difference. Hawking depends on technology to reach her; without it, his mind would be the tree falling in the forest with no one to hear.
It makes her ponder where our body ends and our prostheses begin, whether there is a border, and whether it matters at all. She tells the story of a collective of women who make their living selling hetero phone sex and discusses how they can fit so much into a single phone line. The book covers more than just that, but social behavior in cyberspace is the common theme, and it will become more and more important as we grow more dependent on and integrated with our technology, and eventually realize that technology is not so different from what we are from birth.
The same view will apply to machine sentience, artificial minds or AI, whatever we choose to call them. They might not be so artificial at all, neither in their true essence nor in the means by which they communicate with other beings such as ourselves. By the time this technology emerges, we will have reached different planes of human interaction, less dependent on our physical selves and more dependent on our prostheses or interfaces. As Ray Kurzweil rather optimistically argues in an article (2003), we will have neural interfaces in the not-too-distant future:
Virtual reality and virtual humans will become a profoundly transforming technology by 2030. By then, nanobots (robots the size of human blood cells or smaller, built with key features at the multi-nanometer—billionth of a meter—scale) will provide fully immersive, totally convincing virtual reality in the following way. The nanobots take up positions in close physical proximity to every interneuronal connection coming from all of our senses (e.g., eyes, ears, skin)…
Now, it is wise to take such optimism with a grain of salt, but there are people designing and planning this kind of technology. It is likely that the android we picture in our minds will never even exist: the AIs we meet will either appear as if they were of human biological origin, or they will be part of ourselves, a concept I will discuss further in later chapters.
The creation of a mind that follows no laws other than the logical will have consequences on the moral scale as well. A person that acts in a fully rational and logical manner will act in a way that is expected, and its behavior can therefore be mapped and predetermined through configuration and programming. We as human beings still reserve the right to judge and punish what we consider immoral acts, even though we have yet to determine whether free will exists. Punishment in the name of the law and in the name of justice is called retributive justice.
This leads us to asking the following question:
Is this artificial person subject to the same laws as us assuming it does not have “free will”?
Retributive justice is carried out in order to enforce the moral code of society and to protect society from people guilty of moral misconduct. In order to justify the fairness of the first part of the argument, we must assume that free will exists and that the individual could have made a choice in the first place but chose to act wrongly. This is interesting in regard to AI, as a machine sentience would, by the design of modern technology and computers, have to follow the laws of logic, assuming we do not succeed in achieving free will in a system based on laws of logic. A machine sentience could also correctly be called a tabula rasa, a blank slate, as we would know what information has been encoded and what routines control its cognitive functions and its behavior. If we bring these facts into the old discourse of nature and nurture, it is easy to see that both factors would in fact be controllable: true determinism, in other words.
The traditional view is that free will is required for a person to make moral choices; morality lies in the ability to do both right and wrong, good and bad. Free will is not empirically explainable, and proponents of its existence argue either that it is a result of properties of nature we cannot fully understand (quantum physics) or explain by means of logic or other laws, or even that it rests on a magic substrate, something that does not belong to this world, the reality we live in: a form of Cartesian dualism, in other words. Still, there is no evidence pointing unambiguously in either direction.
Now, we need to make it clear that this argument hinges on the assumption that future artificial sentience will be governed by the same laws of logic as the computers of today. This is a weak assumption, as we have made very little progress so far in understanding consciousness and sentience and how to replicate them on our current hardware, and many AI researchers argue that we must take a wholly different approach in order to achieve substantial results.
Human rights, only for humans?
Another issue that has been brought up, and which is a common theme in science fiction, is human rights and how they apply to AI and non-human sentience. In Star Trek: Voyager, the Emergency Medical Hologram, commonly referred to as "the EMH", argues for his right to be considered a citizen and enjoy the same rights as human beings do. He is, after all, a person and can, as far as we know within the realm of the show, make his own decisions, and he shows a very human personality, with a big ego and an equally big love for Puccini.
The core of the issue is what we consider "sacred" about human life: what is it about human beings that makes our lives, and our right and claim to life, more important than those of animals? Animals do not claim life; they simply live it. Very few animals know death; it might as well not exist to them. From a phenomenological point of view, life to them has no real comparison, it just is, as an awareness of death or non-existence must exist for life to be something palpable.
Further, a human death is grieved not only because of the attachment and bonds between human beings (they can be equally strong between animals and their owners, though to what extent that bond is mutually shared is open for debate), but also because human beings have dreams, hopes and aspirations. A human death is a potential lost.
Human beings also seem to have the innate ability of compassion: we can feel for other living beings, and we assign them human attributes. Children visiting a national park learn about the hard-working ant that has to bring thousands of pine needles to the ant hill, when in reality the ant is about as smart as a computer running a program and has no idea that it is "hard-working".
The criteria for being human seem to concern the empathic, compassionate and cognitive abilities we possess rather than what species we belong to. Being human means having a personality. Following this train of thought, rationally, a being that showed these properties should be given the same rights and recognition as a human under law. But, as has been alluded to, the free-will issue needs to be resolved, and whether humanity is defined by free will is yet to be determined.
Transhumanism, when man becomes machine
The transhumanist movement aims to embrace new technology that would bring improvements and benefits to humankind, even to the extent of replacing human features innate to us by nature. A transhumanist perspective on AI brings a different but promising outlook.
The traditional perspective has been to expect AI to arise parallel to humanity, to exist side by side with us. But why would we create this technology separate from us? To use it for our own purposes? Do we assume it will merely be a cheap source of labor? That is a very conservative and narrow way of looking at the potential of AI.
As was mentioned before regarding situated cognition, our environment is as integral to our cognitive processes as our brain is. There is thus no logical reason we should not embrace this new technology as a part of ourselves in order to enhance our cognitive abilities, just as we have ever since the day we produced our first artifact, wrote the first book or made an abacus.
We would in fact be cyborgs.
Some argue we already are cyborgs, from the very day we created and started using an artifact.
Allucquère Rosanne Stone (1998) gives several examples of the extent to which we rely on artifacts and how modern technology further blurs the line between where the human part ends and the machine part starts. Stone's ideas are rooted in the "Cyborg Manifesto" (Haraway 1991) by her own university tutor and guide, Donna Haraway. The manifesto introduces the cyborg as a being that trashes the big oppositions between nature and culture, and tells how we can redefine the roles we are given by gender, ethnicity or citizenship through our extensive cohabitation with, and through, technology that has existed for over a hundred years. But it is not just about the cyborg as a human-technological construct; we are also constructs of our culture. The changes brought on in the 1950s, the new research fields of cybernetics and information theory, and the improvements in technology and computability further push the extent of our culture and our definition of ourselves as humans and cyborgs.
Haraway, not only the author of the manifesto but also a feminist and socialist who lived through the political movements of the 1960s, believes that the key to being a cyborg is not about yourself but about the networks we are part of, and how they affect and play a role in our lives, defining who we are. In Haraway's own words, in an article in Wired Magazine (Kunzru 1997):
"Human beings are always already immersed in the world, in producing what it means to be human in relationships with each other and with objects”. “If you start talking to people about how they cook their dinner or what kind of language they use to describe trouble in a marriage, you're very likely to get notions of tape loops, communication breakdown, noise and signal - amazing stuff."
Our immersion in technology, the networks that make up our own bodies, and the cultural embodiment that is society are not new; we have embraced technology for a very long time, and we are biological technology ourselves. One day it might be proven that our brains are just very complex computers implemented in neural wetware and sticky goo: another nail in the coffin of dualism, and a sign that we are far more mechanical than we would like to believe, further blurring the line between human and machine.
I am aware that many of the concepts and ideas presented in this paper might seem like fiction to most people, and they still are, but probably not for many more decades. Having had the privilege of approaching this field from two directions, cognitive science and media technology, I have also been able to gain a broader understanding of the general themes. What is quintessential to the debate about AI and its impact on the future are issues that are as much related to us humans as they are to the machines in which we work to invoke consciousness.
The creation of truly sentient machines or artificial minds will force us to face and possibly revise a number of principles and ideas we have regarding what life and sentience is. Many of these issues will concern fundamental principles of democracy and the judicial tradition as well as our own culture. The issues will give rise to questions that will force us to decide whether the children of our creation qualify for the same rights and freedoms we have given ourselves. Whether we look beyond our own prejudices as inherited through religion and tradition to make something entirely new or just add and amend remains to be seen.
There is also the assumption that AI would be a form of intelligence similar to ours. I think this is the only logical assumption as it is possibly the only form of AI we can recognize as being intelligent. It is also the only kind of intelligence that the Turing test can verify.
There is the possibility that we will succeed in creating a sentient machine without realizing it, because of our own limited experience with other forms of intelligence. For that reason, understanding other forms of intelligence should be prioritized research. The first example that comes to my mind is the research into understanding dolphins, their language and their society. We have no evidence that dolphins are capable of abstract thinking, but neither should we ignore the possibility.
I do believe that the debate should be about "us" rather than "them" (a distinction I doubt will ever exist), as the human being, and her traditional role in the shaping of our world, is already being questioned and will continue to be as we embrace the new technology, making it a dependent part of our lives as Haraway's cyborgs, or embedding it in our bodies as bioengineered extensions of our bodies and brains.
- Richard Barry. “Sentience: The next moral dilemma”. ZDNet UK News, January 24, 2001 http://news.zdnet.co.uk/hardware/chips/0,39020354,2083942,00.htm
- Michael R. LaChat. “Artificial Intelligence and Ethics: An Exercise in the Moral Imagination”. The AI Magazine, pp. 70–79, Summer 1986
- Allucquère Rosanne Stone. “The War of Desire and Technology at the Close of the Mechanical Age”. MIT Press, Cambridge Massachusetts, 1998.
- Ray Kurzweil. “Foreword to Virtual Humans”. KurzweilAI.net, October 2003 http://www.kurzweilai.net/meme/frame.html?main=/articles/art0600.html
- Donna Haraway. "A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century," in Simians, Cyborgs and Women: The Reinvention of Nature (New York; Routledge, 1991), pp.149-181. http://www.stanford.edu/dept/HPS/Haraway/CyborgManifesto.html
- Hari Kunzru. “You Are Cyborg”. Wired Magazine, Issue 5.02, February 1997