Personal Identity: An Information-Based Approach

 

          1. The “I Robot” Problem

 

          How do we teach a robot or AI system to use personal pronouns like “I”, “me”, and “mine”? How do we teach it to develop a concept of itself? Can it acquire one on its own? How do human beings do it? Pro forma responses like “I don’t know the answer…” would be easy to introduce if the system had something like a rudimentary command of a natural language, but what is required for a sense of self as evidenced by spontaneous self-reference? Is the concept of self, of one’s own individual identity, dependent on a particular kind of language?

 

          We should note that we cannot perceive ourselves as we perceive the rest of the world. I can experience an apple, say, in many different ways and develop a concept of an apple through these experiences. My apple concept would then be generated by my apple experiences. Can the sense of self be generated in the same way? In prehistoric times, before mirrors were invented, many people had never even seen themselves except as reflections in the water. Did desert people have a different sense of self than those who lived among lakes? Is the sense of self somehow limited in the congenitally blind? This seems absurd. Is our sense of self dependent on our experiences as, or of, ourselves?

 

          Imagine a language with no personal pronouns, in which the subjects of sentences of certain types are assumed to be the speaker unless someone or something else is identified. Thus “Went to the park.” is taken to mean that the speaker went to the park and is to be translated as “I went to the park”. Assume that even the object of a sentence normally involving a personal pronoun may be absent, e.g., “The book belongs.” means “The book belongs to me”. How do you express a personal identity in a language like this? Do you still have one if you can’t express it? Is the sense of self tied up in the language used to relate it?
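
          The translation rule just described can be made concrete. What follows is a minimal sketch in Python, assuming a toy verb lexicon and a single object-dropping verb; the word lists and function names are hypothetical illustrations, not a real grammar:

        # Toy illustration of the pronoun-less language above. The lexicon
        # and rules are illustrative assumptions, not a serious grammar.
        IMPLIED_SUBJECT = "I"
        VERBS = {"went", "left", "saw"}            # assumed toy verb list
        OBJECT_DROPPERS = {"belongs": "to me"}     # assumed object-dropping verbs

        def translate(sentence: str) -> str:
            """Reinsert the implied first-person pronoun for an English reader."""
            words = sentence.rstrip(".").split()
            if words and words[0].lower() in VERBS:        # no subject: assume the speaker
                words = [IMPLIED_SUBJECT, words[0].lower()] + words[1:]
            if words and words[-1].lower() in OBJECT_DROPPERS:  # no object: assume the speaker
                words.append(OBJECT_DROPPERS[words[-1].lower()])
            return " ".join(words) + "."

        print(translate("Went to the park."))   # -> I went to the park.
        print(translate("The book belongs."))   # -> The book belongs to me.

          Note that the speaker of such a language loses nothing in practice; it is only the explicit marker of the self that is missing, which is what makes the thought experiment instructive.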

 

           What we are after, of course, is what we have previously called the driving functions for these words. What information produces them in verbal behavior? It may be that in some cases at least the personal pronouns are introduced purely as a result of syntactic or mnemonic considerations. I say “I left the park…” because I have a recollection of the experiences associated with leaving the park. But how do these memories produce the information that leads to saying “I…”? It is easier to figure out what the components of self-awareness are than to figure out how we can exhibit it verbally. The components are pieces of information, which are all we have of ourselves save for our bodies and artifacts.

 

 

     2. Can a Personal Identity Give Rise to Values?

 

          We are used to backing up our hard drives; digital systems today allow for perfect reproduction. This marks a distinction between digital and biological systems. We cannot back up a human brain and probably never will be able to do so. The problem is one of information recovery or extraction: in order to duplicate a human brain we would have to somehow extract the information from every neuron and synapse in it. This means recovering the information in billions of neurons and trillions of synapses, and ultimately in the molecules and atoms that comprise them. So the problem becomes that of acquiring the information in a biochemical system without altering or destroying it in the process. Even for a dead brain this would be an unimaginably difficult task to accomplish before the specimen degraded; for the living brain it looks like an absolute technological impossibility.
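
          A back-of-envelope calculation suggests the scale. The synapse count below is a commonly cited estimate, and the bytes-per-synapse figure is a deliberately crude assumption; the point is the order of magnitude, not the precise number:

        # Rough scale of the extraction problem described above. The synapse
        # count (~100 trillion) is a commonly cited estimate; the bytes-per-
        # synapse figure is an arbitrary illustrative assumption.
        SYNAPSES = 100e12            # ~100 trillion synapses
        BYTES_PER_SYNAPSE = 10       # hypothetical: connectivity + strength + state

        raw_bytes = SYNAPSES * BYTES_PER_SYNAPSE
        print(f"{raw_bytes / 1e15:.0f} PB for a coarse synapse table alone")
        # -> 1 PB, before any molecular state, geometry, or dynamics,
        #    and before the real problem: reading it out nondestructively.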

 

          Artificial intelligence (AI) may not use digital systems of the current type in the future; it may use quantum computers or new forms of non-digital information storage, and we might then have no guarantee of a complete copying capability. But for the present it looks like we have an ultimate distinction between the biological and the artificial. Robots are reproducible, at least if they are digital; humans are not. Every human brain is unique, through genetics and development and, most importantly, through experience, and it is not replicable. You can make as many copies of a robot’s brain as you want, or at least copies of the robot’s acquired information, and that is the difference between human-like machines and people.

 

          This human uniqueness is what we will ultimately have to rely on to define personal identity, assuming we regard uniqueness as an integral aspect of a personal identity. Every human being is different from every other human being, and the differences, or the fact of the differences, are the essence of each individual’s personal being. But what is the significance of this personal identity? Mortality is one aspect of it. Machines can live forever; human beings, at least currently, cannot. When a human dies, something unique is lost to the universe. The value problem is that the universe doesn’t care; only other humans and sympathetic robots care about human mortality. Diversity is another aspect; we should expect, at least for the short run, that humans will be more diverse than machines. But even while we have greater diversity and a sympathetic interest in it, there is no obvious value to be associated with it. The value has to be the diversity itself, which to other types of systems may be little more than a curiosity.

 

          If human individuality and a unique personal identity are givens as a matter of fact, can the same be said for AI systems? Even an AI agent with a particular and peculiar personal history can be reproduced if technology allows for it. These agents would know that other examples of themselves were possible even if the possibility had never been realized and was unlikely to be realized. In what sense, then, can an entity like this have a personal identity? Is not uniqueness, in principle at least, an element of personal identity? Good or bad, there can only be one “me”. How would things be different if there could be another me, if I were replaceable with an exact replica? Would I place the same value on my life, property, relationships, etc. if I were essentially replicable and immortal? Would anyone else place the same value on me under these circumstances? A situation in which unique types become tokens is novel and problematic with respect to intelligent entities.

 

           But perhaps we would be wrong to deny uniqueness to systems that can exactly replicate themselves. There is a difference between experience and the record or memory of experience. The difference is between being a certain way at a certain moment of time and the record of what that moment was like. The memory of that moment involves a qualitative loss because it involves transduction, and a loss or change of information because of information processing, at least in all systems with which we are currently familiar.
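
           The loss is easy to illustrate. In the sketch below, quantizing a continuous value stands in for any transduction step in a recording process; it is an assumption for illustration, not a model of memory:

        # The "experience" is a continuous value; the "record" is what
        # survives an 8-bit transduction step. Quantization is an
        # illustrative stand-in for any recording process.
        def quantize(x: float, levels: int = 256) -> float:
            """Map a value in [0, 1] to the nearest of `levels` discrete steps."""
            return round(x * (levels - 1)) / (levels - 1)

        moment = 0.123456789          # the moment itself
        record = quantize(moment)     # the record of the moment
        print(record, record == moment)   # close, but not the thing itself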

 

 

          3. Robot Values

 

AI agents might well have a hard time understanding human values, and it will be hard to inculcate human values in a system whose existence is so phenomenally different. Ethics is finally a matter of sentiment, of feelings, wishes, hopes, aspirations and empathy. Our empathies are built in to a certain extent; they are in hardware, in mirror neurons and built-in capacities to recognize the other. Perhaps this can be built into machines, but a fundamental problem remains. The machines will be intellectually different; their knowledge of the nature of their own existence will influence their judgments, or at least we should suspect that it does until it is proven otherwise.

 

          Asimov’s Laws for robots are sometimes thought to be the answer here:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.  (From Wikipedia)

 

But only a childlike belief in the importance and efficacy of rules would allow someone to think that “laws” like these were adequate for AI agents. What exactly is harm to a human being? Could a robot surgeon perform a discretionary operation on a human being? Cutting into a human being is harmful unless done for a greater good. Who or what decides if the good is great enough? What about conflicting human rights or values? Can a robot policeman use force to disrupt a demonstration? How does Robocop decide between free speech and the right to assemble, on the one hand, and peace and tranquility in the public arena, on the other? Adding the Second Law only compounds the problem: now the robot has to decide if the human giving the orders is right. (The Third Law is essentially an economic principle; the robot is to defend itself against the largely economic loss associated with its own destruction.)
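
The difficulty shows up at once if one tries to write the laws down as a program. The sketch below is a hypothetical encoding, not a serious proposal; the entire ethical problem is smuggled into one predicate that nobody knows how to implement:

        # A naive encoding of the three laws. Everything contentious is
        # hidden in harms_human(), left unimplemented because "harm" is
        # exactly what the discussion above shows to be undecidable.
        from dataclasses import dataclass

        @dataclass
        class Action:
            description: str
            ordered_by_human: bool = False
            endangers_robot: bool = False

        def harms_human(action: Action) -> bool:
            # Surgery? Crowd control? Inaction? No computable test exists.
            raise NotImplementedError("'harm' is not a well-defined predicate")

        def permitted(action: Action) -> bool:
            if harms_human(action):              # First Law: absolute veto
                return False
            if action.ordered_by_human:          # Second Law: obey, absent a veto
                return True
            return not action.endangers_robot   # Third Law: self-preservation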

 

     Any interactive system, one capable of affecting something external to itself, must have what are in effect values, norms, goals or desires: something the system is trying to produce or control by its actions. A thermostat tries to regulate temperature, a pressure relief valve limits system pressure, and an active AI system is going to have far more sophisticated and complicated goals for its own activity. (Values as a cybernetic problem.) How are human values to be inculcated into the system? If the system understands a human language then we can presumably use verbal instructions. But to understand a human language you have to be something like a human being, especially when things like value judgments are involved. The robot would have to understand pain, grief, regret, hopes and aspirations, many human emotions and conditions, if it is to be “moral”. The idea that we could introduce a cognitive, knowledge-based theory of value into an AI device is at least very problematic. Ethics has a qualitative, “feeling” aspect which may be quantitatively generated in many cases. We may feel strongly or not at all about any given situation in which value judgments might be relevant. This is at least partially a hardware phenomenon related to the juices running around in us; any AI system that lacks comparable hardware is going to have a hard time understanding, much less using, human values.
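
     The thermostat makes the cybernetic point concrete. In the sketch below, the setpoint is the system’s entire “value system”; the numbers and the interface are illustrative assumptions:

        # Even the simplest regulator embodies a norm: the setpoint it acts
        # to realize. Setpoint, band, and action names are illustrative.
        def thermostat_step(temperature: float, setpoint: float = 20.0,
                            band: float = 0.5) -> str:
            """Return the action that moves the world toward the system's 'value'."""
            if temperature < setpoint - band:
                return "heat on"
            if temperature > setpoint + band:
                return "heat off"
            return "hold"

        for t in (18.0, 20.1, 22.0):
            print(t, "->", thermostat_step(t))   # heat on / hold / heat off

     The gulf between this and a moral agent is obvious: the thermostat’s norm is a single number imposed from outside, while human values are many, conflicting, and felt.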

 

     There is, then, a paradox in advanced robotics. The more valuable, i.e., intelligent and capable, an AI agent is, the more dangerous it is, in that it may not adopt or even be capable of understanding human values. But the problem here is not that of the robots gone bad in science fiction. The problem is that the robot may not understand the difference between good and bad because of the type of system that it is. Technocrats as such have no understanding of this; they need guidance from psychologists, physiologists and even philosophers if they are to proceed responsibly in the quest for artificial people.

 

RCE 8/12