4. Alien Language – Understanding the Origins of Linguistic Behavior
Suppose that an alien spacecraft controlled by intelligent robots were to land on Earth. The robots have a form of verbal communication: they signal each other with sound, but we have no idea what is being said. They depart, leaving one of their own for us to experiment with, and we discover that we can access its solid-state brain and other systems in order to attempt to learn the language and communicate with the artificial intelligence. Since the robot is alien, we may very well have no concepts in common with it, outside perhaps those of science. What’s worse, the robot is uncooperative, though passive with respect to our efforts to discover its language. What can we do?
First of all, we could try for some ostensive definitions; try to get the robot to name things, perhaps objects in the visual field. But this is unlikely to get us very far: we wouldn’t have much to show the robot that it was familiar with, and we could never really be sure it was naming things, as opposed to doing something else, when it talked. But suppose we were able to identify the robot brain’s subsystems and their points of integration. We find the control system for the sound generator, the sensory system components, and perhaps the memory units. If we can identify all the possible types of inputs to the sound generator, we have accomplished something important, although it may be hard to grasp what this is from the standpoint of human beings and human languages. What we have done is to determine the possible information sources to the vocal system; we have, in a sense, determined what the robot “can talk about”. This is not yet meaning, reference, or any currently recognized aspect of linguistic behavior; it is something much purer and simpler: the sources of information that can generate speech.
It probably would not be easy to do this unless the robot was modular and digital. But conceivably we could identify things like light and chemical sensors and other components that send information into the central control unit, the “brain”, as well as internal subsystems that process the information provided by the sensory equipment. With controls on “motivational centers” we might force the robot to “name” objects in the sensory field. We might then be able to distinguish the overall control centers or systems that determine if and when audible behavior is produced from the systems or centers that determine what specific behavior is produced.
We might be able to identify specific internal operations outside the speech generator that determine output behavior in a repeatable manner: operations or functions that, under the correct “motivational” circumstances, always produce the same output. These operations or system functions then drive the output center, perhaps in conjunction with the operations of other, non-specific, information-based systems. Let us then call these operations “driving functions”, because when they are present they produce output by “driving” the response-generating components of the system.
Driving functions produce the information basis for output; they decide what should be said, but not whether it is said. The sensory apparatus of an artificial agent would presumably produce driving functions: the robot would probably be responsive to the environment and able to talk about it. The robot would probably have driving functions produced by internal states as well, ones unrelated to the outside environment. It could, for example, probably relate the situation with its energy resources (its state of charge) or other internal system conditions of various sorts. It might also be able to relate the contents of its memory, or even the states of its overall control or motivational systems.
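The division of labor just described can be made concrete with a small illustration. Everything in it is hypothetical: no real robot is being modeled, and the names are inventions. The point is only the structure of the model: driving functions map information sources (sensory or internal) to the content of possible utterances, while a separate motivational gate decides whether anything is said at all.

```python
# A minimal, hypothetical sketch of the "driving function" model.
# Driving functions decide WHAT could be said; a motivational gate
# decides WHETHER anything is said.
from dataclasses import dataclass


@dataclass
class Robot:
    battery_level: float   # an internal state, unrelated to the environment
    visual_field: list     # current external sensory input


def battery_driver(r: Robot) -> str:
    # Driving function based on an internal state (state of charge).
    return f"charge at {r.battery_level:.0%}"


def vision_driver(r: Robot) -> str:
    # Driving function based on current sensory information.
    return f"I see: {', '.join(r.visual_field)}"


def speak(r: Robot, drivers: list, motivated: bool) -> list:
    # The motivational gate: with no "motivation", no output occurs,
    # whatever information the driving functions carry.
    if not motivated:
        return []
    return [d(r) for d in drivers]


robot = Robot(battery_level=0.15, visual_field=["wall outlet"])
print(speak(robot, [battery_driver, vision_driver], motivated=True))
```

Run as above, the robot "relates" both an internal condition and an observation in one element of output; with `motivated=False` it stays silent, even though the same driving functions are present.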
It might prove difficult to characterize all the possible driving functions for the robot’s verbal behavior; the outputs would probably sometimes involve the integration of more than one driving function, or even of more than one type of driving function. We could imagine the robot saying something like: “I’m feeling tired, I sense my batteries are low, I see a wall outlet over there, I’m going to plug myself in.” Here the robot’s internal states, observations, and intentions are all related in a single element of behavior that might well prove very difficult to decipher in the laboratory with only the tools of electronic system analysis. We can imagine many complications and difficulties here, but we can also imagine that, at least in theory, something like this could be done.
With a thorough understanding of the robot’s sensory machinery, we can assume that we might be able to determine the referents of some of the robot’s ostensively definable “words”, although determining the “meaning” of its verbal behavior might still be virtually impossible. If, for example, we thoroughly understood the robot’s optical system, we might be able to determine what aspects of the visual situation were generating the information incorporated in a particular driving function that was producing an element of verbal behavior. We might, for instance, through analysis of the hardware or the processing algorithms, determine whether it was the color, shape, distance, or number of objects that determined what the robot was “saying” when it responded to the immediate visual situation.
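The experimental logic here is an ablation test: hold every visual feature fixed except one, and check whether the robot’s utterance changes. The sketch below is a toy version under invented assumptions; the “robot” is a stand-in function whose output happens to depend only on color, and the feature names are purely illustrative.

```python
# Toy version of the analysis described above: vary one visual feature
# at a time and see which variations change the robot's verbal output.

def robot_says(scene: dict) -> str:
    # Hypothetical response generator: suppose this robot's utterance
    # turns out to depend only on the color of what it sees.
    return f"utterance-{scene['color']}"


def driving_features(base_scene: dict, probes: dict) -> list:
    # For each feature, perturb it while holding the others fixed.
    # A feature "drives" the output if perturbing it changes the output.
    baseline = robot_says(base_scene)
    drivers = []
    for feature, new_value in probes.items():
        perturbed = dict(base_scene, **{feature: new_value})
        if robot_says(perturbed) != baseline:
            drivers.append(feature)
    return drivers


scene = {"color": "red", "shape": "cube", "distance": 2, "count": 1}
probes = {"color": "blue", "shape": "sphere", "distance": 5, "count": 3}
print(driving_features(scene, probes))
```

For this toy robot only the color perturbation changes the output, so the procedure would report `color` as the feature determining what the robot was “saying”; the same logic applies however many features are probed.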
At this point, then, our problem with understanding the robot is the problem of going from cause to meaning; we might know why the robot is talking even if we don’t understand what it is saying. Understanding a linguistic performance, knowing what it means, requires at least understanding the language of the performance. But causal information is not necessarily irrelevant; it can sometimes be very informative, especially with words we really don’t understand very well. At the very least, experimenting with the robot would teach us something very important: linguistic behavior is information driven, and we can in principle determine the information producing any element of linguistic behavior independently of knowing the meaning of what is said. We can also determine the limits of direct knowledge based on the sensory capabilities of the system, and determine what the robot can know directly about its own functions and operations: these have to produce information for the language production system if they are to be spoken about.
We can imagine doing the same thing with human beings, even though the brain is harder to decipher than a modular electronic system. We know where the generative speech centers are, at least roughly. We also know where the sensory systems are in the brain, where we decide what’s out there. We can identify fiber tracts that carry information and, more generally, convey activity back and forth between various neurological systems. We have, then, for human beings, a basic understanding of the brain as a producer of linguistic behavior. The question then becomes: what use, if any, is this?
Typing the sources of information for verbal behavior is informative because, as it turns out, there are only a few types, unless we think that little miracles are occurring in our nervous systems. If the nervous system is closed at the physical level, if there are no inputs from beyond the physical world, then we can classify the information sources into two types: those produced by sensory information, and those that are the result of internal operations not immediately caused by external stimuli. The distinction is between direct responses to external sensory stimuli, which are generally rare, and everything else.
Concentrating on the information in the response, or at least on the type of information in the response, can be illuminating even if we don’t know the exact source of the information. Again, this does not provide meaning or reference; it is about information, and information alone doesn’t necessarily tell us anything of importance. But if we know the source of the information, we can characterize in a general way the nature of the linguistic behavior it is producing. Most of the information generating the linguistic behavior of human beings is produced by internal processes not immediately related to current sensory experience; adults, for the most part, don’t walk around talking about what they see, feel, smell, or hear. It is generated instead by internal processes of various sorts that we don’t understand very well.