3. The Epistemology of Systems and Processes – Some Fundamental Considerations.
There is a clear message from modern science and technology: most of what is of interest involves systems and processes. Everything from geology to psychology, from engineering to medicine, is ultimately about systems and processes. We don’t easily grasp this as human beings because our innate cognitive faculties are concerned only with the immediate and specific. We have tools and technologies that assist us in getting around this fundamental limitation, but unless we employ them constantly our cognitive behavior defaults to a simple “now and this” operating mode, to a “this particular thing at this particular time” level of functioning. This limitation is easily illustrated. Imagine your favorite movie: how much of the complete movie do you actually remember? How much of it can you recall in a single instance of visual imagination – one frame, a section or area of a frame? Can we even imagine a system that can review the entire movie in a single operation? At the standard 24 frames per second, that is 1,440 frames per minute, or 129,600 frames for a 90-minute movie. There is no reason in theory why such a system could not exist – no reason to think that a system that could internalize, for example, “Gone with the Wind” and inspect the whole movie at once for, say, the presence of an apple in any frame is impossible.
Perhaps no one has ever even tried to imagine such a system before because of the onerous hardware and software requirements. If we were to build a film-analyzer device we would probably have it work very quickly on one frame at a time rather than trying to have it see the whole movie at once. This might be quicker or more efficient, but the point is that we can barely conceive of such a device and tend to think of the one-frame-at-a-time approach. Did anyone throw anything in “Gone with the Wind”? If you recall that someone did, is this recollection visual, or is it a remembrance of something you heard, or perhaps said to yourself? While our motor performances can be quite lengthy without external prompting – reciting the entire Bible from memory, for instance – our ability to recall without the feedback from the performance itself is generally quite limited.
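The one-frame-at-a-time approach such a device would take can be sketched in a few lines. Everything here is an assumption for illustration: frames are reduced to sets of labeled objects, and `contains_apple` is a hypothetical stand-in for whatever image recognition the analyzer would actually perform.

```python
def contains_apple(frame):
    """Hypothetical per-frame detector; a stand-in for real image recognition."""
    return "apple" in frame  # a 'frame' here is just a set of labeled objects


def scan_movie(frames, detector):
    """Sequential scan: examine one frame at a time, as the text suggests
    such a device would work, rather than inspecting the whole movie at once.
    Returns the indices of frames where the detector fires."""
    return [i for i, frame in enumerate(frames) if detector(frame)]


# A toy five-frame 'movie', each frame a set of visible objects.
movie = [{"tree"}, {"apple", "table"}, {"horse"}, {"apple"}, {"fire"}]
hits = scan_movie(movie, contains_apple)
```

The sequential loop is the approach we find easy to conceive; the whole-movie-at-once inspection described above has no comparably obvious few-line analogue.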
The nervous system, in perception and recollection, tends to identify simple objects and to work with very short periods of time. This is a problem in a world consisting of systems and processes. Macroscopic systems and processes can neither be defined nor known in terms of singular elements and brief instances. We might tend to think that generalities, abstractions, general principles, scientific laws and the like allow us to escape or get beyond our inherent limitations, but this is illusory. These limitations result from the nature of information processing in the central nervous system, and most importantly from information reduction, which is a basic operating mode of neurological systems. Neurological systems reduce the amount of information received by the sensory apparatus. This is the basic operational mode for generating universals: we map many specific instances into a smaller number of responses. Thus there are many identifiably different patterns of illumination of the retina which will all be called red, and many patterns of features – lines, curves, etc. – and their arrangements that we will regard as human faces.
It is not clear that a cognitive system has to use universals – that it has to regard something as being of a particular sort, “one of many”, “like the others”, “an example of…”, or “governed by these laws or rules” – in order to function. This probably depends on the details of system operations and the nature of the system’s output. But if we’re traipsing around the jungle it helps to hear slightly varying lion roars as all the same thing, a cause for concern or fright. It is easy to understand how universals are convenient and even necessary for evolved intelligence; for created intelligence the case is not so simple. The problem ultimately has to do at least with transduction and measurement at any analogue level of the system. If the system converts one sort of non-digital influence into something else, it would seem necessarily to involve a conversion loss or approximation. The case for distinctly digital processes is not as clear, however.
If analogue systems necessarily involve universals in operation, due to information reduction or perhaps to approximations or something like rounding errors, the same is not true for digital systems. A system using digital information alone has no universals involved in its analytical or cognitive operations, even though it might look as if it does if we see only the system output. But at the most basic level of the system, any change in form or content is a change in identity. The system must use rules to determine output, as opposed to what we might call the subsuming approximations of an analogue system. Thus it might “know” only two different colors – that is, have only two different color output states, red and blue for instance – which are dependent on the digital input from the optical sensor. While we might think that the system sees differently, i.e., has different visual perceptions or different visual experiences and as a result employs different universal terms, the behavior is actually based on programming and a different mode of operation.
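The two-output-state system just described can be sketched minimally. The threshold and the sensor values are invented for illustration; the point is that the grouping into “red” and “blue” exists only in the programmed rule, not in any perceptual universal.

```python
def color_output(sensor_value):
    """Rule-based mapping from a digital sensor reading to one of exactly
    two output states. At the digital level every distinct input is a
    distinct identity; the rule, not perception, does the grouping.
    The threshold of 500 is an arbitrary illustrative choice."""
    return "red" if sensor_value < 500 else "blue"


# Four distinct digital inputs collapse to two output states.
readings = [120, 480, 512, 900]
outputs = [color_output(v) for v in readings]
```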
Human beings can distinguish thousands of different colors and could conceivably learn a different color name for each one. If we describe different color experiences with only a few color words, it is simply because we haven’t invented enough color terms, or perhaps haven’t learned all the color names used in a language. Our behavior is limited by knowledge or by the history of verbal invention, not by rules. But what look like universals in digital machines can have other peculiar characteristics. They can be determined by variables which are independent of the defining characteristics of the input – by arbitrary factors such as the time of day or the day of the week. Suppose for example we have a complicated industrial controller that relies on numerous digital inputs to make decisions governing the operations of the system. It might, for example, be an electrical power system controller that is constantly deciding whether or not to bring another power generating unit on line.
The variables used by this system might involve measuring demand, estimating future demand, and comparing it with current excess capacity. If the anticipated demand exceeds the available surplus generating capacity, then the controller will fire up the reserve system (which takes some time to come on line) in anticipation of the requirement for more power. The inputs to a system like this could be numerous and varied: things like weather projections for air conditioning or heating loads, as well as forecasts for demand from various industrial users which are dependent on the time of day. All of these parameters might be arranged in a demand map which dictates the decision on the auxiliary unit. Taken together, they might be said to generate a multi-parameter “start-up universal” for the digital system – a yes/no decision with many possible variables and values determining it. But this start-up command universal might be further controlled by economic considerations. A demand-based “on” command might be negated by the availability of low-cost power from other sources outside the immediate system. So a rule not involving the physical input parameters is applied: “Buy power if it is cheap, otherwise fire up the auxiliary unit.”
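The decision rule just described can be sketched as follows. All parameter names, units, and thresholds are assumptions for illustration, not a real grid-controller interface; the structure is what matters: a physical demand-versus-surplus test, overridden by an economic rule that involves none of the physical inputs.

```python
def should_start_auxiliary(forecast_extra_demand, current_load, capacity,
                           outside_price, price_ceiling=40.0):
    """Return 'idle', 'buy', or 'start' for the auxiliary unit.

    Physical part of the 'start-up universal': anticipated extra demand
    versus available surplus capacity. Economic override: buy power if it
    is cheap, otherwise fire up the auxiliary unit. The price ceiling of
    40.0 (per MWh, say) is an invented figure."""
    surplus = capacity - current_load
    if forecast_extra_demand <= surplus:
        return "idle"   # demand is covered; no action needed
    if outside_price < price_ceiling:
        return "buy"    # cheap outside power negates the demand-based 'on' command
    return "start"      # fire up the auxiliary unit

decision = should_start_auxiliary(forecast_extra_demand=150, current_load=900,
                                  capacity=1000, outside_price=55.0)
```

Note that an observer watching only the physical inputs and the start-up behavior would see the `outside_price` branch as an unexplained inconsistency in the rule.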
Anyone not familiar with the programming of the control unit would have a very difficult time determining what exactly the rule for firing up the auxiliary turbine actually was. Imagine further that the economic factor was determined not by monitoring the immediate cost of outside power but by a built-in, inaccurate algorithm that used stored information about the tendencies of the electrical market at a given time of day or calendar date. The overall start-up universal might then be next to impossible to decipher from externalities. The system might have many other capabilities – it might seem to us to be as intelligent as a human being – but unless it is capable of relating the entirety of its decision processes, we still would not understand it completely. A question of concern here is whether or not there are similar problems associated with human cognition. Are there rules, or some sort of programming, determining our behavior which we cannot identify because we don’t understand our own internal operations?
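The “built-in inaccurate algorithm” can be given the same kind of hedged sketch. The stored prices and the day/night split are invented; the point is that the effective rule now depends on internal data an outside observer never sees, rather than on anything measurable at the system boundary.

```python
# Stored time-of-day heuristic standing in for live market monitoring.
# Hours 0-5 are assumed cheap off-peak; everything else is assumed peak.
# The figures 30.0 and 50.0 are invented illustrative prices.
STORED_PRICE_BY_HOUR = {h: 30.0 if h < 6 else 50.0 for h in range(24)}

def estimated_outside_price(hour):
    """Estimate the outside power price from a stored table instead of
    measuring it. The estimate may be wrong about the actual market,
    which is exactly what makes the overall rule opaque from outside."""
    return STORED_PRICE_BY_HOUR[hour]
```

A controller plugging this estimate into its economic override would appear, to an observer, to start and refuse to start the auxiliary unit according to the clock rather than according to any physical input.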
But there is an even more fundamental problem. The fact is that we cannot, even in principle, directly determine or relate the details of our cognitive operations, because the intermediate steps are lost to the output space. We know only the final result of our cognitive processes; what goes on in the middle is lost to the output side of the system, which “knows” only the final stage of the information processing. While we can explain what we think or give reasons for our beliefs, these too are only further examples of the externalization of something going on internally which we can never directly grasp.
At present we don’t know whether cognitive processes will ever be completely determinable by outside observers either. Because of the delicate nature of living biological systems, we may never be able to probe deeply enough, or extract enough information, to completely determine what is going on in our heads when we say or do something. There may be secret “rules” in our heads analogous to the economic rules governing the behavior of the electrical grid controller – rules that are essentially mnemonic but not recollectable, or even accidental in origin – which no one, neither the speaker who uses them nor the outside observer who is trying to understand them, can ever figure out. In all likelihood, the only thing we are ever really going to be able to determine with any accuracy or reliability for human intelligent behavior is the immediate source of the information that is producing it. This is where epistemology starts.