2. What does Watson Know? (The Evaluation of Smart Systems)

 

            The computer Watson beat human competitors in a contest of general knowledge on the TV game show “Jeopardy”. Questions about the intellectual capabilities of Watson abound, mostly centered on issues like whether or not the computer really knows anything or is really intelligent. We consider these questions here, along with some related questions about the nature of cognitive systems.

 

            Watson is perhaps intelligent, if by intelligent you mean behaving as if it were intelligent in answering questions. If this sort of behavior is the sole criterion, then a semi-autonomous information system like Watson is intelligent because it can perform as if it were. Watson could perhaps also exhibit some integrative or deductive capacity based on what it knew, so that it might have some sort of reasoning ability, though admittedly one stochastic in nature. But if we extend our criteria further to include a creative capacity based on the integration of internalized information, then the issue is less clear. Watson was evidently not designed to be inductive or creative.

 

            Does Watson really know anything? The same problems develop here. If by knowing we mean the ability to recite facts appropriately, e.g., in response to verbal clues, then the answer is yes. If by knowing we mean the ability to function using the implications of internalized information, then maybe the answer is “a little”. This depends on Watson’s reasoning ability. Watson certainly does not have anything resembling human knowledge, however, because it is not a human system; it lacks sensory organs, for example. But what constitutes knowledge, we might at least suspect, is system dependent. It depends on what kind of system we are concerned with. We might well want to employ different criteria for different systems.

 

            These are mostly verbal disputes, though we can sometimes learn things by pursuing them. But the really important questions about systems like Watson concern not their cognitive capacities but their agency: what can the system do, and why does it do what it does? Watson might be important as an information system, but apparently it is not a direct agent. Watson can’t do anything but answer questions under complicated, contrived circumstances. Watson is no robot. Robots, especially autonomous robots, are more important. Similarly, control systems, especially autonomous or semi-autonomous systems that actually do things, are much more important. And what is important about the design of these systems is that they employ a theory of harm in their operations.

 

            Harm is the negative effect of any system state or operation. Harm might be mechanical, biological, chemical, economic, social, or even political. What harm a system can do depends on the type of system it is. A flight control computer might do harm by crashing the plane. A computer trading system for stocks might do harm by crashing the stock market. Whatever type of system we are concerned with, the first operating rule for the system should be like the Hippocratic Oath: “first, do no harm”. The problem is that identifying harm and preventing its production can be very difficult. Some conditions that might represent serious harm on some occasions (zero altitude produced by a flight control system, for example) might be desirable under other circumstances, such as when the plane is being landed.
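
            To make this context dependence concrete, consider a minimal sketch in Python of how a control system might encode a “first, do no harm” rule. The names, thresholds, and flight-phase logic here are entirely hypothetical illustrations, not a real avionics design: the point is only that the same state (zero altitude) is judged harmful or acceptable depending on what the system is trying to do.

    from dataclasses import dataclass
    from enum import Enum, auto

    class FlightPhase(Enum):
        CRUISE = auto()
        LANDING = auto()

    @dataclass
    class SystemState:
        altitude_ft: float    # current altitude in feet (hypothetical unit choice)
        phase: FlightPhase    # what the system is currently trying to do

    def is_harmful(state: SystemState) -> bool:
        # A toy "theory of harm": zero altitude is harm in cruise,
        # but it is the intended outcome of a landing.
        if state.altitude_ft <= 0:
            return state.phase is not FlightPhase.LANDING
        return False

    def apply_command(current: SystemState, proposed: SystemState) -> SystemState:
        # "First, do no harm": refuse any transition into a harmful state.
        if is_harmful(proposed):
            return current    # reject the command; keep the current state
        return proposed

    # Zero altitude in cruise is rejected; the same state while landing is accepted.
    cruise = SystemState(altitude_ft=30000, phase=FlightPhase.CRUISE)
    assert apply_command(cruise, SystemState(0, FlightPhase.CRUISE)) is cruise
    landing = SystemState(altitude_ft=500, phase=FlightPhase.LANDING)
    assert apply_command(landing, SystemState(0, FlightPhase.LANDING)).altitude_ft == 0

            Even this toy version shows why the problem is hard: the harm predicate cannot be a fixed list of bad states; it must consult the system’s purpose and circumstances.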

 

            The first commandment for system developers is then this: “Identify and minimize the harm potential for your system.” Google’s rule, “don’t do evil”, is inadequate, at least because it suggests that the undesirable is the result of doing; it may well be the case that not doing is harmful as well. This rule is a technologist’s approach to meta-ethics: it emphasizes active system operations as opposed to an integrated theory of agency.