Tim’s Simple Situation Model is not as simple as Opher’s Simple Situation Model, and it contains things other than events. However, I was under the impression that Tim and Opher were nonetheless each propounding a situation model that accurately (or as accurately as possible) represented some “reality”.
Both have now clarified their respective positions. In On Faithful Representation and Other Comments, Opher points out that his model involves events (in the computer world) representing the situation (in the “real world”), and he doesn’t say anything about the situation itself representing anything. Meanwhile, in The Secret Sauce is the Situation Models, Tim concurs that we are interested in modelling our knowledge of the real world.
If the model represents our knowledge of the real world, is it possible to measure or analyse the gap between our knowledge and reality itself? Not without a certain amount of philosophical cunning.
Which gives us a problem with uncertainty. In his comment to my earlier post, Opher argued that this problem is orthogonal to the representation problem, but I disagree. I believe that the problem of knowledge and uncertainty fundamentally disrupts our conventional assumptions about representation, in much the same way that quantum physics disrupts our assumptions about reality.
Let's look at the implications of uncertainty for business computing. There are different strategies for dealing with an uncertain situation. One strategy is to determine the most likely situation, based on the available evidence, and then to act as if this situation were the correct one. Another strategy is to construct multiple alternative interpretations of the evidence (possible worlds), and then to find actions that produce reasonable outcomes in each of the possible worlds. The notion that a situation model must be a faithful representation of the Real World makes sense only if we are assuming the first strategy.
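The contrast between the two strategies can be sketched in a few lines of Python. This is a minimal illustration, not anyone's actual system: the worlds, beliefs, and payoff numbers are all invented for the example.

```python
# Hypothetical sketch: two strategies for acting under uncertainty.
# All names, worlds, and payoff values here are illustrative assumptions.

# Possible worlds, with our degree of belief in each.
worlds = {"genuine": 0.7, "fraud": 0.3}

# Payoff of each action in each possible world (invented numbers).
payoffs = {
    "approve": {"genuine": 10, "fraud": -100},
    "review":  {"genuine": 5,  "fraud": 5},
}

# Strategy 1: pick the most likely world, then act as if it were correct.
most_likely = max(worlds, key=worlds.get)
act1 = max(payoffs, key=lambda a: payoffs[a][most_likely])

# Strategy 2: find an action with a reasonable outcome in *every*
# possible world (here: maximise the worst-case payoff).
act2 = max(payoffs, key=lambda a: min(payoffs[a][w] for w in worlds))

print(act1)  # "approve" - best in the single most likely world
print(act2)  # "review"  - acceptable in both possible worlds
```

Note that the two strategies recommend different actions from the same evidence: the first commits to one interpretation, the second hedges across all of them.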
For example, in fraud management or security, the first approach uses complex pattern matching to divide transactions into “dodgy” and “okay”. There is then a standard system response for the “dodgy” transactions (including false positives), and a standard system response for the “okay” transactions (including false negatives). Obviously the overall success of the system depends on accurately dividing transactions into the two categories “dodgy” and “okay”. Meanwhile, the second approach might have a range of different system responses, depending on the patterns detected.
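A bare-bones sketch of the first approach, assuming the pattern matching has already been boiled down to a single score (the function names, threshold, and responses are hypothetical):

```python
# Hypothetical sketch of the hard binary split.
# The score, threshold, and response strings are illustrative assumptions.

def classify(score: float, threshold: float = 0.5) -> str:
    """Divide every transaction into exactly two categories."""
    return "dodgy" if score >= threshold else "okay"

def respond(score: float) -> str:
    # One standard response per category, applied uniformly -
    # including to the false positives and false negatives
    # that inevitably land in each.
    if classify(score) == "dodgy":
        return "block and refer to fraud team"
    return "process normally"
```

The whole burden of the system's success falls on `classify` getting the split right, which is exactly why this approach demands a faithful representation.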
A third strategy involves creating intermediate categories: “definitely dodgy”, “possibly dodgy”, “probably okay”, “definitely okay”. In this strategy, however, we are no longer modelling the pure and unadulterated Real World, but modelling our knowledge of the real world. This shifts the question away from the accuracy of the model towards the adequacy of our knowledge.
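The third strategy can be sketched the same way, except that the categories now describe our knowledge rather than the transaction itself. The threshold values below are illustrative assumptions, not anyone's real tuning:

```python
# Hypothetical sketch of the third strategy: intermediate categories.
# The score represents our degree of belief (0.0-1.0) that a
# transaction is fraudulent; threshold values are illustrative.

def categorise(score: float) -> str:
    if score >= 0.9:
        return "definitely dodgy"
    if score >= 0.5:
        return "possibly dodgy"
    if score >= 0.1:
        return "probably okay"
    return "definitely okay"

# The category labels a state of knowledge, not the transaction:
# the same transaction moves between categories as evidence
# accumulates, without the "real world" changing at all.
print(categorise(0.95))  # definitely dodgy
print(categorise(0.30))  # probably okay
```

Here the interesting question is no longer whether the model matches reality, but whether the score, and the thresholds, adequately reflect what we actually know.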