Sunday, August 10, 2008

Faithful representation 2

In my previous post, Faithful Representation, I discussed the view that a situation model represented some reality, and attributed this view to both Tim Bass and Opher Etzion. However, I should have made clear that Tim and Opher don't see things in quite the same way.

Tim's Simple Situation Model is not as simple as Opher’s Simple Situation Model, and it contains things other than events. However, I was under the impression that Tim and Opher were nonetheless each propounding a situation model that accurately (or as accurately as possible) represented some “reality”.

Both have now clarified their respective positions. In On Faithful Representation and Other Comments, Opher points out that his model involves events (in the computer world) representing the situation (in the "real world"), and he doesn't say anything about the situation itself representing anything. Meanwhile, in The Secret Sauce is the Situation Models, Tim concurs that we are interested in modelling our knowledge of the real world.

If the model represents our knowledge of the real world, is it possible to measure or analyse the gap between our knowledge and reality itself? Not without a certain amount of philosophical cunning.

Which gives us a problem with uncertainty. In his comment to my earlier post, Opher argued that this problem is orthogonal to the representation problem, but I disagree. I believe that the problem of knowledge and uncertainty fundamentally disrupts our conventional assumptions about representation, in much the same way that quantum physics disrupts our assumptions about reality.

Let's look at the implications of uncertainty for business computing. There are different strategies for dealing with an uncertain situation. One strategy is to determine the most likely situation, based on the available evidence, and then to act as if this situation were the correct one. Another strategy is to construct multiple alternative interpretations of the evidence (possible worlds), and then to find actions that produce reasonable outcomes in each of the possible worlds. The notion that a situation model must be a faithful representation of the Real World makes sense only if we are assuming the first strategy.
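
To make the contrast concrete, here is a minimal sketch in Python. None of it comes from Tim's or Opher's models: the worlds, probabilities and payoffs are all invented for illustration, and the second strategy is rendered as a simple maximin rule (choose the action whose worst outcome across the possible worlds is least bad).

```python
# Candidate interpretations of the evidence, with assumed probabilities
possible_worlds = {"fraud": 0.3, "legitimate": 0.7}

# Hypothetical payoff of each action in each world (higher is better)
payoffs = {
    "block":   {"fraud": 10,  "legitimate": -5},
    "approve": {"fraud": -50, "legitimate": 2},
}

# Strategy 1: determine the most likely world, then act as if it were true
most_likely = max(possible_worlds, key=possible_worlds.get)
action_1 = max(payoffs, key=lambda a: payoffs[a][most_likely])

# Strategy 2: find the action whose worst case across all worlds is least bad
action_2 = max(payoffs, key=lambda a: min(payoffs[a].values()))

print("most likely world:", most_likely)   # legitimate
print("strategy 1 chooses:", action_1)     # approve
print("strategy 2 chooses:", action_2)     # block
```

The toy numbers are chosen to show that the two strategies can quite reasonably select different actions from the same evidence.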

For example, in fraud management or security, the first approach uses complex pattern matching to divide transactions into “dodgy” and “okay”. There is then a standard system response for the “dodgy” transactions (including false positives), and a standard system response for the “okay” transactions (including false negatives). Obviously the overall success of the system depends on accurately dividing transactions into the two categories “dodgy” and “okay”. Meanwhile, the second approach might have a range of different system responses, depending on the patterns detected.
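
A minimal sketch of this first approach might look as follows. The scoring rules, the threshold and the responses are all invented placeholders; real fraud systems obviously use far richer pattern matching.

```python
DODGY_THRESHOLD = 0.5  # assumed cut-off, chosen purely for illustration

def fraud_score(txn):
    """Toy stand-in for the complex pattern matching."""
    score = 0.0
    if txn["amount"] > 1000:
        score += 0.4
    if txn["country"] != txn["card_country"]:
        score += 0.3
    if txn["hour"] < 6:        # transaction in the small hours
        score += 0.2
    return score

def respond(txn):
    # One standard response per category, false positives and all
    if fraud_score(txn) >= DODGY_THRESHOLD:
        return "block and refer"   # "dodgy"
    return "approve"               # "okay"

print(respond({"amount": 2500, "country": "GB", "card_country": "US", "hour": 3}))
print(respond({"amount": 40, "country": "GB", "card_country": "GB", "hour": 14}))
```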

A third strategy involves creating intermediate categories: “definitely dodgy”, “possibly dodgy”, “probably okay”, “definitely okay”. In this strategy, however, we are no longer modelling the pure and unadulterated Real World, but modelling our knowledge of the real world. This shifts the question away from the accuracy of the model towards the adequacy of our knowledge.
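
Continuing the toy example, the third strategy might look like this; suppose we already have a score between 0 and 1. The band boundaries and responses are assumptions of mine. The point is that each response is matched to a state of our knowledge, not to a fact about the transaction itself.

```python
def categorise(score):
    # Bands express how sure we are, not what the transaction "really is"
    if score >= 0.8:
        return "definitely dodgy"
    if score >= 0.5:
        return "possibly dodgy"
    if score >= 0.2:
        return "probably okay"
    return "definitely okay"

RESPONSES = {
    "definitely dodgy": "block",
    "possibly dodgy":   "hold and ask for extra authentication",
    "probably okay":    "approve but log for review",
    "definitely okay":  "approve",
}

for score in (0.9, 0.6, 0.3, 0.1):
    category = categorise(score)
    print(f"{score:.1f} -> {category} -> {RESPONSES[category]}")
```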

1 comment:

  1. An accurate representation of the real world is only one aspect of a model. Another aspect is the ability to use the model in a computer (you need one or more algorithms to power it). And if you're dealing with uncertainty and you want to take a probabilistic approach, then your model must also correspond to some (often very annoying and impractical) conditions that will let you make inferences.

    There is plenty of research into models that are faithful to reality, translate into algorithms, and allow for statistical inference. The bottom line is that no one has found one that works for every case.

    At the moment, there is also plenty of ongoing research (which Opher frequently mentions) into mapping descriptive semantic models to one or more algorithms or probability models. Unfortunately, these techniques are still fairly limited: there is still no way to take an infinitely flexible semantic model and translate it into a working program with correct inference capabilities.

    So all of this leaves the question: what is the alternative? You either work with the models you have or you don't, right? If you're modeling uncertainty, then yes you are by definition not sure about the results. But what other option do you have?

    Also, on measuring the gap between the model and the real world: unfortunately, most of the time this is only possible in hindsight. You determine where the model was correct or close and where it was not: there is your gap. But it's trickier to determine the gap when you're predicting something that hasn't happened yet. Again, though, what is the alternative? Either you accept the risk of an incorrect prediction, or you don't try to predict...
