Tuesday, August 12, 2008

Responding to Uncertainty

How does a system respond intelligently to uncertain events?
"A person may take his umbrella, or leave it at home, without any ideas whatsoever concerning the weather, acting instead on general principles such as maximin or maximax reasoning, i.e. acting as if the worst or the best is certain to happen. He may also take or leave the umbrella because of some specific belief concerning the weather. … Someone may be totally ignorant and non-believing as regards the weather, and yet take his umbrella (acting as if he believes that it will rain) and also lower the sunshade (acting as if he believes that the sun will shine during his absence). There is no inconsistency in taking precautions against two mutually exclusive events, even if one cannot consistently believe that they will both occur." [Jon Elster, Logic and Society (Chichester: John Wiley, 1978) p. 84]
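Elster's two "general principles" can be made concrete. The sketch below uses invented payoff numbers for the umbrella example (the values, and the gap between the two rules, are assumptions for illustration only): maximin acts as if the worst state is certain, maximax as if the best state is certain.

```python
# Hypothetical payoffs for Elster's umbrella example.
# Rows are actions; values are outcomes under each possible state of the weather.
payoffs = {
    "take umbrella":  {"rain":   5, "sun": -1},  # dry if it rains, mildly encumbered if not
    "leave umbrella": {"rain": -10, "sun":  6},  # soaked if it rains, unencumbered if not
}

def maximin(payoffs):
    """Act as if the worst is certain: choose the action with the best worst-case outcome."""
    return max(payoffs, key=lambda action: min(payoffs[action].values()))

def maximax(payoffs):
    """Act as if the best is certain: choose the action with the best best-case outcome."""
    return max(payoffs, key=lambda action: max(payoffs[action].values()))

print(maximin(payoffs))  # -> take umbrella   (worst case -1 beats worst case -10)
print(maximax(payoffs))  # -> leave umbrella  (best case 6 beats best case 5)
```

Note that neither rule consults any belief about the weather; each operates on the payoff table alone, which is exactly Elster's point about acting "without any ideas whatsoever concerning the weather".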

Austrian physicist Erwin Schrödinger proposed a thought experiment known as Schrödinger's cat to explore the consequences of uncertainty in quantum physics. If the cat is alive, then Schrödinger needs to buy catfood. If the cat is dead, he needs to buy a spade. According to Elster's logic, he might decide to buy both.

At Schrödinger's local store, he is known as an infrequent purchaser of catfood. The storekeeper naturally infers that Schrödinger is a cat-owner, and this inference forms part of the storekeeper's model of the world. What the storekeeper doesn't know is that the cat is in mortal peril. Or perhaps Schrödinger is not buying the catfood for a real cat at all, but to procure a prop for one of his lectures.

Businesses often construct imaginary pictures of their customers, inferring their personal circumstances and preferences from their buying habits. Sometimes these pictures are useful in predicting future behaviour, and for designing products and services that the customers might like. But I think there is a problem when businesses treat these pictures as if they were faithful representations of some reality.

This is an ethical problem as well as an epistemological one. I have a recollection (I can't find the details right now) of a recent incident in which a British supermarket, having inferred that some of its female customers were pregnant, sent them a mailshot that assumed they were interested in babies. But this mailshot was experienced as intrusive and a breach of privacy, especially as some of the husbands and boyfriends hadn't even been told yet.

Instead of trying to get the most accurate picture of which customers are pregnant and which customers aren't, wouldn't it be better to construct mailshots that would be equally acceptable to both pregnant and non-pregnant customers? Instead of trying to accurately sort the citizens of an occupied country into "Friendly" and "Terrorist", wouldn't it be better to act in a way that reinforces the "Friendly" category?
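The alternative proposed here can be sketched as a robustness check rather than a classification: instead of predicting which state each customer is in, keep only the actions that are acceptable in every state. The acceptability scores and threshold below are invented for illustration.

```python
# Hypothetical acceptability scores for two mailshots, under two possible
# states of the customer. The numbers are assumptions, not data.
acceptability = {
    "baby-products mailshot": {"pregnant": 8, "not pregnant": -5},  # intrusive if the inference is wrong
    "general mailshot":       {"pregnant": 6, "not pregnant":  6},  # acceptable either way
}

THRESHOLD = 0  # minimum acceptable score in any state

def robust_choices(acceptability, threshold=THRESHOLD):
    """Keep only the actions that are acceptable under every possible state."""
    return [action for action, scores in acceptability.items()
            if all(score >= threshold for score in scores.values())]

print(robust_choices(acceptability))  # -> ['general mailshot']
```

On this approach the business never needs to decide (or even estimate) who is pregnant; the classification question simply drops out of the decision.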

Situation models are replete with indeterminate labels like these ("pregnant", "terrorist"), but I think it is a mistake to regard these labels as representing some underlying reality. Putting a probability factor onto these labels just makes things more complicated, without solving the underlying problem. These labels are part of our way of making sense of the world; they need to be coherent, but they don't necessarily need to correspond to anything.


Hans said...

If I can predict behavior, doesn't that make my model a faithful representation?

Your worries about making incorrect inferences are valid, but what is the alternative? Unless you can find a perfect predictor, you are stuck with a choice between an imperfect predictor and nothing.

Richard Veryard said...

I don't have a problem with prediction. I just don't think it's necessary to regard the prediction as representing something, faithfully or otherwise.

Schrödinger may buy catfood in the optimistic belief that his cat is still alive. But must we say that his belief either faithfully represents a living cat or unfaithfully represents a dead cat?

Instead of describing knowledge and belief as representing something, we might choose to characterize knowledge and belief as the product of an inquiry process.

Prediction is a kind of belief about the future, and I don't want to bother with a metaphysical representation of the future. If I want to assess the quality of your predictions, I just need to look at how you arrive at these predictions, as well as perhaps counting how many of your past predictions have been lucky.

Hans said...

I think I see what you're getting at.

On the one hand, it is really important to know what you are representing. For models to predict well, reality must follow certain rules. So you are capturing the rules of reality in your model.

On the other hand, the model isn't reality. By definition, it's a summary of reality.

Richard Veryard said...

I'm afraid I don't agree that a model has to be a summary of reality. A model can be Useful and Meaningful without being True, and a model can be True without corresponding to Reality. These are deep philosophical questions, and for most practical purposes I find it perfectly possible to avoid talking about Reality altogether.

Hans said...

I'm not sure I follow. I'm no philosophy expert either. For instance, is a perfect model distinguishable from reality itself? Sounds like a Turing-esque question.

I'm not sure what kinds of models you're referring to, but in the models I'm used to, you have to ensure that properties in the model correspond to reality. I'm talking about stochastic models, differential equations, semantic models. All of that stuff comes with properties that definitely need to be checked against reality...

Sorry if I'm being dense and not getting your point.