Thursday, December 04, 2008

Responding to Uncertainty 3

Even in DC, Authentication is a Guess, writes Gunnar Peterson, who goes on to discuss the security problems faced by a minor politician who can't believe it really is the president-elect on the telephone. 

Gunnar uses the example to illustrate the importance of separating authorization from authentication. In the normal case, authentication involves "piecing together your guess at the reality of the situation and then binding to the principal". 

But risk is not the same as uncertainty, and there are alternative ways of taking risk out of the equation without removing the uncertainty. Sometimes instead of selecting (and binding to) the most likely reality, the best thing for security might be to detect the possibility of impersonation and produce a low-risk response that fits any of the possible realities. 

In other words, you might be authorizing a range of possibilities, and then it is not so critical whether you can reliably authenticate a particular individual within that range. 

So if you don't know whether the caller is a president or a radio prankster, the best thing to do is neither to hang up nor to say something foolish, but to find things to say that fit both possible contexts. And if a caller asks for information about your products, and you suspect it might be a competitor, then you give him a bit of information (in case it is a real customer) but not too much. 
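
To make that concrete, here is a minimal sketch of authorizing a range of possibilities (my illustration, not Gunnar's design): the system grants only the actions that every plausible identity would be entitled to, so whatever it does is low-risk under any of the candidate realities. The identities and permission sets are purely illustrative.

```python
# Minimal sketch: authorize a *range* of plausible principals rather than
# betting on one. An action is allowed only if every candidate identity is
# entitled to it, so the response is low-risk whichever reality holds.
# The identities and permissions here are illustrative, not a real policy.

CANDIDATE_PERMISSIONS = {
    "president_elect": {"take_the_call", "discuss_policy", "exchange_pleasantries"},
    "radio_prankster": {"take_the_call", "exchange_pleasantries"},
}

def safe_actions(candidates):
    """Return only the actions permitted under every plausible identity."""
    permission_sets = [CANDIDATE_PERMISSIONS[c] for c in candidates]
    return set.intersection(*permission_sets) if permission_sets else set()

# Unsure whether the caller is the president-elect or a prankster?
# Then do only what fits both contexts: stay on the line, say nothing foolish.
print(safe_actions(["president_elect", "radio_prankster"]))
# -> {'take_the_call', 'exchange_pleasantries'}
```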

This is similar to my earlier points about responding to uncertainty in complex event processing.

2 comments:

  1. This is indeed an interesting topic.

    Lately I have been doing a lot of geospatial event processing, where we have a number of location-aware reaction rules reacting to GPS location events.

    Normally this works fine. But in some cases it would be very nice to assign a probability to a GPS location event. In reality they all have some error in them, but most of the time it's small enough to be ignored.

    But when the error is big enough to be a problem, I have been thinking that it would be nice to have a "Car inside Zone" rule trigger with a certain probability - a probability which could automatically be derived from the participating events.

    So we could have a rule generate an event "Car inside Zone with 80% probability" instead (see the sketch after the comments).

    Nice? Useful? Just cool? Don't know yet, have to ask some customers first ;)

  2. Do you mean possible error in the location of a known vehicle, or possible error in the identification of a vehicle at a known location, or some combination?

    My concern here is that you may be combining different kinds of uncertainty into a single probability figure. This might make the audit trail difficult to reconstruct. But then I wonder how many of your customers worry about things like audit trails? Obviously there will be some applications where some kind of retrospective transparency may be important.

    The other question is how you tune this kind of system. If it turns out that 80% probability is giving you too many false positives, is there some simple knob for the user to turn the probability up to 90%? How do you set the "right" probability level - by trial and error? Or can the level be adjusted dynamically to achieve some predefined outcome? Or do you build more flexibility into the overall system by having the rule (or a series of rules) generate a series of events at different probability levels (75%, 80%, 85%, 90%) - and then the application can be altered to respond to the "right" one?

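A rough sketch of the rule idea discussed in the comments, assuming a circular zone and a GPS fix whose error is modelled as an isotropic Gaussian (sigma in metres). All the names here (Zone, zone_probability, car_inside_zone_events, the threshold values) are hypothetical illustrations, not any particular CEP product's API: the probability is estimated by Monte Carlo over the GPS error, and the rule emits one event per confidence threshold it clears, so downstream rules can subscribe to whichever level suits them.

```python
# Hypothetical sketch: derive a "Car inside Zone" probability from a GPS fix
# and its error estimate, then emit events at several confidence thresholds.
import math
import random

class Zone:
    def __init__(self, x, y, radius):
        self.x, self.y, self.radius = x, y, radius

def zone_probability(zone, fix_x, fix_y, sigma, samples=10000):
    """Estimate P(true position inside zone) by Monte Carlo over the GPS error."""
    inside = 0
    for _ in range(samples):
        px = random.gauss(fix_x, sigma)
        py = random.gauss(fix_y, sigma)
        if math.hypot(px - zone.x, py - zone.y) <= zone.radius:
            inside += 1
    return inside / samples

def car_inside_zone_events(car_id, zone, fix_x, fix_y, sigma,
                           thresholds=(0.75, 0.80, 0.85, 0.90)):
    """Derive a probability from the participating events and emit one
    'Car inside Zone' event per threshold it clears."""
    p = zone_probability(zone, fix_x, fix_y, sigma)
    return [
        {"event": "CarInsideZone", "car": car_id, "probability": p, "threshold": t}
        for t in thresholds if p >= t
    ]

# Example: a fix 20 m from the centre of a 50 m zone, with 15 m of GPS error.
zone = Zone(0.0, 0.0, 50.0)
for event in car_inside_zone_events("car-42", zone, fix_x=20.0, fix_y=0.0, sigma=15.0):
    print(event)
```

Emitting one event per threshold level puts the tuning "knob" in the subscribing rules rather than inside the probability calculation itself, which is one possible answer to the question about adjusting the "right" level.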