Showing posts with label negation.

Thursday, December 20, 2007

Cancelling Events

Opher Etzion has an interesting post on deleted event, revised event and converse event.

In my comments to his post, I raised the following special case, which I thought was worth a special mention:

Wikipedia defines a composite event in terms of inference. But such an inference can be overturned (cancelled) by additional information, which creates an alternative (and more likely) explanation of the original data.

Let's say a given combination of basic events makes it likely that a car accident has occurred, and this inference triggers some emergency response. The car accident is represented as a composite (inferred) event. But it is possible to discover in a particular instance that what appeared to be a car accident was in fact something else. This discovery (presumably itself a new event) causes the "carAccident" event to be cancelled, and any emergency services to be notified and stood down.
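To make this concrete, here is a minimal sketch of the car-accident scenario. The event names, the inference rule and the `InferenceEngine` class are all illustrative assumptions, not taken from any real event-processing product; the point is simply that a later event can retract a composite event and trigger a stand-down.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str
    data: dict = field(default_factory=dict)

class InferenceEngine:
    """Hypothetical engine: infers composite events from basic ones,
    and allows a later event to overturn (cancel) the inference."""

    def __init__(self):
        self.basic = []    # basic events seen so far
        self.active = {}   # composite events currently believed
        self.log = []      # actions taken

    def publish(self, event):
        self.basic.append(event)
        kinds = {e.kind for e in self.basic}
        # Inference rule: airbagDeployed + suddenStop => carAccident
        if {"airbagDeployed", "suddenStop"} <= kinds and "carAccident" not in self.active:
            self.active["carAccident"] = Event("carAccident")
            self.log.append("dispatch emergency services")
        # New information provides an alternative explanation:
        # cancel the composite event and stand the services down.
        if event.kind == "vehicleTestReport" and "carAccident" in self.active:
            del self.active["carAccident"]
            self.log.append("stand down emergency services")

engine = InferenceEngine()
engine.publish(Event("airbagDeployed"))
engine.publish(Event("suddenStop"))        # composite event inferred here
engine.publish(Event("vehicleTestReport")) # cancellation: it was only a test
```

Note that the cancellation is itself an ordinary event flowing through the engine; the "carAccident" event was correctly inferred from the information available at the time, and is retracted rather than treated as a mistake.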

Indeed, an unstable system might change its mind several times about whether a particular event had happened or not, and this instability would be a cause of concern for a system engineer.

Opher suggested this could be regarded as a form of error, but we need to be careful here because the word "error" can easily be misunderstood. We are talking about a situation where the inference was correct based on the information available at the time - the information may have been incomplete rather than inaccurate. I believe this is known in law as Error Coram Nobis.

Opher also pointed out the lack of support for this concept by current event processing products. Quite so. But the lack of tool support doesn't stop people building systems, it merely makes such systems more difficult and expensive. And I hope that by discussing real business examples of complex event logic, we will encourage further tool development.

Monday, November 26, 2007

Non-Events

Opher Etzion and Tim Bass discuss non-events or absent events. This is something I've discussed here before (Nothing Causes Action).

Opher points out that absence requires a context. His examples involve something not happening within a defined time period. (no financial transaction today, pizza hasn't arrived within 40 minutes). Tim points out that Opher's examples can all be modelled as clock events - the event that triggers action is the end of the period, occurring when the system is in the no-transaction or no-pizza state.
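Tim's clock-event reading can be sketched in a few lines. The class and names below are illustrative (simulated time, no real scheduler): the triggering event is not the absence itself but a clock event that fires while the system is still in the "no pizza" state.

```python
class PizzaWatch:
    """Hypothetical absence detector: a deadline is set when the order
    is placed, and the clock event at the deadline does the work."""

    def __init__(self, order_time, timeout=40):
        self.deadline = order_time + timeout
        self.delivered = False
        self.alerts = []

    def on_delivery(self, t):
        if t <= self.deadline:
            self.delivered = True

    def on_clock(self, t):
        # The end-of-period clock event triggers action only if the
        # system state still records the absence.
        if t >= self.deadline and not self.delivered:
            self.alerts.append("pizza is late: phone the shop")

watch = PizzaWatch(order_time=0)
watch.on_clock(40)   # deadline passes with no delivery event
```

If a delivery event arrives before the deadline, the same clock event fires but finds nothing to do, so no alert is raised.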

But there are more complex cases where the context for the negation is not a precisely defined time period. Consider Lacan's story of the three prisoners.

The prison governor invites three prisoners into his office and shows them three green discs and two red ones. Then he puts a green disc on each prisoner's back. Each can see the disc on the other two prisoners' backs. The first one to deduce the colour of the disc on his own back will be granted his freedom.

The correct deduction appears to depend on the hesitation of the group. "If I had a red disc, then each of the other prisoners would not hesitate to deduce immediately that he was green. Since neither has done so, I must also have a green disc." In other words, one prisoner makes a deduction based on the failure of the other two prisoners to make a deduction. In other words, non-deductions or absent events.

But this failure cannot be a clock event. The prisoners are not watching the clock, they are watching one another.
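The prisoners' reasoning can be made mechanical as a possible-worlds elimination, which is one standard way of modelling this kind of puzzle (it is a sketch of the logic, not part of Lacan's own account). A world is a colour assignment using at most two red discs. Each "round" stands not for a clock tick but for one observed hesitation: if nobody has yet deduced his colour, every world in which somebody *would* have deduced it is eliminated, and the survivors become common knowledge.

```python
from itertools import product

def knows(i, world, worlds):
    """Prisoner i knows his colour iff every surviving world that
    matches what he sees (the other two discs) agrees on his own disc."""
    view = tuple(c for j, c in enumerate(world) if j != i)
    consistent = [w for w in worlds
                  if tuple(c for j, c in enumerate(w) if j != i) == view]
    return len({w[i] for w in consistent}) == 1

def solve(actual):
    # All colour assignments drawn from three green and two red discs.
    worlds = [w for w in product("GR", repeat=3) if w.count("R") <= 2]
    rnd = 0
    while True:
        rnd += 1
        knowers = [i for i in range(3) if knows(i, actual, worlds)]
        if knowers:
            return rnd, knowers
        # Public silence is itself information: drop every world in
        # which some prisoner would already have announced his colour.
        worlds = [w for w in worlds
                  if not any(knows(i, w, worlds) for i in range(3))]
```

With all three discs green, the deduction succeeds only after two rounds of mutual hesitation; if one disc were red, the two green prisoners would get there a round earlier. The hesitations here are the "absent events" doing the causal work.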

Questions of the timing of decisions are critical in a number of event-driven (and non-event-driven) situations. For example, stock markets fluctuate according to a wide range of events and non-events. An investor may buy or sell a particular stock based on the observation that other investors have not been buying or selling that particular stock. In a lot of situations, the best time to make a decision is the day before everyone else comes to the same conclusion.

Wednesday, October 20, 2004

Nothing Causes Action

In a posting entitled Alice in Wonderland, Hugo Rodger-Brown discussed the difficulties within BizTalk of having an alert triggered by the non-arrival of a message. He explained how he had to turn BizTalk inside out to cope with this.

Gregory Bateson pointed out long ago that Nothing can be a cause. "The letter which you do not write can get an angry reply, and the income tax form which you do not fill in can trigger the Inland Revenue boys into energetic action." (I quoted this in my 1992 book on Information Modelling.)

So the non-arrival of a message may convey significant information. (As Bateson puts it, it is "a difference that makes a difference".) Exception actions may be triggered by positive messages or negative ones. In a perfect world, there may sometimes be pure symmetry between the presence of a message saying X and the absence of a message saying not-X.

Example: If my wife is delayed, I may need to collect the children from school. Does it matter whether my action is triggered by a phone call saying she's been delayed, or is inhibited by a phone call saying she's on time – as long as we agree which way round it is?

But what if there is a problem with her mobile phone – perhaps she is out of range or out of battery? A system that relies on positive messages is more robust than one that relies on the absence of messages – although they may be logically, functionally equivalent.

Many business processes rely, at least in part, on unreliable communications (email, voicemail, fax, leaving messages with whoever answers the phone). Business processes generally build in practical safeguards against faulty communication – acknowledging, confirming, chasing, expediting, recorded delivery. These safeguards complicate the workflow – but they are generally thought necessary for business-critical systems.
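One of the safeguards listed above, acknowledge-and-chase, can be sketched as follows. The channel and the retry limit are illustrative assumptions: a message is resent over an unreliable channel until acknowledged, and if no acknowledgement ever arrives the sender escalates rather than relying on the absence of a complaint.

```python
def send_with_ack(message, channel, max_attempts=3):
    """Send over an unreliable channel, chasing up to max_attempts times.
    The channel returns True when a positive acknowledgement comes back."""
    for attempt in range(1, max_attempts + 1):
        if channel(message):
            return f"acknowledged after {attempt} attempt(s)"
    return "escalate: no acknowledgement, chase by other means"

# A deterministic stand-in for a flaky channel: fails twice, then works.
outcomes = iter([False, False, True])
flaky = lambda msg: next(outcomes)
```

The design point is that the loop is driven entirely by positive messages (acknowledgements); the absence of an acknowledgement never silently terminates the process, it only moves it to the next safeguard.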

Will technology one day give us communication that is reliable enough to abandon these complicated safeguards? I don't think so. But we may perhaps look to technology to support these safeguards more efficiently and consistently. This already happens in the world of safety-critical, fault-tolerant and real-time systems, which are generally based on positive messages.

Update (July 2005)

Over on the Usable Security blog, Ka-Ping Yee talks about the Simon Says problem.
A Simon Says problem occurs when the safe course of action requires the user to respond to the absence of a stimulus. ... The “Simon Says” game and login-spoofing attacks both exploit an especially severe form of the Simon Says problem, in which the user is expected to respond to the absence of something with the absence of an action. Such a situation is doubly troublesome for humans.
