In my previous post on Cancelling Events, I pointed out that new information sometimes causes us to revise our opinion as to whether a given event had occurred. Marco Seiriö wants to handle my example by adding probability into the event model. According to Marco, this would require each part of the system to be able to deal with these probabilities.
I can see that probabilistic events could sometimes be useful. However, I am not convinced we need to propagate these probabilities (and the accompanying complexity) throughout the system. I'd prefer to find a way of containing the complexity, so that some parts of the system are presented with a simple binary event statement (either it happened or it didn't) while other parts of the system may be presented with a more complex probabilistic event statement (it might have happened, with probability X%). This is a form of attenuation - it can be regarded as an application of the need-to-know principle.
This attenuation could be managed architecturally by layering - for example we might separate a process coordination layer (which knows about the probabilities) from a process execution layer (which doesn't).
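As a rough sketch of this layering, the coordination layer below tracks the probability, and only passes a simple binary statement down to the execution layer once the probability crosses a threshold. The class names, the threshold value, and the three-valued result (happened / didn't happen / still undecided) are all my own illustrative assumptions, not part of any real event-processing framework.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProbabilisticEvent:
    name: str
    probability: float  # estimated probability the event occurred, 0.0 .. 1.0

class CoordinationLayer:
    """Knows about probabilities; attenuates them for the execution layer."""

    def __init__(self, threshold: float = 0.9):
        # Hypothetical cut-off: above this we assert the event happened,
        # below (1 - threshold) we assert it didn't.
        self.threshold = threshold

    def attenuate(self, event: ProbabilisticEvent) -> Optional[bool]:
        """Return a binary statement for the execution layer:
        True = it happened, False = it didn't, None = not yet decided."""
        if event.probability >= self.threshold:
            return True
        if event.probability <= 1.0 - self.threshold:
            return False
        return None

coordinator = CoordinationLayer(threshold=0.9)
print(coordinator.attenuate(ProbabilisticEvent("crash-detected", 0.95)))  # True
print(coordinator.attenuate(ProbabilisticEvent("crash-detected", 0.02)))  # False
print(coordinator.attenuate(ProbabilisticEvent("crash-detected", 0.50)))  # None
```

The execution layer never sees a percentage; the probabilistic complexity is contained in the coordination layer, which is the need-to-know principle in action.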
There is another problem with probabilities - they may change continuously. If a promised action does not appear, the probability that the other person has forgotten increases with time. In the car accident example, if the driver fails to respond under certain conditions, it becomes increasingly likely (but still not certain) that the driver is dead or unconscious. The emergency response parts of the system may need to respond in real-time or near-real-time to this continuously shifting probability, but we want to decouple this from other parts of the system that do not have this requirement.
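To make this concrete, here is one hypothetical model of how such a probability might drift with elapsed time: a simple exponential curve that rises towards (but never reaches) certainty. The function name, the time constant, and the exponential shape are all illustrative assumptions; a real system would fit its model to actual response data.

```python
import math

def prob_driver_unresponsive(seconds_since_prompt: float, tau: float = 30.0) -> float:
    """Estimated probability that the driver is dead or unconscious,
    given how long they have failed to respond to a prompt.
    Hypothetical exponential model: rises with time, approaches 1.0
    asymptotically - increasingly likely, but still not certain."""
    return 1.0 - math.exp(-seconds_since_prompt / tau)

# The estimate climbs continuously as silence persists.
for t in (0.0, 30.0, 60.0, 120.0):
    print(f"{t:5.0f}s -> {prob_driver_unresponsive(t):.3f}")
```

The point of the decoupling is that only the emergency-response components need to re-evaluate this function continuously; everything else can wait for an attenuated binary statement.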
More fundamentally, probability introduces some algebraic challenges that can be solved for relatively simple examples (and perhaps the car accident example is relatively simple) but that don't scale to more complex examples (I guess I'll have to construct one).