Systems people (including some SOA people and CEP people and BPM people) sometimes talk as if a system were supposed to be a faithful representation of the real world.
This mindset leads to a number of curious behaviours.
Firstly, ignoring the potential differences between the real world and its system representation, treating them as if they were one and the same thing. For example, people talking about "Customer Relationship Management" when they really mean "Management of Database Records Inaccurately and Incompletely Describing Customers". Or referring to any kind of system objects as "Business Objects". Or equating a system workflow with "The Business Process".
Secondly, asserting the primacy of some system ontology because "That's How the Real World Is Structured". For example, the real world is made up of "objects" or "processes" or "associations", therefore our system models ought to be made of the same things.
Thirdly, getting uptight about any detected differences between the real world and the system world, because there ought not to be any differences. Rigid data schemas and obsessive data cleansing, to make sure that the system always contains only a single version of the truth.
Fourthly, confusing the stability of the system world with the stability of the real world. The basic notion of "Customer" doesn't change (hum), so the basic schema of "Customer Data" shouldn't change either. (To eliminate this confusion you may need two separate information models - one of the "real world" and one of the system representation of the real world. There's an infinite regress there if you're not careful, but we won't go there right now.)
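One way to keep the two information models apart is to make the system representation admit that it is only a record, with its own provenance and gaps. A minimal sketch, with hypothetical class and field names:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Conceptual model: what we believe about the real-world customer.
@dataclass
class Customer:
    name: str
    date_of_birth: date

# Representation model: what the system actually recorded, with
# provenance attached, so the two can drift apart without anyone
# pretending the record *is* the customer.
@dataclass
class CustomerRecord:
    customer_id: str
    recorded_name: str
    recorded_date_of_birth: Optional[date]  # may simply be missing
    source: str          # where the data came from
    recorded_on: date    # when it was captured

record = CustomerRecord(
    customer_id="C-001",
    recorded_name="J. Smith",
    recorded_date_of_birth=None,
    source="web-signup-form",
    recorded_on=date(2008, 1, 15),
)
print(record.recorded_date_of_birth is None)  # the record admits its own gaps
```

The point of the second class is not extra fields for their own sake, but that a `CustomerRecord` can be incomplete or wrong without that being a contradiction, whereas a schema that claims to *be* the customer cannot.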
In the Complex Event world, Tim Bass and Opher Etzion have picked up on a simple situation model of complex events, in which events (including derived, composite and complex events) represent the "situation". [Correction: Tim's "simple model" differs from Opher's in some important respects. See his later post The Secret Sauce is the Situation Models, with my comment.] This is fine as a first approximation, but what neither Opher nor Tim mentions is something I regard as one of the more interesting complexities of event processing, namely that events sometimes lie, or at least fail to tell the whole truth. So our understanding of the situation is mediated through unreliable information, including unreliable events. (This is something that has troubled philosophers for centuries.)
From a system point of view, there is sometimes little to choose between unreliable information and basic uncertainty. If we are going to use complex event processing for fraud detection or anything like that, it would make sense to build a system that treated some class of incoming events with a certain amount of suspicion. You've "lost" your expensive camera have you Mr Customer? You've "found" weapons of mass destruction in Iraq have you Mr Vice-President?
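One way to treat a class of incoming events with suspicion, rather than at face value, is to attach a trust score at ingestion instead of a binary accept/reject. A minimal sketch; the event kinds and suspicion rates are purely illustrative assumptions:

```python
from dataclasses import dataclass

# Hypothetical event type carrying a trust score rather than a
# binary accepted/rejected status.
@dataclass
class Event:
    kind: str
    payload: dict
    trust: float = 1.0  # 1.0 = fully trusted, 0.0 = no confidence

# Assumed per-kind base rates of unreliable reports; illustrative only.
SUSPICION_BY_KIND = {
    "insurance_claim": 0.3,   # self-reported losses warrant scepticism
    "sensor_reading": 0.05,
    "audit_log": 0.01,
}

def ingest(kind, payload):
    """Record the event, but attach a trust score instead of believing it outright."""
    suspicion = SUSPICION_BY_KIND.get(kind, 0.1)
    return Event(kind, payload, trust=1.0 - suspicion)

claim = ingest("insurance_claim", {"item": "expensive camera", "status": "lost"})
print(round(claim.trust, 2))  # 0.7 — the claim is recorded but not fully believed
```

Downstream event processing can then weight derived events by the trust of their inputs, so unreliable information and basic uncertainty really are handled the same way.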
One approach to unreliable input is some kind of quarantine and filtering. Dodgy events are recorded and analyzed, and then if they pass some test of plausibility and coherence they are accepted into the system. But this approach can produce some strange effects and anomalies. (This makes me think of perimeter security, as critiqued by the Jericho Forum. I guess we could call this approach "perimeter epistemology". The related phenomenon of Publication Bias refers to the distortion resulting from analysing data that pass some publication criterion while ignoring data that fail this criterion.)
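The quarantine-and-filter approach, and the anomaly it produces, can be sketched in a few lines. The plausibility test here is a toy assumption (a simple amount threshold); the strange effect is that rejected events vanish entirely from later analysis, which is the system analogue of publication bias:

```python
from collections import deque

quarantine = deque()   # dodgy events held for review
accepted = []          # events admitted into the system
rejected_count = 0     # anomaly: filtered events vanish from later analysis

def plausible(event):
    """Toy plausibility test: reported amounts above a threshold are refused."""
    return event.get("amount", 0) <= 10_000

def submit(event):
    quarantine.append(event)

def review():
    global rejected_count
    while quarantine:
        event = quarantine.popleft()
        if plausible(event):
            accepted.append(event)
        else:
            rejected_count += 1  # silently dropped: "perimeter epistemology"

submit({"kind": "claim", "amount": 500})
submit({"kind": "claim", "amount": 50_000})
review()
print(len(accepted), rejected_count)  # 1 1 — downstream analysis never sees the reject
```

Everything inside the perimeter looks clean precisely because the filter threw the awkward cases away, so the system's picture of the world is biased by its own admission criterion.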
In some cases, we are going to have to unpack the simple homogeneous notion of "The Situation" into a complex situation awareness, where a situation is constructed from a pile of unreliable fragments. Tim has strong roots in the Net-Centric world, and I'm sure he could say more about this than I can if he chose.
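As a toy illustration of constructing a situation from unreliable fragments, one can treat each fragment's reliability as a probability and fuse them; the independence assumption and the numbers below are mine, not anything from the Net-Centric literature:

```python
# Combine independent, unreliable reports of the same proposition by
# treating each report's reliability as a probability that it is right.
def fuse(reliabilities):
    """P(at least one report is right), assuming the reports are independent."""
    disbelief = 1.0
    for r in reliabilities:
        disbelief *= (1.0 - r)
    return 1.0 - disbelief

# Three weak fragments pointing at the same situation add up to
# fairly strong awareness of it:
print(round(fuse([0.5, 0.6, 0.4]), 2))  # 0.88
```

The interesting part is what this simple formula leaves out: real fragments are rarely independent, sometimes contradict one another, and sometimes lie in a correlated way, which is exactly why "The Situation" resists being a single homogeneous object.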