Friday, November 21, 2008

Progressive Design Constraints

When I wrote my piece on Post-Before-Processing, I wasn't thinking about how it might apply to the design process. So when I read Saul Caganoff's reply, on Progressive Data Constraints, my first reaction was: well, that's interesting, but completely different. But when I read Saul's piece again more slowly, I started to see a common pattern between designing an engineering system (using CAD) and designing a response to a complex, unstructured set of events (using what first Vickers and then Checkland call an "appreciative system").

In both cases, there is a pitfall known as "jumping to conclusions" - seeking premature closure as a result of an inability to tolerate incompleteness, inconsistency or uncertainty. There is a related pitfall of psychological attachment - being unable to abandon or revise one's earlier decisions. One of the most important skills for the business analyst or systems designer is the ability to throw away his/her first attempt and start again, or to make radical revisions to an existing artefact. (I once built an advanced modelling class with a series of exercises designed to give students opportunities to practise this skill. I think classes like this are still pretty rare.)

There are two apparently opposite approaches to design. According to some design methodologies, we are supposed to start with a vague, generic and all-inclusive concept, and gradually add specific detail and refinement until we have something we can implement. Work-in-progress should always be consistent and horizontally complete - it just lacks vertical completeness. Alternatively, we start with a bunch of conflicting and sometimes incoherent requirements, and try to find a way of satisfying as many of them as possible. Saul's description fits the second of these.
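To make the contrast concrete, here is a rough sketch of the second approach in Python. Nothing here comes from Saul's post; the requirements, weights and candidate designs are invented for illustration. The point is simply that requirements are treated as soft constraints, candidates are scored by how many they satisfy, and the "best so far" remains open to revision.

```
# Illustrative sketch only: treating conflicting requirements as soft
# constraints and searching for a design that satisfies as many as possible.
# The requirements and candidate designs below are invented for illustration.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Requirement:
    name: str
    weight: int                         # relative importance, not all-or-nothing
    satisfied_by: Callable[[Dict], bool]

requirements = [
    Requirement("respond within 200ms", 3, lambda d: d["latency_ms"] <= 200),
    Requirement("audit every request", 2, lambda d: d["audit"] == "full"),
    Requirement("run on commodity hardware", 1, lambda d: d["nodes"] <= 4),
]

def score(design: Dict, reqs: List[Requirement]) -> int:
    """Total weight of the requirements this candidate design satisfies."""
    return sum(r.weight for r in reqs if r.satisfied_by(design))

candidates = [
    {"latency_ms": 150, "audit": "sampled", "nodes": 3},
    {"latency_ms": 250, "audit": "full", "nodes": 3},
    {"latency_ms": 180, "audit": "full", "nodes": 12},
]

# Pick the candidate that satisfies the most (weighted) requirements;
# nothing here is final - a better candidate can displace it later.
best = max(candidates, key=lambda d: score(d, requirements))
print(best, score(best, requirements))
```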

Bringing the topic back to SOA at the end of his post, Saul indicates the relevance of this kind of thinking to SOA design-time governance. Perhaps because of my background in Open Distributed Processing (ODP), I always think of SOA in terms of open distributed systems, whose design is never complete and always open-ended. So design-time governance always spills over into run-time governance - or is it the other way around?
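One way to picture that spillover is a single contract check used in both places: at design time, as a test against example messages in the build, and at run time, as a guard on whatever actually arrives. This is only a hypothetical sketch; the contract fields and messages are invented.

```
# Hypothetical sketch: one contract check reused at design time (as a test
# against example messages) and at run time (as a guard on live traffic).
# The contract fields and messages are invented for illustration.

REQUIRED = {"customer_id": str, "amount": (int, float)}

def violations(message):
    """Return a list of contract violations; an empty list means it conforms."""
    problems = []
    for field, expected_type in REQUIRED.items():
        if field not in message:
            problems.append("missing field: " + field)
        elif not isinstance(message[field], expected_type):
            problems.append("wrong type for field: " + field)
    return problems

# Design-time use: run against example messages in the build pipeline.
assert violations({"customer_id": "c-1", "amount": 9.99}) == []

# Run-time use: the same check guards whatever actually arrives later.
incoming = {"customer_id": "c-2"}
if violations(incoming):
    print("quarantined:", violations(incoming))
```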

If we take the SOA project seriously, we should be building systems and services that will interoperate with third-party systems and services, using unstructured data and complex events from a broad range of heterogeneous sources, to be reused and repurposed in user contexts we haven't yet thought about. So we have to be prepared for "undocumented features" or "anomalies" (or whatever we want to call them) - and these may emerge at any point in the lifecycle.
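One defensive posture, sketched hypothetically below, is the tolerant consumer: extract the fields you understand, record anything unexpected as an anomaly, and keep going. The event shape and field names are invented for illustration.

```
# Hypothetical sketch of a tolerant consumer: take what we understand,
# log what we don't, and keep going. Field names and the event shape
# are invented for illustration.

import logging
from typing import Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("consumer")

EXPECTED_FIELDS = {"order_id", "amount", "currency"}

def consume(event: dict) -> Optional[dict]:
    """Extract the fields we understand; log, rather than reject, the rest."""
    unknown = set(event) - EXPECTED_FIELDS
    missing = EXPECTED_FIELDS - set(event)
    if unknown:
        # An "undocumented feature" from some upstream source: note it,
        # don't fail - it may matter to a later version of this service.
        log.info("unexpected fields %s in event %s", unknown, event.get("order_id"))
    if missing:
        log.warning("cannot process event, missing fields %s", missing)
        return None
    return {k: event[k] for k in EXPECTED_FIELDS}

print(consume({"order_id": "42", "amount": 10, "currency": "EUR", "loyalty_tier": "gold"}))
```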

Of course, in mission-critical or safety-critical systems, there are certain categories of system failure that cannot be permitted. And even for non-critical systems, there is a limit to the amount of buggy behaviour that users will tolerate. But in any kind of complex service-oriented systems engineering, we don't achieve an acceptable level of quality by pretending we can design and test our way to perfection in a single step.
