Saturday, April 19, 2008

SOA Testing 2

Nearly finished my article on SOA testing.

Some people seem to think that SOA testing is all about testing web services – as if SOA testing is the same as JBOWS testing. But I am keen to emphasize other aspects of SOA testing as well.

One of the things I’ve been talking to a few people about is the testing of the service architecture itself. Some people have expressed doubts about this. Surely if there are any flaws in the architecture, these will become apparent when we test the other levels. Or perhaps we can just wait until the system goes live, and fix any architectural problems then.

I don’t agree. If we don’t test architecture, how do we know whether the architects have messed up? If we care about the quality of architecture, we need tests that are specifically designed to determine the correctness and robustness of the architecture, as well as tests to determine whether the intentions of the architects have been correctly realised. For example, we may decide to monitor calls between the layers, to verify that there aren’t any hidden dependencies, or to measure the performance overhead associated with each layer. Or when the architects have specifically designed multiple provision of some resource to avoid a single point of failure, we might monitor the utilization of this resource over a range of conditions to ensure that this aspect of the architecture is successful.
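The hidden-dependency check described above can be sketched as a small automated test. This is a minimal illustration, not a real tool: the layer names, the call log, and the `hidden_dependencies` helper are all hypothetical, standing in for whatever monitoring data your environment actually produces.

```python
# Intended layering, as designed by the architects: each pair means
# "calls from this layer to that layer are permitted".
ALLOWED = {
    ("presentation", "service"),
    ("service", "domain"),
    ("domain", "persistence"),
}

# Inter-layer calls captured by monitoring during a test run,
# recorded as (caller_layer, callee_layer) pairs.
observed_calls = [
    ("presentation", "service"),
    ("service", "domain"),
    ("presentation", "persistence"),  # bypasses the service layer
]

def hidden_dependencies(calls, allowed):
    """Return the observed calls that violate the intended layering."""
    return [call for call in calls if call not in allowed]

violations = hidden_dependencies(observed_calls, ALLOWED)
for caller, callee in violations:
    print(f"hidden dependency: {caller} -> {callee}")
```

A test like this turns an architectural intention into something checkable: if the presentation layer is reaching straight into persistence, the test fails now rather than the flaw surfacing in production.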

I suspect most people don’t bother – they assume that any flaws that don’t surface through other tests don’t matter. But I think this is an optimistic assumption, and system testing should never be based on optimism. Architectural testing may sometimes be difficult and expensive, and would therefore need to be justified against some assessment of business risk, but it should always be considered as an option.

So I’d welcome some comments from readers. Do you carry out tests that are specifically designed to verify and validate the architecture as a whole? If you don’t carry out such tests, is this (a) because you don’t have the time, (b) because you don’t know how, or (c) because you or your sponsors cannot see any value in such tests?


Philip said...

... or (d) because you wouldn't be sure what to test it against, given that you can't test it against everything. Is part of the difficulty here that we don't know how to define the pragmatics of demand?

Richard Veryard said...

Thanks Philip. Of course you are right - if testing is supposed to anticipate every possible demand, then architectural testing can never be complete.

But that's true of other forms of testing as well. The British media attacked British Rail because certain types of engine had not been tested against a certain type of snow - a type of powdery snow that is very rare in the UK. (See Wikipedia.) But if British Rail had decided to ship engines to Switzerland to conduct adequate tests against this type of snow, the media would have attacked this as an unnecessary jaunt. (So BR needs to deal not only with the unpredictable demands of the weather but the unreasonable demands of the media.)

The challenge for testing is not to struggle hopelessly for completeness but to determine which tests are worth doing - in other words, provide enough information to reduce risk by an amount that justifies the cost of the test.

Because we don't know how to define the pragmatics of demand, we don't know what tests are worth doing. But is that really an excuse for not doing any tests at all?