A common reaction to the concept of web-service-based architectures is deep concern about operational integrity. It is primarily for this reason that perhaps the majority of media commentators are advising that web services will not become pervasive for three to five years. However, this perspective is based on an extrapolation of current practice. In the future, outsourced services are more likely to be business services or software services than application services. Software services will rapidly become pervasive in the same timeframe as the current PC model of computing transitions to the web-based model, in which we rent our usage of whatever personal productivity software we choose to use. Business services will be strongly favored over application services because they place the commercial costs and risks alongside the operational responsibility, which overcomes the operational issues.
Suppose I represent an insurance company, and I use a component-based service from another company to help me perform the underwriting. Don't I need to know the algorithm that the other company is using? Suppose that the algorithm is based on factors that I don't believe in, such as astrology? Suppose that the algorithm neglects factors that I believe to be important, such as genetics or genomics? Am I not accepting a huge risk by allowing another company to define an algorithm that is central to my business?
There are three main attitudes to this risk. One attitude, commonly found among civil servants, lawyers and software engineers, is to break encapsulation, crawl all over the algorithm in advance, and spend months testing the algorithm across a large database of test cases. If and when the algorithm is finally accepted and installed, such people will insist on proper authorization (with extensive retesting) before the smallest detail of the algorithm can be changed. The second attitude is denial: impatient businessmen and politicians simply ignore the warnings and delays of the first group.
The third attitude is to use the forces of competition as a quality control mechanism. Instead of insisting that we find and maintain a single perfect algorithm, or kidding ourselves that we've already achieved this, we deliberately build a system that sets up several algorithms for competitive field-testing, a system that is sufficiently robust to withstand the failure of any one algorithm.
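To make that third attitude concrete, here is a minimal sketch, in Python, of one way a consumer might field-test several competing underwriting algorithms behind a single interface while remaining robust to the failure of any one of them. All of the names (Application, UnderwritingService, CompetitiveUnderwriter) and the toy pricing formulas are illustrative assumptions, not anything described in the original article.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
import random


@dataclass
class Application:
    """A hypothetical insurance application to be underwritten."""
    applicant_id: str
    age: int
    sum_insured: float


class UnderwritingService(ABC):
    """Common interface that every competing underwriting service implements."""

    @abstractmethod
    def quote(self, application: Application) -> float:
        """Return an annual premium for the application."""


class ConservativeUnderwriter(UnderwritingService):
    def quote(self, application: Application) -> float:
        # Toy formula, purely illustrative.
        return 0.02 * application.sum_insured + 5.0 * application.age


class AggressiveUnderwriter(UnderwritingService):
    def quote(self, application: Application) -> float:
        # A competing toy formula with different assumptions.
        return 0.015 * application.sum_insured + 3.0 * application.age


@dataclass
class CompetitiveUnderwriter:
    """Routes applications across several services and records their quotes,
    so that no single algorithm is a single point of failure."""
    services: list[UnderwritingService]
    results: dict[str, list[float]] = field(default_factory=dict)

    def quote(self, application: Application) -> float:
        # Try the services in a random order; fall back if one fails.
        for service in random.sample(self.services, len(self.services)):
            try:
                premium = service.quote(application)
            except Exception:
                continue  # this algorithm failed; try a competitor
            self.results.setdefault(type(service).__name__, []).append(premium)
            return premium
        raise RuntimeError("all underwriting services failed")


if __name__ == "__main__":
    router = CompetitiveUnderwriter(
        services=[ConservativeUnderwriter(), AggressiveUnderwriter()]
    )
    app = Application(applicant_id="A-001", age=42, sum_insured=250_000)
    print(router.quote(app))
    print(router.results)  # which algorithm quoted, and what it quoted
```

In practice, the recorded quotes would be replaced by actual claims experience, so that the commercial feedback discussed below, rather than a one-off test suite, decides which algorithms survive.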
The most direct mechanism is a straight commercial one. If the company operating the underwriting algorithm also bears all or some of the underwriting risk, then its commercial success should be directly linked to the "correctness" of the algorithm.
Where this kind of direct mechanism is not available, we need feedback mechanisms that simulate it as closely as possible. Just as the survival of the company using these underwriting services may depend on having access to several competing services, so the survival of the underwriting services themselves may depend on being used by several different insurance companies, with different customer profiles and success criteria. (This reduces the risk that all the customers for your service disappear at the same time, and gives you a better chance to fix problems.)
The bottom line is that I'm buying an underwriting service rather than an underwriting calculation service, a business service rather than an application service. Application service providers (ASPs) will have to rebrand themselves yet again, and present themselves as genuine business process outsourcers.
This relates to my own contention (to be explored in my forthcoming book on the Component-Based Business) that we should focus on business components that deliver business services, rather than merely information or application services.
Extract from the article "Are You Being Served?" (CBDI Forum Journal, July 2000)