Monday, August 13, 2007

Service Ecosystem and Market Forces

One of the problems with a network of services is that the responsibilities, costs, and risks are often in the wrong place.

In this post I'm going to explain what I mean by this statement, outline some of the difficulties, and then make some modest proposals.

The statement is based on a notion of the efficiency of an ecosystem. If there is one service provider and a thousand service consumers, it may be more efficient for the ecosystem as a whole if the service provider includes some particular capability or responsibility within the service, instead of each service consumer having to do this. In addition to economies of scale, there may be economies of governance - for example, reduced costs of managing the service relationship, which rise sharply if the service provider doesn't provide a complete service (in some sense).
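The efficiency argument can be sketched with a toy cost model. All the numbers and variable names below are illustrative assumptions of mine, not figures from any source; the point is only the shape of the comparison.

```python
# Toy model: where should a capability live in a service ecosystem?
# All figures are hypothetical, chosen only to illustrate the argument.

N_CONSUMERS = 1000

# Cost for the single provider to build the capability once,
# shared across the whole ecosystem.
provider_build_cost = 50_000

# Cost for each consumer to build (a duplicate of) it themselves,
# plus the per-relationship governance overhead of managing an
# incomplete service.
consumer_build_cost = 2_000
governance_overhead = 500  # per consumer

cost_if_provider_does_it = provider_build_cost
cost_if_consumers_do_it = N_CONSUMERS * (consumer_build_cost + governance_overhead)

print(f"Provider absorbs capability: {cost_if_provider_does_it:,}")   # 50,000
print(f"Each consumer duplicates it: {cost_if_consumers_do_it:,}")    # 2,500,000
```

Under these (made-up) numbers the ecosystem as a whole is fifty times better off when the provider absorbs the capability, even though the provider alone pays more - which is exactly why, absent regulation or market pressure, the provider may decline to do so.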

One important application of this idea is in security, risk and liability. There is a very good discussion of this in the recent British House of Lords Science and Technology Committee Report into “Personal Internet Security”, which specifically addresses the question of whether ISPs and banks should take greater responsibility for the online security of their customers.

"A lot of people, notably the ISPs and the Government, dumped a lot of the responsibility onto individuals, which neatly avoided them having to shoulder very much themselves. But individuals are just not well-informed enough to understand the security implications of their actions, and although it’s desirable that they aren’t encouraged to do dumb things, most of the time they’re not in a position to know if an action is dumb or not." [via LightBlueTouchpaper]
In other words, the responsibility should be placed with the player who has (or may reasonably be expected to have) the greatest knowledge and power to do something about it. In many cases, this is the service provider. Some of us have been arguing this point for a long time - see for example my post on the Finance Industry View of Security (June 2004).

Similar arguments may apply to self-service. When self-service is done well, it provides huge benefits of flexibility and availability. When self-service is done poorly, it merely imposes additional effort and complexity. (Typical example via Telepocalypse). Some service providers seem to regard self-service primarily as a way of reducing their own costs, and do not seem much concerned about the amount of frustration experienced by users. (And this kind of thing doesn't just apply to end-consumers - similar considerations often apply between business partners.)

But it's all very well saying that the service provider ought to do X and the service consumer ought to do Y. What if there is no immediate incentive for the service provider to adopt this analysis? There are two likely responses.
  1. "We don't agree with your analysis. Our analysis shows that the service consumer ought to do X."
  2. "We agree it might be better if service providers always did X. But our competitors aren't doing X, and we don't want to put ourselves at a disadvantage."
More fundamentally, there may be a challenge to the possibility of making any prescriptive judgements about what ought to happen in a complex service ecosystem. This challenge is based on the assertion that such judgements are always relative to some scope and perspective, and can easily be disputed by anyone who scopes the problem differently, or takes a different stakeholder position.

Another fundamental challenge is based on the assertion that in an open competitive market, the market is always right. So if some arrangement is economically inefficient, it will sooner or later be replaced by some other arrangement that is economically superior. On this view, regulation can only really achieve two things: speed this process up, or slow it down.

But does this mean we have to give up architecture in despair - simply let market forces take their course? One of the essential characteristics of an open distributed world is that there is no central architectural authority. Each organization within the ecosystem may have people trying to exercise some architectural judgement, but the overall outcome is the result of complex interplay between them.

How this interplay works, whether it is primarily driven by economics or by politics, is a question of governance. We need to spell out a (federated?) process for resolving architectural questions in an efficient, agile and equitable manner. This is where IT governance looks more than ever like town planning.


The House of Lords Science and Technology Committee Report into “Personal Internet Security” was published on August 10th 2007 (html, pdf). Richard Clayton, who was a specialist adviser to the committee, provides a good summary on his blog. Further comments by Bruce Schneier and Chris Walsh.
