
Saturday, April 08, 2017

Another Update on Deconfliction

As the situation in Syria goes from worse to worser, the word "deconfliction" has reappeared in the press. On Friday, following a chemical attack on the Syrian population apparently by the Syrian government, the USA bombed a Syrian government airbase.

 "Russian forces were notified in advance of the strike using the established deconfliction line. US military planners took precautions to minimize risk to Russian or Syrian personnel located at the airfield," said a Pentagon spokesperson.

A few hours later, the Russian Foreign Ministry announced it was suspending the deconfliction agreement, accusing the Americans of "a gross, obvious and unwarranted violation of international law".

The normal purpose of deconfliction is to avoid so-called "friendly fire". But in the case of the deconfliction line in Syria, a more practical objective would be to avoid minor incidents that might escalate into major war. (Anne McElvoy quotes a senior former British commander in Iraq talking about the jeopardy of the next crucial months in Syria: "powers tripping over each other – or America hitting the Russians by accident".) We might fondly imagine that the Pentagon and the Russian Foreign Ministry still share this objective, and will continue to share a limited amount of tactical information for that purpose, despite public disavowals of coordination. Deconfliction as minimum viable coordination.

Much less serious, and therefore more entertaining, is the "friendly fire" that has meanwhile broken out within the White House. Gun metaphors abound (cross-hairs, opened fire). Successful businessmen understand the need to establish clear division of responsibilities and loose coupling between different executives - otherwise everyone needs to consider everything, and nothing gets done. But this is not a simple matter - excessive division of responsibilities results in organizational silos. Large organizations need just enough coordination - in other words, deconfliction. It is not yet clear whether President Trump understands this, or whether he thinks he can follow President Roosevelt's approach to "creative tension".



Bethan McKernan, Syria air strikes: US 'warned Russia ahead of airbase missile bombardment' (Independent, 7 April 2017 11:42)

May Bulman, US air strikes in Syria: Russia suspends agreement preventing direct conflict with American forces (Independent, 7 April 2017 15:39)

Matt Gertz, Breitbart takes on Jared Kushner: Steve Bannon is shielded as Trump’s son-in-law is in the crosshairs (Salon, 6 April 2017)

Matt Gertz, To Defend Bannon, Breitbart Has Opened Fire On The President's Son-In-Law (Media Matters, 6 April 2017)

Anne McElvoy, Washington is confused by Trump’s act. What became of America First? (Guardian, 9 April 2017)

Reuters, Kushner and Bannon agree to 'bury the hatchet' after White House peace talks (Guardian, 9 April 2017)


Related Posts

What is Deconfliction? (March 2008)
Update on Deconfliction (November 2015)
The Art of the New Deal - Trump and Intelligence (February 2017)

Wednesday, November 18, 2015

Update on Deconfliction

The obscure word #deconfliction has started to appear in the news, referring to the coordination or lack of coordination between American and Russian operations in the Middle East, especially Syria.

The Christian Science Monitor suggests that the word "deconfliction" sounds too cooperative, and quotes the New York Times.

“Defense Secretary Ashton B. Carter sharply took issue with suggestions, particularly in the Arab world, that the United States was cooperating with Russia, and he insisted that the only exchanges that the Pentagon and the Russian military could have on Syria at the moment were technical talks on how to steer clear of each other in the skies above the country.”

But that's exactly what deconfliction is - "how to steer clear of each other" - especially in the absence of tight synchronization and strong coordination.

The Guardian quotes Gary Rawnsley, professor of public diplomacy at Aberystwyth University, who says such jargon is meaningless and is designed to confuse the public. But I think this is unfair. The word has been used within military and other technical circles for many decades, with a fairly precise technical meaning. Obviously there is always a problem (as well as a risk of misunderstanding) when technical jargon leaks into the public sphere, especially when used by such notorious obfuscators as Donald Rumsfeld.

In the current situation, the key point is that cooperation and collaboration require something more like a dimmer switch rather than a simple on-off switch. The Americans certainly don't want total cooperation with the Russians - either in reality or in public perception - but they don't want zero cooperation either. Meanwhile Robbin Laird of SLD reports that the French and the Russians have established "not only deconfliction but also coordinated targeting ... despite differences with regard to the future of Syria". In other words, Franco-Russian coordination going beyond mere deconfliction, but stopping short of full alignment.

Thus the word "deconfliction" actually captures the idea of minimum viable cooperation. And this isn't just a military concept. There are many business situations where minimum viable cooperation makes a lot more sense than total synchronization. We could always call it loose coupling.



Helene Cooper, A Semantic Downgrade for U.S.-Russian Talks About Operations in Syria (New York Times, 7 October 2015)

Jonathan Marcus, Deconflicting conflict: High-stakes gamble over Syria (BBC News, 6 October 2015)

Robbin Laird, The RAF Unleashed: The UK and the Coalition Step up the Fight Against ISIS (SLD, 6 December 2015)


Ruth Walker, Feeling conflicted about deconfliction (Christian Science Monitor, 22 October 2015)

Matthew Weaver, 'Deconflict': buzzword to prevent risk of a US-Russian clash over Syria (Guardian 1 October 2015)

Ben Zimmer, In Conflict Over Russian Role in Syria, ‘Deconfliction’ Draws Critics (Wall Street Journal, 9 October 2015)

More posts on Deconfliction

Updated 7 December 2015 

Monday, May 10, 2010

Differentiation and Integration

In my post on Business Design Choices, I suggested that the real challenge for business architecture was to appreciate the economic and social impact of structure. I identified some advanced business design choices where business architecture has a useful role to play, and where some IT people might have some relevant aptitude. In this post, I am going to expand on one of these.


Where does standardization make sense, and where should requisite variety be deployed? This question goes back to the work of Lawrence and Lorsch (L2), and has recently been rediscovered (in slightly different terms) by Ross, Weill and Robertson (RWR).

In their book Enterprise Architecture as Strategy, RWR define something they call an Operating Model, with two independent dimensions, business process standardization and integration. "Although we often think of standardization and integration as two sides of the same coin, they impose different demands. Executives need to recognize standardization and integration as two separate decisions." (p27)

Many people in the IT world take for granted that standardization (reduction in variety) is a good thing.  RWR acknowledge the benefits of standardization not only in terms of throughput and efficiency, but also predictability. However, they point out the potential downside of standardization both as a state (standardized processes limit local innovation) and as a state-change (politically difficult and expensive to rip out and replace perfectly good and occasionally superior systems and processes).

Integration, which RWR define primarily in terms of data sharing, is also assumed to be a good thing. RWR identify the benefits of integration in terms of efficiency, coordination, transparency and agility, and acknowledge the challenge of integration as a state-change in terms of "difficult, time-consuming decisions".

The two dimensions of standardization and integration produce a two-by-two matrix as follows.




Operating Model Quadrants (Adapted by Clive Finkelstein from Figure 2.3 of “Enterprise Architecture as Strategy”) - Source: The Enterprise Newsletter #38.
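In case the figure doesn't render, the four operating models can also be captured in a small lookup, keyed by the two separate decisions RWR identify. This is just a sketch, assuming the standard quadrant labels from the book:

```python
# The four operating models from "Enterprise Architecture as Strategy",
# keyed by (process standardization, integration). Labels assume the
# standard quadrants as given in the book.

OPERATING_MODELS = {
    ("low", "low"): "Diversification",
    ("low", "high"): "Coordination",
    ("high", "low"): "Replication",
    ("high", "high"): "Unification",
}

def operating_model(standardization, integration):
    """Look up the operating model for a pair of strategic decisions."""
    return OPERATING_MODELS[(standardization, integration)]
```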


Although RWR are careful to present a contingency theory of enterprise strategy, in which any of these four operating models may be strategically valid, the conventional rhetoric of the two-by-two matrix places the preferred strategy into the top right quadrant - thus Unification. Many enterprise architects from a traditional IT background may feel most comfortable with the Unification quadrant. (There may be a process of idealization here - see Schwartz.) Indeed, RWR go on to present an Enterprise Architecture Maturity Model, which starts with the easy bits of enterprise architecture (where things are already Unified) and ends with the challenging bits (where things are or should be Diversified).



L2 also identify two dimensions, which they call differentiation and integration. They tend to see increasing differentiation as a healthy response to increasing opportunity and complexity - a growing organization in a growing market - "faster change and greater heterogeneity" (p 235). Differentiation is not merely a difference in working practices, but includes at its core "the difference in cognitive and emotional orientation among managers in different functional departments" (p11).

Integration is then the administrative response to this increasing differentiation - how to maintain the overall coherence and viability of the enterprise as a whole. Integration is defined as "the quality of the state of collaboration that exists among departments that are required to achieve unity of effort by the demands of the environment" (p11). The topic of integration covers both the organizational state and the mechanisms to produce and support this state. For L2, the challenges of integration are not primarily IT related (data sharing) but personal and political (conflict resolution).

L2 don't offer a two-by-two matrix of Differentiation and Integration in their 1967 book (I guess the two-by-two matrix hadn't then established itself as an essential consultancy tool), but if they had, then presumably the top-right quadrant would be High Differentiation, High Integration. This very roughly corresponds to what RWR call Coordination.

But in any case, a two-by-two matrix would be misleading. The point isn't to choose whether you have differentiation and integration or not; the point is to determine how much and what kinds of differentiation and integration you need. L2 are explicit in their support of contingency theory (different strategies being appropriate for different organizations depending on environmental factors). The more complex and dynamic the environment, the greater the need for differentiation and integration.

If we are to take business architecture seriously as a discipline, then this kind of question is clearly central. In his post on Contingency Theory and Enterprise Architecture, Andy Blumenthal argues that contingency theory entails keeping your options open, but sometimes it just means making appropriate strategic choices. If the slogan "enterprise architecture as strategy" is to mean anything at all, then surely this is what it means.



Lawrence, P., and Lorsch, J., "Differentiation and Integration in Complex Organizations", Administrative Science Quarterly 12 (1967) 1-30. (see summary here)

Paul R. Lawrence and Jay W. Lorsch, Organization and Environment, Managing Differentiation and Integration. Harvard University 1967.

Jeanne W. Ross, Peter Weill and David C. Robertson, Enterprise Architecture as Strategy. Harvard Business School Press 2006.

Howard S. Schwartz, Narcissistic Process and Corporate Decay – The Theory of the Organizational Ideal. 1990. See his paper On the Psychodynamics of Organizational Disaster (Columbia Journal of World Business, Spring 1987) See also Joe Bormel's blogpost on Narcissism, Oxygen and HCIT Vision (June 2009).


Related posts

Business Design Choices (January 2010)
EA Effectiveness and Process Standardization (August 2012)

Saturday, November 28, 2009

Complexity and Power

@RSessions kindly gave me a quick overview of his SIP methodology, and how he calculates the complexity of a system of systems, based on the number of elements in each system and the number of connections between systems. The internal complexity of each system increases in a non-linear manner with the number of elements, and the external complexity increases with the number of connections between the systems, so the trick is to find a structure that optimizes the overall complexity.
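The optimization trick can be sketched in a few lines. The quadratic cost functions below are illustrative stand-ins for demonstration, not the actual SIP complexity formulas:

```python
# Illustrative sketch of the partition trade-off described above. The
# quadratic cost functions are assumptions for demonstration only, not
# the actual SIP complexity formulas.

def total_complexity(partition, connections):
    """partition: element count for each system in the system-of-systems;
    connections: number of inter-system connections."""
    internal = sum(n ** 2 for n in partition)  # non-linear in elements per system
    external = connections ** 2                # non-linear in connections
    return internal + external

# One monolith of 12 elements, versus three systems of 4 elements each
# joined by 3 connections
monolith = total_complexity([12], connections=0)          # 144
partitioned = total_complexity([4, 4, 4], connections=3)  # 48 + 9 = 57
```

Even with these made-up cost functions, the point comes through: a good partition trades a little external complexity for a large reduction in internal complexity, and the architect's job is to find the structure that optimizes the total.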

Obviously we have to be clear as to what counts as an element (for example functions), and what counts as a connection. Using the SIP lens, it is possible to see how certain architectural styles (including those popular in the SOA world, such as hub-and-spoke or layered) only deliver simplicity (and the benefits of simplicity) if we can assume that only certain kinds of connection are significant. Roger's view is that this assumption is unwarranted and invalid.

In general, the so-called functional requirements are associated with the elements and the logical connections between them. In my view, architects also need to pay attention to the nature of the connections (coupling) because these will have important consequences on the structure and behaviour of the system as a whole. For example, synchronous versus asynchronous. At present, Roger's complexity calculations don't differentiate between different kinds of connection, so it would be interesting to investigate the costs and risks associated with different kinds of connections, to see how much difference it could make. 

Roger's primary interest is in IT systems, but the same principles would appear to apply to processes and organizations. If you are running a factory, you have an architectural choice about the connection between say the moulding shop and the paint shop. With an asynchronous flow you have two loosely coupled operations separated by a buffer of work-in-progress; with a synchronous flow you have two tightly-coupled operations connected on a just-in-time basis. The former is a lot easier to manage, but it has an overhead in terms of inventory cost, storage cost, increased elapsed time, slower response to changes in demand, and so on. The latter may be more efficient under certain conditions, but it can be more volatile and the impact is much greater when something goes wrong anywhere in the process.
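The factory example can be made concrete with a toy simulation, assuming made-up numbers and a temporary paint-shop outage; nothing here models a real production line:

```python
from collections import deque

# Toy simulation of the factory example above: a moulding shop feeding a
# paint shop, with the paint shop down for some time steps. The numbers
# are illustrative assumptions.

def run(buffered, downtime_steps, steps=10):
    """Return units moulded over the run, given paint-shop downtime."""
    buffer = deque()
    moulded = 0
    for t in range(steps):
        paint_up = t not in downtime_steps
        if buffered:
            buffer.append("unit")  # loose coupling: always produce into the buffer
            moulded += 1
            if paint_up and buffer:
                buffer.popleft()   # paint shop drains the buffer when running
        else:
            if paint_up:           # tight coupling: mould only when paint can accept
                moulded += 1
    return moulded
```

With a three-step outage, the buffered run keeps moulding throughout (at the cost of work-in-progress piling up), while the just-in-time run stalls the moulding shop for exactly as long as the paint shop is down.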


Intuitively, there seems to be a difference in complexity between these two solutions. The first is simpler, because the connection between the two systems is weaker; the second is more complex. With greater complexity comes greater power but also greater risk. Surely this is exactly the kind of architectural trade-off that enterprise architects should be qualified to consider. Roger's SIP methodology does give the architect a very simple lens to try and understand system-of-system complexity. Not everyone agrees with Roger's definition of complexity, and we can find some radically different notions of complexity for example in the Cynefin world, but at least Roger is raising some important issues. The EA world certainly needs to pay a lot more attention to questions like these.

Tuesday, March 25, 2008

What is Deconfliction?

Deconfliction is used in various engineering and management fields. It is essentially a command-and-control concept, and means designing systems and operations to reduce conflict and interference.

For example, airspace deconfliction means having enough distance between planes so they don't hit each other. RF deconfliction means avoiding radio interference between neighbouring frequencies. In a military context, deconfliction means keeping units or missions apart to reduce the likelihood of so-called friendly fire.

In many situations, the only alternative to deconfliction is devoting a lot more resources to coordination. If two units are in constant communication, then they can operate closer together without increasing the risk. But there is a trade-off here: an adequate level of coordination often makes high demands on people and technology, and isn't always practical.

But deconfliction also typically involves some compromise - reducing the frequency or density of activity, which may reduce efficiency or effectiveness. Better forms of coordination, supported by better technology, may improve the trade-off here - allowing more complex missions to be undertaken more efficiently and reliably, with less deconfliction.

(At least that's the correct use of the term. In some places, however, the term "Deconfliction" is confusingly used to mean "Deduplication" - removing duplicate entries from a list, or recognizing repeated events - notably in the Common Malware Enumeration (CME) process maintained by Mitre. I think this is an incorrect usage of the term and should be discouraged. In fact, Mitre has published many other papers that use the term correctly. Meanwhile, other people seem to think deconfliction is a fancy name for conflict resolution.)

On this blog, I am primarily interested in deconfliction in the context of business organizations and information systems - loosely coupled organizations with loosely coupled computer systems, following a "service-oriented" paradigm. For more posts on deconfliction in this context, please select the deconfliction label.

Saturday, January 19, 2008

Technological Perfecta

There are several technologies that might work well together, indeed they certainly should work well together. At various times in this blog, I've talked about the potential synergies between (i) SOA and Business Intelligence, (ii) SOA and Business Process Management, and (iii) SOA/EDA and Complex Event Processing. The third of these synergies is currently getting some attention, following some enthusiastic remarks by Jerry Cuomo, WebSphere CTO (see Rich Seeley and Joe McKendrick).

All four together would be amazing, but a lot of organizations aren't ready for this. Moreover each technology has its own set of tools and platforms, and its own set of disciplines and disciples.

In Betting on the SOA Horse, Tim Bass describes this potential synergy using the language of gambling - exacta and trifecta. I'm not very familiar with this language, but what I think this means is that you only win the bet if the horses pass the post in the correct sequence. Tim writes:
"Betting on horses is a risky business. Exactas and trifecta have enormous payouts, but the odds are remote."

In On Trifecta and Event Processing, Opher Etzion disagrees with this metaphor. He argues that these technologies are mutually independent (he calls them "orthogonal"). If he is correct, this would have three consequences: (i) flexibility of deployment - you can implement and exploit them in any sequence; (ii) flexibility of benefit - you can get business benefits from any of them in isolation, and then additional benefits if and when they are all deployed together; and therefore (iii) considerably lower risk.

My position on this is closer to Opher's. I think there are some mutual dependencies between these technologies, but they are what I call soft dependencies. P has a hard dependency on Q if Q is necessary for P. Whereas P has a soft dependency on Q if Q is desirable for P.
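The distinction can be sketched as a simple sequencing exercise. The technology names below just echo the discussion; the greedy scheduler is a minimal illustration, assuming no cycles among hard dependencies:

```python
# Sketch of sequencing a technology programme with hard and soft
# dependencies. Hard edges must be honoured; soft edges are preferences
# that can be dropped (deconflicted) rather than block progress.
# Assumes no cycles among hard dependencies.

def schedule(items, hard, soft):
    """Greedy ordering: an item is ready once all its hard prerequisites
    are done; among ready items, prefer fewest unmet soft prerequisites."""
    done, order = set(), []
    while len(order) < len(items):
        ready = [i for i in items
                 if i not in done and hard.get(i, set()) <= done]
        ready.sort(key=lambda i: len(soft.get(i, set()) - done))
        done.add(ready[0])
        order.append(ready[0])
    return order

# CEP and BPM work better on top of SOA, but don't strictly require it
items = ["SOA", "BI", "BPM", "CEP"]
order = schedule(items, hard={}, soft={"BPM": {"SOA"}, "CEP": {"SOA"}})
```

Because every dependency here is soft, any ordering would be valid; the scheduler merely prefers to satisfy the soft dependencies when it can, which is exactly the flexibility (and lower risk) that soft dependencies permit.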

In planning a technology change programme, it is very useful to recognize soft dependencies, because it permits some deconfliction between different elements. Deconfliction here means forced decoupling, understanding that the results may be sub-optimal (at least initially), but accepting this in the interests of getting things done.

In a perfect world, we might want to deploy all four technologies together, or in a precisely defined sequence. But pragmatism suggests we don't bet on the impossible or highly improbable. The challenge for the technology architect is to organize a technology portfolio to get the best balance of risk and reward. This is not primarily about comparing the features of different products, but about understanding the fundamental structural principles that allow these technologies to be deployed in a flexible and efficient manner.

Discussion continues: Technological Perfecta 2

Tuesday, April 11, 2006

Loose Coupling 2

Is loose coupling a defining characteristic of the service-based business and/or a core principle of SOA? ZDNet's analysts have been producing a set of IT commandments (13 at the last count), and Joe McKendrick's latest contribution is Thou Shalt Loosely Couple.

Joe quotes John Hagel's definition of loose coupling, which refers to reduced interdependencies between modules or components, and consequently reduced interoperability risk. Hagel clearly intends this definition to apply to dependencies between business units, not just technical artefacts. I think this is fine as far as it goes, but is not precise enough for my taste.

In his post The Developer's View of SOA: Just adding complexity?, Ron Ten-Hove (Sun Microsystems) defines loose coupling in terms of knowledge - "the minimization of the "knowledge" a service consumer has of a service provider it is using, and vice versa".

(It is surely possible to link these defining notions - interoperability and interdependency, risk and knowledge - at a deep level, but I'm not going to attempt it right now.)

I want to have a notion of loose coupling that applies to sociotechnical systems of systems - and therefore needs to cover organizational interdependencies and interoperability as well as technical. I have previously proposed a definition of Loose Coupling based on Karl Weick's classic paper on loosely coupled organizations.

The trouble with proclaiming the wonders of loose coupling is that it sounds as if tight coupling was just a consequence of stupid design and/or stupid technology. It fails to acknowledge that there are sometimes legitimate reasons for tight coupling.

Ron Ten-Hove puts forward a more sophisticated argument for loose coupling. He acknowledges the advantages of what he calls a mixed service model, namely that "it allows for creation of a component model that combines close-coupling and loose-coupling in a uniform fashion". But he also talks about the disadvantages of this model, in terms of reduced SOA benefits and increased developer complexity, at least with the current technology.

Loose coupling is great, but it is not a free lunch. It is not simply a bottom-up consequence of the right design on the right platform. Sometimes loose coupling requires a top-down forcing-apart. I think the correct word for this top-down forcing-apart is deconfliction, although when I use this word it causes some of my colleagues to shudder in mock horror.

Deconfliction is a word used in military circles, to refer to the active principle of making one unit independent of another, and this will often include the provision of redundant supplies and resources, or a tolerance of reduced utilization of some central resources. Deconfliction is a top-down design choice.

Deconfliction is an explicit acceptance of the costs of loose coupling, as well as the benefits. Sometimes the deconflicted solution is not the most efficient in terms of the economics of scale, but it is the most effective in terms of flexibility and interoperability. This is the kind of trade-off that military planners are constantly addressing.


Sometimes coupling is itself a consequence of scale. At low volumes, a system may be able to operate effectively in asynchronous mode. At high volumes, the same system may have to switch to a more synchronous mode. If an airport gets two incoming flights per hour, then the utilization of the runway is extremely low and planes hardly ever need to wait. But if the airport gets two incoming flights per minute, then the runway becomes a scarce resource demanding tight scheduling, and planes are regularly forced to wait for a take-off or landing slot. Systems can become more complex simply as a consequence of a change in scale.
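The airport example can be quantified with the standard M/M/1 queueing formula; the service rate chosen below (2.2 runway movements per minute) is an illustrative assumption, not real airport data:

```python
# The runway example made quantitative with the standard M/M/1 queue:
# mean wait Wq = rho / (mu * (1 - rho)), where rho = lam / mu.
# The service rate (2.2 movements per minute) is an illustrative
# assumption, not real airport data.

def mean_wait(lam, mu):
    """Mean time spent waiting for the runway (rates per minute)."""
    rho = lam / mu
    assert rho < 1, "arrival rate must stay below service rate"
    return rho / (mu * (1 - rho))

quiet = mean_wait(lam=2 / 60, mu=2.2)  # two flights per hour: well under a second
busy = mean_wait(lam=2, mu=2.2)        # two flights per minute: about 4.5 minutes
```

The non-linearity is the point: the same runway, at thirty times the traffic, produces waits hundreds of times longer - and forces the tight scheduling described above.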

(See my earlier comments on the relationship between scale and enterprise architecture: Lightweight Enterprise.)

In technical systems, loose coupling carries an overhead - not just an operational overhead, but a design and governance overhead. Small-grained services may give you greater decoupling, but only if you have the management capability to coordinate them effectively. In sociotechnical systems, fragmentation may impair the effectiveness of the whole, unless there is appropriate collaboration.

In summary, I don't see loose coupling as a principle of SOA. I prefer to think of it as a design choice. I think it's great that SOA technology gives us better choices, but I want these choices to be taken intelligently rather than according to some fixed rules. SOA entails just-enough loose coupling with just-enough coordination. What is important is getting the balance right.

Saturday, March 25, 2006

Lightweight Enterprise

One of the interesting divisions of opinion at the SPARK workshop (and surfacing afterwards in the blogs of some of the participants) was about something we might call the weight of the enterprise.

Dare Obasanjo asserts My Website is Bigger Than Your Enterprise. And Jeff Schneider retorts My Enterprise Makes Your Silly Product Look Like A Grain of Sand. So who is right?

The basic issue here seems to be the relative amounts of complex engineering required to produce enterprise-grade software versus web-grade software. Dare (and some other participants at the SPARK workshop, including the guys from mySpace) are producing web software with staggering user volumes - much greater than most enterprises have to deal with.

Dare is scornful of enterprise architects, who (he says) "tend to seek complex solutions to relatively simple problems". He and David Heinemeier Hansson are particularly rude about James McGovern.

But as Jeff points out, "most people who have never seen a large enterprise architecture have no concept of what it is". CIOs and enterprise architects generally regard enterprise software (their own realm) as much more challenging than web software, involving much higher levels of complexity, and much stricter "non-functional" requirements (security, robustness, and so on). Are they right, or is this just narrow self-interest?

Size, complexity, weight - these are basic architectural concepts. Part of the difficulty with this debate is that we simply don't have an agreed language for discussing architecture properly. In my opinion, IT education is very deficient in this area. (See Footnote 1.)

Enterprise Mashup

One way of thinking about enterprise mashups is that they provide a way of achieving enterprise-grade systems without enterprise-heavy engineering. Let me give an example of something I regard as a business mashup. My example doesn't involve Ruby, it doesn't involve web services, and many people might say it doesn't even count as SOA. But it has always been my favourite example of the "plug-and-play" business that I describe in my book on the Component-Based Business.

Tesco Personal Finance (TPF) is an example of what I would now regard as a business mashup. It combines (mashes together) banking services (provided by the Royal Bank of Scotland) with retail services (provided by Tesco).

From Tesco's perspective, this mashup provides an extremely lightweight way of entering the banking business. In the old days, if you wanted to open a bank, you had to collect a large amount of gold, which you had to deposit with the banking regulators to satisfy them of your financial stability. Then you had to open a number of branches, and employ a small army of clerks and managers. This represented a considerable investment and risk - which meant that the existing banks could earn high profits behind high entry barriers.

In contrast, TPF represented for Tesco a low-cost, low-risk way of entering the banking industry. Simply plug in some external banking services (which already satisfy all the necessary banking regulations), and a humble retailer can become a bank overnight.

SOA

Although the Tesco Personal Finance initiative predated the SOA and Web 2.0 technologies we have today, we can now interpret it as a precursor or harbinger of these technologies - almost a non-technical proof-of-concept. (See Footnote 2.)

SOA enables the lightweight enterprise. It does this not through services but through architecture. (That, in my view, was the primary justification of SPARK.)

The challenge of architecture is to deconflict the enterprise software space - to create a genuine separation of concerns. This is easier said than done, and it requires some complex architectural thinking. One of the products of this thinking may be platforms on which developers may use small-scale and simple development techniques to produce large-scale and complex solutions.

See my recent posts on the Business Stack (SPARK Workshop 2) and Enterprise Mashups and Situated Software.


Footnote 1 - I have always been critical of university IT courses that teach students to write small programs, and fail to teach them the differences between small stand-alone programs and large interconnected suites of programs. If you go to architecture school, you may never build anything larger than a garden shed with your own hands. But you will have looked at skyscrapers and understood the relevant design and construction principles. You won't qualify as an architect without understanding the scale differences between garden sheds and skyscrapers. But in IT there is no such educational requirement. Vast numbers of people get university degrees in IT-related subjects, and even fancy job titles including the word "architect", without having a clue about effects of scale on software.

Footnote 2 - Sometimes technology seems to make new things possible. But we often find that these things were possible all along, they were just too difficult or expensive or risky. In the 1950s, Stockhausen was producing music that predated the modern synthesizer - and it took him months to produce music that later composers would produce with a few quick knob-twiddles. Some might argue that Stockhausen's achievement was all the greater because of his lack of tools. Similarly, many Renaissance painters might have been able to produce their pictures more efficiently if they had had digital cameras. But perhaps their paintings (and their artistic innovations) were all the greater for being done without modern technology. See my post on Art and the Enterprise.


Saturday, July 09, 2005

Purpose-Agnostic

Sean McGrath (Propylon) thinks purpose-agnosticism is one of the really useful bits of SOA. He refers to a posting he made to the Yahoo SOA group in June 2003, where he wrote:
The real trick with EAI I think, is to get purpose-agnostic data representations of business level concepts like person, invoice, bill of lading etc., flowing around processing nodes.

The purpose-agnostic bit is the most important bit. OO is predicated on a crystal ball - developers will have sufficient perfect foresight to expose everything you might need via an API.

History has shown that none of us have such crystal balls.

Now if I just send you the data I hold, and you just send me the data you hold - using as thin an API as we can muster - we don't have to doubleguess each other.
Propylon has been implementing this idea in some technologies for the Irish Government.
However, Sean believes that purpose agnosticism can only be pushed so far. He cites the service example “test for eligibility for free telephone allowance”, which is evidently purpose-specific. Thus there are some areas where very prescriptive messages (which Sean calls "perlocutionary") are more appropriate.

Let's push this example back a little. Why does B need to know if a particular customer is entitled to a free telephone allowance? Only because it alters the way B acts in relation to this customer. We should be able to derive what B needs to know from (a model of) the required variety of B's behaviour. Let's suppose B delivers a telephone to the customer and also produces an invoice. And let's suppose the free telephone allowance affects the invoice but not the delivery. Then we can decompose B into B1 and B2, where only B2 needs to know about the free telephone allowance, allowing us to increase the decoupling between B1 and A. Furthermore, the invoice production may draw upon a range of possible allowances and surcharges, of which the free telephone allowance is just one instance.
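The decomposition above can be sketched in a few lines of code. This is a hypothetical illustration (the names CustomerRecord, deliver_telephone and produce_invoice are mine, not Sean's): A sends B a purpose-agnostic customer record, and only the invoicing part (B2) ever inspects the allowance, so the delivery part (B1) stays decoupled from it.

```python
from dataclasses import dataclass

# Hypothetical purpose-agnostic message flowing from A to B:
# it just carries the data A holds, with no purpose-specific API.
@dataclass
class CustomerRecord:
    customer_id: str
    address: str
    allowances: tuple  # e.g. ("free_telephone",)

def deliver_telephone(record: CustomerRecord) -> str:
    """B1: delivery needs only the address; it is allowance-agnostic."""
    return f"Deliver handset to {record.address}"

def produce_invoice(record: CustomerRecord, base_price: float = 40.0) -> float:
    """B2: invoicing is the only part that inspects allowances."""
    if "free_telephone" in record.allowances:
        return 0.0
    return base_price
```

Adding a new kind of allowance or surcharge now touches only B2; B1 and the A-to-B message format are unaffected.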

On a cursory view, it appears that the technologies Sean has been building for the Irish Government support a programme of deconfliction - using some aspects of SOA (or whatever you want to call it) to separate sociotechnical systems into loosely coupled subsystems. If this is done right, it should deliver both IT benefits (productivity, reuse) and business benefits. This is all good stuff.


Joined-Up Services - Trust and Semantics

But the deconfliction agenda leads to a new set of questions about joined-up services - what emerges when the services interact, sometimes in complex ways? How do you impose constraints on the possible interactions? For example, there may be system-level requirements relating to privacy and trust.

One aspect of trust is that you only give data to people who aren't going to abuse it. Look at the current fuss in the US about data leakage and identity theft involving such agencies as ChoicePoint. The problem with Helland's notion of trust is that it deals with people who come into your systems to steal data, but doesn't deal with people who steal data elsewhere. So you have to start thinking about protecting data (e.g. by encryption) rather than protecting systems. (This is an aspect of what the Jericho Forum calls deperimeterization.) Even without traditional system integration, an SOA solution such as PSB will presumably still need complex trust relationships (to determine who gets the key to which data), but this won't look like the Helland picture.

A further difficulty comes with the semantics of the interactions. If we take away the assumption that everyone in the network uses the same reference model (universal ontology), then we have to allow for translation/conversion between local reference models. As Sean points out elsewhere, this is far from a trivial matter.


Technology Adoption

My expectation is that the deployment of technologies such as the Public Service Broker will experience organizational resistance to the extent that certain kinds of problem emerge out of the business requirements. In my role as a technology change consultant, I am particularly concerned with the question of technology adoption of SOA and related technologies, and how this is aligned with the business strategy of the target organization. I invite readers of this blog to tell me (in confidence) about the adoption of SOA in their organizations, or their customers' organizations, and the specific barriers to adoption it has encountered.


See also

Collaboration and Context (January 2006)
Context and Purpose (February 2006)
Context and Presence (Category)

Monday, April 25, 2005

Goal-Oriented Requirements Engineering and its relevance to SOA

The i* approach

Last week I attended a seminar organized by the BCS RESG. Modelling your System Goals: The i* approach. The main speaker was Eric Yu (University of Toronto), the original developer of i*. The remaining speakers reported their experiences using and extending i* in a range of projects.

The i* approach is a methodology for goal-oriented requirements engineering (GORE). Professor Yu characterized the distinct nature of Goal-Oriented RE as follows.

Conventional RE: describes how things work (AS-IS) and how they should work (TO-BE); models of behaviour.
Goal-Oriented RE: describes why things work (or should work) that way; models of intention and rationale.

i* is particularly focused on so-called "early stage" requirements engineering - before you know what "the system" is going to be. It involves two key models: strategic dependency models and strategic rationale models.

The strategic dependency model identifies goal dependencies between actors. For example, a CarOwner actor depends on "CarBeRepaired", and this can be fulfilled by the BodyShop actor. Put crudely, a goal dependency consists of a pair: {"I want", "I can"}.

i* distinguishes between two different kinds of goal dependencies: hard goals and soft goals. Although this distinction was explained in terms of precision, this seems unsatisfactory to me, since surely precision is itself a matter of degree and timing and negotiation. However, the soft goals that were used as examples seemed to me to possess some other interesting features, which may be relevant to making this distinction clearer. Firstly, the soft goals were generally not amenable to precise specification by the consumer of the goal. Secondly, there is a sense that what counts as satisfaction of the goal is context-dependent and shifts over time. Thirdly, the perceived outcome would generally affect the relationship of trust between the consumer and the provider, and thus the future behaviour of the consumer towards the provider.

This led me to an alternative way of formulating the distinction - according to who owns the goal. In some cases, the consumer determines and decomposes the problem, and then delegates the solution. I think this produces a hard goal that is (or should be) relatively easy to specify. In other cases, the consumer delegates the formulation of the problem as well as the provision of the solution. I think this would produce what i* calls a soft goal.

So for a soft goal such as FairSettlement, what counts as fairness is delegated either to the service provider, or to a trusted third party such as a regulator. (In some cases, notions of fairness are implicitly delegated to virtual third parties, such as TheCommunity or PublicOpinion, but these are notoriously unstable and unreliable mechanisms.) The significance of this distinction for goal modelling is that identifying something as a soft goal introduces an additional set of concerns for the requirements analyst.
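The ownership-based distinction can be encoded directly. The sketch below is my own illustration, not official i* semantics: a strategic dependency is the pair {"I want", "I can"}, and whether the goal is hard or soft is derived from who owns the goal's formulation.

```python
from dataclasses import dataclass

@dataclass
class GoalDependency:
    goal: str
    depender: str       # the actor who wants the goal ("I want")
    dependee: str       # the actor who can fulfil it ("I can")
    formulated_by: str  # the actor who owns the problem formulation

    @property
    def is_hard(self) -> bool:
        # Hard goal: the consumer formulates the problem and
        # delegates only the solution.
        return self.formulated_by == self.depender

# The examples from the text: CarBeRepaired is formulated by the
# consumer (hard); FairSettlement is delegated to a regulator (soft).
car_repair = GoalDependency("CarBeRepaired", "CarOwner", "BodyShop", "CarOwner")
settlement = GoalDependency("FairSettlement", "Claimant", "Insurer", "Regulator")
```

Flagging a dependency as soft would then alert the requirements analyst to the extra concerns noted above: shifting satisfaction criteria and effects on trust.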

The strategic rationale model identifies causal connections between goals and subgoals for a given actor. Note that the behaviour of an actor may depend on perceived causal connections and associations, rather than actual ones. So if we are going to model rationale properly, we need to include some representation of the actor's beliefs. However, last week's i* presentation did not introduce any notation for belief.

SOA interpretation


Much of the material presented was pre-SOA in outlook, and there were some limiting assumptions about the nature of the systems being built. However, I was looking to push i* beyond these assumptions. What I wanted to explore was the possibility of using i* to design services and service-oriented solutions.

There appears to be a very straightforward mapping from goal dependencies on the strategic dependency model to business services. And these requirements are supported by an understanding of the rationale of each actor. The reason for this is that we want to design business services in terms of added value, and this seems to rely on our having some notion of value from the perspective of the service consumer. SOA is geared towards flexibility, and an appreciation of the possible rationale of each actor also helps build solutions that support a good range of different scenarios.

Another SOA-related motive for modelling dependency and rationale is to analyse compliance (including so-called non-functional requirements). For example, we may identify some system vulnerability, and then identify a reciprocal dependency that provides some enforcement mechanism. We can then look at the system dynamics within a collaboration, to determine the adequacy of this mechanism, as well as any possible side-effects.

In general terms, I am convinced that some modelling of intentions and outcomes is an important aspect of modelling for SOA. In the Boxer-Cohen Triple-Articulation approach to Asymmetric Design, this is known as the Deontic Articulation. i* is much simpler, and widely practised - but is it powerful enough?

Complexities, difficulties and future opportunities


Modelling intentions. There are difficulties involved in modelling the intentions of both organizations and machines, and these difficulties cause some people to say that it doesn't make sense to attribute intentions to either organizations or machines - only to people. I don't agree, for three reasons. The first is that many of these difficulties also apply when we are trying to understand the intentions of people, since people often have confused or concealed intentions. The second is that there are established techniques for tackling these difficulties, which help us to understand the intentions of all kinds of system, at different levels. The third is that we simply cannot make sense of business strategy and organization at a reasonable level of abstraction without somehow talking about organization goals.

Goal mismatch. In practice, there is unlikely to be a neat and exact match between "I want" and "I can". There may be both pragmatic mismatch (crudely: what the provider does is not exactly what the consumer wants) and semantic mismatch (crudely: what the consumer understands is not what the provider understands). This is a form of impedance or asymmetry.

For example, a car driver's real goal may be to have a reliable car with low total maintenance cost. But no service provider offers exactly this service. So the car driver has to compose something that approximates to his real goal, using a combination of car maintenance services and car warranty (insurance) services. (There may be significant commercial opportunities for a value-adding service or service platform in this kind of space. Hence the relevance of goal modelling to business strategy.)

Within i*, the strategic dependency assumes an exact match between the consumer's view of the goal and the provider's view of the goal. To the extent that there is some unfulfilled remnant, this is analysed within the strategic rationale of that actor. But it is not clear to me how this approach would support architectural reasoning about the unfulfilled remnants and the strategic opportunities for developing new added-value services.
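One way to make the unfulfilled remnant tangible is to decompose the consumer's real goal into subgoals and match them against provider offerings; whatever no provider covers is the remnant, and hence a candidate for a new value-adding service. A minimal sketch, with illustrative subgoal names of my own:

```python
# The car driver's real goal, decomposed into subgoals.
real_goal = {"routine_maintenance", "breakdown_repair", "cost_capping"}

# What the available providers actually offer (illustrative).
providers = {
    "garage": {"routine_maintenance", "breakdown_repair"},
    "warranty_insurer": {"breakdown_repair"},
}

# The unfulfilled remnant: subgoals no provider covers.
covered = set().union(*providers.values())
remnant = real_goal - covered

print(remnant)  # the gap where a value-adding service might sit
```

Here the remnant is the cost-capping subgoal: no single provider offers "reliable car with low total maintenance cost", which is exactly the kind of strategic opportunity the text says i*'s per-actor rationale analysis makes hard to see.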

Deconfliction. Given that a complex situation contains considerable goal conflict, we need a systematic way of resolving goal conflict. Deconfliction means organizing operations in a way that minimizes the potential risk of interference and internal conflict. Conversely, we also need systematic ways of recognizing and handling goal synergy.

Note that from a security perspective, goal synergy between (supposedly) independent actors may not be a good thing - especially where it indicates opportunities for collusion and fraud.
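A crude but systematic starting point for deconfliction is pairwise interference checking: treat each goal (or operation) as claiming a set of shared resources, and flag any pair whose claims overlap. The goal and resource names below are illustrative assumptions of mine; real deconfliction would also consider timing and doctrine.

```python
from itertools import combinations

# Each goal claims a set of shared resources (illustrative).
goals = {
    "air_strike": {"airspace_sector_3", "fuel"},
    "relief_drop": {"airspace_sector_3"},
    "naval_patrol": {"sea_lane_1"},
}

def conflicts(goal_resources: dict) -> list:
    """Return pairs of goals whose resource claims intersect."""
    return [
        (a, b)
        for a, b in combinations(sorted(goal_resources), 2)
        if goal_resources[a] & goal_resources[b]
    ]

print(conflicts(goals))  # [('air_strike', 'relief_drop')]
```

The same overlap test, read the other way round, also surfaces goal synergy - which, as noted above, may itself need scrutiny from a security perspective.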

Variation. A robust service economy typically needs to accommodate considerable variation in the goals and rationale of the actors. So instead of modelling a single standard strategic rationale, we need to understand the nature of the variation between different rationales, and the implications of this for producing robust and agile systems.

For example, a healthcare system makes some assumptions about the rationale of a patient. But a patient that happens to be a professional athlete (with particular concerns about stamina and performance) will have a very different strategic rationale in relation to healthcare, as compared to a patient that happens to be a healthcare professional (with above average exposure to infection and other risk).

Dynamic and recursive rationale modelling. The causal relationships between goals and subgoals are subject to dynamic effects. For example, many processes operate in terms of surrogate goals - outcomes that are valued not for their own sake but because they are correlated with some real goal. The surrogate goals are available for measurement before the real goals, and may therefore provide useful predictive metrics. (See discussion of Surrogate EndPoints in relation to SOA Pharma.) This means that cause-effect modelling may need to include system dynamics such as delay and oscillation.

There is also a problem that the rationale of one actor may depend on that actor's beliefs about the rationale of another actor. For example, Professor Yu presented an example from insurance, where reorganizing the strategic dependency model could align the insurance broker with the interests of the customer (instead of the broker being aligned with the interests of the insurance provider). In this situation, what the insurance customer demands from the insurance broker depends crucially on the customer's belief as to whose side the broker is on. But that in turn may depend on the insurance broker's beliefs about the customer's acting in "good faith", and about the future commercial tactics of the insurance company. So the whole thing becomes recursive, rather like the Knots identified by R.D. Laing.

Summary


I think the overall approach of i* is extremely interesting for service-oriented architecture, and is broadly consistent with the general approach to business modelling for SOA we have been developing.


A version of this post was published in the RESG Newsletter Requirements Quarterly (June 2005).

Friday, April 01, 2005

Deconfliction and Interoperability

Deconfliction

Deconfliction is an important type of decoupling. In October 2001, a Time Magazine cover story (Facing the Fury) used the term.

Bush's gambit — filling the skies with bullets and bread — is also a gamble, Pentagon officials concede. The humanitarian mission will to some degree complicate war planning. What the brass calls "deconfliction" — making sure warplanes and relief planes don't confuse one another — is now a major focus of Pentagon strategy. "Trying to fight and trying to feed at the same time is something new for us," says an Air Force general. "We're not sure precisely how it's going to work out."

The military take interference very seriously - it's a life and death issue. Deconfliction means organizing operations in a way that minimizes the potential risk of interference and internal conflict, so that separate units or activities can be operated independently and asynchronously.

But deconfliction is often a costly trade-off. Resources are duplicated, and potentially conflicting operations are deliberately inhibited.

As communications become more sophisticated and reliable, it becomes possible to reintroduce some degree of synchronization, to allow units and activities to be orchestrated in more powerful ways. This is the motivation for network-centric warfare, which brings increased power to the edge.

Although the word isn't often used in commercial and administrative organizations, a similar form of deconfliction can be inferred from the way hierarchical organizations are managed, and in the traditional accounting structure of budgets and cost centres. This is known to be inflexible and inefficient. Whenever we hear the terms "joined-up management" or "joined-up government", this is a clue that excessive deconfliction has occurred.

Interoperability

Deconfliction leads us towards a negative notion of pseudo-interoperability: X and Y are pseudo-interoperable if they can operate side-by-side without mutual interference.

But there is also a positive notion of real interoperability: X and Y are interoperable if there is some active coordination between them. This forces us to go beyond deconfliction, back towards synchronization.

General Schoomaker: "We've gone from deconfliction of joint capability to interoperability to actually interdependence where we've become more dependent upon each other's capabilities to give us what we need." (CSA Interview, Oct 2004).

Philip Boxer writes: "The traditional way of managing interoperability is through establishing forms of vertical transparency consistent with the way in which the constituent activities have been deconflicted. The new forms of edge role require new forms of horizontal transparency that are consistent with the horizontal forms of linkage needed across enterprise silos to support them. Horizontal transparency enables different forms of accountability to be used that take power to the edge, but which in turn require asymmetric forms of governance." (Double Challenge, March 2006)

Relevance to Service-Oriented Architecture (SOA)

It is sometimes supposed that the SOA agenda is all about decoupling. Requirements models are used to drive decomposition - the identification of services that will not interfere with each other. These services are then designed for maximum reuse, producing low-level economies of scale.

Clearly there are some systems that are excessively rigid, and will benefit from a bit of loosening up.

But this is only one side of the story. While some systems are excessively rigid, there are many others that are hopelessly fragmented. The full potential of SOA comes from decomposition and recomposition.



Friday, February 04, 2005

Interoperability by Design

In his latest Executive Email (Feb 3rd, 2005), Bill Gates discovers a new mantra: Interoperability by Design. Now where have I heard that phrase before?

Here is Admiral Edmund P. Giambastiani Jr, Commander of the US Joint Forces Command, talking to the US Congress in March 2003 (html, pdf). He talks a great deal about collaborative partnerships, coalition partners and "jointness". Here is the key paragraph:
Coherent jointness is the third characteristic of future joint operations, which facilitates coordinated, synergistic employment of the full range of joint capabilities to achieve the desired affects. The interoperability of joint and Service capabilities further enables, and amplifies this common joint ethos. To achieve this synergy of doctrinal, organizational, and human factors, future capabilities must be “born joint.” Interoperability by design in the first instance will permit true integration. It will solve, by moving beyond, the current challenge of de-conflicting service systems that do not talk to each other. Born joint capabilities will require a greater depth of understanding of joint capabilities, an agreed Joint Operating Concept and a shared joint warfighting culture. It enables the execution of seamlessly joint actions at levels appropriate to the mission.

According to Gates, interoperability comes from XML. (The official Microsoft line is that interoperability is all about different software products working together.)

According to Giambastiani, interoperability comes from jointness. (This is clearly system interoperability rather than just software interoperability.) I see this as establishing a significant mandate for collaborative composition.