Showing posts with label coupling.

Saturday, September 04, 2021

Metadata as a Process

Data sharing and collaboration between different specialist areas require agreement and transparency about the structure and meaning of the data. This is one of the functions of metadata.

I've been reading a paper (by Professor Paul Edwards and others) about the challenges this poses in interdisciplinary scientific research. They identify four characteristic features of scientific metadata, noting that these features can be found within a single specialist discipline as well as cross-discipline.

  • Fragmented - many people contributing, no overall control
  • Divergent - multiple conflicting versions (often in Excel spreadsheets)
  • Iterative - rarely right first time, with much effort needed to repair misunderstandings and mistakes
  • Localized - each participant primarily focused on their own requirements rather than the global picture

They make two important distinctions, which will be relevant to enterprise data management as well.

Firstly, between product and process. Instead of trying to create a static, definitive set of data definitions and properties that would completely eliminate the need for any human interaction between data creator and data consumer, assume that an ongoing channel of communication will be required to resolve emerging issues dynamically. (Some of the more advanced data management tools can support this.)

Secondly, between precision and lubrication. Tight coupling between two systems requires exact metadata, but interoperability might also be achievable with inexact metadata plus something else to reduce any friction. (Metadata as the new oil, perhaps?)

Finally, they observe that metadata typically falls into the category of almost standards.

Everyone agrees they are a good idea, most have some such standards, yet few deploy them completely or effectively.

Does that sound familiar? 



Jo Bates, The politics of data friction (Journal of Documentation, 2017)

Paul Edwards, A Vast Machine (MIT Press 2010). I haven't read this book yet, but I found a review by Danny Yee (2011)

Paul Edwards, Matthew Mayernik, Archer Batcheller, Geoffrey Bowker and Christine Borgman, Science Friction: Data, Metadata and Collaboration (Social Studies of Science 41/5, October 2011), pp. 667-690. 

Martin Thomas Horsch, Silvia Chiacchiera, Welchy Leite Cavalcanti and Björn Schembera, Research Data Infrastructures and Engineering Metadata. In Data Technology in Materials Modelling (Springer 2021) pp 13-30

Jillian Wallis, Data Producers Courting Data Reusers: Two Cases from Modeling Communities (International Journal of Digital Curation 9/1, 2014) pp 98-109

Friday, May 10, 2019

The Ethics of Interoperability

In many contexts (such as healthcare) interoperability is considered to be a Good Thing. Johns and Stead argue that "we have an ethical obligation to develop and implement plug-and-play clinical devices and information technology systems", while Olaronke and Olusola point out some of the ethical challenges produced by such interoperability, including "data privacy, confidentiality, control of access to patients’ information, the commercialization of de-identified patients’ information and ownership of patients’ information". All these authors agree that interoperability should be regarded as an ethical issue.

Citing interoperability as an example of a technical standard, Alan Winfield argues that "all standards can ... be thought of as implicit ethical standards". He also includes standards that promote "shared ways of doing things", "expressing the values of cooperation and harmonization".

But should cooperation and harmonization be local or global? American standards differ from European standards in so many ways - voltages and plugs, paper sizes, writing the date the wrong way. Is the local/global question also an ethical one?

One problem with interoperability is that it is often easy to find places where additional interoperability would deliver some benefits to some stakeholders. However, if we keep adding more interoperability, we may end up with a hyperconnected system of systems that is vulnerable to unpredictable global shocks, and where fundamental structural change becomes almost impossible. The global financial system may be a good example of this.

So the ethics of interoperability is linked with the ethics of other whole-system properties, including complexity and stability. See my posts on Efficiency and Robustness. Each of these whole-system properties may be the subject of an architectural principle. And, following Alan's argument, an (implicitly) ethical standard.

With interoperability, there are questions of degree as well as of kind. We often distinguish between tight coupling and loose coupling, and there are important whole-system properties that may depend on the degree of coupling. (This is a subject that I have covered extensively elsewhere.)

What if we apply the ethics of interoperability to complex systems of systems involving multiple robots? Clearly, coordination between robots might be necessary in some situations to avoid harm to humans. So the ethics of interoperability should include the potential communication or interference between heterogeneous robots, and this raises the topic of deconfliction - not just for airborne robots (drones) but for any autonomous vehicles. Clearly deconfliction is another (implicitly) ethical issue.



Note: for a more detailed account of the relationship between interoperability and deconfliction, see my paper with Philip Boxer on Taking Governance to the Edge (Microsoft Architecture Journal, August 2006). For deconfliction in relation to Self-Driving Cars, see my post Whom does the technology serve? (May 2019).



Mauricio Castillo-Effen and Nikita Visnevski, Analysis of autonomous deconfliction in Unmanned Aircraft Systems for Testing and Evaluation (IEEE Aerospace conference 2009)

Michael M. E. Johns and William Stead, Interoperability is an ethical issue (Becker's Hospital Review, 15 July 2015)

Milecia Matthews, Girish Chowdhary and Emily Kieson, Intent Communication between Autonomous Vehicles and Pedestrians (2017)

Iroju Olaronke and Olaleke Janet Olusola, Ethical Issues in Interoperability of Electronic Healthcare Systems (Communications on Applied Electronics, Vol 1 No 8, May 2015)

Richard Veryard, Component-Based Business (Springer 2001)

Richard Veryard, Business Adaptability and Adaptation in SOA (CBDI Journal, February 2004)

Alan Winfield, Ethical standards in robotics and AI (Nature Electronics Vol 2, February 2019) pp 46-48



Related posts: Deconfliction and Interoperability (April 2005), Loose Coupling (July 2005), Efficiency and Robustness (September 2005), Making the world more open and connected (March 2018), Whom does the technology serve? (May 2019)

Wednesday, November 18, 2015

Update on Deconfliction

The obscure word #deconfliction has started to appear in the news, referring to the coordination or lack of coordination between American and Russian operations in the Middle East, especially Syria.

The Christian Science Monitor suggests that the word "deconfliction" sounds too cooperative, and quotes the New York Times.

“Defense Secretary Ashton B. Carter sharply took issue with suggestions, particularly in the Arab world, that the United States was cooperating with Russia, and he insisted that the only exchanges that the Pentagon and the Russian military could have on Syria at the moment were technical talks on how to steer clear of each other in the skies above the country.”

But that's exactly what deconfliction is - "how to steer clear of each other" - especially in the absence of tight synchronization and strong coordination.

The Guardian quotes Gary Rawnsley, professor of public diplomacy at Aberystwyth University, who says such jargon is meaningless and is designed to confuse the public. But I think this is unfair. The word has been used within military and other technical circles for many decades, with a fairly precise technical meaning. Obviously there is always a problem (as well as a risk of misunderstanding) when technical jargon leaks into the public sphere, especially when used by such notorious obfuscators as Donald Rumsfeld.

In the current situation, the key point is that cooperation and collaboration require something more like a dimmer switch than a simple on-off switch. The Americans certainly don't want total cooperation with the Russians - either in reality or in public perception - but they don't want zero cooperation either. Meanwhile Robbin Laird of SLD reports that the French and the Russians have established "not only deconfliction but also coordinated targeting ... despite differences with regard to the future of Syria". In other words, Franco-Russian coordination going beyond mere deconfliction, but stopping short of full alignment.

Thus the word "deconfliction" actually captures the idea of minimum viable cooperation. And this isn't just a military concept. There are many business situations where minimum viable cooperation makes a lot more sense than total synchronization. We could always call it loose coupling.



Helene Cooper, A Semantic Downgrade for U.S.-Russian Talks About Operations in Syria (New York Times, 7 October 2015)

Jonathan Marcus, Deconflicting conflict: High-stakes gamble over Syria (BBC News, 6 October 2015)

Robbin Laird, The RAF Unleashed: The UK and the Coalition Step up the Fight Against ISIS (SLD, 6 December 2015)


Ruth Walker, Feeling conflicted about deconfliction (Christian Science Monitor, 22 October 2015)

Matthew Weaver, 'Deconflict': buzzword to prevent risk of a US-Russian clash over Syria (Guardian 1 October 2015)

Ben Zimmer, In Conflict Over Russian Role in Syria, ‘Deconfliction’ Draws Critics (Wall Street Journal, 9 October 2015)

More posts on Deconfliction

Updated 7 December 2015 

Friday, August 27, 2010

Organizational Intelligence - Does Size Matter?

On a LinkedIn group discussion (CBDI Forum), Richard Gilyead asked some excellent questions about organizational intelligence, and I'm going to post an edited and expanded version of the discussion here.


Richard started by asking

Does size matter? Do organisations with "thin skin" have a better ability to sense and respond? Is it inevitable that organisations will develop "thicker skins" as they grow?


Of course it has long been a popular idea that large organizations should behave like small organizations, while small organizations have problems of their own. I believe we can use organizational intelligence as a "lens" to attain a more precise understanding of some of the things that get lost as an organization gets larger, and use this lens to try and preserve and restore these elements. Meanwhile, a small organization is more dependent than a larger one on the knowledge and intelligence of its business partners and other members of its ecosystem, and it is useful to pay explicit attention to these aspects of its external relationships.

Richard then asked a follow-up question

What aspects of cohesion should be emphasised to preserve small organisation benefits as they grow? 

I think this should be a vital question for enterprise architecture (even though enterprise architects mostly work in large organizations, it's surely never too late to ask it), and I don't think there is a one-size-fits-all answer.

An organization makes a strategic choice (whether consciously or unconsciously) about the kind of "structural coupling" that matters. For example, a high-tech start-up company may be physically located near a university campus, and the founders have good personal relationships with researchers at the university. This will help the organization remain close to technological trends, but possibly at the expense of developing a broader understanding of market opportunities. At some stage the venture capitalists may persuade them to move their head office closer to the customers and/or to bring in new executives whose personal network faces in a different direction. Meanwhile, a different company might be dominated by supply chain and distribution issues, and for that company it will be these relationships that are most strategically significant.

The point is that this "structural coupling" affects many aspects of organizational intelligence: the events and trends that the organization can respond to, knowledge flows, innovation and organizational change. Which brings us back to the question of stability - where are the fixed points around which organizational systems (including formal information systems and services) can be built, and where are the points where flexibility is required. This is not a new question, but the organizational intelligence "lens" provides a new way of thinking strategically about the implications here.

Saturday, November 28, 2009

Complexity and Power

@RSessions kindly gave me a quick overview of his SIP methodology, and how he calculates the complexity of a system of systems, based on the number of elements in each system and the number of connections between systems. The internal complexity of each system increases in a non-linear manner with the number of elements, and the external complexity increases with the number of connections between the systems, so the trick is to find a structure that optimizes the overall complexity.

Obviously we have to be clear as to what counts as an element (for example functions), and what counts as a connection. Using the SIP lens, it is possible to see how certain architectural styles (including those popular in the SOA world, such as hub-and-spoke or layered) only deliver simplicity (and the benefits of simplicity) if we can assume that only certain kinds of connection are significant. Roger's view is that this assumption is unwarranted and invalid.

In general, the so-called functional requirements are associated with the elements and the logical connections between them. In my view, architects also need to pay attention to the nature of the connections (coupling) because these will have important consequences on the structure and behaviour of the system as a whole. For example, synchronous versus asynchronous. At present, Roger's complexity calculations don't differentiate between different kinds of connection, so it would be interesting to investigate the costs and risks associated with different kinds of connections, to see how much difference it could make. 
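
To make the arithmetic concrete, here is a minimal sketch of this style of calculation. The 3.11 exponent (Robert Glass's rule of thumb for how complexity grows with functionality) and the idea of weighting connections by kind are my own illustrative assumptions, not Roger's published SIP formulas.

```python
# Illustrative only: compare the total complexity of two partitionings
# of the same 12 functions, using a Glass-style non-linearity.
# The 3.11 exponent and the per-connection weights are assumptions for
# illustration, not Roger Sessions' published SIP formulas.

GLASS_EXPONENT = 3.11

def internal_complexity(num_elements: int) -> float:
    """Complexity grows non-linearly with the elements inside one system."""
    return num_elements ** GLASS_EXPONENT

def external_complexity(num_connections: int, weight: float = 1.0) -> float:
    """Complexity also grows non-linearly with inter-system connections.
    A weight could distinguish kinds of coupling, e.g. sync vs async."""
    return (weight * num_connections) ** GLASS_EXPONENT

def total_complexity(systems, connections):
    return (sum(internal_complexity(n) for n in systems) +
            sum(external_complexity(c, w) for c, w in connections))

# One big system: 12 elements, no external connections.
print(total_complexity([12], []))                         # ~2270

# Three systems of 4 elements, two unweighted connections between them.
print(total_complexity([4, 4, 4], [(1, 1.0), (1, 1.0)]))  # ~226
```

Attaching different weights to different kinds of connection would be one way of testing how much difference the nature of the coupling makes to the overall result.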

Roger's primary interest is in IT systems, but the same principles would appear to apply to processes and organizations. If you are running a factory, you have an architectural choice about the connection between say the moulding shop and the paint shop. With an asynchronous flow you have two loosely coupled operations separated by a buffer of work-in-progress; with a synchronous flow you have two tightly-coupled operations connected on a just-in-time basis. The former is a lot easier to manage, but it has an overhead in terms of inventory cost, storage cost, increased elapsed time, slower response to changes in demand, and so on. The latter may be more efficient under certain conditions, but it can be more volatile and the impact is much greater when something goes wrong anywhere in the process.
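
Here is a toy sketch of the resilience half of that trade-off (the inventory cost side is not modelled, and all numbers are invented): the moulding shop stops for three ticks, and only the buffered line keeps the paint shop working.

```python
# A toy illustration, not a real factory model: the moulding shop stops
# for three ticks. With a WIP buffer the paint shop keeps working; with
# synchronous (just-in-time) coupling it stops immediately.

def run(buffer_start: int) -> int:
    """Return units painted over 10 ticks, given an initial WIP buffer."""
    wip = buffer_start
    painted = 0
    for tick in range(10):
        moulding_up = tick not in (3, 4, 5)   # outage at ticks 3-5
        if moulding_up:
            wip += 1                          # one moulded unit per tick
        if wip > 0:
            wip -= 1                          # paint shop consumes one unit
            painted += 1
    return painted

print("buffered (WIP=3):   ", run(3))  # outage absorbed: 10 units painted
print("synchronous (WIP=0):", run(0))  # paint shop idles: only 7 units
```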


Intuitively, there seems to be a difference in complexity between these two solutions. The first is simpler, because the connection between the two systems is weaker; the second is more complex. With greater complexity comes greater power but also greater risk. Surely this is exactly the kind of architectural trade-off that enterprise architects should be qualified to consider. Roger's SIP methodology does give the architect a very simple lens to try and understand system-of-system complexity. Not everyone agrees with Roger's definition of complexity, and we can find some radically different notions of complexity for example in the Cynefin world, but at least Roger is raising some important issues. The EA world certainly needs to pay a lot more attention to questions like these.

Thursday, August 06, 2009

Loose Coupling: Innovation and the Web

"Cosy social networks are stifling innovation", according to an article in the New Scientist (5 August 2009).

This is a familiar argument in a new context. It has often been argued that corporate culture can inhibit innovation, and that groups responsible for experimental products and processes tend to perform better if they are put into a separate organization unit at some distance from the main offices. In recent years, many companies have set up R and D units in special science parks, co-located with similar units from other companies, usually with links to a nearby university. This is essentially applying architectural thinking to the geographical location and distribution of certain classes of capability, and reflects a common belief in the importance of these factors.

But interaction and clustering is nowadays much less dependent on physical geography, and much more dependent on virtual online communities and networks. The New Scientist article quotes Viktor Mayer-Schönberger of the National University of Singapore, who argues that today's software developers work in social networks in which everyone is closely linked to everyone else. "The over-abundance of connections through which information travels reduces diversity and keeps radical ideas from taking hold."

What Mayer-Schönberger sees as an over-abundance of connections is actually another form of tight coupling. If we want to build the capability for radical innovation, we need to create a decoupled space to support a loosely coupled knowledge cycle. Which means careful attention to the effects of social networking and organizational intelligence.

Wednesday, September 10, 2008

Event Processing Example - Shoplifting

Perhaps shoplifting (or rather the prevention of shoplifting) is a good application of complex event processing (CEP).


Shoplifting prevention has a level of complexity that is absent from something like air traffic control (the subject of Tim's next post). The behaviour of aircraft is subject to the laws of physics, which (at least for the purposes of air traffic control) are pretty well understood and unlikely to change. Whereas the behaviour of shoplifters is (i) subject to the observation and confirmation of shoplifting patterns and (ii) subject to learning and innovation by the shoplifters themselves. Therefore the two cases aren't the same: there is a form of flexibility I'd want to build into a shoplifting-detection system that I wouldn't need in the air traffic control system.

I think we need to distinguish between modelling shoplifting (constructing a theory about the observable patterns associated with shoplifting) and modelling shoplift-detection (designing a system that will observe or infer these patterns). In the comments to Tim's blog, there is a discussion on the relative merits and technical feasibility of two mechanisms: image processing and RFID-based merchandise-object tracking. I think these two mechanisms belong to a model of shoplift-detection rather than a model of shoplifting; in discussing them we are not saying anything about the phenomenon of shoplifting itself.

Perhaps the reason I think this distinction might be important is that I’m coming at this from a service-oriented perspective, and so I’m always looking for ways to build solutions in a loosely coupled way, therefore decoupling the models. I accept that other people may not see this as (i) relevant or (ii) technically feasible at the current state-of-the-art.

There is of course an arms race between shoplifters and shoplifting detection systems. Shops fit cameras and RFID tags, shoplifters adopt new evasion tactics and carry cheap devices to reprogram the RFID tags. So both the shoplifting model and the shoplifting-detection models evolve over time.

The evolution of the shoplifting model sometimes affects the evolution of the shoplifting-detection model. Suppose we discover that professional shoplifters work in teams - one distracts the shop assistant, a second one puts the item into a bag, then switches bags with a third team member before leaving the store. So even if the theft was spotted on the cameras, the thief no longer has the goods, goes protesting to the manager's office to be searched, and then leaves the store with an apology. How are you going to build a system to detect that? It's no good just tracking individual shoppers and trying to determine which of them might be shoplifters (which is the form of simplistic event-response I criticized in my earlier posts). The granularity of the event-tracking system has to change in response to a new theory about the behaviour of shoplifters.
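
Here is a minimal sketch of what I mean by decoupling the two models, with all event types, pattern names and thresholds invented for illustration: the detection engine stays generic, while the shoplifting model - including a multi-actor pattern like the one above - is plugged in as data.

```python
# A sketch of the decoupling argued for above: the detection engine is
# generic, and the "shoplifting model" is supplied as pluggable patterns.
# Event fields, pattern names and the 60-second window are all invented.

from dataclasses import dataclass

@dataclass
class Event:
    time: float        # seconds
    kind: str          # e.g. "item_bagged", "bag_switch", "exit"
    actors: frozenset  # who was involved

def team_switch_pattern(events, window=60.0):
    """One theory of shoplifting: an item is bagged, then bags are
    switched between team members shortly afterwards."""
    bagged = [e for e in events if e.kind == "item_bagged"]
    switches = [e for e in events if e.kind == "bag_switch"]
    return any(0 < s.time - b.time <= window and (b.actors & s.actors)
               for b in bagged for s in switches)

def detect(events, patterns):
    """The detection model: runs whatever patterns the current
    shoplifting model supplies, without knowing what they mean."""
    return [name for name, p in patterns.items() if p(events)]

events = [
    Event(10.0, "item_bagged", frozenset({"B"})),
    Event(35.0, "bag_switch", frozenset({"B", "C"})),
    Event(50.0, "exit", frozenset({"C"})),
]
print(detect(events, {"team bag-switch": team_switch_pattern}))
```

When the shoplifting model changes granularity - from individuals to teams - only the pattern data changes; the detection machinery does not.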

However, the shoplifting-detection model may evolve as a consequence of technological change - improved image processing, more sophisticated or widespread RFID. This change doesn't immediately affect the behaviour of shoplifters, although shoplifters may change their behaviour over time to respond to these advances in technology. Loosely coupled, therefore.

Wednesday, March 19, 2008

Lost Coupling

Sometimes loose coupling doesn’t exactly give you what you want. At Gate 4 at San Diego airport today, I was sitting opposite a young man who evidently worked for one of the airlines. His girlfriend was stranded at some other airport, and he was trying to get her onto a plane that was just about to leave, talking alternately to her and to various people in his own office. He was telling her to stay at the gate (she didn’t); the man at the gate was telling her to go back through security to revalidate her ticket (she didn’t have time); the boyfriend was telling his pal to revalidate the ticket remotely (he did); he was telling the girlfriend what to tell the man at the gate (apparently the man at the gate could have been a lot more helpful, just get his name and number); he was checking the empty seats on his laptop (there were five); he told her to get back to the gate (why had she not stayed?); but the plane had left without her (he not happy, her feelings not recorded, probably off the scale); and he was left trying to find an alternative route home for her. I was reminded of one of those toys that complex systems researchers use to demonstrate unpredictable non-linear behaviour: a jointed pendulum that swings chaotically in all directions.

Why do so many companies do this? Even for the girlfriend of an employee, even with the boyfriend pulling all the strings he could reach by cellphone, though the plane had empty seats, they couldn’t manage to get her home. Presumably any other customer in the same situation would have had even less chance.

This mess results from the worst kind of loose coupling – but it is a kind that is common to many organizations and systems. Simple-minded SOA people who chant “loose coupling” as a mantra must remember that sometimes tight coupling makes more sense. But then, on the other hand, if you have spent a lot of time dealing with the worst kind of tight coupling, loose coupling certainly has its attractions too.

The answer isn't to oscillate erratically between loose and tight coupling, but to get the coupling right. This is an architectural challenge that really isn't well served by the kinds of modelling tools that SOA architects use today.

Afterword

I typed this on the plane from San Diego to LA. When I reached LA for my connecting flight back to London, it was showing a delay of 4 hours. I made my way to the gate and managed to get myself onto the earlier flight. Many thanks to the British Airways staff at LA who made this possible - I think that deserves a plug, don't you?

Wednesday, March 05, 2008

Agility and Variation 2

In a post called Kitchen Sink Variability, Harry "Devhawk" Pierson (Microsoft) lays into a research note by Ronald Schmelzer (Zapthink): Why Service Consumers and Service Providers Should Never Directly Communicate.

As Harry explains, there is a contradiction between Variability and the principles of Agile Development. (I discussed some aspects of this contradiction in my post on Agility and Variation.) Harry thinks Zapthink is advocating what he calls Kitchen Sink Variability - let everything vary except the kitchen sink.

Without architecture, this would indeed be a crazy idea.

Harry appeals to the ExtremeProgramming principle of YouArentGonnaNeedIt, which says "don't build in extra functionality". But does this principle apply to flexibility - does flexibility count as a kind of functionality? Should we really avoid building in flexibility?

If building in flexibility means constructing specific additional stuff (mechanisms, abstractions) to anticipate non-specific future variation, then this would seem to entail additional cost (development overhead, maintenance overhead, runtime overhead) with no clear benefits. We might reject this kind of thing as "over-engineering".

But if building in flexibility means using appropriate architectural patterns and existing technologies that allow you to ignore future variation, that seems to be a different thing altogether. Zapthink is talking about a specific pattern - decoupling the service consumer from the service provider - that is supported by current SOA platforms. That seems to make a lot of sense to me. But Zapthink justifies this pattern by appealing to a general principle of variability, and this is what has aroused Harry's scorn.
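
Here is a minimal sketch of that decoupling pattern as I read it, with all names invented: the consumer binds to an abstract contract and a resolver, never to a concrete provider.

```python
# A sketch of decoupling service consumer from service provider via an
# intermediary. QuoteService, publish and resolve are invented names,
# not any vendor's API.

from typing import Protocol

class QuoteService(Protocol):
    def quote(self, sku: str) -> float: ...

_registry: dict[str, QuoteService] = {}

def publish(name: str, service: QuoteService) -> None:
    _registry[name] = service

def resolve(name: str) -> QuoteService:
    return _registry[name]   # routing, versioning, mediation could live here

class LegacyPricing:
    def quote(self, sku: str) -> float:
        return 100.0

class NewPricing:
    def quote(self, sku: str) -> float:
        return 95.0

publish("pricing", LegacyPricing())
print(resolve("pricing").quote("ABC123"))  # 100.0: consumer sees only the contract

publish("pricing", NewPricing())           # provider swapped
print(resolve("pricing").quote("ABC123"))  # 95.0: consumer code unchanged
```

The point of the pattern is simply that the consumer's code does not change when the provider does; the variability is absorbed by the intermediary, not scattered through the consumers.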

For my part, I don't think SOA means uncritically embracing all aspects of variation and variability. I have always said Loose Coupling is a choice, not a mandate. (See my earlier post Loose Coupling 2). But let's understand where the variation comes from. It is not because enterprise architects are going around promoting supply-side variation. More likely because many enterprise architects are struggling with increasing levels of demand-side variation. The traditional choice has always been to suppress as much variation as possible, but for many organizations (and their customers) this choice is no longer acceptable.

The relationship between variation and complexity is not linear. A truly agile architecture should be able to accommodate some increase in variation without increasing complexity by the same amount. But of course there is still some cost. SOA improves the trade-off, but there is still a trade-off.

The bottom line is that architects need to be able to reason intelligently about variation and variability. Not abstraction for the sake of abstraction, not decoupling for the sake of decoupling, but abstraction and decoupling driven by the needs of the enterprise. Loose Coupling is not a mandate but a choice.

Thursday, November 02, 2006

Semantic Coupling

In a recent post, Semantic Coupling, the Elephant in the SOA Room, Rocky Lhotka identified semantic coupling as one of the challenges of SOA. Udi Dahan agrees that semantic coupling is harder, but adds that in his view SOA is all about addressing this issue. Meanwhile Fergal Somers, chief architect at Cape Clear, doesn't think it is so hard in practice, although he acknowledges that the relevant standards are not yet mature.
Any systems that are linked together as part of a broader workflow involves semantic-coupling as defined above, but so what? We have been building these systems for some time.

Although I wouldn't go as far as saying SOA is all about any one thing in particular (see my earlier post on Ambiguity), I also agree that semantic coupling (and semantic interoperability) are important.

Rocky's argument is based on a manufacturing analogy.
  • In simple manufacturing, the economics of scale involves long production runs, so that you can spread the setup costs across a large volume.
  • In agile manufacturing, the economics of scope involves minimizing the setup costs, so that you can have shorter production runs without affecting the economics of scale.
  • I interpret Rocky's argument as saying that a major element of the setup costs for services involves matching the semantics.
Part of the economic argument for SOA is that it can deliver economics of scope (adaptability, repurposing) as well as economics of scale (productivity).

But there's more. If we combine SOA with some other management innovations, we may also be able to improve the economics of alignment. I don't think this is illustrated by Rocky's manufacturing analogy.

However, Kenneth LeFebvre reads more into Rocky's post than I did.
There is meaning to the interaction between a consumer and a service. What does this mean? SOA is all about making the connections between applications using “services” but it does not bridge the gap between the real world of business and the “virtual” world that runs within our software. This is precisely the problem object-oriented design was intended to solve, and was just beginning to do so, until too much of the development population abandoned it in search of the next holy grail: SOA.

At my request, Kenneth has elaborated on this statement in a subsequent post SOA OOA and Bridging the Gap. I agree with him that the rhetoric of OO was as he describes. But I still don't see much evidence that "it was just beginning to do so", and I remain unconvinced by his argument that some things are better represented by objects than by services. (More concrete examples please Kenneth.)

For a definition of the economics of scale, scope and alignment, see Philip Boxer's post Creating Economies of Alignment (October 2006).


Note: earlier material used the term Economics of Governance. For various reasons, we now prefer the term Economics of Alignment.

Updated 25 October 2013

Tuesday, April 11, 2006

Loose Coupling 2

Is loose coupling a defining characteristic of the service-based business and/or a core principle of SOA? ZDnet's analysts have been producing a set of IT commandments (13 at the last count), and Joe McKendrick's latest contribution is Thou Shalt Loosely Couple.

Joe quotes John Hagel's definition of loose coupling, which refers to reduced interdependencies between modules or components, and consequently reduced interoperability risk. Hagel clearly intends this definition to apply to dependencies between business units, not just technical artefacts. I think this is fine as far as it goes, but it is not precise enough for my taste.

In his post The Developer's View of SOA: Just adding complexity?, Ron Ten-Hove (Sun Microsystems) defines loose coupling in terms of knowledge - "the minimization of the "knowledge" a service consumer has of a service provider it is using, and vice versa".

(It is surely possible to link these defining notions - interoperability and interdependency, risk and knowledge - at a deep level, but I'm not going to attempt it right now.)

I want to have a notion of loose coupling that applies to sociotechnical systems of systems - and therefore needs to cover organizational interdependencies and interoperability as well as technical. I have previously proposed a definition of Loose Coupling based on Karl Weick's classic paper on loosely coupled organizations.

The trouble with proclaiming the wonders of loose coupling is that it sounds as if tight coupling was just a consequence of stupid design and/or stupid technology. It fails to acknowledge that there are sometimes legitimate reasons for tight coupling.

Ron Ten-Hove puts forward a more sophisticated argument for loose coupling. He acknowledges the advantages of what he calls a mixed service model, namely that "it allows for creation of a component model that combines close-coupling and loose-coupling in a uniform fashion". But he also talks about the disadvantages of this model, in terms of reduced SOA benefits and increased developer complexity, at least with the current technology.

Loose coupling is great, but it is not a free lunch. It is not simply a bottom-up consequence of the right design on the right platform. Sometimes loose coupling requires a top-down forcing-apart. I think the correct word for this top-down forcing-apart is deconfliction, although when I use this word it causes some of my colleagues to shudder in mock horror.

Deconfliction is a word used in military circles, to refer to the active principle of making one unit independent of another, and this will often include the provision of redundant supplies and resources, or a tolerance of reduced utilization of some central resources. Deconfliction is a top-down design choice.

Deconfliction is an explicit acceptance of the costs of loose coupling, as well as the benefits. Sometimes the deconflicted solution is not the most efficient in terms of the economics of scale, but it is the most effective in terms of flexibility and interoperability. This is the kind of trade-off that military planners are constantly addressing.


Sometimes coupling is itself a consequence of scale. At low volumes, a system may be able to operate effectively in asynchronous mode. At high volumes, the same system may have to switch to a more synchronous mode. If an airport gets two incoming flights per hour, then the utilization of the runway is extremely low and planes hardly ever need to wait. But if the airport gets two incoming flights per minute, then the runway becomes a scarce resource demanding tight scheduling, and planes are regularly forced to wait for a take-off or landing slot. Systems can become more complex simply as a consequence of a change in scale.
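
A back-of-envelope queueing calculation makes the point. Assuming (purely for illustration) that each landing occupies the runway for 20 seconds and that arrivals are random, the standard M/M/1 formula gives the average wait:

```python
# Back-of-envelope M/M/1 queue: average wait for the runway.
# Assumed service rate: one landing per 20 seconds (3 per minute).
# These numbers are illustrative, not real airport data.

def avg_wait_minutes(arrivals_per_min: float, service_per_min: float = 3.0):
    rho = arrivals_per_min / service_per_min   # utilization
    if rho >= 1:
        return float("inf")                    # demand exceeds capacity
    return rho / (service_per_min - arrivals_per_min)  # M/M/1 waiting time Wq

print(avg_wait_minutes(2 / 60))  # 2 flights/hour: ~0.004 min, negligible
print(avg_wait_minutes(2.0))     # 2 flights/minute: ~0.67 min average wait
print(avg_wait_minutes(2.9))     # near saturation: ~9.7 min and climbing fast
```

The waiting time grows explosively as utilization approaches 100%, which is why the same pair of services must be orchestrated quite differently at the two volumes.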

(See my earlier comments on the relationship between scale and enterprise architecture: Lightweight Enterprise.)

In technical systems, loose coupling carries an overhead - not just an operational overhead, but a design and governance overhead. Small-grained services may give you greater decoupling, but only if you have the management capability to coordinate them effectively. In sociotechnical systems, fragmentation may impair the effectiveness of the whole, unless there is appropriate collaboration.

In summary, I don't see loose coupling as a principle of SOA. I prefer to think of it as a design choice. I think it's great that SOA technology gives us better choices, but I want these choices to be taken intelligently rather than according to some fixed rules. SOA entails just-enough loose coupling with just-enough coordination. What is important is getting the balance right.

Thursday, March 09, 2006

Business Modelling for SOA

I have developed an abstract business metamodel, which underlies the CBDI's approach to business modelling for SOA.

I have described some of the pieces of this approach in my recent articles for the CBDI Journal. It is also covered more comprehensively in the Business Modelling for SOA workshop I'm teaching for CBDI in association with Ameritas and Everware.

The emphasis is on understanding (from a business perspective) the potential decoupling of WHAT from HOW, in order to generalize the WHAT (which may be implemented as shared services) and differentiate the HOW (which may be implemented as one or more separate architectural layers).

That probably sounds a bit theoretical. I plan to describe a few examples in this blog, to show how it works in practice.


Tuesday, September 27, 2005

Efficiency and Robustness - On Central Planning

The debate between Ian Welsh and Chandler Howell has continued since my previous post on Efficiency and Robustness, and the topic of Central Planning has appeared. In this post, I want to set aside the ideological aspects of central planning and discuss some of the practical aspects.

Central Planning - The Ideology

The question of central planning is widely seen as a political one, but it certainly isn't a simple matter of left and right. There are many right-wing libertarians who reject central planning as equivalent to Stalinism, the worst excesses of the Soviet experiment. (See this article Central Planning is Spontaneous Economic Order whose author complains that the Santa Fe Institute has betrayed the principles of complexity.) And there are left-wing libertarians who are puzzled or aghast at the centralizing tendencies of some of the right-wing authoritarians as well as corporate commercial interests.

Intelligent Design is of course a form of central planning - but executed by a perfect being rather than imperfect humans. (Boston Globe, via Snowdeal.) Robin Wilton asks whether it is possible for an atheist to believe in Intelligent Design. Well, it is certainly possible for atheists to believe in central planning. There are science fiction worlds in which central planning is carried out by a supreme computer, sometimes known as Multivac (or perhaps Spaghetti Monster). Science fiction writers often use technology as an oblique way of discussing serious philosophical and moral questions.

But if we put the politics and religion on one side, there is clearly a great deal of imperfect central planning that goes on within large organizations. We should be able to discuss this in pragmatic terms rather than ideological ones.

Central Planning in Practice

In a comment to Chandler Howell's previous post, Stu Berman advocates "horizontal integration rather than vertical" and rejects central planning. Chandler replies that, "Central Planning is the best analogy I have ever seen for how large corporations are run today. The only difference [with the Soviet Union] is that the corporations have a lot better computing power to manage their logistics." Chandler goes on to describe the fragility of systems (including large organizations) based on central planning, and suggests that H5N1 (aka Bird Flu) could have a more devastating effect than Hurricanes Katrina and Rita.

Chandler goes on to talk about coordination (what the military call C3I) in a crisis management context. "The disaster in Katrina may have started with the levees breaching, but lack of coordination at any level is what kept it going for days."

Discussion of central planning is relevant not just to the destruction of New Orleans, but to its subsequent reconstruction. The popular alternatives to central planning come under a range of labels including clusters (SwampFox), dynamic improvisation (Arnold Kling via OceanStateBlogger), organic growth (MCB) and private initiative (CapitalFreedom). These alternatives certainly don't all come from the same end of the political spectrum.

Advocacy for central planning also comes from various quarters, including Environmental Planning (WHY, WHAT, HOW and FOR WHOM).

In the real world, we are not likely to have either total central planning or total anarchy, but a complicated mixture of the two. To make a productive contribution to this debate, we have to be able to reason intelligently about outcomes (evidence-based policy) rather than ideological principles. (MCB also makes this point.) Furthermore, we have to connect outcomes with a specific context. (Not generic outcomes that would be exactly the same for New Delhi, New Orleans and New York.)

Interoperability

Central Planning is an attempt to produce all decisions from a single directing mind. To the extent that this attempt is successful, it puts the emphasis on endo-interoperability - coordination within the scope of a single plan. In contrast, exo-interoperability involves dynamic collaboration between autonomous agencies, whose plans may be formulated in entirely different terms.


Chandler Howell, Surge Protectors (21 September 2005), Surge Protectors Part 2 (25 September 2005)

Ian Welsh, The Economics of a Flu Pandemic II (24 August 2005)


Related posts on Efficiency and Robustness

Thursday, September 22, 2005

Efficiency and Robustness - On Tight Coupling

People (including Ian Welsh and Chandler Howell) have been worrying about the ability of a tightly coupled world to withstand shocks - including Hurricane Katrina and SARS. Here are some key quotes from their blogs.
  • Our society, as a whole, has no surge protection - no ability to take shocks. We have no excess beds, no excess equipment, no excess ability to produce vaccines or medicines, nothing. Everybody has worshipped at the altar of efficiency for so long that they don't understand that if you don't have extra capacity you have no ability to deal with unexpected events. (IW)
  • As we have now seen with Hurricane Katrina, even if the capacity were there, the United States’ ability to manage and allocate that capacity is essentially non-existent. (CH)
Welsh quotes some analysis from Sherry Cooper and Donald Coxe, comparing a possible SARS outbreak with the flu pandemic.
  • Because our society and economy is so much more integrated and so much more connected (for example the flu had to spread by ship back then), and so much more "just on time" that it isn't really a model you can use. We'll likely get hit harder, faster and because many locations have such limited inventories, relying on getting it as they need it, the supply disruptions are likely to be much worse.
Telecoms analyst Martin Geddes offers a potentially more upbeat perspective, at least in relation to Hurricane Katrina, asking us to distinguish between an emergency and a disaster.
  • In an emergency, a distributed piece of information calls for a central response. A disaster, the converse. Those best informed are in the field; those best equipped, in the field. The best disaster response system is the one in your hand when the disaster strikes.
But Geddes's optimism is tempered by his distrust of central committees, and his fear they will abuse their power.
  • But the changes needed to make things better are politically painful and resisted by incumbent powers. ... I suspect that central committees will determine we need more central response systems, and weaken the economy by taxing everyone hard to pay for it. The exact opposite of the medicine a “network edge” response would dictate.
There are some interesting (and sometimes shocking) details from the Hurricane Katrina experience, and several different levels of incapacity can be detected. From a systems-of-systems perspective, what I find particularly interesting are the interoperability failures.
  • Inability of FEMA to work with medical professionals unless they are part of the National Disaster Medical Team. Inability of FEMA to orchestrate external / autonomous agents. (Overlawyered Blog, via Ernie the Attorney)
  • Inability of FEMA to provide appropriate support for people with special needs. Inability of FEMA to collaborate with agencies with specialist knowledge and resources. (Conmergence Blog)
My colleagues and I are currently talking to some large organizations about some of the strategic aspects of interoperability, with particular reference to SOA, and I hope to be permitted to publish some of this material one day. In the meantime, I am keen to collect and analyse more public domain examples of interoperability failures. Please contact me, or link to this blog posting.

The FEMA response to Hurricane Katrina

Related blogposts:
Efficiency and Robustness - On Central Planning (September 2005)
Efficiency and Robustness - Processing New Orleans mail after Katrina (November 2005)
Technological Progress (October 2005)

Sunday, September 04, 2005

Inscription and Loose Coupling

While searching for something else, I happened upon a dissertation on Business Processes and IT in the Pharmaceutical Industry, by Kai A. Simon. The dissertation contains a lot of useful material, including a comparison between the change management practices of several large consulting houses. (PDF is available via the Waellisch website.)

But what I want to talk about here is a theoretical aspect of loose coupling. Simon derives a theory of loose coupling from Madeleine Akrich's concept of inscription.

The relation between global and local aspects of an infrastructure can be analyzed through the concept of inscription (Akrich 1992). ... Taking this point of departure, we can describe how inscription occurs at technology and organizational level and what impact it has on the relation between IT and organization.
  • Technology inscription can be defined as the rigidity of the technology in constraining the users in the way they are related to the technical object. In other words, it refers to the way technological systems can be used within or outside their design and which forms of work-arounds the system allows or prevents.
  • Organizational inscription, on the other hand, reflects the level of freedom or rigidity in organizational procedures or, in other words, the extent to which organizational agents are allowed to reshape the ways in which the technical object are used with respect to organizational rules.




To the extent that technology inscription and organizational inscription are orthogonal, we get a 2x2 matrix.

Here is Simon's account of the four quadrants, including some comments on the implications for BPR and process improvement.

Strict alignment

In this case, the design of organizational procedures leaves no room for local adaptation. At the same time, technology is rigid: there is no option for use outside the defined context. The infrastructure is typically characterized by standardization of technology and organizational procedures, and strict alignment between these elements.

In most process improvement initiatives, the aim is to develop and implement a strictly aligned organizational and technical infrastructure, following a pre-defined process design and using information systems that support this design efficiently.

Rigid technology

Organizational procedures are open for local adaptation, while technology does not permit changes in use. The infrastructure is characterized by tensions between global and local organizational procedures that aim at the same objectives but differ in the means of achieving them.

[Simon describes a change management project that fell into this category, despite the original intention to develop a strictly aligned infrastructure.] The reason can be found in the lack of control exercised over process compliance. It was assumed that all monitors would comply with the globally designed process, and senior management was not aware of the local adaptations that took place.

Loose coupling

Organizational procedures and technology use can be redefined and adapted locally. The infrastructure allows adaptation to internal and environmental dynamics, and is typical of knowledge-intensive organizations.

[Simon describes a change management project with this intention, which was discontinued after the organization underwent a merger.]

Rigid organization

In this context, organizational procedures are strictly defined at global level, while technology is open for modification. The infrastructure is characterized by tensions between different technologies adopted at local level, or local variations in technology use. This context is typical of a post-merger situation, where the merging firms aim to develop a common and standardized set of organizational procedures but maintain their individual technical infrastructures.

As described here, loose coupling gives you THREE different types of flexibility - technological flexibility, organizational flexibility AND flexibility in the relationship between technology and organization.


Reference: Madeleine Akrich, The De-scription of Technical Objects, in W. Bijker and J. Law (eds), Shaping Technology / Building Society: Studies in Sociotechnical Change (MIT Press, 1992) pp 205-224.


UPDATE: I have just received an email from Kai Simon, who writes:

The version of my dissertation you have found on the Waellisch site is not the final one, but a draft that was modified in several aspects before it was published. E.g., the final version does not contain a comparison of four consulting approaches, but only of those that were used in the case study company. 

However, the section you analyse has not been changed. You can find the final version in the publication section of my web site (http://www.informatik.gu.se/~kai).

The section on inscription and loose coupling is based on the work that I have conducted with Antonio Cordella, one of my colleagues from my time at the Viktoria Institute in Sweden (http://www.viktoria.se). So it was a joined idea and I would like to give Antonio the credits he deserves.

Our concept was the result of our research, but we also received valuable input from the late Claudio Ciborra. He was in the lead for the book "From Control to Drift - The Dynamics of Corporate Information Infrastructures" (Oxford University Press) that summarizes the results of multiple case studies on the manageability of IT infrastructures.
 

Thanks Kai

Tuesday, July 12, 2005

Loose Coupling

Karl Weick has been writing about loosely coupled organizations for around thirty years. The following definition comes from his paper Managing Change Among Loosely Coupled Elements, which is now reprinted in Making Sense of the Organization.


Loose coupling exists if A affects B in one or more of the following ways (Weick's example for each):

  1. suddenly (rather than continuously) - threshold function
  2. occasionally (rather than constantly) - partial reinforcement
  3. negligibly (rather than significantly) - damping of response (constant variable)
  4. indirectly (rather than directly) - a superintendent can affect a teacher only by first affecting a principal
  5. eventually (rather than immediately) - lag between a legislator's voting behaviour and the response by the electorate


These characteristics describe aspects of the causal relationship between A and B. Whereas Weick was thinking primarily about people and organizations, we can extend this to situations where A and B are sociotechnical entities of any kind.

Flexibility, Tolerance and Reuse. One way of thinking about loose/tight coupling is the extent to which A (or the behaviour of A) determines the behaviour of B. How much scope / leeway does B have in responding to A? Is B's response to A overdetermined (constrained) or underdetermined (flexible)?

Let's say I'm expecting a delivery from the online grocery store. If the delivery time is specified as 4pm, then the grocery store is constrained to deliver at this time. If the delivery time is specified as between 4pm and 5pm, then the grocery store has greater leeway in making the delivery: optimizing its own schedule while satisfying the defined service level. If I am not too bothered about the exact time of a given service, I can pass this flexibility on to the service provider. This is called tolerance. The delivery time is not so tightly coupled.

But by giving flexibility to the service provider, I am giving up some flexibility I don't expect to need myself. I may regret this later, if something else comes up unexpectedly and I need to leave home at 4:30. The situation is now reversed - I can't leave home until the delivery van has come - my movements are tightly coupled to the grocery store.

More generally, we can underdetermine a service by strengthening the preconditions (I shall be ready to receive the delivery any time between 4pm and 5pm) and weakening the postconditions (the delivery will be between 4pm and 5pm).

Conversely, a tolerant service provider can offer greater flexibility to the consumer by weakening the preconditions and strengthening the postconditions.

Industrial use of interchangeable components has always relied on some degree of engineering tolerance. In software and services, tolerance promotes interoperability and reuse. Accurate orchestration calls for careful matching between these tolerances, at design-time and/or at run-time.
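
In design-by-contract terms, the delivery example might be sketched like this (class and method names are mine):

```python
# A sketch of the delivery example in design-by-contract terms.
# The tolerant contract weakens what the provider promises (a window,
# not an instant) while the consumer guarantees readiness for the
# whole window. Names and numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class DeliveryContract:
    ready_from: float   # consumer precondition: ready from this hour...
    ready_until: float  # ...until this hour

    def check_delivery(self, delivered_at: float) -> bool:
        # Provider postcondition: delivery falls inside the agreed window.
        return self.ready_from <= delivered_at <= self.ready_until

tight = DeliveryContract(ready_from=16.0, ready_until=16.0)  # "at 4pm exactly"
loose = DeliveryContract(ready_from=16.0, ready_until=17.0)  # "between 4 and 5"

print(tight.check_delivery(16.5))  # False: no leeway for the provider
print(loose.check_delivery(16.5))  # True: the provider gained scheduling
                                   # leeway, the consumer gave up the
                                   # option of leaving home at 4:30
```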

Uptight. The opposite of tolerant is uptight. (This word was used by Bateson to describe systems that are operating close to the limit, and are therefore not able to flex any further. The word is also useful to describe people whose emotions are in this state.)

A contractual relationship can shift from tolerant (loosely coupled) to uptight (tightly coupled) as a result of other changes. As volumes and business pressures increase, resources that were once lightly used may become heavily used. This forces tighter synchronization of the use of these resources.

For example, in a railway system, it is possible to decouple the management of the trains from the management of the track only at relatively modest traffic volumes. As traffic volumes increase, this decoupling becomes increasingly dysfunctional. Similarly, there are huge differences between a major airport in which all the staff and facilities are constantly utilized (and where planes often need to circle in the air until there is space to land) and a minor airport where staff and facilities are unoccupied for large chunks of time. Although the business services may be the same in large and small airports alike, the degree of coupling between the services is not the same. (This will of course be reflected in the orchestration of these services.) And if a minor airport tries to turn itself into a major airport, it will soon discover that the IT architecture inhibits change rather than supporting it.

Paradigm Shift. When systems are decoupled into loosely coupled subsystems, the subsystems are now tightly coupled to the interface between them. Each subsystem can be freely changed, as long as the interface remains the same.

For example, it is common in manufacturing to define a Bill of Materials, which specifies the assembly of parts into products. This is created by a Design or Engineering function, and referenced by various functions including Logistics and Production. There may be a Publish/Subscribe relationship to release new versions of the Bill of Materials.
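
A minimal sketch of that arrangement, with all names invented: Design publishes Bill of Materials versions, and each subscribing function reacts in its own way.

```python
# A minimal publish/subscribe sketch of the Bill of Materials example.
# All names are invented for illustration.

class BomPublisher:
    def __init__(self):
        self._subscribers = []
        self.version = 0

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def release(self, bom: dict):
        """Design releases a new BoM version; subscribers are notified."""
        self.version += 1
        for notify in self._subscribers:
            notify(self.version, bom)

design = BomPublisher()
design.subscribe(lambda v, bom: print(f"Logistics: reorder parts for v{v}"))
design.subscribe(lambda v, bom: print(f"Production: retool line for v{v}"))
design.release({"widget": ["frame", "panel", "bolt x4"]})
```

The subscribers are decoupled from each other, but all of them remain coupled to the structure of the bill of materials itself.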

That's all very well, but it involves some structural coupling at the next level of abstraction. The bill of materials determines the manufacturing process which determines the bill of materials - we are locked into a particular mode of manufacturing. This is okay until there is a paradigm shift in production methods. But as soon as you want to do something radical (such as moving from a Ford-style production line to Volvo-style teamworking production), the architecture starts to inhibit change instead of supporting it.

Changing Requirements / Spiragile / Evolutionary Acquisition. The software architect can easily dismiss these examples as unfair. What can you expect if the business changes the requirements? (Casual shrug of shoulders, palm-upward gesture.)

But SOA sells itself on adaptability. This means that we can't just back away from these challenges ...




Related posts: Loose Coupling 2 (April 2006), Boundaryless Information Flow (June 2011)

See also: Richard Veryard, Business Adaptability and Adaptation in SOA (CBDI Journal, February 2004)

Friday, May 06, 2005

Buffering

In many activities, there is a highly variable relationship between the quantity of input and the quantity of output.

For example, Naba Barkakati discusses the nonlinearity of creative endeavours, and argues for buffering as a form of decoupling or desynchronization. Buffering is a way to bridge between an internal world with variable levels of production, and an external world with "linear expectations".

Naba's point doesn't just apply to creative endeavours. It also applies to any activities involving other human beings, such as sales and marketing. On a day-to-day basis, there is no linear relationship between the input (quantity of sales effort, time) and the output (quantity of sales). So sales people (and sales organizations) build in exactly the kind of buffers Naba is talking about, to smooth away the visible peaks and troughs. This may include booking sales in the following period.

Buffers are also used to provide some smoothing between a fixed set of spending budgets, and a variable (and unpredictable) set of spending requirements.
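
As a toy illustration (all numbers invented), here is a buffer turning a lumpy output stream into the steady stream the outside world expects:

```python
# Toy illustration: lumpy weekly output smoothed into a steady report.
# The writer produces 0-5 units a week; the buffer reports at most 2
# per week, the "linear" rate. All numbers are invented.

actual_output = [5, 0, 0, 4, 1, 3, 0, 2]   # what really happened
buffer = 0
reported = []
for produced in actual_output:
    buffer += produced
    shipped = min(2, buffer)   # smooth: report at most the linear rate
    buffer -= shipped
    reported.append(shipped)

print("actual:   ", actual_output)
print("reported: ", reported)   # [2, 2, 1, 2, 2, 2, 2, 2]
print("held back:", buffer)     # 0 left; the dip still leaked out in week 3
```

Note how the dip still leaks out in week 3; hiding it completely would require an ever-larger buffer, which is exactly the failure mode described below.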

But this mechanism has several negative effects.
  1. It makes it much harder to identify and execute system improvements. A writer may need more active support from an editor or publisher, but the buffers work as a defence mechanism, with the result that the writer doesn't get this support.
  2. The writer (the productive agent) carries more responsibility and takes a greater risk. This may increase the stress on the writer, who may be operating above his/her bearing limit.
  3. The attempted smoothing may be counter-productive, particularly if the buffers get larger and larger, resulting in a small number of major disappointments instead of more frequent minor disappointments.
In many contexts, buffering is regarded as borderline malfeasance. Senior management, investors and regulators can take a particularly dim view of buffering that misrepresents the true state of an operation.

So what's the answer? We may wish to implement various forms of loose coupling and asynchronicity, but this needs to be done within an appropriate governance framework, with shared attention to the economic and ethical behaviour of the whole system.


Monday, August 02, 2004

Tethering

Tethering is the creation of a dependence (tight coupling, binding) by a supplier, in a situation where a consumer may have a reasonable expectation of independence (loose coupling, zero coupling).

Example 1. RealNetworks has recently launched a service to download music onto Apple iPods. Apple regards this as an invasion of its tethering. The blogs I read are generally supportive of RealNetworks.

Example 2. It is alleged that some brands of printer automatically reduce print quality when they detect third-party ink cartridges.

Example 3. Off-shore software developers may put in unauthorized coupling between software modules/components, in order to increase subsequent maintenance revenues.

Tethering represents a clash between the supplier's view of the world and the consumers' view. (We call this Asymmetric Demand.) Sometimes it stems from a deliberate plan to exploit a commercial position, or from bloodymindedness. But sometimes the supplier feels that the position is morally defensible and in the best interests of consumers (if they but knew it). Surely iTunes provides all the flexibility and functionality anyone could want??

 

See also Defeating the Device Paradigm (October 2015), The New Economics of Manufacturing (November 2015)

Tuesday, July 06, 2004

Mobile Computing

One of the things that interests me about mobile computing is the architectural implications of decoupling. If speech is even a possibility for the medium term, then anything speakable needs to be put into a separate layer. (Does this mean that the remaining layers are unspeakable?) Just as, if there is a possibility that someone will be accessing your business process via a WAP phone, then you may need to reconsider precisely what aspects of the application should be isolated into the interface layer. (Thus it's not enough simply to label the layers of an architecture - it's necessary to specify the contents of each layer to some degree of precision. This is a question of stratification.)

New technologies potentially expose the limitations of yesterday's architectures. Structures that we thought were technology-independent turn out to have unexplored technical assumptions. We need to develop more robust structures and solutions, which would be more capable of accommodating tomorrow's technologies.

Speech recognition raises the question - where is the conversion going to be? We might expect a long debate on the relative advantages of fat phones (where the speech/data conversion and a fair amount of processing is done in the handset) and thin phones (where everything is done at the server end). But for me one of the lessons of the client/server saga was that much too much time was wasted trying to determine and fix the optimum degree of client fatness; and of course a lot of software architects now regret past decisions on fatness/thinness. In my view, what makes more sense is to structure a solution so that the fatness/thinness is itself isolated from the rest of the solution and can be varied tactically - we should be able to migrate functionality to and from the handset as the costs and point technologies change over time.
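
Here is a sketch of the kind of isolation I have in mind, with all names invented: the rest of the solution depends only on an abstract conversion interface, so the fat/thin decision becomes a swappable binding rather than a structural commitment.

```python
# Sketch: isolate the fat/thin decision behind an interface, so the
# speech-to-text conversion can migrate between handset and server
# without touching the rest of the solution. All names are invented.

from typing import Protocol

class SpeechConverter(Protocol):
    def to_text(self, audio: bytes) -> str: ...

class HandsetConverter:            # "fat phone": conversion on the device
    def to_text(self, audio: bytes) -> str:
        return "<converted locally>"

class ServerConverter:             # "thin phone": conversion at the server
    def to_text(self, audio: bytes) -> str:
        return "<converted remotely>"

def handle_call(audio: bytes, converter: SpeechConverter) -> str:
    # The business logic neither knows nor cares where conversion happens.
    return converter.to_text(audio).upper()

# The binding is a tactical choice that can change as costs change:
print(handle_call(b"...", HandsetConverter()))
print(handle_call(b"...", ServerConverter()))
```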

One of the key pieces of functionality that could be placed in the telephone handset (supported by the telco) is your identity service. When I ring my insurance company, my telephone might already know my customer number, as well as the various policies I hold with this company. This would short-cut a lot of the automatic menus I currently have to pass through before talking to a callcentre employee. My phone might even be able to perform some degree of authentication on my behalf. Vodafone is now offering a service called m-pay, where your credit card details are held securely at the server end and don't have to be spoken or sent when you buy something over the phone.

What we can celebrate is that some of the old strategic concerns have gone away. Voice versus data (or pictures). Fat versus thin. Microsoft versus IBM. Microsoft versus Sun. Pay-by-bank versus pay-by-phone. These are ceasing to be strategic issues, and are now becoming merely tactical. The strategic issues are now somewhere else.

CBDI Telecoms Report
More discussion on the fat/thin client issue

Thursday, June 17, 2004

Loose Coupling

I have some quarrel with the definition of loose coupling in the W3C Web Services Glossary. This distinguishes between two kinds of dependency – real and artificial – and defines loose coupling in terms of reducing artificial dependency. Real dependency, according to W3C, cannot be reduced.

Except of course it can – by changing the primary decomposition of the problem space into services.

What puzzles me here is that so many people seem to regard the decomposition as fixed and unalterable, and merely focus on the technical characteristics of the wiring. Of course there is some degree of adaptability you can get by that route - but that's far from the whole story.