Saturday, February 04, 2012
Under the pavement
So that's one sense of the word "foundation".
@pbmobi also quotes Frank Gehry via @nate_berg "There’s a lot of layers of bureaucracy that make it impossible to do creative work in cities."
So that's a completely different sense of the word "layer".
Bruno Latour gave a brilliant lecture at Brunel University in April 1998. Among other things, he talked about the (usually invisible) stuff under the pavements of Paris. As I recall, he showed some slides of various control rooms, each providing a different slice (are these layers or perspectives?) of the Parisian infrastructure. The point here is that there isn't one homogeneous infrastructure, but a complex system of infrastructure systems that barely talk to each other except in an emergency. Sadly, the lecture is no longer available on the Brunel website, but I found a transcript on a Hungarian website.
Nate Berg, Frank Gehry on City Building, Atlantic Cities, 9th Jan 2012.
Bruno Latour, "Thought Experiments in Social Science: from the Social Contract to Virtual Society"
1st Virtual Society? Annual Public Lecture, Brunel University, 1st April 1998. [transcript] See also [Invisible Paris, pdf].
Alex Marshall, The Works Reveals City's Essential Systems. Spotlight Vol. 5, No. 2. January 26, 2006. Review of Kate Ascher, The Works: Anatomy of a City. Penguin Press 2005
See also my post OrgIntelligence in the Control Room (October 2010)
Friday, December 23, 2011
Three or Four Schools of Enterprise Architecture
Lapalme's schools (see the reference below) are distinguished along two dimensions: scope and ends (purpose).
Scope | Ends (purpose)
Enterprise-wide IT platform (EIT): all components (software, hardware, etc.) of the enterprise IT assets. | Effective enterprise strategy execution and operation through IT-business alignment. The primary means to this end is aligning the business and IT strategies so that the proper IT capabilities are developed to support current and future business needs.
Enterprise (E): the enterprise as a socio-cultural and techno-economic system; hence ALL the facets of the enterprise are considered, the enterprise IT assets being one facet. | Effective enterprise strategy implementation through execution coherency. The primary means to this end is designing the various facets of the enterprise (governance structures, IT capabilities, remuneration policies, work design, etc.) to maximize coherency between them and minimize contradictions.
Enterprise-in-environment (EiE): includes the previous scope, but adds the environment of the enterprise as a key component, as well as the bidirectional relationship and transactions between the enterprise and its environment. | Innovation and adaptation through organizational learning. The primary means is fostering organizational learning by designing the various facets of the enterprise (governance structures, IT capabilities, remuneration policies, work design, etc.) so as to maximize organizational learning throughout the enterprise.
Besides scope and purpose, I have always considered it important to identify a third dimension of perspective (viewpoint). (For example, I talk about these three dimensions in my 1992 book on Information Modelling, pages 16-22.)
Among other things, perspective helps us to address the question: What kind of system is the enterprise being understood as? For example, the (micro-)economic perspective views the enterprise as a production system (value chain or value network), while the management cybernetic perspective (such as Stafford Beer's Viable Systems Model) views the enterprise as a thinking system or brain. Gareth Morgan's book Images of Organization contains a good survey of several contrasting perspectives.
Most enterprise architects in the first school adopt the traditional IT perspective of regarding the enterprise as an information processing system. Most of the well-known EA frameworks (such as those listed on the ISO 42010 website) are solidly within the first school.
Lapalme's second school explicitly invokes the socio-cultural perspective, and calls for all facets of the enterprise to be considered - this clearly implies going beyond the traditional IT perspective.
However, there is a considerable body of work that looks at the enterprise-in-environment, but remains within the IT perspective. This would include the Open Group work on the extended enterprise, as well as the Systems-of-Systems community. A key scoping question here is the exercise of governance over large distributed systems of systems. Mark Maier distinguished between directed and emergent systems (or we might think about directed and emergent enterprises), and this has been developed into a four-part schema by the US Department of Defense: Directed, Acknowledged, Collaborative and Virtual. There is some useful work at the SEI, where this thinking has been connected with work on SOA and enterprise architecture.
Lapalme's article identifies James Martin as one of the leaders of the third school, based on a minor work published in 1995, but most of Martin's work belongs solidly within the first school. In his 1982 book, Strategic Data-Planning Methodologies, Martin shows how IBM's BSP methodology could be used to decompose the activities of the organization, as a precursor to planning IT systems. The primary aim of such methodologies from the 1980s onwards was to identify opportunities to install more computers and develop more software, and I think it is no coincidence that a number of the pioneers of enterprise architecture (from Martin to John Zachman) had worked for IBM. See my note on The Sage Kings of Antiquity.
So I think it makes sense to divide Lapalme's third school into two distinct sub-schools. There is clearly a lot of work in School Three A, which extends the scope of architecture without introducing the socio-cultural or other perspectives which Lapalme associates with School Two. There is as yet very little formal work in School Three B.
School One: Single Enterprise, IT Perspective | School Two: Single Enterprise, Multiple Perspectives
School Three A: Extended Enterprise, System of Systems | School Three B: Ecosystem, Multiple Perspectives
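To make the two dimensions concrete, here is a minimal sketch in Python encoding the four schools as data. The field names and labels are mine, not Lapalme's; this is just one way of expressing the 2x2 grid above.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class School:
        name: str
        scope: str        # what the architecture covers
        perspective: str  # how the enterprise is viewed

    SCHOOLS = [
        School("One",     "single enterprise",   "IT"),
        School("Two",     "single enterprise",   "multiple"),
        School("Three A", "extended enterprise", "system of systems (IT)"),
        School("Three B", "ecosystem",           "multiple"),
    ]

    # e.g. which schools go beyond the traditional IT perspective?
    beyond_it = [s.name for s in SCHOOLS if "IT" not in s.perspective]
    # -> ["Two", "Three B"]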
James Lapalme, "3 Schools of Enterprise Architecture", IT Professional, 14 Dec 2011. IEEE Computer Society Digital Library. http://doi.ieeecomputersociety.org/10.1109/MITP.2011.109
Thursday, March 17, 2011
Emergent Architecture
In August 2009, Gartner produced a definition of Emergent Architecture with two synonyms (Middle-Out EA and Light EA), two characteristic practices (modelling lines not boxes, modelling relationships as interactions) and seven characteristic properties (non-deterministic and decentralized; autonomous, rule-bound, goal-oriented and locally influenced actors; dynamic or adaptive systems; resource constraints) [Gartner Identifies New Approach for Enterprise Architecture, August 2009].
At the same time, Dion Hinchcliffe produced a similar list of properties and one of his trademark diagrams [Pragmatic new models for enterprise architecture take shape, August 2009]. Meanwhile Dion was also talking a lot about WOA (which he credits to Nick Gall of Gartner), and there was clearly a link in their thinking between WOA and some notion of emergence [A Web-Oriented Architecture (WOA) Un-Manifesto, December 2009].
Overall, the "emergent" and "evolving" definition of Emergent Architecture across the internet is pretty muddled: although other writers in the Enterprise Architecture world may reference Gartner and/or Hinchcliffe, they don't always pick up the full richness and power of their definitions. In addition, these writers may be influenced by the agile community, where "emergent" is contrasted with "upfront" and seems to mean something like "making it up as you go along".
For example:
"The architect should collaborate with the development team to define and code higher-level contexts, responsibilities, interfaces, and interactions, as needed, and leave the details to the team. The development team, through the rigorous use of automated unit and story tests via continuous integration, is then able to improve the system design incrementally and continually - both within and across model-context boundaries - without compromising system functionality. Gartner uses the term Emergent Architecture to describe this practice." [Keeping Architectures Relevant, Microsoft Architecture Journal]
The practice described in that article can only be described as "emergent" on a fairly narrow interpretation of the word, presumably based on the "rule-bound" criterion and ignoring the other characteristics.
My own view is that all of the characteristics Gartner and Dion Hinchcliffe originally proposed (with the possible exception of resource constraints, which are nothing new) could be regarded as emerging, in the sense of being new and trendy and representing a departure from the modernist paradigm of "traditional" enterprise architecture. (See my post on Modernism and Enterprise Architecture.) But not all of the characteristics they proposed are directly associated with emergence, in the complex systems sense.
Wikipedia defines emergence as "the way complex systems and patterns arise out of a multiplicity of relatively simple interactions" [Wikipedia: Emergence]. As I see it, the notion of emergence leads to a key distinction for enterprise architecture, between a planned order (which Hayek called Taxis) and an emergent spontaneous order based on self-organization (which Hayek called Cosmos).
Although some writers like to use biological and ecological analogies when talking about emergence, the sociopolitical analogies are probably more relevant to enterprise architecture because they include the notion of human intentionality. (And some people use the terms "top-down" and "bottom-up", but nowadays I try to avoid these terms because of their potential ambiguity. See my note on Service Planning.)
The distinction between planned and emergent architecture is akin to the distinction introduced by Henry Mintzberg between deliberate and emergent strategy, the latter referring to strategies that originate in the interaction of an organization with its environment [Wikipedia: Strategy dynamics].
This distinction also maps onto some recent work in the system-of-systems (SoS) domain. Mark Maier introduced a distinction between directed and collaborative systems (1998), and this has been developed by the U.S. Department of Defense (DoD) into a more complicated four-part schema (directed, acknowledged, collaborative and virtual). See for example System Engineering Guide for System-of-Systems Engineering (Version 1, August 2008). Planned architectures tend to assume directed or acknowledged systems-of-systems, while emergent architectures are associated with collaborative and virtual systems-of-systems.
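As a rough illustration (my own sketch, not the DoD's formalism), that mapping between the four SoS categories and the planned/emergent distinction might be expressed like this:

    from enum import Enum

    class SoSCategory(Enum):
        DIRECTED = "directed"            # central objectives and management
        ACKNOWLEDGED = "acknowledged"    # central oversight, autonomous components
        COLLABORATIVE = "collaborative"  # voluntary cooperation, no central authority
        VIRTUAL = "virtual"              # no central authority or agreed purpose

    def architecture_style(category: SoSCategory) -> str:
        # Planned architectures tend to assume directed or acknowledged SoS;
        # emergent architectures are associated with collaborative and virtual.
        if category in (SoSCategory.DIRECTED, SoSCategory.ACKNOWLEDGED):
            return "planned"
        return "emergent"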
Boxer and Garcia write: "In the complex systems-of-systems contexts supporting distributed collaboration, the architecture of the collaborative enterprise has to be approached as emergent, created through an alignment of individual architectures." [Enterprise Architecture for Complex System-of-Systems Contexts, 3rd Annual IEEE Systems Conference, Vancouver, March 23-26 2009 (abstract) (pdf)] See also Type III Agility and Ideologies of Architecture.
The "acknowledged" category of systems of systems allows some flexibility and autonomy in the local development of component systems, while remaining under central oversight and direction. This is surely where "rule-bound actors" would fit, and it would include the practices described in the Microsoft Architecture Journal article mentioned above (Keeping Architectures Relevant). There is also some scope in the "acknowledged" category for dynamic or adaptive systems and adaptive processes. But this is some way short of a genuinely emergent architecture.
Gartner is closer to the mark when it talks about decentralized decision-making (which it calls "non-deterministic") involving autonomous goal-oriented actors, responsive to local influence. Similarly, Dion Hinchcliffe talks about community-driven architecture run by autonomous stakeholders, producing decentralized solutions with emergent outcomes.
I don't know whether Gartner is still promoting the idea of emergent architecture, or whether its definition has evolved since 2009, given that Nick Gall's recent work (such as his latest piece on Panarchy) doesn't seem to use the term at all. However, Gartner's original piece on emergent architecture remains available on its website, and continues to be referenced as one of the primary sources for the term.
What are the practical implications of emergence for enterprise architecture?
One of the most important practical differences between planned architecture and emergent architecture is that we tend to think of planned architecture as a design-time artefact - something that is envisioned, designed and then implemented by architects. So architecting is regarded as a special kind of designing. Implementation may be directly managed by the architects, or indirectly by means of a set of architectural policies to govern acquisition, development and use.
The implementation of a planned architecture always involves a gap between plan and reality. Plans take time to realise, and some bits of the plan may never get realised. There is always a de facto architecture, which we may call the architecture-in-use: a structural description of what is going on, paying attention to certain structural and sociotechnical aspects of a (run-time) system of systems from a given observer position.
Devotees of planned architecture see this in terms of a simple transformation from AS-IS to TO-BE. Their attention to the existing de facto architecture is largely motivated by the need to define a business case and a technical transition strategy for replacing AS-IS with TO-BE, possibly in discrete phases or chunks.
But emergence tells us this: that the architecture-in-use emerges from a complex set of interactions between the efforts and intentions of many people. The architects cannot anticipate, let alone control, all of these interactions. There may be some areas in which the architects are able to carry out something that looks a bit like design, either autonomously or collaboratively, but there will be other areas in which the architects are simply trying to understand the structural implications of some messy situation and make some useful interventions. The primary activity of architects is therefore following not a design loop but an intelligence loop.
However, that depends on what we mean by design. There is a renewed interest in a generalized notion of "design" in the enterprise architecture world, especially with the current popularity of Design Thinking (and such derivatives as "Hybrid Thinking"). But that's a subject for another post.
Sources
Gartner identifies new approach for enterprise architecture (Aug 2009)
Commentary on Gartner's emergent architecture concept by Abel Avram, Leo de Sousa, Adrian Grigoriu and Mike Rollings (then with Burton Group).
Dion Hinchcliffe, Pragmatic new models for enterprise architecture (Aug 2009)
Mark Maier, Architecting Principles for Systems of Systems (InfoEd June 1997)
See also
Peter Cripps, Enterprise Architecture is Dead. Long Live Interprise Architecture (Oct 2010)
Tom Graves, Hybrid-thinking, enterprise architecture and the US Army (May 2010)
Three or Four Schools of Enterprise Architecture (December 2011)
Wednesday, October 13, 2010
Brownfield Requirements Engineering
Chairman's Introduction
Ian Alexander opened the meeting by pointing out the gulf between the demands of real engineering projects (95% brownfield) and the body of knowledge about requirements engineering offered by academics and authors (95% greenfield), and sketching some of the characteristic features of brownfield requirements. The importance of this topic is demonstrated by an excellent turnout for this meeting.
Brownfield Systems Engineering
Ian Gallagher of Altran Praxis presented some experience from several large systems projects, both military and civilian. The typical scenario is a major upgrade of existing systems to improve performance and deal with obsolescence; the upgrade may therefore include different types of requirement:
- doing new things
- doing old things in new ways
- migrating away from ageing technologies, proprietary systems or restricted technologies (e.g. those subject to export licenses)
With greenfield engineering, the requirements of the engineering project are pretty much the same as the requirements of the TO-BE system, but brownfield engineering introduces a critical choice: whether to write total requirements for the TO-BE system or to write requirements for the project (or separate increments) based on the difference(s) between AS-IS and TO-BE. IanG expressed a strong preference for the former, but acknowledged that this isn't always acceptable to key stakeholders, especially if the effort would be perceived as excessive. But in any case it's obviously important to be clear what kind of requirements we are talking about.
Beyond the scope of the TO-BE system, there may be a larger system-of-systems context, in which other equally large systems are the subject of equally large brownfield engineering activity. This raises questions of the interoperability of the increments, and the coordination between autonomous brownfield engineering projects, which is another important issue for brownfield requirements engineering.
A key issue is the pressure to reuse and repurpose existing assets. There are three possible reasons for this.
- Rational economics - genuine cost-saving based on lifetime exploitation of assets
- Irrational inertia - desire to justify past investment decisions, resistance to change
- Commercial - vested interests from key stakeholders (such as contractors) to maintain proprietary dependencies
Managing Brownfield Financial Software
Phil Cantor of Smartstream then talked about his past and present experience managing and maintaining software products in the financial sector. (Most of his examples came from previous companies he had worked for.) The requirements of such products don't come exclusively (or even primarily) from the "users" but from a body of knowledge that the package vendor is expected to master. Phil gave the example of an obscure piece of financial calculation that is now only required in the Philippines: users elsewhere in the world may not know about this, but a package that doesn't cater for this requirement will be unacceptable to a global bank.
The package vendor then has to balance a wishlist of enhancement requests and bug reports from hundreds of user organizations against the practical constraints of maintaining - and hopefully improving the quality of - an existing body of software.
For me, one of the most interesting issues raised by Phil was when he talked about the platform and its requirements. There is an internal platform supporting the user-facing components of the product suite, containing common services and suchlike, and the requirements for this platform cannot be derived purely from the requirements of the package as a whole. Furthermore, the package as a whole serves as a platform for the business processes of the user organizations (Phil's customers), so the same question arises at that level as well. However, this is not particularly a brownfield issue.
Phil also mentioned the challenges of training. From a sociotechnical perspective this is a major brownfield issue, since the performance of the TO-BE system will be affected by the user habits and expectations (e.g. knowing where to find things) carried forward from the AS-IS system. Training is a solution (and often not a very good one) to some set of sociotechnical requirements, and there may be better ways of conveying a new set of habits and expectations to the users and technical staff, but clearly we need to have an understanding of these requirements as well.
Discussion
Lots of further issues came up in the discussion, and I didn't manage to capture half of them. Someone raised the question of scale: what do you do in a brownfield situation with a complex network of interdependent pieces of hardware and software, where the need for upgrades is constrained by small budgets and massive complexity? This led to the question of timing: how do you decide whether to upgrade something now or to defer the upgrade until next year? (Nobody actually mentioned the concept of technical debt, but it is clearly relevant here.)
Finally, my memory and notes can be supplemented by a few choice tweets.
@rmonhem much debate over how much to document requirements with significant re-use of existing applications
@rmonhem most difficult challenges have often been posed by cultural, commercial and legal factors.
@jiludvik Bit too much focus on techniques documenting and signing off detailed reqs. This is not specific for brownfield projects!
@jiludvik Phil Cantor's pres has been very entertaining and spot on for brownfield req's eng. I can certainly relate to many challenges he mentioned
Friday, September 03, 2010
Zero-Based Requirements
Brown-Field Requirements
One view of requirements engineering is that its purpose is to produce a complete and coherent statement of what some system-of-systems is required to do - in other words, its behaviour in the broadest sense, including "functional", "non-functional" and "commercial" requirements.
In a "Green-Field" scenario, we might imagine that this statement of requirements would result in the procurement and installation of a system of systems meeting the stated requirement. Requirements engineering is focused on understanding exactly what is required, and specifying it in an unambiguous and testable form.
But in almost all engineering projects, some system of systems - technical or sociotechnical - already exists, and the practical purpose is to make some planned changes to it. So people in the RE community are starting to talk more about "Brown-Field" requirements.
(RESG Event: Managing Brownfield Project Requirements, London, October 12th 2010)
Gap Analysis
An obvious starting point for brownfield requirements analysis would seem to be the identification of a gap between desire and reality. People often produce two models - an AS-IS model that describes the existing system, and a TO-BE model that describes its future replacement. The engineering requirements are then derived from the differences between the AS-IS model and the TO-BE model. This will typically result in a solution, possibly involving rebuilding some of the subsystems, replacing or upgrading some of the component parts, adding some new stuff or stripping out old stuff, rewiring the network, retraining the people, resetting system policies and parameters, and so on. In addition to a solution blueprint, showing how all these elements are to be configured, there will also be a transition strategy, indicating how (and in what sequence) all these changes will be installed. There are usually operational constraints - for example, a requirement to keep critical business processes running at an acceptable level during the transition period.
Imagine you want to rebuild your kitchen. You have to think about fitting new units into the existing space, or possibly moving a wall to give yourself more space. You have to decide whether you are going to keep the existing fridge (which you only bought last year) or buy a new one. And you have to think about how long you can manage without being able to cook. Moving the wall, or deciding to keep the old fridge, belong to the solution domain. But if you are going to do requirements analysis properly, there needs to be something in the statement of requirements that helps you determine these aspects of the solution.
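As a toy illustration of this kind of gap analysis, here is a sketch in Python, assuming (unrealistically) that each model is just a set of named components:

    def gap_analysis(as_is: set, to_be: set) -> dict:
        return {
            "add":    to_be - as_is,   # new stuff to bring in
            "remove": as_is - to_be,   # old stuff to strip out
            "keep":   as_is & to_be,   # candidates to retain, upgrade or replace
        }

    # The kitchen example: the fridge appears in both models, but whether to
    # keep or replace it is a solution decision; the statement of requirements
    # must contain whatever is needed to make that decision.
    kitchen = gap_analysis(
        as_is={"old units", "fridge", "wall"},
        to_be={"new units", "fridge"},
    )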
In all but the smallest and most simple projects, there will be many solution variants. The decision to retain or replace a particular component may be based on a technical calculation of its likely performance and capacity within the new configuration, or may be based on a political calculation as to the most convenient budget from which to fund the replacement (in other words, preferably someone else's budget, if we can get away with not replacing it now).
Ten years ago this month, I wrote a piece about this in relation to Component-Based Software Engineering: Supply and Fit (CBDI Journal, September 2000).
But there is a more fundamental reason why there are many possible solutions - because making sustainable changes to complex systems is a tough challenge. Large and complicated change programmes aren't always the most effective; a small intelligent fix is often far better (and less risky) than any amount of optimistic meddling.
So before we can get to a solution blueprint and a transition strategy, we need an intervention strategy. This takes us out of the comfort zone of requirements engineering into general systems thinking.
Leverage Points
Donella Meadows identified twelve leverage points for making changes in complex systems, and suggested that these could be ranked according to their power. See the original paper by Donella Meadows; a version is included in her posthumously published book Thinking in Systems (2008).
If the intervention strategy can be expressed as a combination of leverage points, then this raises a question for requirements engineering: how do we work through the requirements of changing a complex adaptive system in a way that could produce this kind of intervention strategy?
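For reference, here is Meadows' ranking (paraphrased from her paper) encoded as a simple Python structure; expressing an intervention strategy as a combination of leverage points might then start from something like this:

    # Meadows' twelve leverage points, from weakest (12) to most powerful (1).
    LEVERAGE_POINTS = {
        12: "constants, parameters, numbers (subsidies, taxes, standards)",
        11: "sizes of buffers and stocks, relative to their flows",
        10: "structure of material stocks and flows",
        9:  "lengths of delays, relative to the rate of system change",
        8:  "strength of negative (balancing) feedback loops",
        7:  "gain around driving positive (reinforcing) feedback loops",
        6:  "structure of information flows (who has access to what)",
        5:  "rules of the system (incentives, punishments, constraints)",
        4:  "power to add, change, or self-organize system structure",
        3:  "goals of the system",
        2:  "the paradigm out of which the system arises",
        1:  "the power to transcend paradigms",
    }

    # A hypothetical intervention strategy as a combination of leverage points:
    intervention = [6, 5, 12]  # change information flows, rules, parameters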
Zero Based Procurement
Finally, I wanted to make a comment about one of the (many) dysfunctional aspects of prevailing procurement practices. In his blogpost Was this NHS IT tender a stitch-up? (Computer World, September 2010), Tony Collins talks about the difficulties of referencing a specific product or system in a tender document. "If a user organisation has a system it's happy with, and wants to keep and enhance, why would it want to go through the needless expense of an EC tendering, rather than simply renew the contract?"
Procurement rules may have been designed to prevent cosy and uncompetitive relationships between public sector organizations and their suppliers. They appear to have the effect of forcing each procurement to be treated as a separate exercise, starting each time from a blank sheet of paper, so that there is at least the theoretical possibility of giving new suppliers a chance. (This is similar to the principle of Zero-Based Budgeting.) Many people doubt that these mechanisms actually have any real effect on competition or value-for-money; but meanwhile, these mechanisms appear to have a strongly negative impact on through-life capability management. How can brownfield requirements engineering be done properly under these constraints?
Thursday, June 10, 2010
Ecosystem SOA 2
When I started writing about SOA and the service-based business over ten years ago, I defined two "cuts" across the service ecosystem. One cut separates inside from outside, while the other cut separates supply from demand.
(This diagram was included in my 2001 book on the Component-Based Business, and frequently referenced in my work for the CBDI Forum. For a brief extract from the book, see my Slideshare presentation on the Service Ecosystem.)
The inside/outside cut is sometimes called encapsulation. It decouples the external behaviour of a service from its internal implementation, and can be described in terms of knowledge - the outside has limited knowledge of the inside, and vice versa. (The cut is also sometimes called transparency - for example location transparency, which means that external viewers can't see where something is located.)
The supply/demand cut is about delegation, and can be described in terms of responsibility. Getting these two cuts right may yield economics of scale and scope; and the business case for SOA as a development paradigm is often formulated in terms of reusing and repurposing shared services.
For relatively small and simple SOA projects, it may be feasible to collapse the difference between these two cuts, and treat them as equivalent. (The inside/outside relationship and the supply/demand relationship are sometimes both described as "contracts", although they are clearly not the same kind of contract.) However, enterprise-scale SOA requires a proper articulation of both cuts: confusing them can result in suboptimal if not seriously dysfunctional governance and procurement. Many people in the SOA world still fail to understand the conceptual importance of these cuts, and this may help to explain why some organizations have had limited success with enterprise-scale SOA.
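As a minimal sketch of why the two cuts are different kinds of "contract" (Python, with invented names): the inside/outside cut is an interface hiding an implementation, while the supply/demand cut is a separate agreement about responsibility that the interface alone cannot express.

    from typing import Protocol

    class PaymentService(Protocol):
        # The inside/outside cut: consumers see only this interface,
        # with limited knowledge of what lies behind it.
        def pay(self, amount: int) -> bool: ...

    class InHousePayments:
        # One possible inside; it could be swapped without consumers noticing.
        def pay(self, amount: int) -> bool:
            return True

    # The supply/demand cut: a separate agreement about responsibility
    # (who provides the service, on what terms), not expressible in the
    # interface alone.
    SUPPLY_CONTRACT = {"provider": "shared payment service", "availability": "99.9%"}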
Going beyond enterprise SOA as it is generally understood, there is a third cut separating two views of a system: the system-as-designed (whose structure and behaviour and rules can perhaps be expressed in some formal syntax such as UML, BPMN or ArchiMate) and the system-in-use (whose actual performance is embedded/situated in a particular social or business context). This cut is critical for technology change management, because of the extent to which the designed system underdetermines the pragmatics of use. I have been talking about this cut for over twenty years, but only more recently working out how to articulate this cut in composition with the other two cuts.
One important reason for looking at the pragmatics of use is to understand the dimensions of agility. In many settings, we can see a complex array of systems and services forming a business platform, supporting a range of business activities. If no agility is required in the business, then it may not matter if the platform is inflexible, forcing the business activities to be carried out in a single standardized manner. But if we assume that agility is a critical requirement, then we need to understand how the flexibility of the platform supports the requisite variety of the business.
More generally, understanding the pragmatics of use leads to the recognition of a third kind of economic value alongside the economics of scale and the economics of scope: the economics of alignment. The value of a given system-of-systems depends on how it is used to deliver real (joined-up) business outcomes, across the full range of business demands. (I'm afraid I get impatient with people talking glibly and simplistically about business/IT alignment, without paying attention to the underlying complexity of this relationship.)
Understanding these three cuts (and analysing their implications) is critical to understanding and managing a whole range of complex systems problems - not just SOA and related technologies, not even just software architecture, but any large and complex sociotechnical systems (or systems-of-systems). If the three cuts are not understood, the people in charge of these systems tend not to ask the right questions. Questions of pragmatics are reduced to questions of platform design; while questions of the cost-justification and adoption of the platform are reduced to a simple top-down model of business value. Meanwhile the underlying business complexity (requisite variety) will be either misplaced (e.g. buried in the platform) or suppressed (e.g. constrained by the platform).
So there are three challenges I face as a consultant, attempting to tackle this kind of complex problem. The first challenge is to open up a new way of formulating the presenting problem, based on the three cuts. The second challenge is to introduce systematic techniques for analysing the problem and visualizing the key points. And the third challenge is to identify and support any organizational change that may be needed.
With thanks to Philip Boxer and Bernie Cohen. For a different formulation of the three cuts, together with a detailed example, see their new paper "Why Critical Systems Need Help to Evolve" Computer, vol. 43, no. 5, pp. 56-63, May 2010, doi:10.1109/MC.2010.150. See also Philip Boxer, When is a stratification not a universal hierarchy? (January 30th, 2007)
Related post Ecosystem SOA (October 2009)
Saturday, November 28, 2009
Complexity and Power
Obviously we have to be clear as to what counts as an element (for example functions), and what counts as a connection. Using Roger Sessions' SIP (Simple Iterative Partitions) lens, it is possible to see how certain architectural styles (including those popular in the SOA world, such as hub-and-spoke or layered) only deliver simplicity (and the benefits of simplicity) if we can assume that only certain kinds of connection are significant. Roger's view is that this assumption is unwarranted and invalid.
In general, the so-called functional requirements are associated with the elements and the logical connections between them. In my view, architects also need to pay attention to the nature of the connections (coupling), because this will have important consequences for the structure and behaviour of the system as a whole - for example, synchronous versus asynchronous connections. At present, Roger's complexity calculations don't differentiate between different kinds of connection, so it would be interesting to investigate the costs and risks associated with different kinds of connections, to see how much difference it could make.
Roger's primary interest is in IT systems, but the same principles would appear to apply to processes and organizations. If you are running a factory, you have an architectural choice about the connection between say the moulding shop and the paint shop. With an asynchronous flow you have two loosely coupled operations separated by a buffer of work-in-progress; with a synchronous flow you have two tightly-coupled operations connected on a just-in-time basis. The former is a lot easier to manage, but it has an overhead in terms of inventory cost, storage cost, increased elapsed time, slower response to changes in demand, and so on. The latter may be more efficient under certain conditions, but it can be more volatile and the impact is much greater when something goes wrong anywhere in the process.
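A toy calculation (my own illustration with invented weights, not Roger's SIP arithmetic) shows how differentiating between kinds of connection would change a complexity score for the two factory configurations:

    # Assumed weights: suppose a synchronous (tightly coupled) connection
    # contributes more to complexity than an asynchronous (buffered) one.
    WEIGHTS = {"async": 1.0, "sync": 3.0}

    def complexity(elements: int, connections: list) -> float:
        return elements + sum(WEIGHTS[kind] for kind in connections)

    loose = complexity(2, ["async"])  # moulding shop -> buffer -> paint shop
    tight = complexity(2, ["sync"])   # moulding shop -> paint shop, just-in-time
    assert tight > loose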
Intuitively, there seems to be a difference in complexity between these two solutions. The first is simpler, because the connection between the two systems is weaker; the second is more complex. With greater complexity comes greater power but also greater risk. Surely this is exactly the kind of architectural trade-off that enterprise architects should be qualified to consider. Roger's SIP methodology does give the architect a very simple lens to try and understand system-of-system complexity. Not everyone agrees with Roger's definition of complexity, and we can find some radically different notions of complexity for example in the Cynefin world, but at least Roger is raising some important issues. The EA world certainly needs to pay a lot more attention to questions like these.
Sunday, January 25, 2009
SOA and Holism
Holism ... is the idea that all the properties of a given system (biological, chemical, social, economic, mental, linguistic, etc.) cannot be determined or explained by its component parts alone. Instead, the system as a whole determines in an important way how the parts behave.
To what extent does this concept apply to SOA? My own view is that SOA needs to be understood from the Systems-of-Systems Engineering paradigm rather than from the Systems Engineering or Software Engineering paradigm. This helps us to deal with a range of system-level phenomena including Feature Interaction.
In my writings I've drawn on the recent work of Christopher Alexander, from A New Theory of Urban Design to The Nature of Order, where Alexander talks about something he calls structure-preserving transformations.
According to the New Theory (or at least my interpretation of it, which I call Organic Planning) each act of transformation should be a step within a larger and open-ended evolutionary development, and should have three aspects.
- Produce something at some level
- Complete something (larger) that was already part-developed - typically by linking smaller (lower-level) things and peer (same-level) things that already existed, or were created in previous steps.
- Create new opportunities - Alexander calls this hinting-at.
I published a very simplified version of this in the CBDI Journal in 2004, suggesting that the service designer needed to look in four directions (upwards, downwards, sideways, inwards). I understand that this was picked up and referenced by the ArchiMate people, for example in a 2005 book called Enterprise Architecture at Work, and it has been implemented in the Telelogic tool. My colleague Tony Bidgood has recently published an article on ArchiMate, with some further examples.
- Richard Veryard, Business Driven SOA (CBDI Journal, May 2004), Business Driven SOA Governance (CBDI Journal, June 2004)
- Tony Bidgood, Archimate Standard for Enterprise Service Modelling (CBDI Journal, August 2008)
1. Inwards: Functional correctness
2a. Downwards: Integrating and composing smaller stuff
2b. Sideways: Interoperability
3. Upwards: Larger whole
Organic Planning is described in my 2001 book on the Component-Based Business, and there is a short version on my website, but I didn't make this explicit in my 2004 article.
My expectation has always been that a series of transformations (whether structure-preserving or otherwise) would be expressed as a series of model pairs, in which the nth TO-BE becomes the (n+1)th AS-IS. But some alternative modelling notations (such as Michael Bell's SOMF) allow both AS-IS and TO-BE to be expressed in a single model, so it would be very interesting to see how a series of transformations could be expressed and analyzed.
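A minimal sketch of such a series of model pairs, assuming (purely for illustration) that models are simple values and transformations are functions:

    def transformation_series(initial, transformations):
        # Returns a list of (AS-IS, TO-BE) model pairs, in which the nth
        # TO-BE becomes the (n+1)th AS-IS.
        pairs = []
        as_is = initial
        for transform in transformations:
            to_be = transform(as_is)
            pairs.append((as_is, to_be))
            as_is = to_be
        return pairs

    # e.g. three steps over a toy "model" represented as a set of services
    steps = transformation_series({"services"}, [
        lambda m: m | {"payments"},
        lambda m: m | {"billing"},
        lambda m: m | {"reporting"},
    ])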
Sunday, February 24, 2008
Complex Event Processing
Processing of Complex Events:
According to Wikipedia, the goal of CEP is to identify meaningful events from an event cloud. I interpret this as detecting complex events from a mass of atomic events.
Complex Processing of Events:
But in some of the examples cited by CEP experts, the events themselves don't seem to be particularly complex. The complexity comes from the need to mobilize an efficient and effective response to a large number of relatively simple events.
- Tiffin Box Delivery (Paul Vincent)
Complex Systems:
A complex system is buffeted by events from the environment, and must respond in efficient, effective and flexible ways to these events. This requirement is addressed by a considerable body of work on complex systems and cybernetics - including Stafford Beer's Viable Systems Model (VSM), which I believe to be of particular relevance to CEP - but there is surprisingly little reference to this work in the CEP field. (Searching the Internet for links between CEP and VSM, I only find one person aside from myself who has made this connection - one David HC Soul, who has posted a number of cybernetics reading lists and other material to Squidoo and elsewhere.)
If you just read the Wikipedia articles on CEP and VSM, you may not see much connection between them, so let me try and explain. In my view, the most useful contribution from VSM to CEP is the notion of a transducer. VSM describes a system of systems, with messages or signals passed between the systems. Some of the systems are relatively simple, while others are more complex, handling higher levels of variety. So you need some way of producing a high-variety stream of events from a low-variety stream of events - this is known as an amplifier. And you need some way of converting a high-variety stream of events to a low-variety stream of events - this is known as an attenuator. Both the amplifier and the attenuator convert a stream of events from one form to another - the general word for such a mechanism is transducer.
(In event-driven SOA, events are transmitted through message-oriented middleware or ESB. Some transduction can usually be performed directly by this platform; more complex transduction can be implemented by utility services sitting on the platform.)
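By way of illustration only (the event shapes and field names are invented), attenuation and amplification over event streams might look like this in Python:

    def attenuator(events):
        # High-variety in, low-variety out: pass upwards only the events
        # that matter at the higher level, in a reduced form.
        for e in events:
            if e.get("severity") == "alarm":
                yield {"type": "alarm", "source": e.get("source")}

    def amplifier(events, handlers):
        # Low-variety in, high-variety out: fan each event out into
        # context-specific events, one per interested handler.
        for e in events:
            for h in handlers:
                yield {"handler": h, "event": e}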
Stafford Beer then proposes an elaborate architecture for distributing management functions (such as coordination and planning) between the systems. I don't think we necessarily have to adopt this architecture exactly as Beer lays it out, but we should certainly adopt some of Beer's architectural principles.
Notes:
Wikipedia: Complex Event Processing, Event Stream Processing, Event-Driven Architecture, Viable System Model
Tuesday, September 27, 2005
Efficiency and Robustness - On Central Planning
The debate between Ian Welsh and Chandler Howell has continued since my previous post on Efficiency and Robustness, and the topic of Central Planning has appeared. In this post, I want to set aside the ideological aspects of central planning and discuss some of the practical aspects.
The question of central planning is widely seen as a political one, but it certainly isn't a simple matter of left and right. There are many right-wing libertarians who reject central planning as equivalent to Stalinism, the worst excesses of the Soviet experiment. (See this article Central Planning is Spontaneous Economic Order whose author complains that the Santa Fe Institute has betrayed the principles of complexity.) And there are left-wing libertarians who are puzzled or aghast at the centralizing tendencies of some of the right-wing authoritarians as well as corporate commercial interests.
Intelligent Design is of course a form of central planning - but executed by a perfect being rather than imperfect humans. (Boston Globe, via Snowdeal.) Robin Wilton asks whether it is possible for an atheist to believe in Intelligent Design. Well, it is certainly possible for atheists to believe in central planning. There are science fiction worlds in which central planning is carried out by a supreme computer, sometimes known as Multivac (or perhaps Spaghetti Monster). Science fiction writers often use technology as an oblique way of discussing serious philosophical and moral questions.
But if we put the politics and religion on one side, there is clearly a great deal of imperfect central planning that goes on within large organizations. We should be able to discuss this in pragmatic terms rather than ideological ones.
In a comment to Chandler Howell's previous post, Stu Berman advocates "horizontal integration rather than vertical" and rejects central planning. Chandler replies that, "Central Planning is the best analogy I have ever seen for how large corporations are run today. The only difference [with the Soviet Union] is that the corporations have a lot better computing power to manage their logistics." Chandler goes on to describe the fragility of systems (including large organizations) based on central planning, and suggests that H5N1 (aka Bird Flu) could have a more devastating effect than Hurricanes Katrina and Rita.
Chandler goes on to talk about coordination (what the military call C3I) in a crisis management context. "The disaster in Katrina may have started with the levees breaching, but lack of coordination at any level is what kept it going for days."
Discussion of central planning is relevant not just to the destruction of New Orleans, but to its subsequent reconstruction. The popular alternatives to central planning come under a range of labels, including clusters (SwampFox), dynamic improvisation (Arnold Kling via OceanStateBlogger), organic growth (MCB) and private initiative (CapitalFreedom). These alternatives certainly don't all come from the same end of the political spectrum.
Advocacy for central planning also comes from various quarters, including Environmental Planning (WHY, WHAT, HOW and FOR WHOM).
In the real world, we are not likely to have either total central planning or total anarchy, but a complicated mixture of the two. To make a productive contribution to this debate, we have to be able to reason intelligently about outcomes (evidence-based policy) rather than ideological principles. (MCB also makes this point.) Furthermore, we have to connect outcomes with a specific context. (Not generic outcomes that would be exactly the same for New Delhi, New Orleans and New York.)
Central Planning is an attempt to produce all decisions from a single directing mind. To the extent that this attempt is successful, it puts the emphasis on endo-interoperability - coordination within the scope of a single plan. In contrast, exo-interoperability involves dynamic collaboration between autonomous agencies, whose plans may be formulated in entirely different terms.
Chandler Howell, Surge Protectors (21 September 2005), Surge Protectors Part 2 (25 September 2005)
Ian Welsh, The Economics of a Flu Pandemic II (24 August 2005)
Related posts on Efficiency and Robustness
Thursday, September 22, 2005
Efficiency and Robustness - On Tight Coupling
- Our society, as a whole, has no surge protection - no ability to take shocks. We have no excess beds, no excess equipment, no excess ability to produce vaccines or medicines, nothing. Everybody has worshipped at the altar of efficiency for so long that they don't understand that if you don't have extra capacity you have no ability to deal with unexpected events. (IW)
- As we have now seen with Hurricane Katrina, even if the capacity were there, the United States’ ability to manage and allocate that capacity is essentially non-existent. (CH)
- Because our society and economy is so much more integrated and so much more connected (for example the flu had to spread by ship back then), and so much more "just on time" that it isn't really a model you can use. We'll likely get hit harder, faster and because many locations have such limited inventories, relying on getting it as they need it, the supply disruptions are likely to be much worse.
- In an emergency, a distributed piece of information calls for a central response. A disaster, the converse. Those best informed are in the field; those best equipped, in the field. The best disaster response system is the one in your hand when the disaster strikes.
- But the changes needed to make things better are politically painful and resisted by incumbent powers. ... I suspect that central committees will determine we need more central response systems, and weaken the economy by taxing everyone hard to pay for it. The exact opposite of the medicine a "network edge" response would dictate.
- Inability of FEMA to work with medical professionals unless they are part of the National Disaster Medical Team. Inability of FEMA to orchestrate external / autonomous agents. (Overlawyered Blog, via Ernie the Attorney)
- Inability of FEMA to provide appropriate support for people with special needs. Inability of FEMA to collaborate with agencies with specialist knowledge and resources. (Conmergence Blog)
The FEMA response to Hurricane Katrina
Related blogposts:
Technological Progress (October 2005)
Saturday, January 15, 2005
A Brief History of Methods
Background
The Cynefin Centre spun off from IBM in July 2004 and describes itself as a network focusing on the application of complexity science to management and organisational practice. "At the heart of the Cynefin Centre is a distinction between ordered and unordered systems, and the consequent recognition that systems with fundamentally different qualities require the application of contextually differentiated methods for both diagnosis and intervention."
The Cynefin Sensemaking Framework has five domains, four of which are named, and a fifth central area, which is the domain of disorder. The right-hand domains are those of order, and the left-hand domains those of un-order.
For a full description of the framework, see paper by Cynthia Kurtz and Dave Snowden: The new dynamics of strategy: Sense-making in a complex and complicated world. IBM Systems Journal Vol 42 No 3, 2003 (html) (pdf). See also weblog by Willem van den Ende (Dec 6, 2004) (Jan 17 2005).
Development of Methods
Chaos | Uncoordinated building work. No method.
Known | Design of a single software system. Methods include RUP and XP.
Knowable | Design of a software-intensive solution, as a system of systems (e.g. RUP/SE). Twin-track/multi-track development (e.g. Select Perspective); assumption of overall design authority - directed composition.
Complex | Design of a software-intensive experience, within a service-based ecosystem - true SOA. Complex systems engineering calling for collaborative composition. Methods could include XB.
RUP is the flagship method for IBM Rational. There is a plug-in for systems engineering called RUP/SE, which goes some way towards the demands of SOA.
There is a difference of opinion as to where extreme Programming (XP) belongs. If it is merely a software engineering method focused on the efficient production of small-scale software, then it belongs in the domain of the known. If it moves beyond software productivity into business agility, then it transforms into extreme Business (XB) and belongs in the domain of the complex.
Related Posts
RUP/SE (January 2005)
The Authorship of Method (February 2011)