Showing posts with label agile. Show all posts

Wednesday, April 22, 2015

Agile and Wilful Blindness

@ruthmalan challenges @swardley on #Agile

Some things are easier to change than others. The architect Frank Duffy proposed a theory of Shearing Layers, which was further developed and popularized by Stewart Brand. In this theory, the site and structure of a building are the most difficult to change, while skin and services are easier.

Let's suppose Agile developers know how to optimize some of the aspects of a system, perhaps including skin and services. So it doesn't matter if they get the skin and services wrong, because these can be changed later. This is the basis for @swardley's point that you don't need to know beforehand exactly what you are building.

But if they get the fundamentals wrong, such as site and structure, these are more difficult to change later. This is the basis for John Seddon's point that Agile may simply build the wrong things faster.

And this is where @ruthmalan takes the argument to the next level. Because Agile developers are paying attention to the things they know how to change (skin, services), they may fail to pay attention to the things they don't know how to change (site, structure). So they can throw themselves into refining and improving a system until it looks satisfactory (in their eyes), without ever seeing that it's the wrong system in the wrong place.

One important function of architecture is to pay attention to the things that other people (such as developers) may miss - perhaps as a result of different scope or perspective or time horizon. In particular, architecture needs to pay attention to the things that are going to be most difficult or expensive to change, or that may affect the lifetime cost of some system. In other words, strategic risk. See my earlier post A Cautionary Tale (October 2012).

Read Ruth's further comment here (Journal November 2015)


Wikipedia: Shearing Layers

Monday, October 22, 2012

A Cautionary Tale

Heard an interesting story recently, from a software developer advocating Scrum. I shall omit the names of the companies and individuals, and I may get a few of the details wrong, to avoid embarrassing anyone.

Our hero was working for a mobile device company, responsible for defining software requirements. At one point in this job, he was sent to negotiate an interface with a social networking company. The programmers at the social networking company were very keen that the social networking application (app) should be zero-rated - in other words, the users should not have to pay network charges for using the app - so our hero dutifully recorded this as a requirement and built it into the app.

When our hero returned to his company's offices, he discovered to his dismay that the sales and marketing people were very cross with this arrangement. They felt that this damaged the interests of the device company and their network partners (who usually want to generate network revenue from the use of such apps).

Our hero felt bad. Clearly he had let his company down, by failing to consult all the stakeholders before agreeing to this arrangement.

For my part, I thought our hero was being too hard upon himself. After all, the device company was operating in a highly competitive multi-sided market, and deciding how to balance the interests of different stakeholders was therefore a delicate strategic judgement. Our hero was sent off to perform this delicate task without any proper briefing or guidance or policy. Or architecture. I thought it significant that it was the sales and marketing department that got cross, because this implied the absence or abdication of any strategic planning or architectural function which, if it had been doing its job properly, would have anticipated this difficulty. How was our hero supposed to know who all the possible stakeholders were, if he didn't have a decent model of the ecosystem?

So this is also a story about the proper balance between agile development (with all its potential advantages) and architecture (with its attention to strategic risk). A proper architecture will look at the business and its ecosystem from several viewpoints including the Motivation View, which highlights value conflicts and differences of interest between different partners.



In response to @allankellynet who commented "Scrum/Agile irrelevant, guy was not briefed, had not spoken to stakeholders. Could happen on any method", I should make clear that I'm not blaming Scrum as such, I'm bemoaning the lack of effective architecture. There is a great deal of guidance about combining Scrum/Agile with architecture - for example, see this Linked-In discussion Does Agile/Scrum negatively impact software architecture?
 


@brwalsh, Monetization: Money for Nothing (Cisco Communities, June 2011)

@mutlu82, Facebook Does A Deal With Mobile Operators To Produce New Mobile Site With Zero Data Charges! (Mobile Inc, May 2010) 



updated October 23rd 2012

Thursday, March 17, 2011

Emergent Architecture

#entarch #emergence #systemsthinking What is emergent architecture, and what are the practical implications for enterprise architecture?

In August 2009, Gartner produced a definition of Emergent Architecture with two synonyms (Middle-Out EA and Light EA), two characteristic practices (modelling lines not boxes, modelling relationships as interactions) and seven characteristic properties (non-deterministic and decentralized; autonomous, rule-bound, goal-oriented and locally influenced actors; dynamic or adaptive systems; resource constraints) [Gartner Identifies New Approach for Enterprise Architecture, August 2009].

At the same time, Dion Hinchcliffe produced a similar list of properties and one of his trademark diagrams [Pragmatic new models for enterprise architecture take shape, August 2009]. Meanwhile Dion was also talking a lot about WOA (which he credits to Nick Gall of Gartner), and there was clearly a link in their thinking between WOA and some notion of emergence [A Web-Oriented Architecture (WOA) Un-Manifesto, December 2009].

Overall, the emergent and evolving definition of Emergent Architecture across the internet is pretty muddled: although other writers in the Enterprise Architecture world may reference Gartner and/or Hinchcliffe, they don't always pick up the full richness and power of their definitions. In addition, these writers may be influenced by the agile community, where emergent is contrasted with upfront and seems to mean something like making it up as you go along.

For example

The architect should collaborate with the development team to define and code higher-level contexts, responsibilities, interfaces, and interactions, as needed, and leave the details to the team. The development team, through the rigorous use of automated unit and story tests via continuous integration, is then able to improve the system design incrementally and continually—both within and across model-context boundaries— without compromising system functionality. Gartner uses the term Emergent Architecture to describe this practice. Keeping Architectures Relevant, Microsoft Architecture Journal

The practice described in that article can only be described as emergent on a fairly narrow interpretation of the word, presumably based on the rule-bound criterion and ignoring the other characteristics.

My own view is that all of the characteristics Gartner and Dion Hinchcliffe originally proposed (with the possible exception of resource constraints, which are nothing new) could be regarded as emerging, in the sense of being new and trendy and representing a departure from the modernist paradigm of traditional enterprise architecture. (See my post on Modernism and Enterprise Architecture.) But not all of the characteristics they proposed are directly associated with emergence, in the complex systems sense.

Wikipedia defines emergence as the way complex systems and patterns arise out of a multiplicity of relatively simple interactions [Wikipedia: Emergence]. As I see it, the notion of emergence leads to a key distinction for enterprise architecture, between a planned order (which Hayek called Taxis) and an emergent spontaneous order based on self-organization (which Hayek called Cosmos).

Although some writers like to use biological and ecological analogies when talking about emergence, the sociopolitical analogies are probably more relevant to enterprise architecture because they include the notion of human intentionality. (And some people use the terms top-down and bottom-up, but nowadays I try to avoid these terms because of their potential ambiguity. See my note on Service Planning.)

The distinction between planned and emergent architecture is akin to the distinction introduced by Henry Mintzberg between deliberate and emergent strategy, the latter referring to strategies that originate in the interaction of an organization with its environment [Wikipedia: Strategy dynamics].

This distinction also maps onto some recent work in the system-of-systems (SoS) domain. Mark Maier introduced a distinction between directed and collaborative systems (1998), and this has been developed by the U.S. Department of Defense (DoD) into a more complicated four-part schema (directed, acknowledged, collaborative and virtual). See for example System Engineering Guide for System-of-Systems Engineering (Version 1, August 2008). Planned architectures tend to assume directed or acknowledged systems-of-systems, while emergent architectures are associated with collaborative and virtual systems-of-systems.

Boxer and Garcia write:
In the complex systems-of-systems contexts supporting distributed collaboration, the architecture of the collaborative enterprise has to be approached as emergent, created through an alignment of individual architectures. [Enterprise Architecture for Complex System-of-Systems Contexts, 3rd Annual IEEE Systems Conference in Vancouver March 23-26 2009 (abstract) (pdf)] See also Type III Agility and Ideologies of Architecture.

The acknowledged category of systems of systems allows for some flexibility and autonomy for local development of component systems, while remaining under central oversight and direction. This is surely where rule-bound actors would fit, and would include the practices described in the Microsoft Architecture Journal article mentioned above (Keeping Architectures Relevant). There is also some scope in the acknowledged category for dynamic or adaptive systems and adaptive processes. But this is some way short of a genuinely emergent architecture.

Gartner is closer to the mark when it talks about decentralized decision-making (which it calls non-deterministic) involving autonomous goal-oriented actors, responsive to local influence. Similarly, Dion Hinchcliffe talks about community-driven architecture run by autonomous stakeholders, producing decentralized solutions with emergent outcomes.

I don't know whether Gartner is still promoting the idea of emergent architecture, or whether its definition has evolved since 2009, given that Nick Gall's recent work (such as his latest piece on Panarchy) doesn't seem to use the term at all. However, Gartner's original piece on emergent architecture remains available on its website, and continues to be referenced as one of the primary sources for the term.



What are the practical implications of emergence for enterprise architecture?

One of the most important practical differences between planned architecture and emergent architecture is that we tend to think of planned architecture as a design-time artefact - something that is envisioned, designed and then implemented by architects. So architecting is regarded as a special kind of designing. Implementation may be directly managed by the architects, or indirectly by means of a set of architectural policies to govern acquisition, development and use.

The implementation of a planned architecture always involves a gap between plan and reality. Plans take time to realise, and some bits of the plan may never get realised at all. There is always a de facto architecture - call it the architecture-in-use - a structural description of what is going on, paying attention to certain structural and sociotechnical aspects of a (run-time) system of systems from a given observer position.

Devotees of planned architecture see this in terms of a simple transformation from AS-IS to TO-BE. Their attention to the existing de facto architecture is largely motivated by the need to define a business case and a technical transition strategy for replacing AS-IS with TO-BE, possibly in discrete phases or chunks.

But emergence tells us this: that the architecture-in-use emerges from a complex set of interactions between the efforts and intentions of many people. The architects cannot anticipate, let alone control, all of these interactions. There may be some areas in which the architects are able to carry out something that looks a bit like design, either autonomously or collaboratively, but there will be other areas in which the architects are simply trying to understand the structural implications of some messy situation and make some useful interventions. The primary activity of architects is therefore following not a design loop but an intelligence loop.

However that depends what we mean by design. There is a renewed interest in a generalized notion of design in the enterprise architecture world, especially with the current popularity of Design Thinking (and such derivatives as Hybrid Thinking).  But that's a subject for another post.


Sources

Gartner identifies new approach for enterprise architecture (Aug 2009)

Commentary on Gartner's emergent architecture concept by Abel Avram, Leo de Sousa, Adrian Grigoriu and Mike Rollings (then with Burton Group).

Dion Hinchcliffe, Pragmatic new models for enterprise architecture (Aug 2009)

Mark Maier, Architecting Principles for Systems of Systems (InfoEd June 1997)

See also

Peter Cripps, Enterprise Architecture is Dead. Long Live Interprise Architecture (Oct 2010)

Tom Graves, Hybrid-thinking, enterprise architecture and the US Army (May 2010) 

Three or Four Schools of Enterprise Architecture (December 2011)

Friday, March 04, 2011

A Twin-Track Approach to Government IT

#ukgovit @instituteforgov has just published a report called System Error: Fixing the Flaws in Government IT.

The report recommends a twin-track approach to government IT, based on the two concepts of Agile and Platform.

"The platform must standardise and simplify core elements of government IT. For any elements of IT outside the platform, new opportunities should be explored using agile principles. These twin approaches should be mutually reinforcing: the platform frees up resource to focus on new opportunities while successful agile innovations are rapidly scaled up when incorporated into the platform."

The report acknowledges the tension between these two concepts ...

"Treating items as commodities reduces cost but can limit flexibility; coordinating elements of IT across departments frees up resources but may move them further from frontline users; common standards support interoperability but also restrict the freedoms to innovate."
... and offers some general ideas for managing this tension.
  • To act fully in the interests of government, an agile approach requires a light touch form of coordination at a system level. 
  • To minimise duplication of effort in solving the same problems, there needs to be system-wide transparency of agile initiatives. 
  • Existing elements of the platform also need periodic challenge. ... Transparency, publishing feedback and the results of experiments openly, will help to keep the pressure on the platform for continual improvement as well as short-term cost savings.
Trouble is, some of this stuff is really hard. The report talks glibly about "a less than intelligent customer", referring first to business users having an inadequate conception of the possible, and then to the public sector as a whole lacking the collective knowledge and skills to negotiate effectively with suppliers. This lack of intelligence is apparently blamed on the V-model development process, which creates the impression that the adoption of Agile methods would solve this problem. But the idea of Agile as a silver bullet is a dangerous one, as many people have already pointed out on the Linked-In discussion group.

One way of understanding the twin track approach is to think of the different kinds of economics involved.
  • 'Platform' means delivering economies of scale and economies of scope.
  • 'Agile' means delivering economies of alignment.

Combining the two introduces some complex architectural challenges, as I've written about here and elsewhere before. We call this Asymmetric Design. For an example of this approach applied to public sector IT, see an analysis of the CSA Case by Philip Boxer and myself. See also The Impact of Governance Approaches on SoS Environments (pdf) by Philip Boxer and others.


In the current economic situation, the public sector as a whole is charged with making massive cost savings, and it is crazy to imagine that cost savings of this scale would not be associated with significant structural change, including IT systems. This kind of disruptive innovation goes way beyond the economies of scale and scope, and introduces some serious questions about the economics of alignment.

The word "architecture" is mentioned a few times in the Institute for Government report, but only in passing as something that the Government CIO will look after. (Mostly technology or solution architecture, I only found one single reference to business architecture.) So there is an implicit idea of central thinking and hierarchical governance. But there are some architectural challenges here that are some way beyond the current practices of enterprise architecture.

Governance is also a significant problem. The report comments on the pendulum swings between centralized and decentralized provision, which is something we noted in the CSA case, and was also present in the case of ContactPoint (which we were in the middle of writing up when it was cancelled). Such pendulum swings are often a characteristic symptom of weak or unsustained governance.

Not only is this stuff structurally complicated, but there are some commercial stakeholders that have every incentive to maintain the complicated status quo, thanks to a grossly dysfunctional procurement process.

And there is an even bigger problem with the report, which is that it looks at government IT exclusively from within government - in other words, from the perspective of civil servants. For example, the report adopts a supply-side notion of "joined-up government", understood largely in terms of internal linkages and efficiencies between systems, and fails to mention the demand-side notion of "joined-up government" that involves a coherent experience for the citizen. (See my post on Joined-Up Government from December 2005.)

Meanwhile the notion of "user" appears to refer mainly to civil servants and other public sector workers. Surely the purpose of government IT is not to provide direct value to civil servants but to provide various forms of indirect value to individual citizens and socioeconomic communities.

The report regrets that "government IT [is] falling further and further behind the fast-paced and exciting technological environment that citizens interact with daily" and indicates "the potential for IT ... fundamentally changing the relationship between citizen and state". "Around the world governments are using technology to help them deliver better services, be more transparent and accountable, and connect more directly with their citizens." (Examples are cited from Canada, USA and Malaysia.)

And yet the report fails to explain how "agile" can adequately represent the demand side requirements of citizens, interacting with a broad range of government services while going about their public business. There is a completely different notion of "platform" required here - government as a platform, which Tim O'Reilly and others have been talking about for a couple of years. And a different notion of agility, which goes a lot further than agile software development.


Other commentary

See Linked-In discussion group

Harry Metcalfe (2 March 2011) observed that many of the recommendations in the report were really hard, and was one of the first to complain about the insufficient attention to procurement in the report.

Friday, April 23, 2010

Architect Certification and Trust

@mattdeacon @wendydevolder @karianna @flowchainsensei @gojkoadzic @unclebobmartin .

Lots of good comments on Twitter and elsewhere about certification, in various contexts (enterprise architecture, agile, ...).

The purpose of a certificate is to enable you to trust the bearer with something. So we need to understand the nature of trust. In their book Trust and Mistrust, my friends Aidan Ward and John Smith identify four types of trust ...
  • authority
  • network
  • commodity
  • authentic
... and we can apply these four types to the different styles of certification that might be available.

In his attack on the World Agile Qualifications Board, @gojkoadzic quotes the Agile Alliance position on certification: employers should have confidence only in certifications that are skill-based and difficult to achieve. Yet, as Gojko continues, "most of the certificates issued today are very easy to achieve and take only a day or two of work, or even just attending the course".

If a certificate is issued by a reputable professional organization, then the value of the certificate is underwritten by the reputation of the issuing organization, so this counts as authority trust. In my post Is Enterprise Architecture a Profession? I have already stated my view that claims for professional status for enterprise architecture are at best premature, so there is no organization today that has sufficient authority to issue certificates of professional competence. However, if you can acquire a certificate simply by attending a short course and/or memorizing some document (such as TOGAF), then this is a commodity-based form of trust. Basically, such certificates will only be regarded as valuable if just enough people have them. (Which seems to be why some large consultancies have put all their practitioners through TOGAF training.)

Bob Marshall (@flowchainsensei) prefers vouching

Just found http://wevouchfor.org - Should keep me busy vouching (why oh why "certifying???") for capable folks for some time.

which is a form of network trust. If someone receives a lot of vouchers from his friends, that could either mean he is very popular or that he is involved in a lot of reciprocal back-scratching. (This kind of mutual recommendation is easily visible on Linked-In, where the list of incoming recommendations often exactly matches the list of outgoing recommendations.)

The trouble with all these mechanisms is that they are both one-sided and lacking context. The certificate purports to tell us about a person's strengths (but not weaknesses), in some unspecified or generic arena. This can only go so far in supporting a judgement about a person's qualifications (strengths and weaknesses) for a specific task in a specific context. What if anything would serve as an authentic token of trust?


Aidan Ward and John Smith, Trust and Mistrust - Radical Risk Strategies in Business Relationships. John Wiley, 2003

Wednesday, March 05, 2008

Agility and Variation 2

In a post called Kitchen Sink Variability, Harry "Devhawk" Pierson (Microsoft) lays into a research note by Ronald Smeltzer (Zapthink): Why Service Consumers and Service Providers Should Never Directly Communicate.

As Harry explains, there is a contradiction between Variability and the principles of Agile Development. (I discussed some aspects of this contradiction in my post on Agility and Variation.) Harry thinks Zapthink is advocating what he calls Kitchen Sink Variability - let everything vary except the kitchen sink.

Without architecture, this would indeed be a crazy idea.

Harry appeals to the ExtremeProgramming principle of YouArentGonnaNeedIt, which says "don't build in extra functionality". But does this principle apply to flexibility - does flexibility count as a kind of functionality? Should we really avoid building in flexibility?

If building in flexibility means constructing specific additional stuff (mechanisms, abstractions) to anticipate non-specific future variation, then this would seem to entail additional cost (development overhead, maintenance overhead, runtime overhead) with no clear benefits. We might reject this kind of thing as "over-engineering".

But if building in flexibility means using appropriate architectural patterns and existing technologies that allow you to ignore future variation, that seems to be a different thing altogether. Zapthink is talking about a specific pattern - decoupling the service consumer from the service provider - that is supported by current SOA platforms. That seems to make a lot of sense to me. But Zapthink justifies this pattern by appealing to a general principle of variability, and this is what has aroused Harry's scorn.
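To make the decoupling pattern concrete, here is a minimal Python sketch (all names are invented for illustration, not taken from any particular SOA platform): the consumer is written once against an abstract contract, and an intermediary registry resolves that contract to whichever provider is currently bound, so providers can vary without the consumer changing.

```python
from abc import ABC, abstractmethod

class QuoteService(ABC):
    """Abstract contract: consumers depend on this, never on a concrete provider."""
    @abstractmethod
    def get_quote(self, product_id: str) -> float: ...

class LegacyQuoteProvider(QuoteService):
    def get_quote(self, product_id: str) -> float:
        return 100.0  # stand-in for a call to the old pricing engine

class NewQuoteProvider(QuoteService):
    def get_quote(self, product_id: str) -> float:
        return 95.0   # stand-in for a call to a replacement service

class ServiceRegistry:
    """The intermediary: resolves a contract name to whichever provider
    is currently bound, so the provider can be swapped at any time."""
    def __init__(self) -> None:
        self._bindings: dict[str, QuoteService] = {}

    def bind(self, name: str, provider: QuoteService) -> None:
        self._bindings[name] = provider

    def lookup(self, name: str) -> QuoteService:
        return self._bindings[name]

# Consumer code is written once, against the contract only.
def price_order(registry: ServiceRegistry, product_id: str) -> float:
    return registry.lookup("quotes").get_quote(product_id)

registry = ServiceRegistry()
registry.bind("quotes", LegacyQuoteProvider())
print(price_order(registry, "SKU-1"))  # served by the legacy provider

registry.bind("quotes", NewQuoteProvider())
print(price_order(registry, "SKU-1"))  # provider swapped; consumer untouched
```

The point of the sketch is that the flexibility comes from an existing pattern (an intermediary plus an abstract contract), not from speculative extra mechanisms built into the consumer itself.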

For my part, I don't think SOA means uncritically embracing all aspects of variation and variability. I have always said Loose Coupling is a choice, not a mandate. (See my earlier post Loose Coupling 2). But let's understand where the variation comes from. It is not because enterprise architects are going around promoting supply-side variation. More likely because many enterprise architects are struggling with increasing levels of demand-side variation. The traditional choice has always been to suppress as much variation as possible, but for many organizations (and their customers) this choice is no longer acceptable.

The relationship between variation and complexity is not linear. A truly agile architecture should be able to accommodate some increase in variation without increasing complexity by the same amount. But of course there is still some cost. SOA improves the trade-off, but there is still a trade-off.

The bottom line is that architects need to be able to reason intelligently about variation and variability. Not abstraction for the sake of abstraction, not decoupling for the sake of decoupling, but abstraction and decoupling driven by the needs of the enterprise. Loose Coupling is not a mandate but a choice.

Monday, March 20, 2006

SPARK 2 - Innovation or Trust?

Day 2 of the SPARK workshop was an attempt to develop frameworks that encapsulated the architectural response to the challenges identified in Day 1. The photoset on Flickr includes some dreadful pictures of me (here with Michael Platt of Microsoft and here with Nick Gall of Gartner), but it also includes copies of most of the flip charts.

The group I was in (which included Glen Harper, Dion Hinchcliffe and Michael Putz) tried to summarize some of the issues around the business stack.

Business Stack

Note the differential rate of change, as well as the gradients of innovation and trust. Note also the questions of horizontal and vertical coupling, which the group discussed but did not resolve.

This is a framework, not a fixed solution. For example, in some cases the trust/compliance regime may be stricter (or at least different) at the top of the stack (think healthcare and HIPAA), but in most cases the greatest perceived risks will be associated with the major business assets (legacy) at the bottom of the stack. And (as Anne Thomas Manes reminded us) enterprise innovation isn't always going to be focused exclusively on customer-facing stuff, but may be focused on supply chain or product development or elsewhere.

Different technologies will be appropriate for different levels of the stack - for example we might expect to see SOAP and WS-* at the bottom of the stack (high trust, high engineering) and REST at the top of the stack (low trust, agile).

Of course, stratified architectures and stack diagrams are not new, but they have traditionally been produced from a purely technological perspective (client/service, 3-tier, n-tier computing, and so on). To my mind, the new architectural challenge here is to manage the stratification of layers in a way that responds in an agile and effective way to (the complexity of) the business/user challenges. (Hence Asymmetric Design.)


See also Beyond Bimodal (May 2016)

Wednesday, November 09, 2005

Twin-Track Governance

Twin-track development involves a management separation between the development of components and services, and the development of larger artefacts that use these components and services. It was a characteristic feature of the more advanced CBSE methods, such as Select Perspective, and is obviously relevant to service engineering as well.

Clarification Update: Select Perspective has always been more than just a CBSE method, and now qualifies as a full-fledged SOA method. Even as a CBSE method, it was one of the first to embrace service as a first class construct. Although this may not be immediately obvious from the Select website, it is clear from an article contributed by Select consultants (then part of Aonix) to the CBDI Journal in 2002.

The twin-track pattern can be superimposed on a traditional lifecycle process, such as that recently propagated by OASIS for the web service lifecycle (pdf). Even though the OASIS process is presented as applying to a single web service, it doesn't take much reframing to see this kind of process applying to an artefact that is larger than a single service - such as a whole system or subsystem - provided it is specified and designed in one place at one time. So we can start to see the OASIS process (or a suitably generalized version of it) applying to either of the twin tracks - but not at the same time. Both tracks may be following some version of the OASIS process, but they don't talk to each other.

The twin-track pattern is sometimes interpreted in a simple way, with an IT organization divided into a service-creation track and a business project track. The service track produces services with web service interfaces; the business projects produce applications with user interfaces (UI). In this interpretation, use of BPEL belongs exclusively in the service track, because it doesn't produce applications with UI.

However, we can interpret twin-track in a much more powerful way than this, by generalizing the pattern. We simply specify that supply of services is in one track, and the consumption of these services (even if this is in the context of using BPEL to build larger artefacts that are themselves services) is in another track. The point of the twin-track pattern is that the supply and consumption can be decoupled and managed separately - possibly even in separate organizations. Of course, this pattern can be applied more than once, yielding more than two tracks in total.

Meanwhile, the possession of a UI is probably of secondary importance here. With SOA, we can (and probably should) build applications that give the user a choice between interacting directly or via some user-built secondary application. Thus for example, I want my online bank to offer me a set of services rendered in two alternative ways: firstly via a UI (probably browser-based) and secondly as a series of webservices (or equivalent) that I can invoke from within some desktop money management program. Think of Google and eBay delivering the same services via browser and via web services. In the service economy, I want all interfaces to have something like a webservice or REST or RSS/Atom alternative.
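As a rough illustration of this idea (the service, names and payload are hypothetical), a single underlying service operation can be rendered two ways: once as browser-facing HTML, and once as a machine-readable representation that a desktop money management program could invoke.

```python
import json

# One underlying service operation...
def account_balance(account_id: str) -> dict:
    # stand-in for the real banking back end
    return {"account": account_id, "balance": 1234.56, "currency": "GBP"}

# ...rendered for a browser-based UI...
def render_html(account_id: str) -> str:
    data = account_balance(account_id)
    return (f"<p>Balance for {data['account']}: "
            f"{data['balance']} {data['currency']}</p>")

# ...and rendered as a machine-readable service response,
# equivalent in content, for secondary applications to consume.
def render_json(account_id: str) -> str:
    return json.dumps(account_balance(account_id))
```

Because both renderings are thin wrappers over the same operation, either one could in principle be decommissioned without losing the service itself.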

And perhaps if lots of people are using desktop money management programs, it might be cheaper for the bank to give all remaining customers a low-functionality desktop program (or recommend a suitable bit of freeware) and then decommission the UI altogether. I'm not saying this is always going to be advisable, but it's certainly an option. So we might see a growing number of serious business applications with no traditional UI at all.

In architecture, if you build windows everywhere, it makes it harder to join buildings together. Think of a university campus, which grows piecemeal over many decades. If every new block has a blank wall, then it is easier to build another block next to it. If you put windows on every available wall, you have to put useless space between the blocks, and then build silly walkways to save people from constantly going down to the ground floor. The evolution of complex SOA raises some of the same issues.

Even with single-track development, there is a governance question here. How can we maintain order over a complex and evolving system, if we cannot simply outline all the requirements at the beginning? And with twin-track, a critical function of SOA governance is governing the relationship between the tracks, however many tracks there may be.

How do we get (and keep) all these loosely coupled development and operations processes in alignment with each other, and also in alignment with the business? Alignment with IT accounting (financial and otherwise) would be nice too. Obviously the whole point of twin-track is that you can decouple the tracks to some extent, but if you decouple them too much then you throw the baby (reuse, interoperability, flexibility) out with the bathwater (agile=fragile).

Some sources advocate frequent synchronization between development teams. Synchronization does not necessarily rule out federated, distributed development, but it would require a great deal of horizontal coordination, which introduces challenges of its own.

SOA governance governs what kind of development is appropriate, and how it should be coordinated. For example, SOA governance provides ground-rules for participants to agree interfaces before they go off and do their own thing, and for enforcing these agreements. In the real world, we know that all agreements are subject to being reneged and renegotiated as requirements and conditions change. But who incurs this risk, and who shall bear the cost of any change? If you decide you need to change the interface, am I forced to respond to this, and if so how quickly?
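One common way to contain this risk (sketched here with invented names, as a toy registry rather than any particular product) is versioned interfaces: when the supplier needs to change a service specification, it publishes the new version alongside the old one, so existing consumers are not forced to respond immediately.

```python
# Supplier publishes each agreed interface under an explicit version;
# old versions stay available until consumers have migrated.
registry = {}

def publish(name: str, version: int, impl):
    registry[(name, version)] = impl

def lookup(name: str, version: int):
    return registry[(name, version)]

# v1: returns a bare price; v2 changes the agreed specification.
publish("quote", 1, lambda sku: 100)
publish("quote", 2, lambda sku: {"sku": sku, "price": 100, "currency": "GBP"})

# A v1 consumer keeps working unchanged after v2 is published;
# who pays for eventually retiring v1 is a governance question.
quote_v1 = lookup("quote", 1)
quote_v2 = lookup("quote", 2)
```

The code does not answer the governance question - it merely shows that the mechanism for deferring the cost of change is cheap; deciding who bears that cost, and when old versions may be withdrawn, is exactly what the relationship between the tracks must govern.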

Change management (e.g. avoiding uncontrolled demand for service specification change) cannot be managed within either one track, but implies governance of the relationship between the tracks.

If we assume that the two tracks report to a single IT manager within a single organization, then vertical line management may provide all the governance you need. But if the two tracks are in separate organizations, then the question of governance becomes a matter for negotiation between the two organizations.

A general governance framework for SOA must support federated, distributed development. Obviously if an enterprise chooses to remain (for the time being) with a simpler development style, then it should not be forced to adopt all the elements of the framework it doesn't yet need. Why should an enterprise adopt principles that are not relevant to the way it is currently doing development? But sometimes it might be right for an enterprise to adopt principles for federated, distributed development even where its own development is not yet federated or distributed - for example, to provide some horizontal compatibility with the way that its partners are doing development, or to provide upwards compatibility with the way it intends to do development in future.


Tuesday, April 19, 2005

Agility and Variation

There are several strands of agile development and quality management that are concerned with reducing (some measure of) variation in order to maximize (some measure of) performance and (some measure of) quality. Much of this thinking is derived from the quality guru W. Edwards Deming.

David J. Anderson, a follower of Goldratt's Theory of Constraints (TOC), has been discussing this recently in his Agile Management Weblog.
So I was interested to read Anderson's account of a Sushi Lunch, where he complained about the inflexible and unresponsive service provided in a top Japanese restaurant, and the dissatisfaction experienced especially by a junior member of the Anderson family. Anderson senior interprets this in terms of the Theory of Constraints:

This restaurant had a broken organizational structure and poor separation of responsibilities. The "Anderson lunch project" should rightly have been the responsibility of the waiter who should have been playing the project manager role (and maybe the program manager role). The waiter should have analyzed our requirements and understood our priorities. This should have been communicated to the chefs and the order of production of our sushi should have been negotiated against the competing orders at the time. The sushi chefs should have been purely responsible for the production of sushi. They should not have had any project management, program management or scheduling responsibility.


But I don't think this tells the whole story. It seems to me that the restaurant was unable and/or unwilling to accommodate the demand variation caused by the Anderson toddler. A supply-side optimization led directly to a poor demand-side experience, and indirectly to a supply-side disruption.

Traditional operational managers often believe their primary task is to achieve supply-side productivity by reducing variation. This view of the primary task encourages them to suppress demand-side variation, and to ignore demand-side signals that would complicate or contradict this view of the primary task.

What are the implications of this for service-oriented architectures? We now have technologies that support greater flexibility in managing demand-side variation in relation to supply-side variation. From a business point of view, this is one of the main benefits of SOA technologies.
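The restaurant example can be caricatured in code (a toy sketch, nothing more; the dishes and priorities are invented): a supply side that accepts demand-side priorities and sequences production against them, instead of optimizing purely for its own throughput.

```python
# Each order carries a demand-side priority (lower number = more urgent,
# e.g. the hungry toddler). The supply side sequences production against
# these priorities rather than purely batching for its own convenience.
def schedule(orders):
    return sorted(orders, key=lambda o: o["priority"])

orders = [
    {"dish": "omakase platter", "priority": 3},
    {"dish": "plain rice for toddler", "priority": 1},
    {"dish": "miso soup", "priority": 2},
]
production_sequence = [o["dish"] for o in schedule(orders)]
```

The interesting part is not the sort, of course, but where the priorities come from: they are demand-side signals, and a supply side that suppresses them (as the restaurant did) has optimized away its own responsiveness.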


See also The Universe at the End of the Restaurant (April 2009)


Links updated 18 October 2016, 18 October 2020.

Wednesday, June 30, 2004

The Planning Dilemma

One of the key problems faced by planning (in IT and elsewhere) has been the dilemma – top-down or bottom-up. Top-down methods produce grand schemes without addressing the problems on the ground (including legacy), while bottom-up methods produce local solutions without any overall order, coherence or reuse.

| Bottom-Up Approach (Point Projects) | Top-Down Approach (Area Projects) |
|---|---|
| Local short-term initiative. No mandate to pay attention to broader, longer-term opportunities and effects. | Broader, longer-term initiative. |
| Building a solution against immediate requirements (where “building” means design, construct or assemble). | Focus on system properties across a whole area (e.g. business domain, technical domain, infrastructure). |
| Strongly aligned to local objectives. Direct link between (local) benefits, costs and risks. | Indirect links between benefits (across area), costs and risks. Often difficult to create/maintain a business case for adequate investment in resources and infrastructure, or to demonstrate return on investment. |
| Cost-effective use of conveniently available resources (improvisation or “bricolage”). | Creating value by establishing (procuring or building) conveniently available resources. |

One way of addressing this dilemma is to introduce a twin track process, involving a top-down stream of activity and a bottom-up stream of activity.

Obviously for this twin-track process to be effective, we need clear allocation of responsibility, authority, expertise and work (RAEW). This is an aspect of governance - making sure the right things are done in the right way. Twin-track development exposes the inevitable tensions between business goals and service needs. And in federated/distributed development, these tensions are replicated across multiple business entities; governance then becomes a question of negotiation between separate organizations, rather than simple management resolution within a single reporting structure.
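A RAEW analysis can be sketched as a simple matrix check (illustrative only; the activities and track names are invented): every activity should have Responsibility, Authority, Expertise and Work each allocated somewhere, and any unallocated cell flags a governance gap.

```python
# RAEW matrix: for each activity, which party holds Responsibility,
# Authority, Expertise and Work. None marks an unallocated cell.
raew = {
    "define service interface": {"R": "service track", "A": "governance board",
                                 "E": "service track", "W": "service track"},
    "prioritize business features": {"R": "business track", "A": "business track",
                                     "E": "business track", "W": "business track"},
    "approve interface change": {"R": "service track", "A": None,
                                 "E": "service track", "W": "service track"},
}

def gaps(matrix):
    """Return activities where any of R/A/E/W is unallocated."""
    return [activity for activity, cells in matrix.items()
            if any(cells[k] is None for k in "RAEW")]
```

In this toy example the check exposes an activity with responsibility but no authority - exactly the kind of mismatch that turns twin-track tensions into stalled negotiations.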


This is a modified extract from an article on Business-Driven SOA published in the CBDI Journal, June 2004. For further extracts from this article, please see my Slideshare presentation on Organic Planning. See also our papers in the Microsoft Architecture Journal on Metropolis and SOA Governance (July 2005) and Taking Governance to the Edge (August 2006).

The notion of twin-track development is included in the Practical Guide to Federal SOA, published by the CIO Council in 2008.


See also: What does Top-Down mean? (September 2011)
For other posts on twin-track: browse, subscribe.