Showing posts with label pace layering.

Friday, March 27, 2020

Data Strategy - More on Agility

Continuing my exploration of the four dimensions of Data Strategy. In this post, I bring together some earlier themes, including Pace Layering and Trimodal.

The first point to emphasize is that there are many elements to your overall data strategy, and these don't all work at the same tempo. Data-driven design methodologies such as Information Engineering (especially the James Martin version) were based on the premise that the data model was more permanent than the process model, but it turns out that this is only true for certain categories of data.

So one of the critical requirements for your data strategy is to manage both the slow-moving stable elements and the fast-moving agile elements. This calls for a layered approach, where each layer has a different rate of change, known as pace-layering.
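
To make this concrete, here is a minimal sketch of a pace-layered data strategy. The layer names, tempos and Trimodal labels are illustrative assumptions on my part, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class PaceLayer:
    name: str
    tempo_months: int   # characteristic period between significant changes
    trimodal_mode: str  # rough mapping to Pioneers / Settlers / Town Planners

# Illustrative layers for a data strategy, fastest-moving first.
DATA_STRATEGY = [
    PaceLayer("Experimental analytics and ad hoc models", 1, "Pioneers"),
    PaceLayer("Shared data products and pipelines", 12, "Settlers"),
    PaceLayer("Core data model and master data", 36, "Town Planners"),
]

for layer in DATA_STRATEGY:
    print(f"{layer.name}: changes roughly every {layer.tempo_months} month(s) [{layer.trimodal_mode}]")
```

The point of the structure is not the particular numbers but the discipline: each element of the strategy is explicitly assigned a tempo, so fast-moving work never silently assumes that a slow-moving layer will change with it.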

The concept of pace-layering was introduced by Stewart Brand. In 1994, he wrote a brilliant and controversial book about architecture, How Buildings Learn, which among other things contained a theory about evolutionary change in complex systems based on earlier work by the architect Frank Duffy. Although Brand originally referred to the theory as Shearing Layers, by the time of his 1999 book, The Clock of the Long Now, he had switched to calling it Pace Layering. If there is a difference between the two, Shearing Layers is primarily a descriptive theory about how change happens in complex systems, while Pace Layering is primarily an architectural principle for the design of resilient systems-of-systems.

In 2006, I was working as a software industry analyst, specializing in Service-Oriented Architecture (SOA). Microsoft invited me to Las Vegas to participate in a workshop with other industry analysts, where (among other things) I drew the following layered picture.

SPARK Workshop Day 2

Here's how I now draw the same picture for data strategy. It also includes a rough mapping to the Trimodal approach.

[Diagram: data strategy pace layers, with a rough mapping to the Trimodal approach]

Giles Slinger and Rupert Morrison, Will Organization Design Be Affected By Big Data? (J Org Design Vol 3 No 3, 2014)

Wikipedia: Information Engineering, Shearing Layers 

Related Posts: Layering Principles (March 2005), SPARK 2 - Innovation or Trust (March 2006), Enterprise Tempo (October 2010), Beyond Bimodal (May 2016), Data Strategy - Agility (December 2019)

Friday, July 27, 2018

Standardizing Processes Worldwide

September 2015
Lidl is looking to press ahead with standardizing processes worldwide and chose SAP ERP Retail powered by SAP HANA to do the job (PressBox 2, September 2015)

November 2016
Lidl rolls out SAP for Retail powered by SAP HANA with KPS (Retail Times, 9 November 2016)

July 2018
Lidl stops million-dollar SAP project for inventory management (CIO, in German, 18 July 2018)

Lidl cancels SAP introduction after spending 500M Euro and seven years (An Oracle Executive, via LinkedIn, 20 July 2018) 
Lidl software disaster another example of Germany’s digital failure (Handelsblatt Global, 30 July 2018)

I don't have any inside information about this project, but I have seen other large programmes fail because of the challenges of process standardization. When you are spending so much money on the technology, people across the organization may start to think of this as primarily a technology project. Sometimes it is as if the knowledge of how to run the business is no longer grounded in the organization and its culture but (by some form of transference) is located in the software. To be clear, I don't know if this is what happened in this case.

Also to be clear, some organizations have been very successful at process standardization. This is probably more to do with management style and organizational culture than technology choices alone.

Writing in Handelsblatt Global, Florian Kolf and Christof Kerkmann suggest that Lidl's core mentality was "but this is how we always do it". Alexander Posselt refers to Schicksalsgemeinschaften (literally "communities of fate"), which can be loosely glossed as collective wilful blindness. Kolf and Kerkmann also make a point related to the notion of shearing layers.
Altering existing software is like changing a prefab house, IT experts say — you can put the kitchen cupboards in a different place, but when you start moving the walls, there’s no stability.
But at least with a prefab house, it is reasonably clear what counts as Cupboard and what counts as Wall. Whereas with COTS software, people may have widely different perceptions about which elements are flexible and which elements need to be stable. So the IT experts may imagine it's cheaper to change the business process than the software, while the business imagines it's easier and quicker to change the software than the business process.

What will Lidl do now? Apparently it plans to fall back on its old ERP system, at least in the short term. It's hard to imagine that Lidl is going to be in a hurry to burn that amount of cash on another solution straightaway. (Sorry Oracle!) But the frustrations with the old system are surely going to get greater over time, and Lidl can't afford to spend another seven years tinkering around the edges. So what's the answer? Organic planning perhaps?


Thanks to @EnterprisingA for drawing this story to my attention.

Slideshare: Organic Planning (September 2008), Next Generation Enterprise Architecture (September 2011)

Related Posts: SOA and Holism (January 2009), Differentiation and Integration (May 2010), EA Effectiveness and Process Standardization (August 2012), Agile and Wilful Blindness (April 2015).


Updated 31 August 2018

Wednesday, May 04, 2016

Beyond Bimodal

Ten years ago (March 2006) I attended the SPARK workshop in Las Vegas, hosted by Microsoft and inspired by Christopher Alexander. (Every participant received a copy of Alexander's Timeless Way of Building beforehand.) One of the issues we debated extensively was the apparent dichotomy between highly innovative, agile IT on the one hand, and robust industrial-strength IT on the other hand. This dichotomy is often referred to as bimodal IT.

In those days, much of the debate was focused on technologies that supposedly supported one or other mode. For example SOA and SOAP (associated with the industrial-strength end) versus Web 2.0 and REST (associated with the agile end).

But the interesting question was how to bring the two modes back together. Here's one of the diagrams I drew at the workshop.

Business Stack

As the diagram shows, the dichotomy involves a number of different dimensions which sometimes (but not always) coincide.
  • Scale
  • Innovation versus Core Process
  • Different rates of change (shearing layers or pace layering)
  • Top-down ontology versus bottom-up ontology ("folksonomy")
  • Systems of engagement versus systems of record
  • Demand-side (customer-facing) versus supply side
  • Different levels of trust and security

Even in 2006, the idea that only industrial-strength IT can handle high volumes at high performance was already being seriously challenged. There were some guys from MySpace at the workshop, handling volumes which were pretty impressive at that time. As @Carnage4Life put it, My website is bigger than your enterprise.


Bimodal IT is now back in fashion, thanks to heavy promotion from Gartner. But as many people are pointing out, the flaws in bimodalism have been known for a long time.

One possible solution to the dichotomy of bimodalism is an intermediate mode, resulting in trimodal IT. Simon Wardley has characterized the three modes using the metaphor of Pioneers, Settlers, and Town Planners. A similar metaphor (Commandos, Infantry and Police) surfaced in the work of Robert X Cringely sometime in the 1990s. Simon reckons it was 1993.



Trimodal doesn't necessarily mean three-speed. Some people might interpret the town planners as representing ‘slow,’ traditional IT. But as Jason Bloomberg argues, Simon's model should be interpreted in a different way, with town planners associated with commodity, utility services. In other words, the town planners create a robust and agile platform on which the pioneers and settlers can build even more quickly. This is consistent with my 2013 piece on hacking and platforms. Simon argues that all three (Pioneers, Settlers, and Town Planners) must be brilliant.

Characterizing a mode as "slow" or "fast" may be misleading, because (despite Rob England's contrarian arguments) people usually assume that "fast" is good and "slow" is bad. However, it is worth recognizing that each mode has a different characteristic tempo, and differences in tempo raise some important structural and economic issues. See my post on Enterprise Tempo (Oct 2010).



Updated - corrected and expanded the description of Simon's model. Apologies to Simon for any misunderstanding on my part in the original version of this post.


Jason Bloomberg, Bimodal IT: Gartner's Recipe For Disaster (Forbes, 26 Sept 2015)

Jason Bloomberg, Trimodal IT Doesn’t Fix Bimodal IT – Instead, Let’s Fix Slow (Cortex Newsletter, 19 Jan 2016)

Jason Bloomberg, Bimodal Backlash Brewing (Forbes, 26 June 2016)

Rob England, Slow IT (28 February 2013)

Bernard Golden, What Gartner’s Bimodal IT Model Means to Enterprise CIOs (CIO Magazine, 27 January 2015)

John Hagel, SOA Versus Web 2.0? (Edge Perspectives, 25 April 2006)

Dion Hinchcliffe, How IT leaders are grappling with tech change: Bi-modal and beyond (ZDNet, 14 January 2015)

Dion Hinchcliffe, IT leaders inundated with bimodal IT meme (ZDNet, 1 May 2016)

Dare Obasanjo, My website is bigger than your enterprise (March 2006)

Richard Veryard, Notes from the SPARK workshop (March 2006), Enterprise Tempo (October 2010), A Twin-Track Approach to Government IT (March 2011)

Richard Veryard, Why hacking and platforms are the future of NHS IT (The Register, 16 April 2013)

Richard Veryard and Philip Boxer, Metropolis and SOA Governance (Microsoft Architecture Journal, July 2005)

Simon Wardley, Bimodal IT - the new old hotness (13 November 2014)

Simon Wardley, On Pioneers, Settlers, Town Planners and Theft (13 March 2015)

Lawrence Wilkes and Richard Veryard, Extending SOA with Web 2.0 (CBDI Forum for IBM, 2007)


updated 27 June 2016

Wednesday, April 22, 2015

Agile and Wilful Blindness

@ruthmalan challenges @swardley on #Agile

Some things are easier to change than others. The architect Frank Duffy proposed a theory of Shearing Layers, which was further developed and popularized by Stewart Brand. In this theory, the site and structure of a building are the most difficult to change, while skin and services are easier.

Let's suppose Agile developers know how to optimize some of the aspects of a system, perhaps including skin and services. So it doesn't matter if they get the skin and services wrong, because these can be changed later. This is the basis for @swardley's point that you don't need to know beforehand exactly what you are building.

But if they get the fundamentals wrong, such as site and structure, these are more difficult to change later. This is the basis for John Seddon's point that Agile may simply build the wrong things faster.

And this is where @ruthmalan takes the argument to the next level. Because Agile developers are paying attention to the things they know how to change (skin, services), they may fail to pay attention to the things they don't know how to change (site, structure). So they can throw themselves into refining and improving a system until it looks satisfactory (in their eyes), without ever seeing that it's the wrong system in the wrong place.

One important function of architecture is to pay attention to the things that other people (such as developers) may miss - perhaps as a result of different scope or perspective or time horizon. In particular, architecture needs to pay attention to the things that are going to be most difficult or expensive to change, or that may affect the lifetime cost of some system. In other words, strategic risk. See my earlier post A Cautionary Tale (October 2012).

Read Ruth's further comment here (Journal November 2015)


Wikipedia: Shearing Layers

Saturday, March 09, 2013

Danish metamodels

My Danish friends @gotze and @aojensen comment on the latest release of OIO EA, which is a national enterprise architecture framework and meta-model published by the Danish Government Agency for Digitization.

Both John and Anders feel that certain key artefacts have been placed at the wrong layer of abstraction. John writes

"In my view, Business Rules should not be located at the strategic level at all. I would argue that Business Rules primarily “belongs” to the Business sub-architecture domain."

What is the basis for this argument? Anders points out a consequence

"business rules are located in the government strategy layer and thus tightly coupled to the long term vision of the government agency"

"Business rules are operationalisations of the long term strategy and strategic intent.
Whilst the vision, mission, and purpose of the enterprise do not change very often (i.e. provide the best available services our citizens), the business rules and processes involved in realising this will definitely change."

and therefore

"Business rules belong in the business architecture."

Thus Anders is basing his argument on a statement about the frequency of certain classes of change.

This statement appears to be empirically testable, although I know from my own experience that it is a lot more difficult than one might think to gather data to test this kind of statement.

Part of the problem of measuring rates of change is that we don't have a particularly robust theory of change in the first place. Let's look at an example. From time to time, perhaps every year, Steve Ballmer restates the vision of Microsoft. Obviously he doesn't use exactly the same words every year. And of course Microsoft-watchers will seek to interpret even the slightest change of wording or emphasis as a sign of a strategic change in direction. So even if Ballmer himself insists that the vision hasn't changed, we might not believe him. Looking back in time, we might find that major changes in direction had already been hinted at in previous years. So at what point does an apparently minor change in wording become a substantially new vision?

Conversely, when a company has been exposed as unethical, the CEO will go public with an apology and an assertion of a new ethical vision. (Recent example: Barclays Bank.) We might not believe him either.

In both cases, we will probably judge whether there is a new vision or not by observing whether the company behaviour and rules change or not. (And this is not just external observers - Microsoft and Barclays employees and managers are also making these judgements.) So the rate of change of vision might be epistemologically indistinguishable from the rate of change of behaviour.

However, despite the difficulties in conceptualizing and measuring change, I think it does make sense to derive architectural layers from the idea that certain things have a characteristic rate of change, and that things with a different rate of change should be in different layers. This means that there is at least a possibility of subjecting an architecture to empirical evaluation. I have published this idea in articles for the CBDI Forum, and suggested that architectural theory needs to be based on the Pace Layering principle.
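
As a minimal sketch of what such an empirical evaluation might look like (the artefacts and change rates below are invented for illustration), one could measure the observed rate of change of each artefact and check whether artefacts assigned to the same architectural layer really do change at similar rates:

```python
from collections import defaultdict

# Invented data: observed changes per year for various architectural artefacts.
changes_per_year = {
    "vision statement": 0.2,
    "business rules": 6.0,
    "business processes": 4.0,
    "logical data model": 1.0,
}

def assign_layer(rate: float) -> str:
    # Illustrative thresholds; in practice these would be derived
    # from the observed distribution of change rates.
    if rate < 0.5:
        return "strategic (slow)"
    if rate < 2.0:
        return "structural (medium)"
    return "operational (fast)"

layers = defaultdict(list)
for artefact, rate in changes_per_year.items():
    layers[assign_layer(rate)].append(artefact)

for layer, artefacts in sorted(layers.items()):
    print(layer, "->", artefacts)
```

On this (invented) data, business rules land in the fast-moving layer alongside business processes, well away from the vision statement - which is the empirical form of Anders' argument.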

In contrast, Anders' appeal to the IAF seems to be purely an argument from authority. The IAF establishes some "fundamental" categories, and so any framework that deviates from these categories must be wrong. I think this line of argumentation is weaker. Even though you may assert some attractive consequences of following IAF, I cannot see any reason for believing that these consequences follow only from IAF and not from any rival framework.

Frameworks and categories may be embedded in metamodels. But how do we know what is the basis for choosing between alternative metamodels?


John Gøtze, Metamodels (January 2013)
Anders Ø. Jensen, Enterprise Architecture and abstraction layers (February 2013)

Ethics, Barclays and totalitarianism (Catholic Commentary January 2013)
Barclays boss tells staff 'sign up to ethics or leave' (BBC News January 2013)
Did Barclays suffer an Ethics Meltdown? (CSR Zone, July 2012)
Sure Kamhunga, Barclays to re-examine its core values (October 2012)
Naven Johal, Barclay’s Does Something Right! (January 2013)

Updated 29 April 2013

Friday, June 29, 2012

Where were the architects at RBS?

#entarch Some interesting architectural implications of the recent embarrassing failure of banking systems at RBS-NatWest Bank, which has caused financial stress and distress for millions of customers.

A banking software expert quoted in the Guardian offered an interesting architectural analogy.

"Banking systems are like a huge game of Jenga [the tower game played with interlaced blocks of wood]. Two unrelated transactions might not look related now, but 500,000 transactions from now they might have a huge relation. So everything needs to be processed in order."

This analogy suggests that the problem is one of architectural knowledge and governance. This is always a problem for any large and complex enterprise, but outsourcing typically amplifies such problems. From the press reports, it seems that the implementation of the RBS-NatWest application architecture had been delegated to relatively inexperienced offshore staff with little knowledge of the RBS-NatWest business.

The finger of blame is being pointed at CA-7, which I understand to be a middleware product responsible for the orchestration of complex batch runs. As recently as February, there were job adverts in India urgently seeking people with CA-7 experience for the RBS contract.
Distribute or centralize job submission, management and monitoring as you choose; simplify job management by automating as much as possible; and manage your environment through a simple-to-use interface. CA 7® Workload Automation is a mainframe-hosted, fully-integrated workload automation engine that coordinates and executes job schedules and event triggers across the enterprise.

http://www.ca.com/us/products/detail/ca-7-workload-automation.aspx


The Guardian continues
It seems whoever made the update to CA-7 managed to delete or corrupt the files which hold the schedule for the overnight jobs, so they did not run, or ran incorrectly.

ComputerWorld quotes an RBS spokesman.
The focus right now is on fixing the problem, which was triggered during a software system upgrade.
and BBC's Robert Peston adds

the software update that went so badly wrong last Tuesday night was fairly quickly identified and patched by Royal Bank; it is the absence of a contingency plan to deal with the knock-ons from the initial computer failure that many will see as deeply troubling

I presume that CA-7 expertise involves the ability to create and maintain these control files. But these control files essentially contain executable metadata that describe how the applications must be joined up, which must ultimately be based on a rigorous view of the application architecture - in other words, a model of the application layer.
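
I have no inside knowledge of the actual CA-7 control file format, but as a minimal sketch of the kind of metadata involved (the job names are invented), an overnight batch schedule is essentially a dependency graph that only makes sense when executed in topological order - which is why deleting or corrupting the schedule files is so destructive:

```python
from graphlib import TopologicalSorter

# Invented job names: a toy model of an overnight banking batch schedule.
# Each job maps to the set of jobs that must complete before it can run.
schedule = {
    "post_transactions": set(),
    "update_balances": {"post_transactions"},
    "calculate_interest": {"update_balances"},
    "produce_statements": {"update_balances", "calculate_interest"},
}

# Corrupting or losing one entry invalidates the ordering for everything downstream.
print(list(TopologicalSorter(schedule).static_order()))
# ['post_transactions', 'update_balances', 'calculate_interest', 'produce_statements']
```

The schedule is therefore not mere plumbing: it encodes the application architecture, whether or not anyone manages it as such.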

In my discussion of business capabilities, I have always said that the most troublesome capabilities (and the ones overlooked by most business analysts) are the coordination capabilities, and these are the ones that need the most care when outsourcing. The RBS-NatWest incident illustrates this point.

@davidsprott uses the incident to illustrate the need for application modernization. But was the problem in the core application systems, or was it in the platform layer?  

To the extent that application coordination is being managed via CA-7, it looks suspiciously as if the model of the application layer was embedded in the platform layer, and managed as if it was merely technical infrastructure. This suggests a fundamental architectural flaw in RBS systems - a failure to maintain a clean separation of concerns between the application layer and the platform layer.

This is one of the reasons why enterprise architecture is important. With clean separation and robust interfaces between the architectural layers (business, application, platform), we can carry out modernization, innovation and continuous change in each layer separately. This follows the principle of pace layering, based on the notion that each layer has a different characteristic rate of change. Without clean separation between layers, the layers shear apart, resulting in misalignment and system failure. And as @davidsprott points out, service enabling has exactly this (layer separation) outcome.

Conclusions
  • It's risky outsourcing the core systems unless the architecture is clearly understood and controlled.
  • Good outsourcing requires a good service architecture, which may include business, app and/or platform services.
  • Modernization requires good architecture.
  • In complex systems of systems, coordination is a core business capability. Outsource with extreme caution.




Charles Arthur, How NatWest's IT meltdown developed (Guardian 25 June 2012)

Anh Nguyen, CA 'helps' RBS resolve tech problem that led to massive outage (ComputerWorld 25 June 2012)

Robert Peston, Is outsourcing the cause of RBS debacle? (BBC News 25 June 2012)

David Sprott, RBS Crash - Management Prefer Offshoring to Modernization? (25 June 2012)

See also Architecture as Jenga (September 2012)

Friday, February 17, 2012

Location, Location, Location

#pacelayering In Stewart Brand's theory, originally titled Shearing Layers and subsequently relabelled as Pace Layering, the slowest moving element of physical architecture was location, or what he (for the sake of alliteration) called Site.

An interesting example of the persistence of site has recently been published in the British Medical Journal. @DouglasNobleMD has found a remarkable similarity between two maps produced 120 years apart.

1. Here is a modern map showing the occurrence of diabetes in parts of East London.

2. And here is Booth's 1889 analysis of poverty in exactly the same streets.

Then and now: Charles Booth's Victorian map from 1889 highlights the most poverty-hit areas of East London, while the modern-day equivalent shows that exactly the same areas have the highest risk of diabetes.

Even though the symptoms and immediate causes are different (diabetes and junk food versus malnutrition), the root causes of poverty remain in exactly the same locations.


Douglas Noble et al, Feasibility study of geospatial mapping of chronic disease risk to inform public health commissioning. BMJ Open 2012;2:e000711 doi:10.1136/bmjopen-2011-000711

Via Daily Mail, 17 February 2012

Wednesday, December 15, 2010

An Architectural History of Social Networking

@ruskin147 is in California to meet the networking pioneers, including Stewart Brand.

In 1985, Stewart Brand was one of the founders of an online community called the Well. As Rory reports, the Well provided "inspiring stories of the power of online communication, as its members used its forums to share their lives, their thoughts, and their passions - whether it be for obscure technology questions or discussions about the meaning of life".

In 1994, Brand wrote a brilliant and controversial book about architecture, How Buildings Learn, which among other things contained a theory about evolutionary change in complex systems based on earlier work by the architect Frank Duffy and the ecologist Robert O'Neill. The theory was then known as Shearing Layers; Brand now prefers to call it Pace Layering. If there is a difference between the two, Shearing Layers is primarily a descriptive theory about how change happens in complex systems, while Pace Layering is primarily an architectural principle for the design of resilient systems of systems.

In the original Shearing Layers theory, the slowest-moving layer is known as Site. In the context of physical buildings, this is the geographical setting, the urban location, and the legally defined lot, whose boundaries and context outlast generations of ephemeral buildings [via Wikipedia].

An interesting question for Brand then is whether social networking continues to occupy the same "Site" as early initiatives such as the Well. In other words, the boundaries and context outlasting generations of ephemeral technology.

Brand told Rory that he saw some of the same principles in Facebook that had governed The Well: "I'm really impressed at a lot of the instincts that Zuckerberg has had. Taking non-anonymity as an absolutely fundamental value of his company and thereby beating off the competition. A Facebook identity is one of the most valuable things his company offers. The lack of anonymity is what gives it value." [Friendster, Facebook and the Well]

But that's not quite the same as asserting a historical path from the Well to the present day. The critical question here is not how aware Zuckerberg and his associates were of the history of social networking, but to what extent this history directly or indirectly influenced their actions and choices. For example, our collective understanding of "anonymity" is coloured by the past couple of decades of internet and pre-internet activity. Zadie Smith (a near contemporary of Zuckerberg at Harvard) talks about Zuckerberg's idea about what a person is, or should be, and worries that her own idea of personhood is nostalgic, irrational, inaccurate [New York Review, 25 Nov 2010]. And of course ironic.

Where exactly did Zuckerberg's idea of personhood come from? There may be a sense in which echoes of the Well persist in Facebook through the way such concepts are understood and managed. And although we wouldn't necessarily take Brand's opinion of this at face value (or for that matter Zuckerberg's) there can be few people whose opinion would be so interesting and well-informed.


See also

Roger Hudson, The Evolving Web - A Pace Layering view of the development of the Web and the W3C (March 2008)

Howard Silverman, Panarchy and Pace in the Big Back Loop (originally published on People and Place, March 2009, republished on Solving for Pattern, Oct 2012)

Matt Webb et al, Twitter thread discussing the provenance of the Shearing Layers concept (April 2021)

I have written several other posts on this blog discussing various aspects and applications of pacelayering.


Links updated 11 January 2018, 23 April 2021. Inserted reference to O'Neill.

Friday, October 22, 2010

Enterprise Tempo

An enterprise operates at several different tempi. For example:

  • A retail chain has one tempo aligned to the customer visiting the store, a longer tempo for purchasing and logistics, and a longer one still for planning and establishing new stores
  • A military organization operates a campaign tempo (perhaps measured in months) and an acquisition tempo (possibly measured in decades). See my post on the Economics of Agility.
  • A consultancy firm has a job tempo (solving specific problems for its clients) and a capability tempo (developing the knowledge, skills and practices to maintain its competitive edge).

Differences in tempo also exist between an enterprise and key external stakeholders. For example, the customers of a retail grocery chain may wish to eat three meals a day, but usually visit the store on a more infrequent tempo. Meanwhile, some of the suppliers may base their production on an annual harvest.

Such differences in tempo raise some important structural and economic issues.

  • Alignment - how are operations coordinated across different tempi, and what are the costs of alignment? (In their paper Agility and Value for Defence, Nicholas Whittall and Philip Boxer consider this question in relation to the military scenario outlined above.)
  • Resource bargaining - how do we divide scarce assets (time, money, management attention, capacity for risk-taking and innovation) between different tempi?
  • Intelligence - how are knowledge flows and learning loops affected by the different tempi?


In ecology, it is usually thought that the slow-moving processes dominate the faster-moving processes. This is one of the axioms of the architectural theory of pace layering.

 

See also Beyond Bimodal (May 2016)

Wednesday, September 02, 2009

Economics of agility 2

In my previous post on the Economics of Agility, I noted how little material has been published on this topic.

As Nicholas Whittall and Philip Boxer point out in their contribution to the recent debate on The Meaning of Value-for-Money in Defence Acquisition (RUSI, February 2009), there is an important link between agility and alignment. See also their earlier piece on Agility and Innovation in Acquisition (RUSI, February 2008).

The first observation is that defence acquisition - just like systems acquisition most anywhere - operates on a much slower tempo than the requirements of the business. The "business" of a military organization is running military campaigns; thus when writing for the defence community, Whittall and Boxer refer to the Campaign Tempo and the Acquisition Tempo.

The second observation is that there is a complex set of activities (such as orchestration, customization, and improvisation) involved in bridging between Demand (the demands of the campaign or business) and Supply (the procurement of specific systems and devices). These activities operate on an intermediate tempo, which Whittall and Boxer call the Alignment Tempo.

"Meeting the campaign tempo then depends on the alignment tempo possible, which in turn depends on the acquisition tempo at which gaps can be filled. Any slowness in acquisition tempo leads to increased bricolage and process short cuts to enable the alignment tempo to keep up with the campaign tempo. Thus, ‘agility’ finds its richest expression in the ability of the alignment tempo to meet the required campaign tempo at the lowest cost – i.e. to maximise the value-for-defence."


The challenge is then to produce just enough variety within the acquisition to optimize the economics of alignment. Boxer has developed a technique of Cohesion-Based Costing (not yet published), which "offers a means to attach a value to the cost of introducing flexibility". This kind of technique will clearly be of enormous benefit within the SOA world.

 

Related post Enterprise Tempo (October 2010)

Monday, May 25, 2009

What's Wrong with Layered Service Models?

@JohanDenHaan pointed me at a couple of articles by Bill Poole

Bill takes a particular layered service model, a plausible combination of layers such as you might find in any number of popular SOA books, and shows how he can use this model to produce an inefficient and inflexible design. 

Of course, all this shows is that there are problems with a particular layered service model, as interpreted by Bill. (The authors of the books from which Bill has taken this model might claim that Bill had misunderstood their thinking, but whose fault is that?) It doesn't show that all layered service models are bad. 

As Bill points out, one reason commonly cited in support of the layered service model approach is the hope that services will be highly reusable. But there is a much more important reason for layering - based on the expectation that each layer has a different characteristic rate of change. (This principle is known as pace layering.) 

When done properly, layering should make things more flexible. When done badly (as in Bill's example) then layering can have the opposite effect. 

One of the problems is that the people inventing these layered service models often confuse classification with layering. We can identify lots of different types of service, therefore each type must go into a separate layer. A deeper problem with these models is that their creation is based purely on clever thinking rather than systematic and empirical comparison of alternatives. Too much architectural opinion is based on "here's a structural pattern that seems to make sense" rather than "here's a structural pattern that has been demonstrated to produce these effects". 

See my post on Layering Principles. You can't just take an SOA principle ("loose coupling is good", "layering is good"), apply it indiscriminately and expect SOA magic to occur.

Tuesday, October 21, 2008

Timeless Way 2

Excellent report from James Governor (Redmonk) on SAP's view of Timeless Software, and I am very glad to see Stewart Brand's work getting wider exposure. See also Nick Hortovanyi.

But I am puzzled at SAP's use of the word "timeless". This was apparently introduced by SAP CTO Vishal Sikka (via Mark Finnern). Vishal explains it further in the comments to James's blog.

"I have indeed coined the phrase timeless software to refer to our strategy of continuously evolving our software ... [and allowing for] constant and furious change across all layers of the technology stack."
What is timeless here is not the software itself but the process of continuous evolution. I have previously used the term "timeless software" to mean something closer to the original work of Christopher Alexander on the Timeless Way of Building. (See posts by Dion Hinchcliffe and myself from March 2006 )

Note also this quote from Barry Boehm's magisterial View of 20th and 21st Century Software Engineering (pdf) (via Andrew Newman)

The theory underlying software process models needs to evolve from purely reductionist “modern” world views (universal, general, timeless, written) to a synthesis of these and situational “postmodern” world views (particular, local, timely, oral)
In other words, Boehm is contrasting the timeless and the situated, and calling for a synthesis of both. I think the Brand notion of shearing layers (or pace layering as he is now calling it) allows us to get the best of both worlds - BOTH timeless AND situated. The "timeless" has a very slow pace of change, while the "situated" is much more dynamic. See my post on Lightweight Enterprise.

This is certainly consistent with material that has been coming out of SAP for many years about the stratification of business. See my review of Shai Agassi's talk on Achieving Enterprise Agility from 2004. At that time, Shai was talking about some really interesting ideas in this area that I wasn't hearing from any of the other major vendors. It is good to see that SAP is still pushing some of these ideas.


Footnote

As my regular readers will know, I've been talking about Brand's book for many years, and I note the emergence of the term pace layering for Brand's principle that stratification should be based on the differential rate of change. See Long Now Blog. I previously referred to this principle as "shearing layers", but this refers to what happens when this principle is not followed.

Monday, November 13, 2006

Service-oriented security 2

Form Follows Function.

In a recent post, Bruce Schneier makes some interesting points about the relationship between Architecture and Security [via Confused of Calcutta].
  • "Security concerns have always influenced architecture."

  • "The problem is that architecture tends toward permanence, while security threats change much faster. Something that seemed a good idea when a building was designed might make little sense a century -- or even a decade -- later. But by then it's hard to undo those architectural decisions."

  • "It's dangerously shortsighted to make architectural decisions based on the threat of the moment without regard to the long-term consequences of those decisions."
    End-to-End Process.

    In a separate post on Voting Technology and Security, Bruce Schneier describes the steps in ensuring that the result of an election properly represents the intentions of the voters.
    "Even in normal operations, each step can introduce errors. Voting accuracy, therefore, is a matter of 1) minimizing the number of steps, and 2) increasing the reliability of each step."
    Whether this is strictly true depends on the architecture of the process - whether it is a simple linear process with no redundancy or latency, or whether there is deliberate redundancy built in to provide security of the whole over and above the security of each step. Bruce himself advocates a paper audit trail, which can be used retrospectively if required to verify the accuracy of the electronic voting machines.
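
A quick worked example of Schneier's arithmetic, with invented numbers: in a purely serial process the reliabilities of the steps multiply together, whereas an independent audit trail means an error goes unnoticed only if both the main path and the audit fail.

```python
# Invented reliabilities for each step of a serial voting pipeline.
steps = [0.99, 0.98, 0.995]

serial = 1.0
for p in steps:
    serial *= p
print(f"serial pipeline reliability: {serial:.3f}")  # ~0.965 - errors compound

# Suppose an independent paper audit catches a fault with probability 0.95.
# An undetected failure now requires the pipeline AND the audit to fail together.
audit_catch = 0.95
undetected = (1 - serial) * (1 - audit_catch)
print(f"undetected failure rate with audit: {undetected:.4f}")  # ~0.0017
```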

    Shearing Layers.

    Security management doesn't necessarily operate on the same timescale as other elements of architecture. Our approach to service-oriented security - indeed, to SOA generally - is based on the notion of a layered architecture, in which each layer has a different rate of change. (This is based on the Shearing Layers principle, now known as the Pace Layering principle.) Thus the security layer is decoupled from the core business layer, and also from the user experience layer.

    Previous Posts: Adaptation and Adaptability, Business IT Alignment 2, Service-Oriented Security

    Monday, March 20, 2006

    SPARK 2 - Innovation or Trust?

    Day 2 of the SPARK workshop was an attempt to develop frameworks that encapsulated the architectural response to the challenges identified in Day 1. The photoset on Flickr includes some dreadful pictures of me (here with Michael Platt of Microsoft and here with Nick Gall of Gartner), but it also includes copies of most of the flip charts.

    The group I was in (which included Glen Harper, Dion Hinchcliffe and Michael Putz) tried to summarize some of the issues around the business stack.

    Business Stack

    Note the differential rate of change, as well as the gradients of innovation and trust. Note also the questions of horizontal and vertical coupling, which the group discussed but did not resolve.

    This is a framework, not a fixed solution. For example, in some cases the trust/compliance regime may be stricter (or at least different) at the top of the stack (think healthcare and HIPAA), but in most cases the greatest perceived risks will be associated with the major business assets (legacy) at the bottom of the stack. And (as Anne Thomas Manes reminded us) enterprise innovation isn't always going to be focused exclusively on customer-facing stuff, but may be focused on supply chain or product development or elsewhere.

    Different technologies will be appropriate for different levels of the stack - for example we might expect to see SOAP and WS-* at the bottom of the stack (high trust, high engineering) and REST at the top of the stack (low trust, agile).

    Of course, stratified architectures and stack diagrams are not new, but they have traditionally been produced from a purely technological perspective (client/service, 3-tier, n-tier computing, and so on). To my mind, the new architectural challenge here is to manage the stratification of layers in a way that responds in an agile and effective way to (the complexity of) the business/user challenges. (Hence Asymmetric Design.)


    See also Beyond Bimodal (May 2016)

    Wednesday, March 02, 2005

    Layering Principles

    Martin Fowler reports on a workshop in which people (a good group of people, [but] it was hardly a definitive source of enterprise development expertise) vote on principles for software layering.

    Some people take this exercise seriously. JohnLim describes the result of the vote as excellent guidelines, while Paul Gielens adds his own vote. Others are more critical, including David Anderson and Rob Diana.

    To my mind, even if you could collect up the most experienced people in the software world, I distrust the idea that you could get a coherent and meaningful set of architectural principles from a vote.

    Architectural principles must come from reflective practice and/or grounded theory. For example, I can derive layering from a differential theory of change, as follows.


    Purpose - What is layering supposed to achieve?

    A well-layered artefact or system is more adaptable, because some types of variation can be accommodated within one layer without significantly affecting adjacent layers.


    Form - What is the underlying structure of layering?

    Boundaries between layers represent step changes with respect to some form of variation, from some perspective:
    • Differential change over time
    • A split between relatively homogeneous and relatively heterogeneous 

    Process - How do layers get established? Layers emerge from an evolutionary process, in which a series of small alterations affect the architectural properties of a system or artefact (often unplanned and unremarked by the so-called architects).

    • Redundant layers (where there is insufficient difference in variation between two adjacent layers) tend to gradually fuse together. 
    • Flexibility that is not used or exercised will attenuate. 
    • Engineers under time pressure will take shortcuts that compromise the official separation between layers. 
    • Where there is excessive differentiation within a single layer, this will tend to split apart, initially in an incoherent way.

    Material - What is the source of a particular layering?

    Layering comes from the experience of variation.



    Related posts: What's wrong with layered service models? (May 2009), Data Strategy - More on Agility (March 2020)

    Friday, February 11, 2005

    Value of Emptiness

    One of the perceived benefits of SOA is business agility and system adaptability. In this post, I am going to discuss some aspects of this.


    There is a lot of vague and woolly talk about business agility - especially from the software world, where a wide variety of software products, platforms and paradigms are optimistically supposed to have some magical effect on flexibility. But try selling flexibility to the CFO of a large company. ("Excuse me sir, would you like to buy some flexibility? It will only cost you ten million dollars." "Exactly how much flexibility do I get for ten million dollars?" "Ooo, loads and loads, honest. Look, here's a graph we've just made up. And a 2x2 matrix.")


    Among other things, business agility means keeping your options open.
    • Options are worth more when conditions are uncertain.
    • In financial circles, the value of options is calculated using the Black-Scholes formula. Stock options increase in value with the volatility of the underlying stock. This encourages risk-taking by executives. (See the sketch below.)
    • Real-Option theory applies the same financial logic to management options. See the book Real Options by Amram and Kulatilaka.
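
A minimal sketch of that volatility effect, using the standard Black-Scholes formula for a European call option (the parameter values are purely illustrative):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(spot, strike, rate, volatility, years):
    """Standard Black-Scholes price of a European call option."""
    n = NormalDist().cdf
    d1 = (log(spot / strike) + (rate + volatility ** 2 / 2) * years) / (volatility * sqrt(years))
    d2 = d1 - volatility * sqrt(years)
    return spot * n(d1) - strike * exp(-rate * years) * n(d2)

# Same option, increasing volatility: the option becomes more valuable
# as the future becomes more uncertain.
for vol in (0.1, 0.3, 0.5):
    print(vol, round(black_scholes_call(spot=100, strike=100, rate=0.05, volatility=vol, years=1), 2))
```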
    In the "plug-and-play" service economy, the business may gain option-value in several ways:
    1. the ability to replace specific partners
    2. the ability to flex the boundary - moving specific services inwards or outwards across the organization boundary
    3. the ability to access heterogeneous domains
    4. the ability to change the configuration (geometry)
    To go beyond bland optimism, we need to think rigorously about the business value of openness and extensibility, and how this value can be achieved by suitable configurations of organizational and technical systems - including SOA.


    Technical artefacts (such as computers) often have empty expansion slots. Ben Hyde discusses the option value of empty slots - who benefits from this unused real-estate?

    When the vendor creates a slot he’s relinquishing control over some amount of value-creating energy implicit in his offering.

    If there is anything consistent about how they get filled in - for example they all start sporting graphics cards - that’s not stable. The industry will absorb any consistent slot usage into the core.

    The space of empty slots [... appears to be ...] a long tail [... which ...] actually has negative value. Since the majority of buyers don’t use the slots they are getting no value from them. The hardware makers are including them not because of buyer demand but because the dominant players in the market want them. The dominant players in the market want them because they tap into the value created by the generative process that all that empty real estate creates.

    In other words, the expansion slots provide some temporary free space for innovation. But the big players monitor how this free space is used, and will seek to recapture any rich pickings.

    Martin Geddes discusses a number of other technologies (past and present) that have added value by providing options. Many of these were initially dismissed as inefficient. Martin's list includes relational databases and XML - we might add web services.


    Compare this with the adaptability of buildings.

    Lots of old houses were built with outside toilets and no bathroom. Most of those that survive today have had inside toilets and bathrooms added. Many people convert loftspace to make extra rooms, or add an extension into the back garden. Old houses are often relatively easy to alter or extend, subject to planning regulations.

    Construction companies build new houses on tight estates with shallow roofs - so there is insufficient space for expansion. Bathrooms are attached to so-called master bedrooms, thus constraining alternative ways of using the space. It is as if the construction companies deliberately set out to build houses that are pre-adapted to a specific domestic norm, thus forcing people to move house every time their domestic requirements changed.

    See Stewart Brand: How Buildings Learn.


    The human being is a highly inefficient machine, with few natural advantages over other species and practically no innate skills. It takes many years before it can find its own food, or run half as fast as an animal. It has a large brain, consuming vast amounts of energy and other nutrients. The exceptionally long period of dependence, together with its feeble skills in relation to other animals, represents a considerable biological burden.

    How on earth does this gross inefficiency survive? In complex and dynamic environments, there is a compensating evolutionary advantage - the human has lots of expansion slots: it can learn new skills, including collective skills. It can develop new languages - both natural languages and problem-specific vocabularies. (DSL anybody?)


    So what are the expansion slots of a business? Obviously new products, new suppliers, new business partners, new channels. But more importantly, new responses to changing demand. With a stratified model of the demand-side, and with appropriate supply-side platforms, we should be able to (re)design a business to deliver requisite levels of agility. And this will naturally include space for innovation.


    Update: Stewart Brand has now introduced the term pace layering for the principle that stratification should be based on the differential rate of change. The term shearing layers refers to what happens when this principle is not followed.

    See also Andrew K Johnston, Business Flexibility - An Analogy (2005)

    Thursday, October 14, 2004

    Adaptation and Adaptability

    Adam Brown from John McAslan and Partners gave a talk recently about his restoration/renovation work with Swiss Cottage Library. Dan Hill has some notes on his blog.

    Interesting aside on how difficult it is to work with modernist buildings given their focus on functionality - if the function changes over time, the building can resist change. With the library, Brown kept alluding to how difficult it was to work with certain layers, given the amount of change required (not just in contemporary services etc, but in building in the modern notion of what a library is: internet access, coffee shops, DVD lending - as well as books).

    This leads to a discussion about the adaptability of a modernist architecture.

    1. Basil Spence is undoubtedly a great architect in many ways, and the Swiss Cottage Library remains beautiful, even if its functionality is now somewhat dated.

    2. Functionalism involves a high degree of adaptation (to a given conception of function/functionality).

    3. Adaptation conflicts with adaptability. The more you optimize to the present, the more you close off alternative futures.

    4. Great buildings should be able to accommodate change, age gracefully. Perhaps one of the errors of modernism was to imagine that change would no longer be necessary.


    The modernist attitude is widespread in software engineering. Model-driven architecture represents a type of modernism: form should follow function.

    Stewart Brand's book on How Buildings Learn contains the notion of Shearing Layers - functional layers that have a different natural rate of change. A flexible structure allows each layer to be changed independently. But where the layers are too tightly coupled, the differential rates of change tear the structure apart.


    The Shearing Layers concept is highly relevant to SOA.


    Update: Stewart Brand has now introduced the term pace layering for the principle that stratification should be based on the differential rate of change. The term "shearing layers" refers to what happens when this principle is not followed.

    Thursday, September 09, 2004

    SOA Change Management

    With a layered picture of SOA, it is clear that each layer undergoes a lifecycle – but at different cycle times. Some of the layers have a real-time lifecycle measured in seconds, while others have a lifecycle measured in weeks, months or years. The real-time lifecycles are managed by the SOA platform/tools, while the slower lifecycles should be managed by change management tools and disciplines. Should be. Our observation is that some layers are well-managed, while others aren't.

    SOA change management calls for each layer to be properly managed, and the links between layers to be properly managed. We need to know when a change in one layer impacts the next layer. (For example, if the functionality or performance or cost of a service changes, this might prompt retesting of a service composition or closer monitoring of system performance, and may even require redesigning the use of this service.)

    And because each layer may be managed independently, change management across multiple layers becomes an exercise in collaboration. We need a publish/subscribe model not just for the services themselves, but for changes to the services.
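
As a minimal sketch of what such a publish/subscribe channel might look like (the class and the event fields are my own invention, not any particular product's API):

```python
from collections import defaultdict

class ServiceChangeBus:
    """Toy publish/subscribe channel for service-change notifications."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, service_name, callback):
        self.subscribers[service_name].append(callback)

    def publish(self, service_name, change):
        # Notify every consumer that registered an interest in this service.
        for callback in self.subscribers[service_name]:
            callback(service_name, change)

bus = ServiceChangeBus()
bus.subscribe("credit-check", lambda svc, ch: print(f"retest composition using {svc}: {ch}"))
bus.publish("credit-check", {"kind": "interface change", "new_version": "2.0"})
```

In practice, the harder problem is organizational rather than technical: deciding which changes count as public events at all.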

    Impact analysis may be both downwards and upwards. Sometimes the service provider needs some assurance that the service user is using the service properly. (For example, eBay demands certification.)

    With proper encapsulation, some changes are private to the service provider and should not need to be notified to the service users. But it is not always clear exactly where to draw the dividing line between public changes and private changes. The service user may be forced to trust the service provider (and the whole service supply chain) to manage this encapsulation correctly, and to publish all changes that may impact the service use. But there are always going to be some service providers who get the encapsulation wrong, and there are always going to be some service uses that are too critical (business critical, safety critical) to rely on network trust alone.

    The solution to this is partly organizational and partly technical. 360 degree intelligence tools can monitor the service network, identify patterns of behaviour that indicate significant changes, and publish independent change notifications. Thus you are not solely dependent on the service provider to tell you that the service has changed – you may get an alert because some other user of the same service has started to hit problems. Implementing this kind of intelligence is hard, because it requires not only different bits of technology to be integrated, but also good collaboration between multiple organizations to make the process work effectively.

    CBDI Newswire (public access)
    CBDI Report SOA LifeCycle (restricted access)


    Update

    Steven Cohn describes some SOA lifecycle requirements that he says go beyond the present capabilities of the Microsoft platform.