
Sunday, April 21, 2019

How Many Ethical Principles?

Although ethical principles have been put forward by philosophers through the ages, the first person to articulate ethical principles for information technology was Norbert Wiener. In his book The Human Use of Human Beings, first published in 1950, Wiener based his computer ethics on what he called four great principles of justice.
Freedom. Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”  
Equality. Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.” 
Benevolence. Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”  
Minimum Infringement of Freedom. “What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom.”

Meanwhile, Isaac Asimov's Three Laws of Robotics were developed in a series of short stories in the 1940s, around the same time that Wiener was developing his ideas about cybernetics. Many writers on technology ethics argue that robots (or any other form of technology) should be governed by principles, and this idea is often credited to Asimov. But as far as I can recall, every Asimov story that mentions the Three Laws of Robotics produces some counter-example to demonstrate that the Three Laws don't actually work as intended. I have therefore always regarded Asimov's work as satirical rather than prescriptive. (I read J.K. Rowling's descriptions of the unsuccessful attempts by wizard civil servants to regulate the use of magical technologies in the same spirit.)

So for several decades, the Wiener approach to ethics prevailed, and discussion of computer ethics was focused on a common set of human values: life, health, security, happiness, freedom, knowledge, resources, power and opportunity. (Source: SEP: Computer and Information Ethics)

But these principles were essentially no different from the principles one would find in any other ethical domain. For many years, scholars disagreed as to whether computer technology introduced an entirely new set of ethical issues, and therefore called for a new set of principles. The turning point came at the ETHICOMP95 conference in March 1995 (just two months before Bill Gates' Internet Tidal Wave memo), with important presentations from Walter Maner (who had been arguing this point for years) and Krystyna Górniak-Kocikowska. From this point onwards, computer ethics would have to address some additional challenges, including the global reach of the technology - beyond the control of any single national regulator - and the vast proliferation of actors and stakeholders. Terrell Bynum calls this the Górniak hypothesis.

Picking up the challenge, Luciano Floridi started to look at the ethical issues raised by autonomous and interactive agents in cyberspace. In a 2001 paper on Artificial Evil with Jeff Sanders, he stated "It is clear that something similar to Asimov's Laws of Robotics will need to be enforced for the digital environment (the infosphere) to be kept safe."

Floridi's work on Information Ethics (IE) represented an attempt to get away from the prevailing anthropocentric worldview. "IE suggests that there is something even more elemental than life, namely being – that is, the existence and flourishing of all entities and their global environment – and something more fundamental than suffering, namely entropy." He therefore articulated a set of principles concerning ontological equality (any instance of information/being enjoys a minimal, initial, overridable, equal right to exist and develop in a way which is appropriate to its nature) and information entropy (which ought not to be caused in the infosphere, ought to be prevented, ought to be removed). (Floridi 2006)

In the past couple of years, there has been a flood of ethical principles to choose from. In his latest blogpost, @Alan_Winfield lists over twenty sets of principles for robotics and AI published between January 2017 and April 2019, while AlgorithmWatch lists over fifty. Of particular interest may be the principles published by some of the technology giants, as well as the absence of such principles from some of the others. Meanwhile, Professor Floridi's more recent work on ethical principles appears to be more conventionally anthropocentric.

The impression one gets from all these overlapping sets of principles is of lots of experts and industry bodies competing to express much the same ideas in slightly different terms, in the hope that their version will be adopted by everyone else.

But what would "adopted" actually mean? One possible answer is that these principles might feed into what I call Upstream Ethics, contributing to a framework for both regulation and action. However some commentators have expressed scepticism as to the value of these principles. For example, @InternetDaniel thinks that these lists of ethical principles are "too vague to be effective", and suggests that this may even be intentional, these efforts being "largely designed to fail". And @EricNewcomer says "we're in a golden age for hollow corporate statements sold as high-minded ethical treatises".

As I wrote in an earlier piece on principles:
In business and engineering, as well as politics, it is customary to appeal to "principles" to justify some business model, some technical solution, or some policy. But these principles are usually so vague that they provide very little concrete guidance: profitability, productivity and efficiency can mean almost anything you want them to mean. And when principles interfere with what we really want to do, we simply come up with a new interpretation of the principle, or another overriding principle, which allows us to do exactly what we want while dressing up the justification in terms of "principles". (January 2011)

The key question is about governance - how will these principles be applied and enforced, and by whom? What many people forget about Asimov's Three Laws of Robotics is that they weren't enforced by roving technology regulators, but were designed into the robots themselves, thanks to the fact that one corporation (U.S. Robots and Mechanical Men, Inc.) controlled the relevant patents and therefore exercised a monopoly over the manufacture of robots. No doubt Google, IBM and Microsoft would like us to believe that they can be trusted to produce ethically safe products, but clearly this doesn't address the broader problem.

Following the Górniak hypothesis, if these principles are to mean anything, they need to be taken seriously not only by millions of engineers but also by billions of technology users. And I think this in turn entails something like what Julia Black calls Decentred Regulation, which I shall try to summarize in a future post. Hopefully this won't be just what Professor Floridi calls Soft Ethics.

Update: My post on Decentred Regulation and Responsible Technology is now available.



AlgorithmWatch, AI Ethics Guidelines Global Inventory

Terrell Bynum, Computer and Information Ethics (Stanford Encyclopedia of Philosophy)

Luciano Floridi, Information Ethics - Its Nature and Scope (ACM SIGCAS Computers and Society, September 2006)

Luciano Floridi and Tim (Lord) Clement-Jones, The five principles key to any ethical framework for AI (New Statesman, 20 March 2019)

Luciano Floridi and J.W. Sanders, Artificial evil and the foundation of computer ethics (Ethics and Information Technology 3: 55–66, 2001)

Eric Newcomer, What Google's AI Principles Left Out (Bloomberg 8 June 2018)

Daniel Susser, Ethics Alone Can’t Fix Big Tech (Slate, 17 April 2019)

Alan Winfield, An Updated Round Up of Ethical Principles of Robotics and AI (18 April 2019)


Wikipedia: Laws of Robotics, Three Laws of Robotics


Related posts: The Power of Principles (Not) (January 2011), Data and Intelligence Principles from Major Players (June 2018), Ethics Soft and Hard (February 2019), Upstream Ethics (March 2019), Ethics Committee Raises Alarm (April 2019), Decentred Regulation and Responsible Technology (April 2019), Automation Ethics (August 2019)

Link corrected 26 April 2019

Friday, June 08, 2018

Data and Intelligence Principles From Major Players

The purpose of this blogpost is to enumerate the declared ethical positions of major players in the data world. This is a work in progress.




Google

In June 2018, Sundar Pichai (Google CEO) announced a set of AI principles for Google. The announcement includes seven principles, four application areas that Google will avoid (including weapons), references to international law and human rights, and a commitment to a long-term sustainable perspective.

https://www.blog.google/topics/ai/ai-principles/


Also worth noting is the statement on AI ethics and social impact published by DeepMind last year. (DeepMind was acquired by Google in 2014 and is now a subsidiary of Google parent Alphabet.)

https://deepmind.com/applied/deepmind-ethics-society/research/



IBM

In January 2017, Ginni Rometty (IBM CEO) announced a set of Principles for the Cognitive Era.

https://www.ibm.com/blogs/think/2017/01/ibm-cognitive-principles/

This was followed up in October 2017, with a more detailed ethics statement for data and intelligence, entitled Data Responsibility @IBM.

https://www.ibm.com/blogs/policy/dataresponsibility-at-ibm/



Microsoft

In January 2018, Brad Smith (Microsoft President and Chief Legal Officer) announced a book called The Future Computed: Artificial Intelligence and its Role in Society, to which he had contributed a foreword.

https://blogs.microsoft.com/blog/2018/01/17/future-computed-artificial-intelligence-role-society/



Twitter


Jack Dorsey (Twitter CEO) asked the Twitterverse whether Google's AI principles were something the tech industry as a whole could get behind (via The Register, 9 June 2018).



Selected comments

These comments are mostly directed at the Google principles, because these are the most recent. However, many of them apply equally to the others. Commentators have also remarked on the absence of ethical declarations from Amazon.


Many commentators have welcomed Google's position on military AI, and congratulate those Google employees who lobbied for discontinuing its work with the US Department of Defense analysing drone footage, known as Project Maven. See @kateconger, Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program (Gizmodo, 1 June 2018) and Google Backtracks, Says Its AI Will Not Be Used for Weapons or Surveillance (Gizmodo, 7 June 2018).

Interesting thread from former Googler @tbreisacher on the new principles (HT @kateconger)

@EricNewcomer talks about What Google's AI Principles Left Out (Bloomberg 8 June 2018). He reckons we're in a "golden age for hollow corporate statements sold as high-minded ethical treatises", complains that the Google principles are "peppered with lawyerly hedging and vague commitments", and asks about governance - "who decides if Google has fulfilled its commitments".

@katecrawford (Twitter, 8 June 2018) also asks about governance. "How are they implemented? Who decides? There's no mention of process, or people, or how they'll evaluate if a tool is 'beneficial'. Are they... autonomous ethics?" And @mer__edith (Twitter, 8 June 2018) calls for "strong governance, independent external oversight and clarity".

Andrew McStay (Twitter, 8 June 2018) asks about Google's business model. "Please tell me if you spot any reference to advertising, or how Google actually makes money. Also, I’d be interested in knowing if Government “work” dents reliance on ads."

Earlier, in relation to DeepMind's ethics and social impact statement, @riptari (Natasha Lomas) suggested that "it really shouldn’t need a roster of learned academics and institutions to point out the gigantic conflict of interest in a commercial AI giant researching the ethics of its own technology’s societal impacts" (TechCrunch October 2017). See also my post on Conflict of Interest (March 2018).

@rachelcoldicutt asserts that "ethical declarations like these need to have subjects. ... If they are to be useful, and can be taken seriously, we need to know both who they will be good for and who they will harm." She complains that the Google principles fail on these counts. (Tech ethics, who are they good for? Medium 8 June 2018)


Related posts

Conflict of Interest (March 2018), Why Responsibility by Design Now? (October 2018), Leadership versus Governance (May 2019)


Updated 11 June 2018. Also links added to later posts.

Saturday, March 30, 2013

Three Notions of Maturity

Many enterprise architecture frameworks contain some notion of maturity, usually with some kind of nod in the direction of the SEI CMMI maturity model. I'm puzzled about this, because these notions of maturity don't much resemble the SEI's notion and sometimes have a completely different set of levels.

The SEI has produced several different maturity models, but they are all presented in terms of the maturity of capability or process - doing the right things right according to a standardized schema of What The Right Things Are. Applying a process-oriented notion of maturity to EA can only really refer to maturity of the EA process. But I think it is much more relevant to think about EA maturity of the organization as a whole, which calls for an outcome-oriented notion of maturity (perhaps defined in terms of some kind of "alignment") that is closer to Richard Nolan's Stages of Growth model, where maturity is seen as a state of perfect alignment between the business and its information systems. (I know Nolan originally formulated his ideas in terms of IT, but then so did lots of people in the 1980s including Zachman.)

Maturity can also be defined in terms of an alignment between espoused theory (what you think you are supposed to be doing) and theory-in-use (what you are actually doing). Obviously this kind of alignment is not restricted to IT. Maturity doesn't just mean being better at achieving your goals, it also means having more realistic goals in the first place. Some people might think that innovation feeds upon the unbounded vision and ambition of the immature, and that too much maturity (in this sense) could just result in negativity and middle-aged stagnation. See my note on EA Archetypes (EA-as-visionary, EA-as-realist).

Meanwhile, some enterprise architecture frameworks contain notions of maturity that are defined not in terms of process but in terms of knowledge. At the Unicom EA Forum on 21st March 2013, Kevin Smith presented his new PETF framework for Enterprise Transformation, whose four levels of maturity are based on the Four Stages of Competence.

  1. Unconsciously incompetent
  2. Consciously incompetent
  3. Consciously competent
  4. Unconsciously competent

Thus the highest level of maturity is where the knowledge has been internalized. I think this notion of maturity looks much closer to Nonaka's SECI model or Boisot's I-Space, which both contain notions of internalization. I-Space also contains the insight that "intellectual property" doesn't last for ever, because all useful knowledge eventually becomes public knowledge. Competence relative to one's competitors is always based on proprietary knowledge, while knowledge that is in the public domain cannot form the basis of competitive advantage.

Meanwhile I'm uncomfortable with the notion of unconscious competence for another reason: in a dynamic world it can quickly develop into complacency and arrogance. Whatever happened to continuous learning and improvement?

Both SECI and I-Space are essentially cyclic models: to maintain competitive advantage, one must be able to cycle back from level 4 (unconscious competence) to level 1 (unconscious incompetence). In my summary last week, I invoked the Black-Belt-to-White-Belt metaphor: even the most experienced black belt needs to return to the basics, needs humility and the desire to learn, needs to always think like a beginner rather than strutting around arrogantly like an expert. Obviously there are circumstances in which an enterprise may need to cycle round from competence to incompetence, before attaining a higher level of competence. Sometimes transformation must take an enterprise out of its comfort zone.

Finally, I wonder whether "maturity" is the right word at all. It is sometimes argued that different levels of maturity are suitable for different organizations, which is basically a form of contingency theory. In other words, some organizations never need to be fully mature.

Instead of maturity, I prefer a notion of excellence that includes (1) selecting the appropriate approach for a particular situation/context, and then (2) carrying out this approach both effectively and cost-effectively. I think this notion of excellence is supported by business excellence frameworks such as Baldrige and the European Quality Award. These frameworks don't say HOW you should do anything, merely that you should know WHAT you are doing, and WHY, and WHETHER it is working, and that you should be constantly striving to improve those things that matter most. Hopefully this gets around the innovation/maturity paradox I mentioned earlier.


Scroll down for discussion with Len Fehskens.

Related post: From Enabling Prejudices to Sedimented Principles (March 2013)


Thanks to Len Fehskens (in comments below) for pointing out the original source of the Four Stages of Competence.

Post last updated 31 August 2018

Friday, March 29, 2013

From Sedimented Principles to Enabling Prejudices

I have often asserted (on this blog and elsewhere) that principles are over-rated as a driver for intelligent action. However, that doesn't mean principles are completely worthless. In this post, I wish to explore some of the ways in which principles may have some limited use within enterprise architecture.

I am going to identify four rough categories of principle. There may be other categories, and the categories may overlap.

1. Universal Truths
2. Governance
3. Style Preferences
4. Enabling Prejudices

This is a long post, and I think the final category is the most interesting one, so if you are short of time, please read that one first.



Universal truths. This kind of principle is something one has to accept and believe, because the evidence in its support is overwhelming. For example, all computers of a given construction must obey certain logical principles, as rigorously proven by Gödel, Turing, Church, von Neumann, and others.

(There aren't many universal truths in the enterprise architecture domain, and the word "proven" is widely abused to mean "something that everyone believes" or "something that nobody has convincingly disproved" or "something that is vaguely associated with a bit of maths or science". I'm not impressed by "proofs" that haven't been published anywhere, are based on wishful thinking and received wisdom, or turn out to prove something fairly trivial.)


Governance. An enterprise or ecosystem may be regulated by governing principles, which are essentially high level rules or policies controlling the structure and behaviour, and constraining certain classes of decision. For example, an enterprise software architecture may be based on certain global principles about accessibility or security, which ultimately link to some specific outcome.

Some EA frameworks arrange principles, policies and rules into a pseudo-hierarchy, but the dividing line between principles and policies can be pretty arbitrary.

Governing principles may be reinforced by standards, but usually the principle is broader than simply conforming to the standard. For example, a principle of accessibility may reference various accessibility standards, but software designers may be expected to strive for enhanced accessibility beyond the minimum standards.


Style. An enterprise or ecosystem may have some stylistic preferences. For example, applications within the Apple ecosystem tend to have a predictable look and feel. In some areas, these stylistic preferences may be elevated to the status of principles.

Some people think that all systems within an enterprise or ecosystem should have a common look and feel. This is a bit like insisting that all the rooms in your house should be painted the same colour, from the kitchen to the bedrooms. Feng Shui practitioners preach the opposite idea; they say that a building should have different zones, with different "look and feel". They then have some complicated rules associating different colours with different compass points, which we don't need to go into here.

In some cases, there are strong reasons for making systems look and feel different. When a pilot is flying a plane, it's probably not a good idea for the flight controls to operate in the same way as the accounting system she uses on the ground for submitting her expenses. A system may even have a different look and feel according to context, thus a database operator may be presented with a different interface when working with the live database, forcing her to slow down and consider every action twice.

In other areas, differences between systems are annoying and may increase levels of error. On Apple computers, and inside some applications, Command-D means Duplicate. On Windows, Ctrl-D means Delete. I'm sure I'm not the only one who gets these mixed up, and I am grateful that the Windows command at least has a Confirm/Cancel step. In some other systems, the Delete function wipes the file immediately, which is tough if you're used to a Confirm/Cancel/Restore. Other stylistic differences may make navigation harder. For example, regular users of Microsoft Office may detect some minor style differences between the different applications. Just because you've used a particular function in Word doesn't mean you can find it in Excel or PowerPoint. Perhaps it's there, but hidden under a different name in a different submenu. Of course, Microsoft has cleaned up many of these style differences over the years, but perhaps inevitably some remain.

Style often originates with arbitrary choices or one person's aesthetic preferences. Someone once thought it would be a good idea to have X for cut and V for paste, but it could equally have been two other letters. But nowadays users are so accustomed to X and V for cut and paste, that it would be perverse for a designer to use any other letters.

Overarching stylistic rules and guidelines may be presented as local principles, valid for a particular suite of applications. They represent a set of often originally arbitrary choices and preferences, which have been adopted by or imposed onto a community of designers.

I see a subtle difference between style principles and governance principles, and one clue is the nature of the argument that emerges when people don't want to follow them. Style arguments tend to be presented in aesthetic terms - the old design is old-fashioned, younger users want something cool, etc, etc. Governance arguments are presented in terms of outcomes - this would not produce the outcomes we want, or has some other side-effect.


Finally, I get to what is possibly the most controversial one.

Enabling Prejudices. One of the key insights of the early work on Design Thinking (Bryan Lawson, Peter Rowe) was the importance of heuristics, or what Rowe (following Gadamer) calls enabling prejudices, which will hopefully get us to a good-enough solution more quickly. 

As Christopher Alexander notes:

"At the moment when a person is faced with an act of design, he does not have time to think about it from scratch." (The Timeless Way of Building, p 204)

We always approach a problem with a set of prejudices or prejudgements. Depending on the situation, these may either help us to solve the problem more quickly (enabling), or may lead us astray (disabling). The acid test of a set of heuristics or design principles is that they are mostly enabling most of the time.


Perhaps one important difference between education and training is that good education encourages the students to think more deeply, whereas good training teaches students to solve problems more quickly. (Or reliably or effectively, or some other adverb.) Thus training initiates and indoctrinates the students into a set of enabling prejudices, which will give them greater practical power.

Inexperienced practitioners may rely heavily on the principles they were taught in training, or have learned from books. As practitioners gain experience, their prejudices may become more refined, more personalized, and more deeply embedded. A lot of the principles articulated for enterprise architecture reflect this kind of enabling prejudice.

Obviously there is nothing wrong with an enabling prejudice when it is correct. A basketball coach might have a reasonable expectation that tall people are generally better at basketball than short people. But in an ideal world, a good basketball coach would be open-minded enough to notice a shorter player who is exceptionally good. It would be doubly stupid to impose a minimum height rule, firstly because you might exclude a really good player, and secondly because you will never get any feedback to tell you to revise the rule.

Now here's the twist. Many enterprise architecture frameworks suggest that practitioners should agree a common set of principles. But here's a contrary thought. Maybe a healthy enterprise or ecosystem encourages diversity of prejudice. (The shorter guy only needs one basketball coach to spot his talent and potential.) If we all have different approaches to problem-solving, then we are collectively more likely to find good solutions to difficult problems, and more likely to spot possible pitfalls. The value of diversity applies both when we are collaborating and when we are competing. Whereas a community with a single (attenuated) set of enabling prejudices would lack resilience, forming a dangerous form of intellectual monoculture.


Some might complain that allowing heuristic variation introduces fragmentation, inconsistency and incoherence into the enterprise or ecosystem, but I think that's incorrect. As I see it, maintaining integrity and consistency is the job of the governance and style principles. The heuristics (enabling prejudices) perform an entirely different job, and I think it is wrong to try standardizing and enforcing them in the same way as the other principles.




Christopher Alexander, The Timeless Way of Building (New York: Oxford University Press, 1979)

Dan Klyn, Skirmishing With Ill-Defined and Wicked Problems (TUG, 5 July 2013) - review of Rowe

Bryan Lawson, How Designers Think (1980, 4th edition 2005)

Peter Rowe, Design Thinking (MIT Press 1987)

Stanford Encyclopedia of Philosophy: Gadamer and the Positivity of Prejudice


Related posts: What's wrong with principles (Feb 2010), What's wrong with principles 2 (July 2010), The Power of Principles - Not (Jan 2011),  From Enabling Prejudices to Sedimented Principles (March 2013)

Updated 11 November 2018

Monday, August 06, 2012

Resistance to Architecture

A few weeks ago, I was at a meeting to discuss the UK National Health Service reforms. One of the speakers, a senior NHS administrator, used the word "architecture" in her presentation. Twice. Clearly referring to business/organizational architecture rather than physical architecture.

Encouraged by this, a couple of us approached her afterwards and asked her what kind of architectural work was going on, but we were treated with diplomatic hostility. (She didn't want to talk to us, and clearly wanted to escape as quickly as possible.) As far as we could make out, what she had meant by architecture was that she had some complicated organizational design in her head, but she definitely wasn't going to take the political risk of making this design clear to anyone else.

There may well be some proper business architecture work going on around the NHS reforms, but we have been looking for a while and haven't found much yet. (Which is a bit worrying, given the scale of the structural change that is underway.) If you know of anything I'd be delighted to hear from you.

There have perhaps always been individuals within large organizations who see some personal political advantage in maintaining obscure and complicated organizations that only they can understand and manipulate, and these individuals are probably always going to see good business architecture as a threat. But is there any reason why an organization as a whole should resist business architecture? Are there some organizations where that kind of small-p-political approach to management is so deeply ingrained in the culture that good business architecture is simply incompatible with it?

In his book, The Systems Approach and Its Enemies, C West Churchman identified four enemies of the systems approach: Politics, Ethics and Morality (the dominance of the "Big Idea"), Religion (irrational attachment to traditional forms), and Aesthetics (intuition, irrational optimism). Although as Churchman himself acknowledges, the notion of "enemy" is itself problematic from a systems perspective.

If politics may be one source of resistance to business architecture within the NHS, another source may be the "Big Idea", also known as "Principles". One of the reasons I am less enthusiastic about principles than many of my fellow architects is that I often see principles being used not as a starting point for hard work but as a substitute for it. In the public sector, we often see broad principles such as Choice or Competition being bandied about, with no serious attempt to work out how these so-called principles might work in practice. And if the structure and behaviour of the NHS is completely determined by these abstract principles, as some would have us believe, then there is obviously no point wasting time getting business architects involved. See my post On the misuse of general principles (Jan 2012).

A third reason for steering clear of business architecture might be because everyone believes that the fundamental structural problems are so deeply embedded in existing institutions that there is no point hiring business architects who will merely tell us what we already know. So we go on tinkering with the details, shifting responsibility for commissioning from one bunch of bureaucrats to another bunch of bureaucrats, but not making any real inroads into the institutional separations between primary and secondary care, or between healthcare and social care.

The final reason for thinking that business architecture is unnecessary is the Faustian pact with senior staff. The woman I heard speaking recently presented an attractive and optimistic vision of transformation, plausible to nearly everyone except a few politically motivated snipers. And of course business architects. No wonder they regard us as the enemy.

See also The Illusion of Architecture (September 2012)

Saturday, March 10, 2012

Structure Follows Strategy?

#entarch In his talk to the BCS Enterprise Architecture Group this week, Patrick Hoverstadt suggested that traditional enterprise architecture obeyed Alfred Chandler's principle: Structure Follows Strategy. In other words, first the leadership defines a strategy, and then enterprise architecture helps to create a structure (of sociotechnical systems) to support the strategy.

Chandler's principle was published in 1962, and is generally regarded nowadays as much too simplistic. In 1980, Hall and Saias published a paper asserting the converse principle Strategy Follows Structure! (pdf), and most modern writers now follow Henry Mintzberg in regarding the relationship between strategy and structure as reciprocal.

What are the implications of this for enterprise architecture? Patrick offered us a simple syllogism: if enterprise architects determine structure, and if structure determines strategy, then enterprise architects are (consciously or unconsciously) determining strategy. In particular, the strategies that are available to the enterprise are limited by the information systems that the enterprise uses (a) to understand what is going on both internally and externally, and (b) to anticipate future developments. In many situations, the information isn't readily accessible in the form that managers would need to mobilize a strategic response to the complexity of the demand ecosystem.

Of course it isn't as simple as this. The de facto structure (including its information systems) is hardly ever as directed by enterprise architecture, but is created by countless acts of improvisation by managers and workers just trying to get things done. (The late Claudio Ciborra wrote brilliantly about this.) People somehow get most of the information they need, not thanks to the formal information systems but despite them. Thus the emergent structures are a lot more powerful and rich than the official structures that enterprise architects and others are mandated to produce. Patrick cites the example of a wallpaper factory where productivity was markedly reduced after smoking was banned; a plausible explanation is that smoking had provided a pretext for informal communication between groups.

Meanwhile, Mintzberg drew our attention to a potential gulf between the official strategy and the de facto emergent strategy. (I have always especially liked his example of the Canadian Film Board, published in HBR, July-August 1987.)

Nevertheless, Patrick's mission (which I endorse) is to connect enterprise architects with the strategy processes in an enterprise. He is a strong advocate of Stafford Beer's Viable System Model (VSM), and he encourages enterprise architects to adopt it. VSM provides a distinctive lens for viewing the structure of an enterprise, and for recognizing some common structural errors, which he calls pathological; I encourage enterprise architects to read his book The Fractal Organization.


Related post: Co-Production of Strategy and Execution (December 2012)

Friday, January 27, 2012

On the misuse of general principles

#entarch There is a common fallacy among enterprise architects that radical structural and behavioural change can and should be driven by a few simple and powerful ideas. Alas, the public sector is strewn with the disastrous consequences of this fallacy.

We can find countless examples from the National Health Service (NHS) in the UK. For Steve Harrison, Honorary Professor of Social Policy at the University of Manchester, the idea that NHS reorganisations can be triggered by a few general ideas is one of the Seven Fallacies of English Health Policy. He points out that high levels of abstraction (beloved by academics and architects alike) do not allow proper assessment of the plausibility of claims about benefits of reorganisation and how the system will work. (HT @mellojonny)

Where do health reorganization principles come from? I asked a popular search engine, and was led to a paper called Basic Principles of Information Technology Organization in Health Care Institutions (JAMIA 1997). (I suppose from the high search ranking of this paper that it is a widely used source for such principles.) The paper concludes that all organizations MUST have certain characteristics, based on a single case study where these characteristics seemed to be beneficial; in other words, arguing from the particular to the general. (I'm sure there must be some more rigorous studies, but they don't seem to get as good search rankings for some reason.)

But many of the principles that govern sweeping architectural reforms of the public sector aren't even derived by thinly based generalization from such observed vignettes, but are derived from purely abstract concepts such as "choice" and "competition" and "justice", to which each may attach his or her own politically motivated interpretation.

This leads to several levels of failure - not only failure of execution and planning (because the generalized principles are not sufficiently refined to provide realistic and coherent solutions to complex practical problems) but also failure of intention (because a vague but upbeat set of principles helps to conceal the fact that the underlying vision remains woolly).

Sunday, July 03, 2011

Service Boundaries in SOA

A service has explicit boundaries

This is one of the four tenets of SOA, promulgated by Microsoft and others around 2004. In Microsoft's (now discontinued) Connected Services Framework (3.0 SP1), it is identified as one of the four principles of service-oriented design.

  • Boundaries are explicit
  • Services are autonomous
  • Services share schema and contract, not class
  • Service compatibility is determined based on policy
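
To make the third tenet concrete, here is a minimal sketch in TypeScript (all names - OrderRequest, OrderingService, the endpoint - are invented for illustration, not taken from any of the sources quoted below). The only artefact shared between client and service is the schema and contract; the client reaches the service through an explicit boundary crossing (a serialized message over the network), never by importing the service's implementation class.

    // Shared contract: both sides depend only on this schema, not on any class.
    interface OrderRequest { orderId: string; quantity: number; }
    interface OrderResult { accepted: boolean; reason?: string; }

    interface OrderingService {
      placeOrder(request: OrderRequest): Promise<OrderResult>;
    }

    // Client-side proxy: crossing the boundary is explicit (network call,
    // serialization), not an in-process method call on a shared class.
    class OrderingClient implements OrderingService {
      constructor(private readonly endpoint: string) {}

      async placeOrder(request: OrderRequest): Promise<OrderResult> {
        const response = await fetch(this.endpoint, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(request), // only schema-shaped data crosses
        });
        return (await response.json()) as OrderResult;
      }
    }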

I found a presentation on the MSDN website by Udi Dahan called SOA in the Real World (ppt), in which this tenet is expanded as follows.
  • Services run in a separate process from their clients
  • A boundary must be crossed to get from the client to the service – network, security, …

Also on the MSDN website, I found some design guidance from ramkoth called Service Boundaries, Business Services and Data, in which the same tenet is expanded as follows.

Services explicitly choose to expose certain (business) functions outside the boundary. Another tenet of SOA is that a service - within its boundary - owns, encapsulates and protects its private data.

In SOA, dollar signs and trust boundaries, Rocky Lhotka describes SOA as a mechanism for bridging trust boundaries. He argues that SOA is expensive (in terms of complexity and performance), and can only decrease the overall cost and complexity when used for inter-application communication across trust boundaries. Trust is not just a security issue, but also a future-proofing issue, because we can't always be sure what future applications will do. A trust boundary therefore helps protect us from an uncertain future, and builds some degree of flexibility into the system of systems.

Where did all the boundaries go?

Now here's the funny thing. All of the pieces I've quoted above were published in 2004, as were the vast majority of hits in the first several pages of Internet search. Apart from one Bill Poole, who published a few blogposts in early 2008 on Service Boundaries, the internet seems to have more or less dropped the concept of service boundary. So am I looking in the wrong place, or has the internet been infected by the misleading rhetoric of boundarylessness?

I'm also struck by the apparently rapid disappearance of something that had been promulgated as an SOA principle. If you think that principles are timeless truths, think again.

Update

In his latest post Service Boundaries Aren’t Process Boundaries, Udi Dahan replies with a correction of his 2004 presentation. He points out that his later posts (from 2007 onwards) no longer assert that services must run in a separate "system" or "process". (One reason for this is that the concepts of "system" and "process" belong in a different architectural view.) He still appears to believe in explicit service boundaries, but he now thinks it more appropriate to base these on business capabilities. Meanwhile, in a new post on Service Boundaries, Lawrence Wilkes outlines the CBDI position, explains that service boundaries are orthogonal to various other boundaries, including resource, technology and organizational boundaries, and shows how services can be used to cross these boundaries.

Monday, December 13, 2010

Can Single Source of Truth work?

@tonyrcollins asks if any healthcare IT system can provide a Single Source of Truth (SSOT)? In his blog (13 December 2010), he discusses a press release claiming that an electronic healthcare record system from Cerner Millennium Solutions is a "single source of truth", citing the Children’s Cancer Hospital Egypt 57357 (CCHE) as a success story (via Egyptian Chamber).

My first observation is that even if we take this success story at face value, it doesn't tell us much about the possibilities of SSOT in an environment such as the UK NHS that is several orders of magnitude more complicated/complex. I'm guessing the Children’s Cancer Hospital Egypt 57357 (CCHE) doesn't have as many different types of "truth" to manage as the NHS.
  • one type of patient (children)
  • one type of condition (cancer)
  • a single building
My second observation is that if a closed organization has a single source of truth, it will never discover flaws in any of these truths. If a child is given the wrong medication, for whatever reason, we can only detect the error and prevent its recurrence by finding a second source of truth. The reason SSOT has not been successfully implemented in the UK is not just because it wouldn't work (after all, lots of things are implemented that don't work) but because there are too many people who know it wouldn't work and are sufficiently powerful to resist it.

My third observation is that single-source of truth may be a bureaucratic fantasy, but responsible doctors will always strive to get best-truth rather than sole-truth. People in bureaucratic organizations don't always stick to the formal channels, and often have alternative ways of finding out what they need to know. So perhaps the Egyptian doctors at CCHE have managed to preserve alternative sources of information, and the "single source of truth" is merely a bureaucratic illusion.

See my previous post What's Wrong with the Single Source of Truth?

Tuesday, July 20, 2010

What's Wrong With Principles 2

@gparasEA suggests that EA Principles Have Little Value. I agree with his title, but his actual text appears to argue that EA Principles have significant potential value and should be taken more seriously.

My view is that EA principles are hugely overrated. In my post What's Wrong With Principles, I observed that principles are usually ill-conceived and generally fail to provide a sound basis for collective decision-making and governance. But instead of trying to fix this by producing better sets of principles and enforcing them better, I think we should simply abandon the fantasy that difficult EA judgements can be driven by any simplistic set of top-down abstract principles.


If we are to regard principles as more important than casual suggestions and slogans, then there are some critical requirements they must satisfy. For one thing, credible principles should be based on concrete evidence that they actually work (true and/or useful), rather than the kind of vague wishful thinking and motherhood statements that we find in countless EA documents and presentations. I have seen very little serious attempt to satisfy these requirements, and the belief that things would be okay if only we had better principles sounds to me like wishful thinking.

In a Linked-In discussion (Considerate EA), Ron Segal agreed that principles to underpin strategy cannot just be wishful thinking, and provided a couple of concrete examples. He suggested that the principle of 'achieving long term loyalty' entails excluding the kind of campaigns and services that attract short term customer loyalty. So the application of the principle calls for some empirical knowledge about the kinds of thing that are conducive to long-term loyalty, and some understanding of the relationship between long-term loyalty and short-term loyalty. My guess is that this relationship is not straightforward - some kinds of short-term loyalty may evolve into longer-term loyalty - in which case we can only really understand this relationship properly by observing and analyzing the behaviour of customers over an extended period. What I object to is the notion that this kind of principle can be applied as an exercise in pure reasoning, without reference to empirical knowledge.

As for Ron's principle of 'reducing the effort of compliance' - this presumably has to be interpreted as 'other things being equal'. Thus you probably wouldn't want to reduce the effort of compliance if this had the effect of doubling the downstream costs of legal action and compensation. Again, such a principle can only be meaningfully applied in the context of an empirically verifiable (or refutable) model/theory about the interdependencies between different classes of cost/effort.

But here's the thing. If our architectural reasoning is based on principles, and the principles are derived from practical and empirical knowledge, then architectural reasoning must ultimately be justified in terms of this detailed knowledge rather than abstract principles. A simplified and abstract set of principles may provide a useful summary and reminder, but we should not make a fetish of these principles. And if the principles are not supported by practical and empirical knowledge, why should anyone take them seriously?


See reply by Joe McKendrick, Viewpoint: 'enterprise architecture principles hugely overrated' (ZDNet, July 2010)

Related post The Power of Principles (Not) (January 2011). Includes extended discussion with Nick Gall in the comments.


Friday, March 19, 2010

What's Wrong with the Single Version of Truth

As @tonyrcollins reports, a confidential report currently in preparation on the NHS Summary Care Records (SCR) database will reveal serious flaws in this massively expensive system (Computer Weekly, March 2010). Well knock me down with a superbug, whoever would have guessed this might happen?

"The final report may conclude that the success of SCRs will depend on whether the NHS, Connecting for Health and the Department of Health can bridge the deep cultural and institutional divides that have so far characterised the NPfIT. It may also ask whether the government founded the SCR on an unrealistic assumption: that the centralised database could ever be a single source of truth."

There are several reasons to be ambivalent about the twin principles Single Version of Truth (SVOT) and Single Source of Truth (SSOT), and this kind of massive failure must worry even the most fervent advocates of these principles.

Don't get me wrong, I have served my time in countless projects trying to reduce the proliferation and fragmentation of data and information in large organizations, and I am well aware of the technical costs and business risks associated with data duplication. However, I have some serious concerns about the dogmatic way these principles are often interpreted and implemented, especially when this dogmatism results (as seems to be the case here) in a costly and embarrassing failure.

The first problem is that Single-Truth only works if you have absolute confidence in the quality of the data. In the SCR example, there is evidence that doctors simply don't trust the new system - and with good reason. There are errors and omissions in the summary records, and doctors prefer to double-check details of medications and allergies, rather than take the risk of relying on a single source.

The technical answer to this data quality problem is to implement rigorous data validation and cleansing routines, to make sure that the records are complete and accurate. But this would create more work for the GP practices uploading the data. Officials at the Department of Health fear that setting the standards of data quality too high would kill the scheme altogether. (And even the most rigorous quality standards would only reduce the number of errors; they could never eliminate them altogether.)

There is a fundamental conflict of interest here between the providers of data and the consumers - even though these may be the same people - and between quality and quantity. If you measure the success of the scheme in terms of the number of records uploaded, then you are obviously going to get quantity at the expense of quality.

So the pusillanimous way out is to build a database with imperfect data, and defer the quality problem until later. That's what people have always done, and will continue to do, and the poor quality data will never ever get fixed.

The second problem is that even if perfectly complete and accurate data are possible, the validation and data cleansing step generally introduces some latency into the process, especially if you are operating a post-before-processing system (particularly relevant to environments such as military and healthcare where, for some strange reason, matters of life-and-death seem to take precedence over getting the paperwork right). So there is a design trade-off between two dimensions of quality - timeliness and accuracy. See my post on Joined-Up Healthcare.

The third problem is complexity. Data cleansing generally works by comparing each record with a fixed schema, which defines the expected structure and rules (metadata) to which each record must conform, so that any information that doesn't fit into this fixed schema will be barred or adjusted. Thus the richness of information will be attenuated, and useful and meaningful information may be filtered out. (See Jon Udell's piece on Object Data and the Procrustean Bed from March 2000. See also my presentation on SOA for Data Management.)
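
To illustrate the Procrustean point, here is a minimal sketch in TypeScript, with invented field names; there is no suggestion that SCR actually works this way. Cleansing against a fixed schema keeps only the fields the schema knows about, so any richer information is silently attenuated.

    // Hypothetical record shape: the fixed schema that cleansing enforces.
    interface PatientSummary {
      id: string;
      medications: string[];
      allergies: string[];
    }

    function cleanse(raw: Record<string, unknown>): PatientSummary {
      const meds = raw["medications"];
      const allergies = raw["allergies"];
      // Anything that doesn't fit the fixed schema is barred or adjusted.
      return {
        id: String(raw["id"] ?? ""),
        medications: Array.isArray(meds) ? meds.map(String) : [],
        allergies: Array.isArray(allergies) ? allergies.map(String) : [],
      };
    }

    // A clinician's free-text caveat doesn't fit the schema, so it vanishes.
    const cleansed = cleanse({
      id: "12345",
      medications: ["methotrexate"],
      allergies: [],
      clinicianNote: "reacted badly to contrast dye in 2008", // filtered out
    });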

The final problem is that a single source of information represents a single source of failure. If something is really important, it is better to have two independent sources of information or intelligence, as I pointed out in my piece on Information Algebra. This follows Bateson's slogan that "two descriptions are better than one". Doctors using the SCR database appear to understand this aspect of real-world information better than the database designers.
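
As a sketch of what "two descriptions are better than one" might mean in practice (again TypeScript with invented shapes, not a description of any real system), reconciling two independent sources lets us raise a signal that a single source could never produce.

    interface MedicationList {
      source: string; // e.g. the summary record vs the GP system
      medications: string[];
    }

    // Two independent descriptions of the "same" facts: agreement increases
    // confidence, and disagreement is itself valuable information.
    function reconcile(a: MedicationList, b: MedicationList): string[] {
      const agreed = a.medications.filter((m) => b.medications.includes(m));
      const disputed = [...new Set([...a.medications, ...b.medications])]
        .filter((m) => !agreed.includes(m));
      if (disputed.length > 0) {
        // A single source of truth would never raise this signal.
        console.warn(`sources disagree on: ${disputed.join(", ")}`);
      }
      return agreed;
    }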

It may be a very good idea to build an information service that provides improved access to patient information, for those who need this information. But if this information service is designed and implemented according to some simplistic dogma, then it isn't going to work properly.


Update. The Health Secretary has announced that NHS regulation will be based on a single version of the truth.

"in the future the chief inspector will ensure that there is a single version of the truth about how their hospitals are performing, not just on finance and targets, but on a single assessment that fully reflects what matters to patients"

Roger Taylor, Jeremy Hunt's dangerous belief in a single 'truth' about hospitals (Guardian 26 March 2013)



Updated 28 March 2013

Tuesday, February 02, 2010

What's Wrong with Principles?

Lots of people in the SOA / EA world think that principles are very important. So in this blogpost, I'm going to take a contrarian view.

(Before I start, let me cheerfully admit that I have probably done some of the things that I'm complaining about here, and if you catch me doing any of them in the future, please feel free to give me a sharp dig in the ribs.)


There are four reasons why I think principles (especially lists of principles) are overrated.
  • Purpose / Value - uncertainty about what the principles are supposed to achieve
  • Form - uncertainty about what a principle should look like
  • Source / Material - uncertainty about what a principle should be based upon
  • Process - uncertainty about how a principle should be used.

Form

Let me start with the way that principles are expressed. Lists of principles often include a mixture of assertions, commands, preferences and goals.

For example, here are a few example principles from TOGAF 9 (section 23.6).
  • Requirements-Based Change: Only in response to business needs are changes to applications and technology made. (command)
  • Common Use Applications: Development of applications used across the enterprise is preferred over the development of similar or duplicative applications which are only provided to a particular organization. (preference)
  • IT Responsibility: The IT organization is responsible ... Each data element has a trustee ... (organization structure)
  • Data is an Asset: Data is an asset ... that has value to the enterprise ... (assertion)
  • Data is Shared: Data is shared ... and accessible and defined consistently ... (goals)
  • Maximize Benefit to the Enterprise: Information management decisions are made to provide maximum benefit to the enterprise as a whole. (wishful thinking)

And here are a few from PeaF
  • Strategic planning is at the heart ... Relationships are the key to understanding ... (assertion)
  • Adopt a service oriented approach. Adopt architecture best practice. (command)
  • Reusable applications will be favored. ... Reuse will be considered first. (preference)

This kind of confusion is not unique to SOA and EA, but appears in other domains as well. For example, Dave "Cynefin" Snowden's seven principles of knowledge management are generalized observations, which are true except when they aren't. He then adds four organizing principles, which are more like guidelines.


Okay, so there is some variation in the way that principles are expressed. Why should this be a problem? Because it reflects some confusion as to whether a principle is supposed to be true or useful. If we are going to accept a set of principles uncritically then maybe that doesn't matter, but if we are going to evaluate principles and apply them intelligently to a particular situation then the difference between truth and utility is rather important.

At least the TOGAF and PeaF principles have been worked on to achieve some degree of objectivity and verifiability. In other discussions on the Internet, I've found slogans like "Think Strategically, Act Tactically" being described as principles (or perhaps "guiding principles"). These might be useful reminders to the self, but not much use for governance.


Source / Material


Where do principles come from? What is their authority? If everything is supposed to be guided by a core set of principles, then we need to be confident that the principles are right.

I am particularly suspicious of principles that are supposed to be obvious or self-justifying, or are based on majority opinion. If this is something that everyone believes, it is probably either false or not worth stating at all. And anything produced by a committee of experts usually has all the interesting content leached out in order to achieve a bland generalized compromise they can all agree to.


And I am very irritated by websites such as OWASP that publish lists of "proven principles" without the slightest indication of the proof underlying any of these principles. (As far as I can see, any member of OWASP can add anything at all to the Principles page on the OWASP wiki, so the only verification consists in the fact that no other member has bothered to challenge any of them.)

TOGAF 9 (section 23.4) identifies five criteria that distinguish a good set of principles: understandable, robust, complete, consistent and stable. But in my view, the most important criterion is missing here. Principles are like policies: they should be based on robust evidence, they should be monitored for ongoing effectiveness, and they should be subject to revision if the evidence shows they aren't working. This is one of the key differences between a science and a pseudo-science - see my post Is Enterprise Architecture a science?



Process


How are principles supposed to be used? Here are some snippets.

  • "Guiding principles drive the IT architecture and the service model, which in turn dictate how the enterprise IT infrastructure services may be defined." (Tilak Mitra, IBM)
  • "The principles of service-orientation can be applied to services on an individual basis, allowing a reasonable degree of service-orientation to be achieved regardless of the approach." (soaprinciples.com)
  • "Each enterprise should have a set of guiding principles to inform decision-making." (Open Group SOA Sourcebook)
  • "The purpose of these principles is not to constrain, but to provide a broad cultural framework in which work will be carried out." (PeaF)
  • "As projects begin to define solutions to problems they are assessed as complying with these principles or not. For a project, if all principles are complied with - no problem - no Enterprise Debt is created." (Kevin Smith, PeaF, via Linked-In). 

So there is some variation here - do the principles work as hard policies, dictating and controlling a set of relevant processes and practices? Or do they merely work as soft guidelines, informing decisions and providing a basis for estimating some relevant metrics such as "enterprise debt"?

And if the principles are not so bland as to be meaningless, we should expect occasional tension between different principles. So how do we resolve conflicts between different principles?

The answers to these questions depend on our earlier discussion. If the principles can be traced to robust evidence, then they have visible authority and weight, which gives them real force in a given situation. But if they are only bland generalizations, then they will be ignored as soon as they become inconvenient - in which case, you might as well not bother having them at all.


Purpose / Value


So in the end, what are these kinds of principle actually worth?

Principles may be helpful in creating a common narrative about what we are trying to achieve - but if that's all they are, then we don't have to take them too seriously. Some people seem to regard principles as rules that others should follow religiously, but from which they themselves are free to deviate whenever they find it necessary.

To the extent that it is worth having a consistent approach to something, a set of evidence-based policies should help achieve some degree of consistency. But most lists of principles fall well short of this ideal. Instead, a so-called principle may be any of the following ...
  1. something that is always true
  2. something that is usually true (except when it isn't)
  3. something that is always good (or beautiful)
  4. something that is usually good (except when it isn't)
  5. a random thought that someone thinks is important
  6. something that good people believe or follow and bad people don't - a way of separating Them from Us
  7. something that ignorant people should believe and follow, and clever people don't have to - ditto 
  8. something that everyone should always do, except when it's inconvenient

So that's my argument against principles - they may have some limited use, but they are not strong enough to bear the weight that is commonly put on them. I know many of my friends will disagree with me, and I look forward to some robust discussion.

See also What's Wrong With Principles 2 and The Power of Principles (Not)

Monday, May 25, 2009

What's Wrong with Layered Service Models?

@JohanDenHaan pointed me at a couple of articles by Bill Poole

Bill takes a particular layered service model, a plausible combination of layers such as you might find in any number of popular SOA books, and shows how he can use this model to produce an inefficient and inflexible design. 

Of course, all this shows is that there are problems with a particular layered service model, as interpreted by Bill. (The authors of the books from which Bill has taken this model might claim that Bill had misunderstood their thinking, but whose fault is that?) It doesn't show that all layered service models are bad. 

As Bill points out, one reason commonly cited in support of the layered service model approach is the hope that services will be highly reusable. But there is a much more important reason for layering - based on the expectation that each layer has a different characteristic rate of change. (This principle is known as pace layering.) 
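To make the pace-layering idea concrete, here is a small sketch (my own illustration, not taken from Bill's articles) that checks one commonly proposed rule: faster-changing layers may depend on slower-changing ones, but not the reverse.

```python
# Hypothetical layer model: the change rates (releases per year) are invented.
CHANGE_RATE = {
    "core_data": 1,
    "business_services": 4,
    "process_orchestration": 12,
    "user_interface": 50,
}

DEPENDS_ON = {
    "user_interface": ["process_orchestration"],
    "process_orchestration": ["business_services"],
    "business_services": ["core_data"],
    "core_data": [],
}

def pace_violations(rates, deps):
    """Return (layer, dependency) pairs where a layer depends on something faster-changing."""
    return [(layer, dep)
            for layer, targets in deps.items()
            for dep in targets
            if rates[dep] > rates[layer]]

print(pace_violations(CHANGE_RATE, DEPENDS_ON))  # [] - this layering respects the rule
```

The point of such a check is not the code but the discipline: layering earns its keep only when the layers really do change at different rates.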

When done properly, layering should make things more flexible. When done badly (as in Bill's example) then layering can have the opposite effect. 

One of the problems is that the people inventing these layered service models often confuse classification with layering: we can identify lots of different types of service, therefore (so the reasoning goes) each type must go into a separate layer. A deeper problem with these models is that their creation is based purely on clever thinking rather than systematic and empirical comparison of alternatives. Too much architectural opinion is based on "here's a structural pattern that seems to make sense" rather than "here's a structural pattern that has been demonstrated to produce these effects".

See my post on Layering Principles. You can't just take an SOA principle ("loose coupling is good", "layering is good"), apply it indiscriminately and expect SOA magic to occur.

Tuesday, April 11, 2006

Loose Coupling 2

Is loose coupling a defining characteristic of the service-based business and/or a core principle of SOA? ZDNet's analysts have been producing a set of IT commandments (13 at the last count), and Joe McKendrick's latest contribution is Thou Shalt Loosely Couple.

Joe quotes John Hagel's definition of loose coupling, which refers to reduced interdependencies between modules or components, and consequently reduced interoperability risk. Hagel clearly intends this definition to apply to dependencies between business units, not just technical artefacts. I think this is fine as far as it goes, but it is not precise enough for my taste.

In his post The Developer's View of SOA: Just adding complexity?, Ron Ten-Hove (Sun Microsystems) defines loose coupling in terms of knowledge - "the minimization of the "knowledge" a service consumer has of a service provider it is using, and vice versa".
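Ten-Hove's knowledge-based definition is easy to illustrate in code. In the sketch below (my own example, not his), the tightly coupled consumer knows the provider's concrete class, how to construct it, and its configuration; the loosely coupled consumer knows only a minimal contract.

```python
from typing import Protocol

class SmtpMailer:
    """A concrete provider (hypothetical example)."""
    def __init__(self, host: str, port: int):
        self.host, self.port = host, port
    def send(self, to: str, body: str) -> None:
        print(f"SMTP {self.host}:{self.port} -> {to}: {body}")

def notify_tight(to: str, body: str) -> None:
    # Tight coupling: provider class, construction and configuration all known here.
    mailer = SmtpMailer("mail.example.com", 587)
    mailer.send(to, body)

class Notifier(Protocol):
    # Loose coupling: this is the consumer's entire "knowledge" of the provider.
    def send(self, to: str, body: str) -> None: ...

def notify_loose(notifier: Notifier, to: str, body: str) -> None:
    notifier.send(to, body)  # no knowledge of transport, construction or configuration

notify_loose(SmtpMailer("mail.example.com", 587), "ops@example.com", "hello")
```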

(It is surely possible to link these defining notions - interoperability and interdependency, risk and knowledge - at a deep level, but I'm not going to attempt it right now.)

I want to have a notion of loose coupling that applies to sociotechnical systems of systems - and therefore needs to cover organizational interdependencies and interoperability as well as technical. I have previously proposed a definition of Loose Coupling based on Karl Weick's classic paper on loosely coupled organizations.

The trouble with proclaiming the wonders of loose coupling is that it sounds as if tight coupling was just a consequence of stupid design and/or stupid technology. It fails to acknowledge that there are sometimes legitimate reasons for tight coupling.

Ron Ten-Hove puts forward a more sophisticated argument for loose coupling. He acknowledges the advantages of what he calls a mixed service model, namely that "it allows for creation of a component model that combines close-coupling and loose-coupling in a uniform fashion". But he also talks about the disadvantages of this model, in terms of reduced SOA benefits and increased developer complexity, at least with the current technology.

Loose coupling is great, but it is not a free lunch. It is not simply a bottom-up consequence of the right design on the right platform. Sometimes loose coupling requires a top-down forcing-apart. I think the correct word for this top-down forcing-apart is deconfliction, although when I use this word it causes some of my colleagues to shudder in mock horror.

Deconfliction is a word used in military circles, to refer to the active principle of making one unit independent of another, and this will often include the provision of redundant supplies and resources, or a tolerance of reduced utilization of some central resources. Deconfliction is a top-down design choice.

Deconfliction is an explicit acceptance of the costs of loose coupling, as well as the benefits. Sometimes the deconflicted solution is not the most efficient in terms of economies of scale, but it is the most effective in terms of flexibility and interoperability. This is the kind of trade-off that military planners are constantly addressing.


Sometimes coupling is itself a consequence of scale. At low volumes, a system may be able to operate effectively in asynchronous mode. At high volumes, the same system may have to switch to a more synchronous mode. If an airport gets two incoming flights per hour, then the utilization of the runway is extremely low and planes hardly ever need to wait. But if the airport gets two incoming flights per minute, then the runway becomes a scarce resource demanding tight scheduling, and planes are regularly forced to wait for a take-off or landing slot. Systems can become more complex simply as a consequence of a change in scale.
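The runway example can be made quantitative with elementary queueing theory. The sketch below is my own illustration: it assumes Poisson arrivals, a single runway, and a 25-second average runway occupancy per landing, and applies the standard M/M/1 formula for mean queueing delay, Wq = rho / (mu - lambda).

```python
SERVICE_RATE = 60.0 / 25.0  # mu: landings the runway can handle per minute (assumed 25s each)

def mean_wait_minutes(arrivals_per_minute: float, mu: float = SERVICE_RATE) -> float:
    """Mean time a plane queues before landing, under M/M/1 assumptions."""
    rho = arrivals_per_minute / mu  # utilization
    if rho >= 1.0:
        return float("inf")         # demand exceeds capacity: the queue grows without bound
    return rho / (mu - arrivals_per_minute)

print(mean_wait_minutes(2 / 60))  # two flights per hour: ~0.006 minutes - no waiting to speak of
print(mean_wait_minutes(2.0))     # two flights per minute: ~2.1 minutes of queueing per plane
```

The nonlinearity is the point: a sixty-fold increase in traffic produces roughly a three-hundred-fold increase in waiting time.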

(See my earlier comments on the relationship between scale and enterprise architecture: Lightweight Enterprise.)

In technical systems, loose coupling carries an overhead - not just an operational overhead, but a design and governance overhead. Small-grained services may give you greater decoupling, but only if you have the management capability to coordinate them effectively. In sociotechnical systems, fragmentation may impair the effectiveness of the whole, unless there is appropriate collaboration.

In summary, I don't see loose coupling as a principle of SOA. I prefer to think of it as a design choice. I think it's great that SOA technology gives us better choices, but I want these choices to be taken intelligently rather than according to some fixed rules. SOA entails just-enough loose coupling with just-enough coordination. What is important is getting the balance right.

Wednesday, March 02, 2005

Layering Principles

Martin Fowler reports on a workshop in which people (a good group of people, [but] it was hardly a definitive source of enterprise development expertise) vote on principles for software layering.

Some people take this exercise seriously. JohnLim describes the result of the vote as excellent guidelines, while Paul Gielens adds his own vote. Others are more critical, including David Anderson and Rob Diana.

To my mind, even if you could assemble the most experienced people in the software world, I distrust the idea that a vote would produce a coherent and meaningful set of architectural principles.

Architectural principles must come from reflective practice and/or grounded theory. For example, I can derive layering from a differential theory of change, as follows.


Purpose - What is layering supposed to achieve?

A well-layered artefact or system is more adaptable, because some types of variation can be accommodated within one layer without significantly affecting adjacent layers.


Form - What is the underlying structure of layering?

Boundaries between layers represent step changes with respect to some form of variation, from some perspective:
  • Differential change over time
  • A split between relatively homogeneous and relatively heterogeneous elements

Process - How do layers get established?

Layers emerge from an evolutionary process, in which a series of small alterations affects the architectural properties of a system (often unplanned and unremarked by the so-called architects).

  • Redundant layers (where there is insufficient difference in variation between two adjacent layers) tend to gradually fuse together. 
  • Flexibility that is not used or exercised will attenuate. 
  • Engineers under time pressure will take shortcuts that compromise the official separation between layers. 
  • Where there is excessive differentiation within a single layer, the layer will tend to split apart, initially in an incoherent way.

Material - What is the source of a particular layering?

Layering comes from the experience of variation.
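As a toy illustration of this differential theory (my own sketch, with invented layer names and change counts), we can test whether each layer boundary marks a genuine step change in the rate of variation; where it does not, the theory predicts that the adjacent layers will tend to fuse.

```python
# Layers ordered fast-changing to slow-changing; change counts per year are invented.
layers = [
    ("presentation", 48),
    ("workflow", 40),    # barely slower than presentation
    ("domain", 6),
    ("storage", 5),      # barely slower than domain
]

STEP_CHANGE = 2.0  # assumed minimum ratio of change rates needed to justify a boundary

def fusion_candidates(layers, threshold=STEP_CHANGE):
    """Return adjacent layer pairs whose change rates are too similar to justify a boundary."""
    return [(a, b)
            for (a, ra), (b, rb) in zip(layers, layers[1:])
            if ra / rb < threshold]

print(fusion_candidates(layers))
# [('presentation', 'workflow'), ('domain', 'storage')] - redundant boundaries, likely to fuse
```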



Related posts: What's wrong with layered service models? (May 2009), Data Strategy - More on Agility (March 2020)

Saturday, February 26, 2005

SOA Principles

Several pundits have attempted to define the principles of Service-Oriented Architecture (SOA). Here is a summary of the SOA principles I have been propounding.

Service Ecosystem
Business viability depends on delivering services into a broader service ecosystem. Thus the service economy drives the business (which drives further services, which ultimately drive technology).

Service Economy
Each service represents a unit of value. Services are regarded as intrinsically tradeable, and have both an exchange value and a use value. (Of course we may often choose not to exercise the option to trade services, for various reasons.)

Service Integrity
Each service represents a meaningful 'whole' from the user-side as well as from the supply-side. Service coherence, reliability and 'wholeness' promote broad and robust use/reuse.

Loose Coupling / Rich Coupling
Open (typically asynchronous) connections between organizations, components and services. Interoperability between human activity and software services.

Differentiated Service
Functionality, quality or cost vary with circumstances, including identity and context. This helps to generate requisite variety in the service ecosystem.

Multiple Provision
Availability of alternative services or service implementations, biodiversity. This produces agile and robust systems, and also helps to generate requisite variety in the service ecosystem.

Complexity / Stratification
Complex networks of services must be understood as systems of systems. Such systems are generally organized in layers: one layer acts as a platform of services for the layer above.

Distributed Intelligence
Not only is functionality distributed across a network of services, but the intelligence governing this functionality is also distributed. Systems with distributed intelligence may be amenable to much more radical change than centralized ones.

Model-Based Management
Using business models to drive all aspects of system/service management
  • seamlessly through development, testing/simulation and operations
  • to provide a common understanding and visibility of systems and services
  • to monitor and control all aspects of system design and operational performance.

Component-Based Business
Loosely coupled networks of independent business components.

Emergent Order
The service economy evolves into a continuous network of value-adding services, through a series of structure-preserving transformations.


I shall try to expand and illustrate these in future posts.