Showing posts with label governance. Show all posts

Thursday, July 07, 2022

Data Governance Overview

In many organizations there is a growing awareness of the need for data governance. This is often driven by a perception that something is lacking, perhaps in data quality or accountability, which leads to a solution based on data ownership and the monitoring and control of data quality. But is that all there is to data governance?

Data management operates at many levels, and there may be governance issues at each of these levels.

 

 

In some organizations, we may be constrained to work bottom-up, starting from level 1 and 2, because this is where the perceived issues are focused, and where we can engage people to establish the correct activities and control mechanisms. Some level 3 issues may be perceived, but it may initially be difficult to find people willing to commit any effort to resolving them. Level 4 and 5 issues are "above the ceiling", at least within these forums. 

There may be some clashes between the five levels, especially the higher ones. For example, at Level 5 the organization may assert a vision of a data-driven strategy, delivering strategic advantage to the business, while at Level 4 the data-related policies of an organization might be predominantly defensive, for example concerned with privacy, security and other compliance issues. If these two levels are not properly aligned, this is a matter for data governance, but one which bottom-up data governance is unlikely to reach. 

So what would top-down data governance look like? 

Purpose (final cause) - for example, to drive the reach, richness, agility and assurance of enterprise data - see my posts on DataStrategy

Approach (efficient cause) - to establish policies and processes that align the enterprise to these purposes 

Structure (formal cause) - an understanding of the contradictions and conflicts that might enable or inhibit change - what kinds of collaboration and coordination might be possible - and how might these structures themselves be altered 

Problem space (material cause) - finding issues that people are willing to invest time and energy to address 

Bottom-up data governance is driven by the everyday demands of case workers and decision-makers within the organization, for whom the data fabric is not fit for purpose even for fairly mundane tasks. Top-down data governance would be driven by senior management trying to push a transformation of data management and culture - always provided that they can give this topic enough attention alongside all the other transformations they are trying to push. (I have some experiences from different organizations, which I plan to mash together into a generic example. Watch this space.)

But what if that's not where the true demand lies? 

Philip Boxer and I have been looking at alternatives to these top-down/bottom-up hierarchical forms, which we call edge-driven governance. If the traditional top-down / bottom-up can be thought of as North-South, then this is East-West. We recently got together at the Requisite Agility conference to discuss how far our thinking had developed (separately but largely in parallel) since we wrote our original papers for the Microsoft Architecture Journal. During this time, Phil spent a few years at the Software Engineering Institute (SEI) while I was mostly working at the architectural coal-face. There's a lot of really interesting work that has emerged recently, including further work by David Alberts, and a book on Bifurcation edited by Bernard Stiegler.

So how can we make practical interventions into these data governance issues from an East-West perspective? There are some critical concepts that are in play in this world, including agility, alignment, centricity, flexibility, simplicity, variety. Everyone pays lip service to these concepts in their own way, but they turn out to be highly unstable, capable of being interpreted in quite different ways from different stakeholder positions. 

So to pin these concepts down requires what John Law and Annemarie Mol call Ontological Politics. This includes asking the Who-For-Whom question - agility for whom, flexibility for whom, etc. Philip has developed an abstract framework he calls Ontic Scaffolding, to support practical efforts to move “Across and Up”.


Note - videos from the Requisite Agility conference are currently in post-production and should be available soon. I'll post a link here when they are.


Philip Boxer, East-West Dominance (Asymmetric Leadership, April 2006)

Philip Boxer, Pathways across the 3rd epoch domain (Slideshare, November 2019)

Philip Boxer and Richard Veryard, Taking Governance to the Edge (Microsoft Architecture Journal 6, August 2006) 

Annemarie Mol, Ontological Politics (Sociological Review 1999)

Bernard Stiegler and the Internation Collective (eds), Bifurcate: There Is No Alternative (trans Daniel Ross, Open Humanities Press, 2021) 

Richard Veryard and Philip Boxer, Metropolis and SOA Governance: Towards the Agile Metropolis (Microsoft Architecture Journal 5, July 2005) 

 

Related topics on Philip's blog: Agility 

Related topics on Richard's blog: EdgeStrategy, Top-Down, RequisiteVariety

NEW eBook How to do things with data (Leanpub 2022)

Sunday, April 21, 2019

How Many Ethical Principles?

Although ethical principles have been put forward by philosophers through the ages, the first person to articulate ethical principles for information technology was Norbert Wiener. In his book The Human Use of Human Beings, first published in 1950, Wiener based his computer ethics on what he called four great principles of justice.
  • Freedom. Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”
  • Equality. Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.”
  • Benevolence. Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”
  • Minimum Infringement of Freedom. “What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom.”

Meanwhile, Isaac Asimov's Three Laws of Robotics were developed in a series of short stories in the 1940s, so this was around the same time that Wiener was developing his ideas about cybernetics. Many writers on technology ethics argue that robots (or any other form of technology) should be governed by principles, and this idea is often credited to Asimov. But as far as I can recall, in every Asimov story that mentions the Three Laws of Robotics, some counter-example is produced to demonstrate that the Three Laws don't actually work as intended. I have therefore always regarded Asimov's work as being satirical rather than prescriptive. (Similarly J.K. Rowling's descriptions of the unsuccessful attempts by wizard civil servants to regulate the use of magical technologies.)

So for several decades, the Wiener approach to ethics prevailed, and discussion of computer ethics was focused on a common set of human values: life, health, security, happiness, freedom, knowledge, resources, power and opportunity. (Source: SEP: Computer and Information Ethics)

But these principles were essentially no different to the principles one would find in any other ethical domain. For many years, scholars disagreed as to whether computer technology introduced an entirely new set of ethical issues, and therefore called for a new set of principles. The turning point was at the ETHICOMP1995 conference in March 1995 (just two months before Bill Gates' Internet Tidal Wave memo), with important presentations from Walter Maner (who had been arguing this point for years) and Krystyna Górniak-Kocikowska. From this point onwards, computer ethics would have to address some additional challenges, including the global reach of the technology - beyond the control of any single national regulator - and the vast proliferation of actors and stakeholders. Terrell Bynum calls this the Górniak hypothesis.

Picking up the challenge, Luciano Floridi started to look at the ethical issues raised by autonomous and interactive agents in cyberspace. In a 2001 paper on Artificial Evil with Jeff Sanders, he stated "It is clear that something similar to Asimov's Laws of Robotics will need to be enforced for the digital environment (the infosphere) to be kept safe."

Floridi's work on Information Ethics (IE) represented an attempt to get away from the prevailing anthropocentric worldview. "IE suggests that there is something even more elemental than life, namely being – that is, the existence and flourishing of all entities and their global environment – and something more fundamental than suffering, namely entropy." He therefore articulated a set of principles concerning ontological equality (any instance of information/being enjoys a minimal, initial, overridable, equal right to exist and develop in a way which is appropriate to its nature) and information entropy (which ought not to be caused in the infosphere, ought to be prevented, ought to be removed). (Floridi 2006)

In the past couple of years, there has been a flood of ethical principles to choose from. In his latest blogpost, @Alan_Winfield lists over twenty sets of principles for robotics and AI published between January 2017 and April 2019, while AlgorithmWatch lists over fifty. Of particular interest may be the principles published by some of the technology giants, as well as the absence of such principles from some of the others. Meanwhile, Professor Floridi's more recent work on ethical principles appears to be more conventionally anthropocentric.

The impression one gets from all these overlapping sets of principles is of lots of experts and industry bodies competing to express much the same ideas in slightly different terms, in the hope that their version will be adopted by everyone else.

But what would "adopted" actually mean? One possible answer is that these principles might feed into what I call Upstream Ethics, contributing to a framework for both regulation and action. However some commentators have expressed scepticism as to the value of these principles. For example, @InternetDaniel thinks that these lists of ethical principles are "too vague to be effective", and suggests that this may even be intentional, these efforts being "largely designed to fail". And @EricNewcomer says "we're in a golden age for hollow corporate statements sold as high-minded ethical treatises".

As I wrote in an earlier piece on principles:
In business and engineering, as well as politics, it is customary to appeal to "principles" to justify some business model, some technical solution, or some policy. But these principles are usually so vague that they provide very little concrete guidance. Profitability, productivity, efficiency, which can mean almost anything you want them to mean. And when principles interfere with what we really want to do, we simply come up with a new interpretation of the principle, or another overriding principle, which allows us to do exactly what we want while dressing up the justification in terms of "principles". (January 2011)

The key question is about governance - how will these principles be applied and enforced, and by whom? What many people forget about Asimov's Three Laws of Robotics is that they weren't enforced by roving technology regulators, but were designed into the robots themselves, thanks to the fact that one corporation (U.S. Robots and Mechanical Men, Inc) had control over the relevant patents and therefore exercised a monopoly over the manufacture of robots. No doubt Google, IBM and Microsoft would like us to believe that they can be trusted to produce ethically safe products, but clearly this doesn't address the broader problem.

Following the Górniak hypothesis, if these principles are to mean anything, they need to be taken seriously not only by millions of engineers but also by billions of technology users. And I think this in turn entails something like what Julia Black calls Decentred Regulation, which I shall try to summarize in a future post. Hopefully this won't be just what Professor Floridi calls Soft Ethics.

Update: My post on Decentred Regulation and Responsible Technology is now available.



Algorithm Watch, AI Ethics Guidelines Global Inventory

Terrell Bynum, Computer and Information Ethics (Stanford Encyclopedia of Philosophy)

Luciano Floridi, Information Ethics - Its Nature and Scope (ACM SIGCAS Computers and Society · September 2006)

Luciano Floridi and Tim (Lord) Clement-Jones, The five principles key to any ethical framework for AI (New Statesman, 20 March 2019)

Luciano Floridi and J.W. Sanders, Artificial evil and the foundation of computer ethics (Ethics and Information Technology 3: 55–66, 2001)

Eric Newcomer, What Google's AI Principles Left Out (Bloomberg 8 June 2018)

Daniel Susser, Ethics Alone Can’t Fix Big Tech (Slate, 17 April 2019)

Alan Winfield, An Updated Round Up of Ethical Principles of Robotics and AI (18 April 2019)


Wikipedia: Laws of Robotics, Three Laws of Robotics


Related posts: The Power of Principles (Not) (January 2011), Data and Intelligence Principles from Major Players (June 2018), Ethics Soft and Hard (February 2019), Upstream Ethics (March 2019), Ethics Committee Raises Alarm (April 2019), Decentred Regulation and Responsible Technology (April 2019), Automation Ethics (August 2019)

Link corrected 26 April 2019

Thursday, January 05, 2012

Unruly Google and VPEC-T

Google has been hoist by its own petard: it seems obliged to ban its own browser from its own search engine for infringing its strict rules. Apparently the infringement resulted from some misbehaviour somewhere down the subcontract chain, unknown to Google itself or its prime subcontractor (which with fitting irony is called Unruly Media). A number of blogposts were created to promote Google Chrome, containing direct hotlinks to the Chrome download page. Google has recently penalized a number of other companies for such behaviour, including J C Penney, Forbes and Overstock. See also my 2006 post on BMW Search Requests.

A number of offending posts were discovered because they contained the magic words "This post was sponsored by Google", and the Google search engine dutifully delivered a list of webpages containing these words. (This kind of transparency was foreseen by Isaac Asimov in a story called "All the troubles of the world", in which the computer Multivac was unable to conceal its own self-destructive behaviour.)

As a number of search engine analysts have pointed out, there are two problems with the sponsored pages. Besides containing the offending links, they are also pretty thin in terms of content. (Google has recently developed a search filter code-named Panda, which is intended to demote such low-value content, but this filter is extremely costly in computing power and is apparently only run sporadically.) Many of these pages credit Google Chrome for having helped a company in Vermont over the past five years, despite the fact that Google Chrome hasn't been available for that long. None of them explain why Google Chrome might be better than other browsers.

So here we have an interesting interaction between the elements of VPEC-T. 


Value - How is commercial sponsorship reconciled with high-value content? Does this incident expose a conflict of interest inside Google?

Policy - How does Google apply its strict rules to itself?

Events - How was this situation detected (with the aid of Google itself)? Will any future incidents be as easy to detect?

Content - What is the net effect on the content, on which Google's market position depends?

Trust - What kinds of trust have been eroded in this situation? How can trust be restored, and how long will it take?



Sources


Aaron Wall, Google caught buying paid links yet again (SEO Book 2 Jan 2012)

Danny Sullivan, Google’s Jaw-Dropping Sponsored Post Campaign For Chrome (SearchEngineLand 2 Jan 2012)

Charles Arthur, Will Google be forced to ban its own browser from its index? (Guardian 3 Jan 2012) Google shoves Chrome down search rankings after sponsored blog mixup (Guardian 4 Jan 2012)

 

Related post: Towards a VPEC-T analysis of Google (October 2011)

Thursday, November 17, 2011

Meta-Architecture (Yawn)

#entarch people seem to spend a lot of time defining the building blocks of architecture, and insisting on the correct definition. Some of my friends have been doing it on Twitter recently, and I've certainly participated in this kind of debate myself in the past.
    @chrisdpotts Appearing to not know the difference between a strategy and a roadmap can damage your reputation and influence. 

There are several different reference models that attempt to answer such questions, at varying levels of detail and abstraction. Here are just a few of these models.


The Wikipedia page on Technical Architecture contains the following paragraph on Meta-Architecture, lifted practically word-for-word from a white paper on the Visual Architecting Process (Bredemeyer Consulting, pdf) by Ruth Malan and Dana Bredemeyer. Malan and Bredemeyer also appear to be behind the EWITA (Enterprise-Wide Information Technology Architecture) initiative.

"First, the architectural vision is formulated, to act as a beacon guiding decisions during the rest of system structuring. It is a good practice to explicitly allocate time for research in documented architectural styles, patterns, dominant designs and reference architectures, other architectures your organization, competitors, partners, or suppliers have created or you find documented in the literature, etc. Based on this study, and your and the team’s past experience, the meta-architecture is formulated. This includes the architectural style, concepts, mechanisms and principles that will guide the architecture team during the next steps of structuring."

Yawn. I can see that this kind of thing might be necessary for architecture management, especially across a large organization or sector. Tom Graves points out that we can view a reference model as a set of job descriptions. But explicitly allocating time to meta-architectural research??

My worry about meta-architecture is that it distracts from real architecture. If I have a lump of business activity, I'm not sure I understand it any better whether I label it as a function or a process or a capability or a service. What really helps me to understand this lump better, and to use that understanding in improving the business, is analysing how it varies by context (differentiation) and how it interacts with other lumps of business activity (integration). And it's more important to see whether a strategy or roadmap is any good than whether it's been correctly labelled. The challenge of architecture isn't classification, it is coordination and quality.


For my bootcamp next week, I don't want to spend any time on Functions versus Capabilities, Goals versus Objectives versus Principles, Strategies versus Roadmaps, and all that meta-architecture stuff. Some architecture courses seem to spend so much time on meta-architecture that they don't have any time left for real architecture. And one can waste a lot of time on the Internet trying to promote one's favourite definition of any of these terms. My winter's resolution - to keep out of these debates. (I may not always manage to do this.)


Business Architecture Bootcamp (November 22-23, 2011)
Workshop: Organizational Intelligence (November 24th, 2011)

Thursday, June 10, 2010

Ecosystem SOA 2

What are the problems of large complex sociotechnical systems? How far do SOA and enterprise architecture help to address this problem space, and what else might we need?


When I started writing about SOA and the service-based business over ten years ago, I defined two "cuts" across the service ecosystem. One cut separates inside from outside, while the other cut separates supply from demand.



(This diagram was included in my 2001 book on the Component-Based Business, and frequently referenced in my work for the CBDI Forum. For a brief extract from the book, see my Slideshare presentation on the Service Ecosystem.)

The inside/outside cut is sometimes called encapsulation. It decouples the external behaviour of a service from its internal implementation, and can be described in terms of knowledge - the outside has limited knowledge of the inside, and vice versa. (The cut is also sometimes called transparency - for example location transparency, which means that external viewers can't see where something is located.)

The supply/demand cut is about delegation, and can be described in terms of responsibility. Getting these two cuts right may yield economics of scale and scope; and the business case for SOA as a development paradigm is often formulated in terms of reusing and repurposing shared services.
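
To make the two cuts concrete, here is a minimal sketch (the service names and numbers are invented, not taken from the book or the CBDI material): the inside/outside cut appears as an interface that hides an implementation, while the supply/demand cut appears as a consumer that delegates to whichever provider is supplied to it, possibly by another organization.

```python
from abc import ABC, abstractmethod

# Inside/outside cut: consumers see only this interface (the "outside");
# the implementation below is the "inside" and may change freely.
class CreditCheckService(ABC):
    @abstractmethod
    def score(self, customer_id: str) -> int: ...

class InHouseCreditCheck(CreditCheckService):
    def score(self, customer_id: str) -> int:
        # internal detail, invisible to consumers
        return 700 if customer_id.startswith("A") else 550

# Supply/demand cut: the consumer delegates to *some* provider, chosen and
# governed separately (possibly by another organization).
class LoanApprover:
    def __init__(self, credit_check: CreditCheckService):
        self.credit_check = credit_check  # supplied from outside (delegation)

    def approve(self, customer_id: str) -> bool:
        return self.credit_check.score(customer_id) >= 600

if __name__ == "__main__":
    approver = LoanApprover(InHouseCreditCheck())
    print(approver.approve("A123"))  # True
```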

For relatively small and simple SOA projects, it may be feasible to collapse the difference between these two cuts, and treat them as equivalent. (The inside/outside relationship and the supply/demand relationship are sometimes both described as "contracts", although they are clearly not the same kind of contract.) However, enterprise-scale SOA requires a proper articulation of both cuts: confusing them can result in suboptimal if not seriously dysfunctional governance and procurement. Many people in the SOA world still fail to understand the conceptual importance of these cuts, and this may help to explain why some organizations have had limited success with enterprise-scale SOA.

Going beyond enterprise SOA as it is generally understood, there is a third cut separating two views of a system: the system-as-designed (whose structure and behaviour and rules can perhaps be expressed in some formal syntax such as UML, BPMN or ArchiMate) and the system-in-use (whose actual performance is embedded/situated in a particular social or business context). This cut is critical for technology change management, because of the extent to which the designed system underdetermines the pragmatics of use. I have been talking about this cut for over twenty years, but only more recently working out how to articulate this cut in composition with the other two cuts.

One important reason for looking at the pragmatics of use is to understand the dimensions of agility. In many settings, we can see a complex array of systems and services forming a business platform, supporting a range of business activities. If no agility is required in the business, then it may not matter if the platform is inflexible, forcing the business activities to be carried out in a single standardized manner. But if we assume that agility is a critical requirement, then we need to understand how the flexibility of the platform supports the requisite variety of the business.

More generally, understanding the pragmatics of use leads to the recognition of a third kind of economic value alongside the economics of scale and the economics of scope: the economics of alignment. The value of a given system-of-systems depends on how it is used to deliver real (joined-up) business outcomes, across the full range of business demands. (I'm afraid I get impatient with people talking glibly and simplistically about business/IT alignment, without paying attention to the underlying complexity of this relationship.)

Understanding these three cuts (and analysing their implications) is critical to understanding and managing a whole range of complex systems problems - not just SOA and related technologies, not even just software architecture, but any large and complex sociotechnical systems (or systems-of-systems). If the three cuts are not understood, the people in charge of these systems tend not to ask the right questions. Questions of pragmatics are reduced to questions of platform design; while questions of the cost-justification and adoption of the platform are reduced to a simple top-down model of business value. Meanwhile the underlying business complexity (requisite variety) will be either misplaced (e.g. buried in the platform) or suppressed (e.g. constrained by the platform).

So there are three challenges I face as a consultant, attempting to tackle this kind of complex problem. The first challenge is to open up a new way of formulating the presenting problem, based on the three cuts. The second challenge is to introduce systematic techniques for analysing the problem and visualizing the key points. And the third challenge is to identify and support any organizational change that may be needed.


With thanks to Philip Boxer and Bernie Cohen. For a different formulation of the three cuts, together with a detailed example, see their new paper "Why Critical Systems Need Help to Evolve" Computer, vol. 43, no. 5, pp. 56-63, May 2010, doi:10.1109/MC.2010.150. See also Philip Boxer, When is a stratification not a universal hierarchy? (January 30th, 2007)


Related post Ecosystem SOA (October 2009)

Wednesday, April 28, 2010

Quality and Responsibility

One of the key challenges with shared data and shared services is the question of data quality. Who is responsible for mistakes?

@tonyrcollins raises a specific example - who's responsible for mistakes in summary care records?

"NHS Connecting for Health suggests that responsibility for mistakes lies with the person making the incorrect entry into a patient's medical records. But the legal responsibility appears to lie with the Data Controller who, in the case of Summary Care Records, is the Secretary of State for Health."

From an organizational design point of view, it is usually best to place responsibility for mistakes along with the power and expertise to prevent or correct mistakes. But that in turn calls for an analysis of the root causes of mistakes. If all mistakes can be regarded as random incidents of carelessness or incompetence on the part of the person making the incorrect entry, then clearly the responsibility lies there. But if mistakes are endemic across the system, then the root cause may well be carelessness or incompetence in the system requirements and design, and so the ultimate responsibility rightly lies with the Secretary of State for Health.

Part of the problem here is that the Summary Care Record (SCR) is supposed to be a Single Source of Truth (SSOT), and I have already indicated What's Wrong with the Single Version of Truth (SVOT). Furthermore, it is intended to be used in Accident and Emergency, to support decisions that may be safety-critical or even life-critical. Therefore to design a system that is vulnerable to random incidents of carelessness or incompetence is itself careless and incompetent.

What general lessons can we learn from this example, for shared services and SOA? The first lesson is for design: data quality must be rigorously designed-in, rather than merely relying on validation filters at the data entry stage, and then building downstream functionality that uses the data uncritically. (This is a question for the design of the whole sociotechnical system, not just the software architecture.) And the second lesson is for governance: make sure that stakeholders understand and accept the distribution of risk and responsibility and reward BEFORE spending billions of taxpayers' money on something that won't work.
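
Purely as a hypothetical sketch of the first lesson (the record fields, thresholds and scenario are invented, and nothing here reflects the actual SCR design), a downstream safety-critical consumer could carry provenance and verification status with each record and refuse to act on data that is not known to be good, rather than trusting whatever passed the entry-time validation filter:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SummaryRecord:
    patient_id: str
    allergies: Optional[List[str]]   # None = not recorded, distinct from "no allergies"
    source: str                      # where the entry came from
    verified: bool                   # has a clinician confirmed it?

def safe_to_prescribe(record: SummaryRecord, drug: str) -> bool:
    """Downstream logic treats uncertain data as unsafe rather than assuming it is correct."""
    if record.allergies is None or not record.verified:
        return False  # escalate to a human rather than act on unverified data
    return drug not in record.allergies

print(safe_to_prescribe(SummaryRecord("p1", None, "import", False), "penicillin"))  # False
```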

Tuesday, March 09, 2010

Multiple styles of EA

@tetradian has an interesting post on Big EA, Little EA and Personal EA, based loosely on Patti Anklam's classification of knowledge management.
  • Big KM is about top-down, structured and organizationally distinct “knowledge management”
  • Little KM is about safe-fail experiments embedded in the organizational structure
  • Personal KM is about access to tools and methods to ensure that knowledge, context, bits, fragments, thoughts, ideas are harvestable



As I see it, this classification identifies different styles that may possibly coexist, or perhaps different kinds of knowledge claim that may interact in interesting ways. (I don't like the word "layers" for this kind of classification, because it implies a particular structural pattern, which isn't appropriate here.)

I've used a slightly different division in the trust sphere, which might make sense here as well.
  • Authority EA - this is a kind of top-down command-and-control EA, representing the will-to-power of the enterprise as a whole, and ultimately answerable to the CEO. This is what Tom calls Big EA.
  • Commodity EA - this is where the EA is based on some kind of external product source - such as when the enterprise models are imported wholesale from IBM or SAP. This often resembles Big EA, but has some important differences.
  • Network EA - this is where EA is based on informal and emergent collaboration between people and organizations. Tom calls it Little EA, but the collaborations can be very extended indeed - just think about some of the mashup ecosystems around Google or Twitter.
  • Authentic EA - this is a personally engaged practice - what Tom calls Personal EA.

Once we have agreed that there are different styles, the really interesting question is not identifying and naming the styles, nor even saying that one style is somehow "better" than another, but how the different styles interact, and what the implications are for governance.

Thursday, April 09, 2009

Scale and Governance

Following my post Does Britain Need Smaller Banks? Fred Fickling objected to my point that large banks are more difficult to regulate.
"Not sure that the simple answer here is really the right one. Size & corp responsibility aren't necessarily inversely related."
Of course, some large companies can have a highly refined sense of ethics and social responsibility, although this doesn't always protect them from strong disagreements with external lobbyists, as Shell discovered when it wished to decommission the Brent Spar.

In any case, not all large companies can be trusted to regulate their own behaviour in the public interest. Companies pay tax where it suits them, move (or threaten to move) operations from one country to another, shrug off massive fines, and can sometimes behave as if they were above the law, as Fred concedes. In the long term, these legacy corporations may be doomed, but in the short term they can still cause a lot of disruption in the financial ecosystem.

Of course there are no simple answers. I certainly think some companies are too large, and I hope there will always be good opportunities for smaller companies to compete. What I'm calling for is a way of reasoning intelligently about the size of companies, which balances the interests of all stakeholders in a fair and governable manner. I think this is a proper topic for business architecture. And perhaps some of what we've learned from SOA about granularity and governance can be applied to this greater problem domain.

Wednesday, April 08, 2009

Ford and Fordism

In his piece Flashlights not Bailouts, Phil Gilbert describes the classic CBD/SOA argument: standardization/rationalization/reuse.

"One of the three (Ford) is in demonstrably better shape than the other two, and it's no mystery why. Two years ago, when he took the reins of Ford, Alan Mulally identified two things that needed to change: parts costs have to go down, and engineering productivity must go up."

"Get it? The white collar workers who design the cars have to move from artisan to engineer, and they need to work together across all the company's platforms to use common parts."

I asked Phil whether this was Fordism. He responded thus:

"Ford used standardization of task to drive down costs. I'm suggesting 'standardization' of transparency to drive down risk."

Thus Fordism progresses from the economics of scale to the economics of governance. This fits well with the story about SOA I've been telling in this blog. http://tinyurl.com/d3ronn

Monday, December 15, 2008

SOA Bank

In a discussion on the CBDI Forum Linked-In Group, Dave Ruzius proposes an SOA Bank.

'Did anyone ever consider a "SOA Bank" that would fund initial investments for SOA transformation and service development for a nice cut of the cost savings after x years?'


I love the idea of an "SOA Bank", but there's that voice in the back of my head saying "It will never work". The killer question is of course trust between the "lender" and the "borrower". Unfortunately, there is no agreed benchmark for measuring cost-savings. So there is too much scope for disagreeing (and even cheating) on the actual levels of cost-savings achieved, especially if there is real money at stake. This is ultimately a question of financial governance.

So let's look at the situations where it might work. In a public sector environment, or in a global multinational, you might have a central accounting function with enough power to make this kind of arrangement viable. Or even in an ecosystem dominated by a single player - perhaps a major manufacturer or franchise operation, providing funding for cost-saving initiatives by its suppliers or franchisees.

At the other extreme, you could have some kind of market mechanism. If the cut of the cost-savings is built into the price of using the service, then the SOA Bank will get its money back if and only if enough people are using the service.
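
As a back-of-envelope illustration of that market mechanism (all figures invented), the SOA Bank recovers its funding only once cumulative usage, multiplied by the per-use cut, covers the original investment:

```python
# Hypothetical numbers: the SOA Bank funds service development and takes a
# small cut of each chargeable invocation as its return.
investment = 500_000        # funding provided up front
cut_per_call = 0.05         # the bank's share of each service invocation
calls_per_year = 2_000_000  # actual usage of the shared service

break_even_calls = investment / cut_per_call
years_to_repay = break_even_calls / calls_per_year
print(f"Break-even after {break_even_calls:,.0f} calls (~{years_to_repay:.1f} years)")
```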

I think we can expect organizations to get a lot more savvy about IT procurement generally, and if SOA gives them more negotiating options (for example, pay-as-you-go or payment-by-results) then we can expect people to take advantage of this. And we can expect suppliers to respond to this pressure - there are already many offerings in the "xxx-as-a-service" category, and I am sure more of these will emerge during 2009.

Funding is going to be squeezed all round, so SOA transformation programmes are going to be managed on the basis of just-in-time investment. You need to look carefully at the capabilities needed at each stage, rather than rushing around the SOA supermarket and just chucking everything with an SOA label into your shopping basket in case it might be useful.

Finally, if there was going to be an SOA Bank, whether supporting the demand-side or the supply-side, it would need to check SOA plans for credibility and viability before offering any funding. If this meant that over-ambitious or incoherent plans got sent back to the drawing board and only sensible plans got approved, this would surely be a good thing for SOA. Wouldn't it?

Monday, August 13, 2007

Service Ecosystem and Market Forces

One of the problems with a network of services is that the responsibilities and costs and risks are often in the wrong place.

In this post I'm going to explain what I mean by this statement, outline some of the difficulties, and then make some modest proposals.

The statement is based on a notion of the efficiency of an ecosystem. If there is one service provider and a thousand service consumers, it may be more efficient for the ecosystem as a whole if the service provider includes some particular capability or responsibility within the service, instead of each service consumer having to do this. In addition to the economics of scale, there may be economics of governance - for example, increased costs of managing the service relationship, especially if the service provider doesn't provide a complete service (in some sense).
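
A rough way to see the efficiency argument (figures invented for illustration): compare the total cost to the ecosystem of building a capability once on the provider side with the cost of each of a thousand consumers building and managing it separately.

```python
# Illustrative only: cost to the ecosystem of placing a capability
# (say, fraud screening) with one provider versus a thousand consumers.
consumers = 1_000
provider_build_cost = 200_000     # build it once, inside the service
consumer_build_cost = 5_000       # each consumer builds a cruder version
relationship_overhead = 300       # per-consumer cost of managing an incomplete service

provider_side_total = provider_build_cost
consumer_side_total = consumers * (consumer_build_cost + relationship_overhead)

print(provider_side_total, consumer_side_total)  # 200000 versus 5300000
```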

One important application of this idea is in security, risk and liability. There is a very good discussion of this in the recent British House of Lords Science and Technology Committee report into "Personal Internet Security", which specifically addresses the question of whether ISPs and banks should take greater responsibility for the online security of their customers.

"A lot of people, notably the ISPs and the Government, dumped a lot of the responsibility onto individuals, which neatly avoided them having to shoulder very much themselves. But individuals are just not well-informed enough to understand the security implications of their actions, and although it’s desirable that they aren’t encouraged to do dumb things, most of the time they’re not in a position to know if an action is dumb or not." [via LightBlueTouchpaper]
In other words, the responsibility should be placed with the player who has (or may reasonably be expected to have) the greatest knowledge and power to do something about it. In many cases, this is the service provider. Some of us have been arguing this point for a long time - see for example my post on the Finance Industry View of Security (June 2004).

Similar arguments may apply to self-service. When self-service is done well, it provides huge benefits of flexibility and availability. When self-service is done poorly, it merely imposes additional effort and complexity. (Typical example via Telepocalypse). Some service providers seem to regard self-service primarily as a way of reducing their own costs, and do not seem much concerned about the amount of frustration experienced by users. (And this kind of thing doesn't just apply to end-consumers - similar considerations often apply between business partners.)

But it's all very well saying that the service provider ought to do X and the service consumer ought to do Y. What if there is no immediate incentive for the service provider to adopt this analysis? There are two likely responses.
  1. "We don't agree with your analysis. Our analysis shows that the service consumer ought to do X."
  2. "We agree it might be better if service providers always did X. But our competitors aren't doing X, and we don't want to put ourselves at a disadvantage."
More fundamentally, there may be a challenge to the possibility of making any prescriptive judgements about what ought to happen in a complex service ecosystem. This challenge is based on the assertion that such judgements are always relative to some scope and perspective, and can easily be disputed by anyone who scopes the problem differently, or takes a different stakeholder position.

Another fundamental challenge is based on the assertion that in an open competitive market, the market is always right. So if some arrangement is economically inefficient, it will sooner or later be replaced by some other arrangement that is economically superior. On this view, regulation can only really achieve two things: speed this process up, or slow it down.

But does this mean we have to give up architecture in despair - simply let market forces take their course? One of the essential characteristics of an open distributed world is that there is no central architectural design authority. Each organization within the ecosystem may have people trying to exercise some architectural judgement, but the overall outcome is the result of complex interplay between them.

How this interplay works, whether it is primarily driven by economics or by politics, is a question of governance. We need to spell out a (federated?) process for resolving architectural questions in an efficient, agile and equitable manner. This is where IT governance looks more than ever like town planning.

Notes

The House of Lords Science and Technology Committee Report into “Personal Internet Security" was published on August 10th 2007 (html, pdf). Richard Clayton, who was a specialist adviser to the committee, provides a good summary on his blog. Further comments by Bruce Schneier and Chris Walsh.

Sunday, December 10, 2006

Service Planning 2

Nick Malik asks Should SOA be Top Down or Bottom Up. He suggests that architects need to pay attention to the big pieces and can ignore the small pieces.
"While the architects do not need to know about every moving part, they DO need to be aware of the largest of those parts, and make sure that they are managed well. This is similar to city planning, where the city needs to work with a large employer or a large retailer (like Walmart) to make sure that roads and parking and congestion issues are managed, without having to worry about cafe and card shop that are also employers, but have a minimal impact on the infrastructure."

Some people argue for a "fractal" notion of the service economy. While the word "fractal" isn't always used in its precise mathematical sense, its use seems to imply that the service portfolio should have a good mixture of different sizes / granularities (although not necessarily in the same architectural layer). Such a mix of different sizes is also advocated by Christopher Alexander.

In city planning, small retail outlets such as cafes and card shops may individually have less economic or environmental impact than a major retailer such as Wal-Mart. But collectively, they may have at least as much impact. (This is a "long tail" argument.) One of the challenges for city planning is to achieve a good balance between the large few and the many small. (Of course this raises questions of governance as well as architecture, politics as well as economics.)
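
A toy calculation (numbers entirely invented) makes the point: collectively the small outlets may generate at least as much impact as the single large retailer.

```python
# Invented figures: one big retailer versus many small outlets.
walmart_traffic = 12_000        # daily trips generated by the big retailer
small_outlets = 400             # cafes, card shops, etc.
traffic_per_outlet = 40         # daily trips each

print(walmart_traffic)                       # 12000
print(small_outlets * traffic_per_outlet)    # 16000 - the long tail outweighs the head
```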

If the total weight of the large pieces is greater than the total weight of the small pieces (whatever measure we choose for "weight"), then this is itself an architectural choice, with important implications for project agility and enterprise agility.

Most people in this game think they know what the terms "top-down" and "bottom-up" mean, but these terms are commonly used in different (contrary, confusing) ways. If an architect only worries about fitting the big pieces together, and assumes that the small pieces will somehow look after themselves in the remaining ("negative") space, this sounds like one version of what some people would call "top-down".

What if the architect concentrates on providing "positive space" in which the small pieces can thrive, and prevents the big pieces from encroaching on this space? What if the architect concentrates on the interfaces between the pieces, rather than the pieces themselves? Is this "top-down" or "bottom-up"?

I don't really care what we call this - although it would be good to have a more precise way of talking about it - but I think this is an equally valid strategy.



See also: What does Top-Down mean? (September 2011)

Saturday, November 25, 2006

For Whom

There are several enterprise architecture approaches (TOGAF, DoDAF, MoDAF) based on the work of John Zachman and his Kiplingesque sextet:
"I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who."

These six interrogatives are commonly presented as the columns in a table. There have been some suggestions (strongly resisted by Mr Zachman and his followers) to extend the table.

One of the extensions we’ve been looking at in the CBDI Forum is the possibility of introducing a "For Whom" column, because value (in SOA and the service-oriented business) is not just experienced by “The Enterprise” (regarded as a single centralized pot of costs and benefits) but may be distributed across a federation or ecosystem.

For example, does a service intermediary add value for the service provider, or for the service consumer, or both? Does a change in business policy improve the whole supply chain, or have we merely pushed the problems upstream? Does a security layer mitigate risk for the bank or for its customers? Does a compliance monitoring service protect the interests of the directors or the shareholders?
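
One way to picture the extra column (a sketch of my own devising, not part of any published Zachman or CBDI notation) is a service catalogue entry that records not only who performs a service but for whom its value, cost or risk actually falls:

```python
# Hypothetical catalogue entries: "who" performs the service, "for_whom"
# records where the value (or the risk) actually lands.
catalogue = [
    {"service": "payment intermediary", "who": "broker",
     "for_whom": {"provider": "reach", "consumer": "convenience"}},
    {"service": "fraud screening", "who": "bank",
     "for_whom": {"bank": "reduced losses", "customer": "asset protection"}},
    {"service": "compliance monitoring", "who": "internal audit",
     "for_whom": {"directors": "assurance", "shareholders": "unclear"}},
]

for entry in catalogue:
    beneficiaries = ", ".join(entry["for_whom"])
    print(f'{entry["service"]}: performed by {entry["who"]}, value for {beneficiaries}')
```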

Some people have told me that this is already implicit in the "Who" column. Or perhaps it is implicit in the "Why" column. But I don't believe that many enterprise architects currently interpret the "Who" or "Why" columns in this way.

"For Whom" is important for SOA when we start to look at service networks that span several organizations. One organization may produce a business case for doing some SOA, but this may only be viable if other organizations cooperate. Participation in a network is based on some form of self-interest (each participating organization gets out more than it puts in) and/or some form of governance (the organizations collaborate according to some agreed or imposed regime).

In addition, "For Whom" is important for security engineering. Some organizations focus their security on protecting their own internal systems against a narrow range of direct threats, but seem to pay little attention to a broader range of indirect threats against themselves and their customers. In my view, an organization such as a bank should take a 360-degree view of security, and should try to provide real security for its customers and their assets, as well as for itself.

Finally, "For Whom" is important for ethics. The distinction between "For Whom" and "Who" is similar to the distinction between "Customer" and "Actor" in Soft Systems Methodology (SSM). Some readers may be familiar with the SSM acronym CATWOE, which stands for Customer, Actor, Transformation Process, WorldView, Owner, Environment.



Philip Boxer, Modelling Structure-Determining Processes (19 December 2006)

Chris Bruce, Environmental Decision-Making as Central Planning: FOR WHOM is Production to Occur? (Environmental Economics, 19 August 2005)

Wikipedia: CATWOE



Related posts: Arguing with Mendeleev (March 2013), Arguing with Drucker (April 2015), Whom Does The Technology Serve? (May 2019)


Updated 18 August 2019 to emphasize the ethical dimension.

Tuesday, March 07, 2006

Christopher Alexander

I've just received a preparation pack for the SPARK workshop, including a new copy of Christopher Alexander's book Timeless Way of Building, first published in 1979. I wonder how many of the SPARK participants have looked at Alexander's later material, including his New Theory of Urban Design (1987), and his magnificent new 4-volume work The Nature of Order.

A few years ago I wrote

Christopher Alexander's work is frequently cited in the software world, usually after a delay of about 15 years. Thus Notes was used by Ed Yourdon and Tom De Marco in the 1970s, to support a view of top-down design. Patterns made a serious entry into the software world in the early 1990s. His book on Urban Design has not yet achieved such popularity, although I think it is extremely relevant to business and IT planning. In the past, Alexander has expressed some ambivalence and suspicion about the use of his work by software engineers. More recently, he has been persuaded to make occasional keynote speeches at software conferences and to write prefaces for software books. It may even be true that some software practitioners understand his work better than most architects. However, he is clearly troubled by the fact that software practitioners mostly only pick up fragments of his work, and ignore the holistic aspects of his thinking that he regards as crucial.
 

Here are some of the themes of Alexander's thinking that I see as particularly relevant to SOA.

1. Under the right conditions, complex order emerges from a series of simple steps. A given structure at a given moment is in a partially evolved state.

2. Each step is generally structure-preserving - it builds upon the differentiation and coherence (strength) of the existing structure.

3. Each step brings greater differentiation and greater coherence (strength). Alexander calls this the Fundamental Differentiating Process. (See Nature of Order, Book 2, page 216).

4. Alexander defines structure in terms of a network of centres. Alexander's notions of structure-preservation, differentiation and coherence are explained in terms of this notion of structure. (See "Centers: The Architecture of Services and the Phenomenon of Life," FTPOnline, Richard C. Murphy, March 2004)

5. A key role of governance is then to establish and maintain the conditions under which this fundamental differentiating process can work. (See my articles in the Architecture Journal: Metropolis and SOA Governance and Taking Governance to the Edge).

 

See also Christopher Alexander as Teacher (May 2004), Christopher Alexander 1936-2022 (March 2022)

Wednesday, November 09, 2005

Twin-Track Governance

Twin-track development involves a management separation between the development of components and services, and the development of larger artefacts that use these components and services. It was a characteristic feature of the more advanced CBSE methods, such as Select Perspective, and is obviously relevant to service engineering as well.

Clarification Update: Select Perspective has always been more than just a CBSE method, and now qualifies as a full-fledged SOA method. Even as a CBSE method, it was one of the first to embrace service as a first class construct. Although this may not be immediately obvious from the Select website, it is clear from an article contributed by Select consultants (then part of Aonix) to the CBDI Journal in 2002.

The twin-track pattern can be superimposed on a traditional lifecycle process, such as that recently propagated by OASIS for the web service lifecycle (pdf). Even though the OASIS process is presented as applying to a single web service, it doesn't take much reframing to see this kind of process applying to an artefact that is larger than a single service - such as a whole system or subsystem - provided it is specified and designed in one place at one time. So we can start to see the OASIS process (or a suitably generalized version of it) applying to either of the twin tracks - but not at the same time. Both tracks may be following some version of the OASIS process, but they don't talk to each other.

The twin-track pattern is sometimes interpreted in a simple way, with an IT organization divided into a service-creation track and a business project track. The service track produces services with web service interfaces; the business projects produce applications with user interfaces (UI). In this interpretation, use of BPEL belongs exclusively in the service track, because it doesn't produce applications with UI.

However, we can interpret twin-track in a much more powerful way than this, by generalizing the pattern. We simply specify that supply of services is in one track, and the consumption of these services (even if this is in the context of using BPEL to build larger artefacts that are themselves services) is in another track. The point of the twin-track pattern is that the supply and consumption can be decoupled and managed separately - possibly even in separate organizations. Of course, this pattern can be applied more than once, yielding more than two tracks in total.

Meanwhile, the possession of a UI is probably of secondary importance here. With SOA, we can (and probably should) build applications that give the user a choice between interacting directly or via some user-built secondary application. Thus for example, I want my online bank to offer me a set of services rendered in two alternative ways: firstly via a UI (probably browser-based) and secondly as a series of webservices (or equivalent) that I can invoke from within some desktop money management program. Think of Google and eBay delivering the same services via browser and via web services. In the service economy, I want all interfaces to have something like a webservice or REST or RSS/Atom alternative.
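
As a minimal sketch (the bank and its operations are invented for illustration), the point is that the browser UI and the machine-readable interface are both thin channels onto the same underlying service, so either channel can be added, replaced or retired independently:

```python
import json

# The underlying service: one implementation, consumed through two channels.
class AccountService:
    def __init__(self):
        self._balances = {"alice": 120.50}

    def balance(self, customer: str) -> float:
        return self._balances[customer]

# Channel 1: a browser-style UI rendering.
def render_html(service: AccountService, customer: str) -> str:
    return f"<p>Balance for {customer}: {service.balance(customer):.2f}</p>"

# Channel 2: a web-service-style response for a desktop money management program.
def render_json(service: AccountService, customer: str) -> str:
    return json.dumps({"customer": customer, "balance": service.balance(customer)})

svc = AccountService()
print(render_html(svc, "alice"))
print(render_json(svc, "alice"))
```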

And perhaps if lots of people are using desktop money management programs, it might be cheaper for the bank to give all remaining customers a low-functionality desktop program (or recommend a suitable bit of freeware) and then decommission the UI altogether. I'm not saying this is always going to be advisable, but it's certainly an option. So we might see a growing number of serious business applications with no traditional UI at all.

In architecture, if you build windows everywhere, it makes it harder to join buildings together. Think of a university campus, which grows piecemeal over many decades. If every new block has a blank wall, then it is easier to build another block next to it. If you put windows on every available wall, you have to put useless space between the blocks, and then build silly walkways to save people keep going down to the ground floor. The evolution of complex SOA raises some of the same issues.

Even with single-track development, there is a governance question here. How can we maintain order over a complex and evolving system, if we cannot simply outline all the requirements at the beginning? And with twin-track, a critical function of SOA governance is governing the relationship between the tracks, however many tracks there may be.

How do we get (and keep) all these loosely coupled development processes and operations processes in alignment with each other, and also in alignment with the business? And alignment with IT accounting (financial and otherwise) would be nice too. Obviously the whole point of twin-track is that you can decouple the tracks to some extent, but if you decouple them too much then you throw the baby (reuse, interoperability, flexibility) out with the bathwater (agile=fragile).

Some sources advocate frequent synchronization between development teams. Synchronization does not necessarily rule out federated, distributed development, but this would only be possible with a great deal of horizontal coordination. And this would introduce lots of other challenges.

SOA governance governs what kind of development is appropriate, and how it should be coordinated. For example, SOA governance provides ground-rules for participants to agree interfaces before they go off and do their own thing, and for enforcing these agreements. In the real world, we know that all agreements are subject to being reneged and renegotiated as requirements and conditions change. But who incurs this risk, and who shall bear the cost of any change? If you decide you need to change the interface, am I forced to respond to this, and if so how quickly?

Change management (e.g. avoiding uncontrolled demand for service specification change) cannot be managed within either one track, but implies governance of the relationship between the tracks.

If we assume that the two tracks report to a single IT manager within a single organization, then vertical line management may provide all the governance you need. But if the two tracks are in separate organizations, then the question of governance becomes a matter for negotiation between the two organizations.

A general governance framework for SOA must support federated, distributed development. Obviously if an enterprise chooses to remain (for the time being) with a simpler development style, then it should not be forced to adopt all the elements of the framework it doesn't yet need. Why should an enterprise adopt principles that are not relevant to the way it is currently doing development? But sometimes it might be correct for an enterprise to adopt principles for federated, distributed development even where the development team is not. For example, to provide some horizontal compatibility with the way that its partners are doing development, or to provide upwards compatibility to the way it intends to do development in future.


Wednesday, September 21, 2005

SOA Stupidity

In my previous post on SOA Chaos 2, I discussed Jeff Schneider's view that Stupid People Shouldn't Do SOA

I've now been looking at Stephen Swoyer's article SOA Fatigue: It's the People and Processes, Stupid (ADTmag September 2005). He talks about several forms of stupidity affecting people, processes and organizations, which interfere with successful SOA.

  • bureaucratic infighting, turf wars, petty fiefdoms
  • incentive incompatibility
  • inertia

In such a context, it is the collective intelligence/stupidity of the organization that matters. Inside stupid organizations, most clever people either try to protect themselves from the stupidity, or find smarter ways to exploit it to their own advantage. I think the Conmergence blog is wrong to try to blame SOA failure on stupid people. 

But I don't think it's helpful simply to blame SOA failure on organizational stupidity either. I think it makes more sense to regard the problems discussed by Swoyer as problems of interoperability. And I think one of the implications of Swoyer's discussion is that interoperability is not just a technical question but a sociotechnical question (involving people, processes and organizations). Achieving (and motivating and funding) requisite levels of interoperability with SOA is a matter for SOA governance. 

To those that have ... Organizational intelligence is an enabler for effective SOA, and might even to some extent be a result of successful SOA.

Swoyer reports that Kareem Yusuf (director of SOA product management at IBM) advocates a strong top-down push for service-enablement. (Well he would, wouldn't he?) This approach might make sense for improving interoperability within one organization (endo-interoperability), but isn't going to help much with interoperability between separate organizations (exo-interoperability).

Related posts: SOA Chaos, SOA Chaos 2

Saturday, July 23, 2005

Microsoft Architecture Journal

Philip Boxer and I have two papers in the Microsoft Architecture Journal.

Metropolis and SOA Governance: Towards the Agile Metropolis (Architecture Journal 5, July 2005)

Taking Governance to the Edge (Architecture Journal 6, August 2006)


Please let us have your comments on these articles. And please contact us if your organization is interested in adopting a new approach for aligning new business opportunities with the new technologies.

Friday, May 06, 2005

Buffering

In many activities, there is a highly variable relationship between the quantity of input and the quantity of output.

For example, Naba Barkakati discusses the nonlinearity of creative endeavors, and argues for buffering as a form of decoupling or desynchronization. Buffering is a way to bridge between an internal world with variable levels of production, and an external world with "linear expectations".

Naba's point doesn't just apply to creative endeavours. It also applies to any activities involving other human beings, such as sales and marketing. On a day-to-day basis, there is no linear relationship between the input (quantity of sales effort, time) and the output (quantity of sales). So sales people (and sales organizations) build in exactly the kind of buffers Naba is talking about, to smooth away the visible peaks and troughs. This may include booking sales in the following period.

Buffers are also used to provide some smoothing between a fixed set of spending budgets, and a variable (and unpredictable) set of spending requirements.
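To make the mechanism concrete, here is a small invented example (the numbers and names are illustrative only, not taken from Naba's post) of a buffer sitting between variable production and a steady external commitment.

```python
# Illustrative sketch: a buffer that smooths variable production into a
# steady reported figure, banking surplus in good periods and drawing it
# down in lean ones. All figures are invented.


class OutputBuffer:
    def __init__(self, target_per_period: int):
        self.target = target_per_period  # what the outside world expects each period
        self.stock = 0                   # surplus held back from earlier periods

    def report(self, produced: int) -> int:
        """Bank this period's output and report a smoothed figure."""
        self.stock += produced
        reported = min(self.target, self.stock)
        self.stock -= reported
        return reported


buf = OutputBuffer(target_per_period=5)
for week, produced in enumerate([9, 1, 6, 0, 8], start=1):
    print(f"week {week}: produced {produced}, reported {buf.report(produced)}")
```

Note that the smoothing only holds while the buffer has stock: in week 4 of this example the shortfall finally shows through, which is exactly the deferred-disappointment effect discussed below.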

But this mechanism has several negative effects.
  1. It makes it much harder to identify and execute system improvements. A writer may need more active support from an editor or publisher, but the buffers work as a defence mechanism, with the result that the writer doesn't get this support.
  2. The writer (the productive agent) carries more responsibility and takes a greater risk. This may increase the stress on the writer, who may be operating above his/her bearing limit.
  3. The attempted smoothing may be counter-productive, particularly if the buffers get larger and larger, resulting in a small number of major disappointments instead of more frequent minor disappointments.
In many contexts, buffering is regarded as borderline malfeasance. Senior management, investors and regulators can take a particularly dim view of buffering that misrepresents the true state of an operation.

So what's the answer? We may wish to implement various forms of loose coupling and asynchronicity, but this needs to be done within an appropriate governance framework, with shared attention to the economic and ethical behaviour of the whole system.


Friday, August 06, 2004

Governance at the Edge

Lots of edge thinking in the web services world at the moment.

Peter Cousins (Iona) has been talking about Integration at the Edge. Good summary by Steve Vinoski (also Iona).

The core argument here is that change and variety and business value occur at the edge of the organization, so that's where technology needs to be focused – including integration technology.

Some supporting evidence comes from the Yankee Group, which finds a surprisingly large number of web service projects addressing supply chain integration. Alice LaPlante (Web Services Pipeline) suggests this means that web services are most valuable at the seams where organizations are trying to meet and share data and functionality. Web Services Thrive On Enterprise 'Edge'

Radovan Janecek adds: "OK, so if we agree we should do the integration on the edge, then I add: do the management on the edge too. Otherwise, you will fall in the ESB trap again." (RJ expands this here.)

Meanwhile, leading management thinkers are arguing for organizations to take power to the edge - and this is also motivated by matters of change and variety and business value. Thus the technology agenda coincides with and supports the business agenda.


Update: Following this post, Philip Boxer and I wrote two articles for the Microsoft Architecture Journal.

Metropolis and SOA Governance: Towards the Agile Metropolis (Architecture Journal 5, July 2005)

Taking Governance to the Edge (Architecture Journal 6, August 2006)

Wednesday, June 30, 2004

The Planning Dilemma

One of the key problems faced by planning (in IT and elsewhere) has been the dilemma – top-down or bottom-up. Top-down methods produce grand schemes without addressing the problems on the ground (including legacy), while bottom-up methods produce local solutions without any overall order, coherence or reuse.

Bottom-Up Approach (Point Projects)
  • Local short-term initiative. No mandate to pay attention to broader, longer-term opportunities and effects.
  • Building a solution against immediate requirements (where “building” means design, construct or assemble).
  • Strongly aligned to local objectives. Direct link between (local) benefits, costs and risks.
  • Cost-effective use of conveniently available resources (improvisation or “bricolage”).

Top-Down Approach (Area Projects)
  • Broader, longer-term initiative.
  • Focus on system properties across a whole area (e.g. business domain, technical domain, infrastructure).
  • Indirect links between benefits (across area), costs and risks. Often difficult to create or maintain a business case for adequate investment in resources and infrastructure, and often difficult to demonstrate return on investment.
  • Creating value by establishing (procuring or building) conveniently available resources.

One way of addressing this dilemma is to introduce a twin-track process, involving a top-down stream of activity and a bottom-up stream of activity.

Obviously for this twin-track process to be effective, we need clear allocation of responsibility, authority, expertise and work (RAEW). This is an aspect of governance - making sure the right things are done in the right way. Twin-track development exposes the inevitable tensions between business goals and service needs. And in federated/distributed development, these tensions are replicated across multiple business entities; governance then becomes a question of negotiation between separate organizations, rather than simple management resolution within a single organization.
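As a purely hypothetical illustration (the tracks, activities and roles below are invented, not taken from the CBDI article), a RAEW allocation can be written down explicitly and checked, for example to make sure that authority over each activity sits in exactly one recognised place.

```python
# Hypothetical RAEW (Responsibility, Authority, Expertise, Work) allocation
# for a twin-track process. All activity and role names are invented.

RAEW = {
    "define service specification": {"R": "area track", "A": "area track",
                                     "E": "both tracks", "W": "area track"},
    "build solution against spec":  {"R": "point track", "A": "point track",
                                     "E": "point track", "W": "point track"},
    "approve specification change": {"R": "area track", "A": "area track",
                                     "E": "both tracks", "W": "area track"},
}

# A simple governance check: authority for each activity rests with a single,
# recognised party (no activity is left with ambiguous authority).
RECOGNISED = {"area track", "point track"}
for activity, allocation in RAEW.items():
    assert allocation["A"] in RECOGNISED, f"unclear authority for: {activity}"
```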


This is a modified extract from an article on Business-Driven SOA published in the CBDI Journal, June 2004. For further extracts from this article, please see my Slideshare presentation on Organic Planning. See also our papers in the Microsoft Architecture Journal on Metropolis and SOA Governance (July 2005) and Taking Governance to the Edge (August 2006).

The notion of twin-track development is included in the Practical Guide to Federal SOA, published by the CIO Council in 2008.


See also: What does Top-Down mean? (September 2011)
For other posts on twin-track: browse, subscribe.