
Friday, October 29, 2021

Interaction and Impedance

In the early 1990s, I was on a research and development project called the Enterprise Computing Project, within an area that was then known as Open Distributed Processing (ODP) and subsequently evolved into Service Oriented Architecture (SOA). One of the concepts we introduced was that of Interaction Distance. This was explained in a couple of papers that Ian Macdonald and I wrote in 1994-5, and mentioned briefly in my 2001 book.

Interaction distance is not measured primarily in terms of physical distance, but in terms of such dimensions as cost, risk, speed and convenience. It is related to notions of commodity and availability.

Goods that are available to us enrich our lives and, if they are technologically available, they do so without imposing burdens on us. Something is available in this sense if it has been rendered instantaneous, ubiquitous, safe, and easy. Borgmann p 41

In our time, things are not even regarded as objects, because their only important quality has become their readiness for use. Today all things are being swept together into a vast network in which their only meaning lies in their being available to serve some end that will itself also be directed towards getting everything under control. Lovitt

One of the key principles of ODP was distribution transparency - you can consume a service without knowing where anything is located. The service interface provides convenient access to something that might be physically located anywhere. Or even something that doesn't have a fixed existence, but is assembled dynamically from multiple sources to satisfy your request.
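
To make the idea concrete in present-day terms, here is a minimal sketch in TypeScript. Nothing like this appears in the ODP specifications or in our papers; the interface and class names are invented purely for illustration. The point is simply that the client depends on an interface, while the implementation behind it might be local, remote, or assembled on the fly from several sources.

```typescript
// Illustrative sketch only: FilmCatalogue, RemoteCatalogue and
// FederatedCatalogue are invented names, not part of any ODP standard.

interface Film {
  title: string;
  year: number;
}

// The client programs against this interface; nothing in it says
// where the implementation lives.
interface FilmCatalogue {
  findFilm(title: string): Promise<Film>;
}

// One possible binding: a remote service reached over HTTP.
class RemoteCatalogue implements FilmCatalogue {
  constructor(private baseUrl: string) {}
  async findFilm(title: string): Promise<Film> {
    const res = await fetch(`${this.baseUrl}/films?title=${encodeURIComponent(title)}`);
    return res.json();
  }
}

// Another binding: an answer assembled dynamically from several sources,
// returning the first catalogue that can satisfy the request.
class FederatedCatalogue implements FilmCatalogue {
  constructor(private sources: FilmCatalogue[]) {}
  async findFilm(title: string): Promise<Film> {
    for (const source of this.sources) {
      try {
        return await source.findFilm(title);
      } catch {
        // this source failed or had no match; try the next one
      }
    }
    throw new Error(`no source could satisfy the request for "${title}"`);
  }
}

// Client code is identical in either case - that is the transparency.
async function show(catalogue: FilmCatalogue): Promise<void> {
  const film = await catalogue.findFilm("The Third Man");
  console.log(`${film.title} (${film.year})`);
}
```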

As we noted at the time, this affects the relational economy in several ways. It may introduce new intermediary roles into the ecosystem, or alternatively it may allow some previously dominant intermediaries to be bypassed. Meanwhile, new value-adding services may become viable. Over the past twenty years there have been various standardization initiatives of this kind, often prefixed by the word Open. For example, Open Banking.

The example we used in our 1995 paper was video on demand (VoD). At that time there were three main methods for watching films: cinema, scheduled television or cable broadcast, and video rental. Video rental generally involved borrowing (and then rewinding and returning) VHS cassettes. DVDs were not introduced until 1996, and Netflix was founded in 1997.

Our analysis of VoD identified a delivery subsystem and a control subsystem, and sketched how these roles might be taken by some kind of collaboration between existing players (cable companies, phone companies). We also noted the organizational and commercial difficulties of implementing such a collaboration. As we now know, these difficulties were bypassed in the VoD case by the emergence of streaming services that combined control and delivery into a single platform. The technical configuration we outlined now looks horribly complicated, but the organizational and commercial issues remain relevant for other potential collaborative innovations.
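
For readers who want a flavour of that decomposition, the sketch below paraphrases the two roles as service interfaces. It is a present-day re-rendering, not the configuration from the 1995 paper, and all of the names are hypothetical.

```typescript
// Hypothetical names throughout; a paraphrase, not the 1995 configuration.

interface Session {
  sessionId: string;
  filmId: string;
  deliveryEndpoint: string; // e.g. a cable head-end or a CDN edge node
}

// The control subsystem handles selection, entitlement and session set-up.
interface ControlSubsystem {
  browseCatalogue(query: string): Promise<string[]>; // film identifiers
  authorise(customerId: string, filmId: string): Promise<Session>;
}

// The delivery subsystem just moves bits, once a session has been authorised.
interface DeliverySubsystem {
  startStream(session: Session): Promise<void>;
  stopStream(session: Session): Promise<void>;
}

// In 1995 these roles might have been taken by different companies
// (a phone company and a cable company, say), which is where the
// organizational difficulties came in. A streaming platform simply
// implements both interfaces behind a single brand.
class StreamingPlatform implements ControlSubsystem, DeliverySubsystem {
  async browseCatalogue(query: string): Promise<string[]> {
    return [`film-matching-${query}`]; // placeholder lookup
  }
  async authorise(customerId: string, filmId: string): Promise<Session> {
    return { sessionId: "s-001", filmId, deliveryEndpoint: "edge.example.net" };
  }
  async startStream(session: Session): Promise<void> { /* begin delivery */ }
  async stopStream(session: Session): Promise<void> { /* end delivery */ }
}
```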

And our analysis of interaction distance in relation to this example is still valid. In particular, we showed how VoD (in whatever technological form this might take) could significantly reduce the interaction distance between the film distributor and the consumer.

People often talk about digital transformation, and want to use this label for all kinds of random innovations. As I see it, the digital transformation in the video industry was largely on the production side. While the switch from VHS to DVD brought some minor benefits for the consumer, the real difference for the consumer came from the switch from rental to streaming, reducing interaction distance and bringing availability closer in space and time to the consumer. So I think a more meaningful label for this kind of innovation is service transformation.



Albert Borgmann, Technology and the Character of Contemporary Life (University of Chicago Press, 1984)

William Lovitt's introduction to Heidegger, The Question Concerning Technology

Christian Licoppe, ‘Connected’ Presence: The Emergence of a New Repertoire for Managing Social Relationships in a Changing Communication Technoscape (Environment and Planning D Society and Space, February 2004)

Ian G Macdonald and Richard Veryard, Modelling Business Relationships in a Non-Centralised Systems Environment. In A. Sölvberg et al. (eds.) Information Systems Development for Decentralized Organizations (Springer 1995)

Richard Veryard, Information Coordination (Prentice-Hall 1994) 

Richard Veryard, The Future of Distributed Services (December 2000)

Richard Veryard, Component-Based Business: Plug and Play (Springer 2001)

Richard Veryard and Ian G. Macdonald, EMM/ODP: A Methodology for Federated and Distributed Systems, in Verrijn-Stuart, A.A., and Olle, T.W. (eds) Methods and Associated Tools for the Information Systems Life Cycle (IFIP Transactions North-Holland 1994)

Wikipedia: ODP Reference Model

Monday, April 15, 2013

Business Architecture and Related Domains

The following post is an extract from my draft eBook Business Architecture Viewpoints, available from @Leanpub



There are some things I don't regard as part of the Business Architecture but as part of some other domain.

One reason for these exclusions is that I am trying to avoid scope-creep - putting everything that has (or should have) a good link to business into the Business Architecture. I regard it as the task of Enterprise Architecture to coordinate between Business Architecture, Solution Architecture, Technology Architecture and whatever other domains the enterprise might need.

This is why I am reluctant to add a Technology View to the Business Architecture. I am also unwilling to extend the Motivation View to include business case, because I regard the business case (despite its name) as belonging to the Solution Architecture. Documents are also part of the Solution Architecture.

Solution Architecture


Key Elements: Business Case (for Solution), Solution, System (including both Sociotechnical System and Software Application), Transformation, Use Case, Workflow.

Note that the Solution domain covers the whole sociotechnical solution space, not just the software application.

Note also that some frameworks (notably RM-ODP) define an enterprise viewpoint covering the purpose, scope and policies of a system. Because of the specific reference to a system, I do not regard the RM-ODP notion of enterprise viewpoint as equivalent to Business Architecture. It could be understood either as a business-oriented viewpoint within the Solution Architecture or as a mapping between the Business Architecture and the Solution Architecture.


Technology Architecture


Key Elements: Business Case (for Infrastructure or Product or Technology), Technical Commodity/Service, Technical Device/Mechanism

Within the Technology Architecture, there is a clear distinction between the abstract technology (for example "middleware", "SOA") and a specific technical product or platform (for example "IBM Websphere"). There is also a distinction between the technology-as-built and the technology-in-use.

Development Architecture


Key Elements: Project, Requirement, Development Lifecycle

Tom Mochal defines development architecture in a broad sense to include three major areas:

* The development life cycle and processes used to build business applications
   
* The application models that show the appropriate technical design that will best fit the business requirements

* The inventory and categorization of the business applications that exist within the organization today.

Physical Architecture / Environment


Key Elements: Building, Location, Equipment

Many IT people use such terms as Physical Architecture or Environment to refer to computer hardware. But I'm using the term to refer to all aspects of the physical environment, such as office buildings and physical equipment.

Where does Location belong? Traditionally this is regarded as part of the Business Architecture, but I believe it belongs somewhere else, along with buildings and working space and the kind of architecture associated with Le Corbusier and Frank Lloyd Wright. Ideally I'd want to call this the Physical Architecture, which would include a Geographical View or what Stewart Brand calls Site. I'd also include something I call Consilience.

Of course, location doesn't disappear in a global organization, but it doesn't dominate business processes in the way it once did, and is often merely a cost factor or speed factor. It is not geographical distance that matters any more, but abstract notions of distance, including commercial, semantic and cultural distance.


Friday, November 21, 2008

Progressive Design Constraints

When I wrote my piece on Post-Before-Processing, I wasn't thinking about how it might apply to the design process. So when I read Saul Caganoff's reply, on Progressive Data Constraints, my first reaction was, well that's interesting but completely different. But when I read Saul's piece again more slowly, I started to see a common pattern between designing an engineering system (using CAD) and designing a response to a complex unstructured set of events (using what first Vickers and then Checkland call an "appreciative system"). 

In both cases, there is a pitfall known as "jumping to conclusions" - seeking premature closure as a result of an inability to tolerate incompleteness, inconsistency or uncertainty. There is a related pitfall of psychological attachment - being unable to abandon or revise one's earlier decisions. One of the most important skills for the business analyst or systems designer is the ability to throw away his/her first attempt and start again, or to make radical revisions to an existing artefact. (I once built an advanced modelling class with a series of exercises designed to give students opportunities to practise this skill. I think classes like this are still pretty rare.)

There are two apparently opposite approaches to design. According to some design methodologies, we are supposed to start with a vague, generic and all-inclusive concept, and gradually add specific detail and refinement until we have something that we can implement. Work-in-progress should always be consistent and horizontally complete - it just lacks vertical completeness. Alternatively, we start with a bunch of conflicting and sometimes incoherent requirements, and try to find a way of satisfying as many of them as possible. Saul's description fits the second of these.
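
As a toy illustration of the second approach (my own sketch, not Saul's, with invented names and weights), imagine requirements with weights and pairwise conflicts, and a single greedy pass that keeps as much weight as it can:

```typescript
// Toy illustration only: real design work is never this tidy,
// which is rather the point of the post.

interface Requirement {
  id: string;
  weight: number; // how much we care about satisfying it
}

function selectRequirements(
  reqs: Requirement[],
  conflicts: Array<[string, string]> // pairs that cannot both be satisfied
): Requirement[] {
  const kept: Requirement[] = [];
  const clashesWithKept = (r: Requirement): boolean =>
    conflicts.some(([a, b]) =>
      (a === r.id && kept.some(k => k.id === b)) ||
      (b === r.id && kept.some(k => k.id === a)));

  // Consider the heaviest requirements first and drop anything that conflicts
  // with what has already been kept. Earlier decisions are never revisited -
  // exactly the "premature closure" risk described above.
  for (const r of [...reqs].sort((x, y) => y.weight - x.weight)) {
    if (!clashesWithKept(r)) kept.push(r);
  }
  return kept;
}
```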

Bringing the topic back to SOA at the end of his post, Saul indicates the relevance of this kind of thinking to SOA design-time governance. Perhaps because of my background in Open Distributed Processing (ODP), I always think of SOA in terms of open distributed systems, whose design is never complete, always open-ended. So design-time governance always spills over into run-time governance, or is it the other way around?

If we take the SOA project seriously, we should be building systems and services that will interoperate with third-party systems and services, using unstructured data and complex events from a broad range of heterogeneous sources, to be reused and repurposed in user contexts we haven't yet thought about. So we have to be prepared for the emergence of "undocumented features" or "anomalies" (or whatever we want to call them) - and these may emerge at any point in the lifecycle.

Of course in mission-critical or safety-critical systems, there are certain categories of system failure that cannot be permitted. And even for non-critical systems, there is a limit to the amount of buggy behaviour that users will tolerate. But we don't achieve an acceptable level of quality in any kind of complex service-oriented systems engineering by pretending we can design and test our way to perfection in a single step.

Friday, February 22, 2008

Software + Services

This week I visited the Microsoft campus in Reading, to hear some presentations on their Software-and-Services (S+S) strategy, and have a chat with Gianpaolo Carraro.

Phil Wainewright recently described Microsoft's S+S strategy as bunkum, and accused Gianpaolo of drinking too much Kool-Aid. Phil's point was that people didn't want to worry about buying software AND buying services.

However, Gianpaolo isn't really focusing on the commercial side of the equation. He spends most of his time talking about deployment. From this viewpoint, it makes sense in many situations to have a combination of stuff running on your own machines (called software) and stuff running on other people's machines (called services). Actually that's what most users have anyway - both corporate users and domestic consumers.

But isn't it all software anyway? And who cares where the stuff is being run? Perhaps all of the people care some of the time, and some of the people care all of the time. I care when it affects my service level. For example, there are some places where broadband is unavailable, or at least very expensive. So if I want to work on an aeroplane, I may need to make sure the relevant documents are physically loaded onto my laptop beforehand.

The original vision of open distributed processing (ODP) included not only location transparency but other forms of transparency including migration and relocation. This means I don't notice when stuff moves around. Suppose I'm working on a bunch of documents that are currently on the server. Imagine my laptop knows my travel plans, knows that I'm going to be on an aeroplane this afternoon, works out which documents I'm going to need, and quietly and efficiently moves them while I'm in mid-edit, without dropping a comma.
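
A hedged sketch of what such a policy might look like follows; the types and the one-week rule are invented for illustration, and nothing of the sort shipped in the products discussed here.

```typescript
// Purely illustrative: TravelPlan, Doc and the one-week rule are invented.

interface TravelPlan {
  offlineFrom: Date; // when connectivity is expected to disappear
}

interface Doc {
  id: string;
  lastOpened: Date;
  isLocal: boolean;
}

// Decide which remote documents to copy onto the laptop before going offline:
// here, anything touched in the last week that is not already held locally.
function documentsToPrefetch(plan: TravelPlan, docs: Doc[], now: Date): Doc[] {
  const dayMs = 24 * 60 * 60 * 1000;
  const goingOfflineSoon = plan.offlineFrom.getTime() - now.getTime() < dayMs;
  if (!goingOfflineSoon) return [];
  return docs.filter(d => !d.isLocal &&
    now.getTime() - d.lastOpened.getTime() < 7 * dayMs);
}
```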

I presume that Ray Ozzie, founder of Groove Networks and now Microsoft Chief Architect, understands this kind of requirement. He is pushing the S+S story hard.

But there is a conceptual problem with this semi-transparency - the fact that sometimes these aspects are visible and sometimes they aren't. ODP handles this by mandating several parallel viewpoints of a distributed system.

The CBDI method for service architecture and engineering (SAE) inherits this principle, although our viewpoints are not identical to the ODP ones. Meanwhile Microsoft has four perspectives, but these are defined in terms of process: Build, Run, Consume and Monetize.

When you want to think about deployment and location, you use one viewpoint; when you want to think about functionality and semantics, you use a different viewpoint; and when you want to think about commercial terms, costs and liabilities, you use a different one again.

I think this helps to explain the apparent disagreement between Gianpaolo and Phil. The S+S world looks very different from a commercial/organizational viewpoint and a deployment viewpoint. When I raised this with Gianpaolo, he made the valid point that these viewpoints cannot be totally isolated from one another. What happens from a deployment viewpoint inevitably has an impact upon the commercial viewpoint, and vice versa. Technological progress may help us reduce this impact, but it isn't going to disappear altogether any time soon.

But that doesn't mean the viewpoints simply merge into one unmanageable mess. The viewpoints are important precisely because they help us understand how things in one viewpoint relate to things in another viewpoint. And this in turn raises another important challenge. How are architects supposed to manage all this complexity? If you try to optimize the commercial/organizational arrangements alone, you may get unsatisfactory performance or service levels; if you try to optimize the physical deployment alone, you may not get the best commercial deal or organization structure; and if you try to optimize everything simultaneously, your head will explode. Gianpaolo's best advice at present is to do things at a very coarse level of granularity, which reduces the number of permutations to consider. But that's clearly not ideal.
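
A back-of-envelope calculation shows why coarse granularity helps (the arithmetic is mine, not Gianpaolo's): if each architectural element can be placed or sourced in k different ways, then n independently decided elements give k^n candidate configurations, so grouping elements into fewer, coarser blocks shrinks the space dramatically.

```typescript
// Back-of-envelope arithmetic: k placement options per element, n elements.
const configurations = (k: number, n: number): number => Math.pow(k, n);

console.log(configurations(3, 20)); // 3486784401 - about 3.5 billion fine-grained permutations
console.log(configurations(3, 5));  // 243 coarse-grained permutations
```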

The industry currently lacks decent tools to support this kind of architectural reasoning. We don't even have decent notations - you aren't going to get very far with colour-coded UML diagrams.

But what about enterprise SOA? Some people are working towards a world where all software is rendered as services - whether it is running on an enterprise server safely inside the firewall, or on a third-party server farm in Mumbai or Kiev. (By Day In Bollywood, By Night In The Ukraine.) Some of the same technical and conceptual issues arise here, but the terminology is different. S+S suggests that it only counts as a service if it is remote - this clashes with the enterprise SOA terminology.

Software-and-Services? The name may well generate some misunderstanding, especially if it is taken too literally, but that's probably true of any jargon. Not the name I'd have chosen, but then I'm not in charge of Microsoft's marketing strategy. Gianpaolo and his team have been working hard, producing some interesting material and examples, and the tools and techniques and future challenges coming out of this kind of work will undoubtedly be relevant to Enterprise SOA as well.


References

Gianpaolo Carraro: S+S: Real or have I drunk too much Kool-Aid? :) (Feb 2008)
Phil Wainewright: Microsoft Kool-Aid and the cloud
Gianpaolo Carraro: I think Phil drank the Kool-Aid too but he has not realized it yet :) (Feb 2008)
Ray Ozzie: MIX07 keynote, interview
Broadway Musical: "A Day in Hollywood / A Night in the Ukraine"
Reference Model for Open Distributed Processing (RM-ODP): Wikipedia, specification

Sunday, October 07, 2007

Grandpa's SOA

JJ complains about SOA misconceptions, including the widespread claim that "SOA is not new, people were doing SOA 30 years ago". And it's not just SOA that attracts claims like these. I meet people who claim they were doing BPM and workflow twenty years ago.

JJ believes we can date SOA ("as we know it today") to the appearance of XML-RPC in early 1998. If we define SOA and BPM in technological terms, involving the use of a particular set of technologies, then it is certainly difficult to see how SOA and BPM could predate these technologies.

But if we define SOA and BPM in architectural terms, involving certain styles and patterns, then it is quite possible that some people were experimenting with these styles and patterns long before the associated technologies appeared. Indeed, according to writers like Lewis Mumford, this kind of pre-technological experimentation may be a vital step in the development of new technologies.

To take an analogy from electronic music, I am quite comfortable with the idea that Karlheinz Stockhausen and Delia Derbyshire were producing synthesized music before synthesizers existed. (See my post on Art and the Enterprise.)

But before modern synthesizers existed, the field of electronic music involved a very small number of brilliant composers (and a slightly larger number of not-so-brilliant composers), devoting enormous effort and expense to produce a very small amount of music (of varying quality).

Likewise, there were no doubt a very small number of brilliant software designers thirty years ago, doing amazing stuff with CICS and PL/1. Okay, so there wasn't a critical mass of network-accessible reusable IT assets then, but there wasn't one in 1998 either.

Mass adoption of industrial-strength SOA has only been feasible in the last few years. Thirty years ago, we didn't use the language of service-orientation. By the early 1990s there were lots of people in the ODP world looking beyond CORBA and talking seriously about services. And by the late 1990s there were lots of vendors trying to talk up the SOA vision.

The trouble is that when people talk about SOA, they typically lump together a load of different stuff. Some of this stuff was possible with CICS and PL/1 if you were very clever and your employer had deep pockets; some of it is possible today with the latest web service platforms; and some of it really isn't available yet.

How much of SOA is new is an impossible and subjective question. (See my post on the Red Queen Effect.) SOA champions spend half their time explaining how radical SOA could be, and half their time reassuring people how tried-and-tested and safe and based on sound principles it all is. So maybe grandpa is right some of the time, after all.

Saturday, March 19, 2005

Component-Based Business

My book on the Component-Based Business was published in 2001. At that time, Component-Based Software Engineering (CBSE) was all the rage (kind of) and I wanted to demonstrate that the same principles could be applied to the design and construction of a business as to the design and construction of software.

At that time, there was a fundamental tension in the CBSE world, which I can very crudely characterize as follows: On the one hand, there were OO people who, when they thought of components at all, saw them as glorified objects. On the other hand there were the ODP/CORBA people who thought of components as inherently distributed service packages, but who were often prevented from implementing interesting and viable solutions by the prevailing technological state-of-the-art. (I did say this was a crude characterization, I know there are loads of exceptions, but still ...)

My own alignment was always with the service-oriented rather than the object-oriented. (Historical note: I worked on ANSA/ODP in the early 1990s, participating in the Enterprise Computing Project within the ODSA programme.) In my book, I talk about the fundamental principles of decomposing business into independent chunks, from a component perspective, but I also devote a great deal of space to the (sociotechnical) relationships between the chunks, and to the emergent properties of the whole ecosystem that is composed of these chunks.

In the past few years, there have been some huge shifts, both in the technology itself and in the way people are thinking about the technology. People are starting to become much more comfortable in thinking about service-orientation, and the technology is becoming more credible. For my part, I have started to talk less about the Component-Based Business and more about the Service-Based Business, but I see this as a shift in emphasis rather than a fundamental shift in perspective.

Elsewhere, some people are starting to take much more interest in the potential business impact of web services and SOA. (In terms of RM-ODP, this is a perfectly valid separation of concerns: other people can worry about the Technology and Engineering viewpoints, leaving me to devote my attention to the Enterprise and Information viewpoints.) As one indicator of this, we can see people starting to talk seriously about the component-based business as well as the service-based business. For example, IBM has been promoting a method called Component Business Modeling (CBM), which serves as a front-end to its Service-Oriented Modeling and Architecture (SOMA). However, based on the materials I have seen, I don't think that IBM's CBM represents the full power of the component-based business approach. (See separate posting on my Software Industry Analysis weblog.)

In my view, the main challenges of the component-based (/service-based) business include those of governance. A component-based approach must have a clear strategy for managing complexity, not just denying or suppressing complexity, and is probably going to draw ideas from complex systems engineering (systems, not software). This calls, among other things, for new kinds of modelling and new approaches to architecture.

There are two reasons why it might be a good time for me to produce an updated edition of my book.
  • This topic is now getting much hotter, and I think people may be more ready to accept some of my more radical suggestions.
  • I am now in a position to update some of the material, reference more recent technological opportunities and threats, and introduce a lot of practical business examples.
Or perhaps I should devote my energies to writing the sequel, with entirely new material, based on my more recent work. Please let me know what you think.

Monday, June 21, 2004

Beyond Binary Logic

Traditionally, business information systems have operated with a binary (Boolean) logic. For example, some business processing may depend on determining one of two alternative states: e.g. "customer-on-file" or "customer-not-known".

Open distributed processing (ODP), on which SOA is based, represented a radical challenge to this binary logic – a challenge which the IS world has still not properly embraced. With remote data, there is always a third alternative state: e.g. "customer-file-unavailable".
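
In code, the difference is between a boolean and something like the following three-way result (a sketch of my own; the type and field names are illustrative):

```typescript
// Sketch only: the type and field names are illustrative.
type CustomerLookup =
  | { kind: "on-file"; customerId: string }
  | { kind: "not-known" }
  | { kind: "file-unavailable"; retryAfterMs: number };

function handle(result: CustomerLookup): void {
  switch (result.kind) {
    case "on-file":
      // a definite answer: proceed with the business process
      break;
    case "not-known":
      // also a definite answer: treat as a new customer
      break;
    case "file-unavailable":
      // not an answer at all: defer, retry, or make a provisional decision
      break;
  }
}
```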

In distributed systems, perhaps the most obvious reason for the third alternative state was initially the unreliability of telecommunications links. Although the internet acts as a fairly robust medium to link two or more nodes, it also acts as a transmission channel for malware that can take out individual nodes for extended periods. Over the years, system designers have thought of ingenious ways to handle this so-called exception.

But ODP doesn’t just involve geographic distribution – it also distributes over heterogeneous technologies and systems, with possible ontological, semantic and pragmatic impedance.

Trust is an important example. Many systems are designed to divide individuals (customers, passengers, business partners) definitively into those that can be trusted and those that cannot. Mechanisms such as single sign-on and hierarchical trust are called upon to impose binary notions of trust across multiple organizations. Binary trust attempts to suppress the possibility of doubt as to whether someone can/should be trusted.

But mechanisms for trust and security are always vulnerable to false positives (Type 2 errors) and false negatives (Type 1 errors). Properly managing these false positives and false negatives entails reintroducing the possibility of doubt. We should generally adopt a critical stance in relation to simplistic binary trust mechanisms (Type 3 errors).
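
One way of reintroducing that doubt, sketched rather than prescribed (the thresholds and names are invented), is to leave an explicit middle band between acceptance and rejection:

```typescript
// Sketch only: thresholds and names are invented for illustration.
type TrustDecision = "trusted" | "not-trusted" | "uncertain";

function assess(score: number, acceptAbove = 0.8, rejectBelow = 0.3): TrustDecision {
  if (score >= acceptAbove) return "trusted";
  if (score <= rejectBelow) return "not-trusted";
  // The middle band is where doubt lives: ask for more evidence,
  // escalate to a human, or grant only limited access.
  return "uncertain";
}
```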

Thus in the open distributed world of SOA, information may always be provisional rather than absolute. For certain purposes, it may be appropriate to construct (impose) a context where closure is viable. With tongue firmly in cheek, we call these Fortresses of Certainty. Outside these fortresses is the land of Maybe, where SOA plays free. 

Related post: Complexity and Attenuation (June 2004)