Showing posts with label deperimeterization. Show all posts

Thursday, May 19, 2022

Enterprise as Noun

@tetradian spots a small, subtle yet very welcome sentence in the latest version of TOGAF. 

"An enterprise may include partners, suppliers, and customers as well as internal business units."

Although similar statements can be found in earlier versions of TOGAF, and the Open Group (which produces TOGAF) has been talking about the "extended enterprise" and "boundarylessness" for over twenty years, these ideas have often seemed to take a back seat. So let's hope that the launch of TOGAF 10 will trigger a new emphasis on the "extended enterprise". 

But why is this so difficult? It seems people are much more comfortable thinking about "the enterprise" or "the organization" as a fixed thing with a clearly defined perimeter. An enterprise is assumed to be a legal entity, with a dedicated workforce, one or more bank accounts and perhaps even a stock exchange listing. 

Among other things, this encourages a perimeter-based approach to risk and security, where the primary focus is to control events crossing the perimeter in order to protect and manage what is inside. Companies often seek to export risk and uncertainty onto their customers or suppliers.

Alternatives to the fortress model of security are known as de-perimeterization. The aptly named Jericho Forum was set up by the Open Group in 2004 to promote this idea; it has since been merged into the Open Group's Security Forum.

Instead of enterprise as noun, what about enterprise as verb, thinking of enterprise simply as a way of being enterprising? This would allow us to consider transient collaborations as well as permanent legal entities. Businesses are sometimes described as "going concerns", and this phrase shifts our attention from the identity of the business to questions of activity, viability and intentionality.

  • Activity - What is going on?
  • Viability - Is it going? Is it going to go on going?
  • Intentionality - Where is it supposed to be going? Whose concern is it?

With this way of talking about enterprise, does it still make sense to talk about inside and outside, what belongs to the organization and what doesn't? I think it probably does, as long as we are careful not to take these terms too literally. Enterprises are assemblages of knowledge, power, responsibility, etc., and these elements are never going to be distributed uniformly. So even if we don't have a strict line dividing the inside from the outside, we may be able to detect a gradient from inwards to outwards, and there may be opportunities to alter the gradient and make some strategic improvements, which we might frame in terms of agility or requisite variety. Hence Power to the Edge.

 


Related posts: Dividing Risk (April 2005), Jericho (May 2005), Power to the Edge (December 2005), Boundary, Perimeter, Edge (March 2007), Outside-In Architecture (August 2007), Ecosystem SOA (June 2010)

Wednesday, January 28, 2009

Peanut Ban as Perimeter Security

Bruce Schneier praises A Rational Response to Peanut Allergies and Children.

Many schools attempt to enforce a peanut ban. My son isn't allowed to take peanut butter sandwiches for lunch. In software terms, this is equivalent to perimeter security.

A better way of protecting children with allergies of all kinds is to ensure that the children and their carers are careful, and that they can deal promptly with any accidental ingestion. This is more robust, because it is not dependent upon total reliability of the perimeter defence. It is also more flexible, because it deals with a broader range of allergies (not just peanuts) without insisting on exclusion of every possible allergen.

In software terms, this is equivalent to deperimeterization.

Friday, August 01, 2008

Faithful representation

Systems people (including some SOA people and CEP people and BPM people) sometimes talk as if a system were supposed to be a faithful representation of the real world.

This mindset leads to a number of curious behaviours.

Firstly, ignoring the potential differences between the real world and its system representation, treating them as if they were one and the same thing. For example, people talking about "Customer Relationship Management" when they really mean "Management of Database Records Inaccurately and Incompletely Describing Customers". Or referring to all kinds of system objects as "Business Objects". Or equating a system workflow with "The Business Process".

Secondly, asserting the primacy of some system ontology because "That's How the Real World Is Structured". For example, the real world is made up of "objects" or "processes" or "associations", therefore our system models ought to be made of the same things.

Thirdly, getting uptight about any detected differences between the real world and the system world, because there ought not to be any differences. Rigid data schemas and obsessive data cleansing, to make sure that the system always contains only a single version of the truth.

Fourthly, confusing the stability of the system world with the stability of the real world. The basic notion of "Customer" doesn't change (hum), so the basic schema of "Customer Data" shouldn't change either. (To eliminate this confusion you may need two separate information models - one of the "real world" and one of the system representation of the real world. There's an infinite regress there if you're not careful, but we won't go there right now.)
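The two-model idea can be made concrete in code. Here is a minimal sketch, in Python, of keeping the model of the real-world party separate from the model of the system's record of that party; all class and field names are illustrative, not from any actual CRM.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Customer:
    """Model of the real-world party, as best we understand it."""
    name: str
    address: str

@dataclass
class CustomerRecord:
    """Model of the system's representation of that party.
    Carries provenance, so the two can drift without surprise."""
    name: str
    address: str
    source: str            # where this record came from
    recorded_at: datetime  # when the representation was captured
    verified: bool         # has anyone checked it against the world?

record = CustomerRecord("Ann Example", "12 High St", source="web form",
                        recorded_at=datetime(2008, 8, 1), verified=False)
# The record describes a customer; it is not the customer.
```

Keeping provenance fields on the record makes the gap between representation and reality visible, instead of pretending the gap doesn't exist.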

In the Complex Event world, Tim Bass and Opher Etzion have picked up on a simple situation model of complex events, in which events (including derived, composite and complex events) represent the "situation". [Correction: Tim's "simple model" differs from Opher's in some important respects. See his later post The Secret Sauce is the Situation Models, with my comment.] This is fine as a first approximation, but what neither Opher nor Tim mentions is something I regard as one of the more interesting complexities of event processing, namely that events sometimes lie, or at least fail to tell the whole truth. So our understanding of the situation is mediated through unreliable information, including unreliable events. (This is something that has troubled philosophers for centuries.)

From a system point of view, there is sometimes little to choose between unreliable information and basic uncertainty. If we are going to use complex event processing for fraud detection or anything like that, it would make sense to build a system that treated some class of incoming events with a certain amount of suspicion. You've "lost" your expensive camera have you Mr Customer? You've "found" weapons of mass destruction in Iraq have you Mr Vice-President?

One approach to unreliable input is some kind of quarantine and filtering. Dodgy events are recorded and analyzed, and then if they pass some test of plausibility and coherence they are accepted into the system. But this approach can produce some strange effects and anomalies. (This makes me think of perimeter security, as critiqued by the Jericho Forum. I guess we could call this approach "perimeter epistemology". The related phenomenon of Publication Bias refers to the distortion resulting from analysing data that pass some publication criterion while ignoring data that fail this criterion.)
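The quarantine-and-filter approach might be sketched as follows; this is a toy, with illustrative names (Event, plausible), not a fragment of any real CEP product.

```python
# Toy sketch of event quarantine: incoming events carry a trust score
# for their source, and only events passing a crude plausibility test
# are admitted into the system.

from dataclasses import dataclass

@dataclass
class Event:
    kind: str
    amount: float
    source_trust: float  # 0.0 (unknown) .. 1.0 (fully trusted)

def plausible(e: Event) -> bool:
    # Crude test: large claims from low-trust sources get quarantined.
    return e.source_trust >= 0.5 or e.amount < 1000

accepted, quarantined = [], []
for e in [Event("claim", 5000, 0.2), Event("claim", 200, 0.2),
          Event("claim", 5000, 0.9)]:
    (accepted if plausible(e) else quarantined).append(e)

# Note the anomaly described above: everything the system subsequently
# "knows" is conditioned on having passed this perimeter test.
```

The final comment is the "perimeter epistemology" point in miniature: downstream analysis sees only the accepted events, which is exactly the distortion that Publication Bias describes.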

In some cases, we are going to have to unpack the simple homogeneous notion of "The Situation" into a complex situation awareness, where a situation is constructed from a pile of unreliable fragments. Tim has strong roots in the Net-Centric world, and I'm sure he could say more about this than me if he chose.

Thursday, March 27, 2008

Heathrow Terminal 5

Going Live

"... technical difficulties ... staff familiarisation ... teething problems ... technical defect ... brief system fault ... a few minor problems ... time to bed down ..."
A complex bundle of services relocated into a purpose-built space. Mutual recriminations between the collaborating parties (airport and airline). What lessons for SOA design, implementation and deployment here?

Identity, Privacy and Security

"BAA has had to drop controversial plans to fingerprint domestic passengers after the information commissioner expressed concerns about the move. The airport operator said fingerprinting was needed for border security. Instead it will take photographs while the proposal is discussed with the commissioner's office."
Some aspects of the design not approved by key stakeholders and regulators. Apparently the reason for this controversial requirement is that domestic and international travellers are mixed into the same space, rather than being kept separate, and this creates additional security risks. It was assumed that technology (fingerprinting or photography) would substitute for physical barriers. Perhaps BAA chiefs have been reading articles on "deperimeterization".

Meanwhile, one technical solution is quickly substituted for another, although I'm not sure I understand why taking (and presumably transmitting and storing) photographs of passengers is any less of an invasion of privacy than taking fingerprints.

Sources

BBC News, March 27th 2008

Related Posts 

Service-Oriented Security (August 2006)
Travel Hopefully (March 2008)
For Whose Benefit Are Airports Designed? (January 2013)

Saturday, July 09, 2005

Purpose-Agnostic

Sean McGrath (Propylon) regards purpose-agnosticism as one of the really useful bits of SOA. He refers to a posting he made to the Yahoo SOA group in June 2003, where he wrote:
The real trick with EAI I think, is to get purpose-agnostic data representations of business level concepts like person, invoice, bill of lading etc., flowing around processing nodes.

The purpose-agnostic bit is the most important bit. OO is predicated on a crystal ball - developers will have sufficient perfect foresight to expose everything you might need via an API.

History has shown that none of us have such crystal balls.

Now if I just send you the data I hold, and you just send me the data you hold - using as thin an API as we can muster - we don't have to doubleguess each other.
Propylon has been implementing this idea in some technologies for the Irish Government.
However, Sean believes that purpose agnosticism can only be pushed so far. He cites the service example “test for eligibility for free telephone allowance”, which is evidently purpose-specific. Thus there are some areas where very prescriptive messages (which Sean calls "perlocutionary") are more appropriate.

Let's push this example back a little. Why does B need to know if a particular customer is entitled to a free telephone allowance? Only because it alters the way B acts in relation to this customer. We should be able to derive what B needs to know from (a model of) the required variety of B's behaviour. Let's suppose B delivers a telephone to the customer and also produces an invoice. And let's suppose the free telephone allowance affects the invoice but not the delivery. Then we can decompose B into B1 and B2, where only B2 needs to know about the free telephone allowance, allowing us to increase the decoupling between B1 and A. Furthermore, the invoice production may draw upon a range of possible allowances and surcharges, of which the free telephone allowance is just one instance.
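The decomposition could be sketched like this; the function names, prices and allowance strings are all hypothetical, chosen only to illustrate the split between B1 and B2.

```python
# B1: delivery needs no eligibility data at all, so it stays fully
# decoupled from A's allowance rules.
def deliver_telephone(customer_id: str) -> str:
    return f"telephone dispatched to {customer_id}"

# B2: invoicing applies whatever allowances and surcharges are in
# force; "free telephone" is just one possible entry in the list.
def produce_invoice(customer_id: str, allowances: list) -> float:
    base_price = 50.0
    if "free telephone" in allowances:
        base_price = 0.0
    return base_price

deliver_telephone("C42")                    # no allowance knowledge
produce_invoice("C42", ["free telephone"])  # allowance applied
produce_invoice("C42", [])                  # full price
```

Only produce_invoice consumes the eligibility fact, so the coupling between A and the rest of B disappears, which is the point of the decomposition.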

On a cursory view, it appears that the technologies Sean has been building for the Irish Government support a programme of deconfliction - using some aspects of SOA (or whatever you want to call it) to separate sociotechnical systems into loosely coupled subsystems. If this is done right, it should deliver both IT benefits (productivity, reuse) and business benefits. This is all good stuff.


Joined-Up Services - Trust and Semantics

But the deconfliction agenda leads to a new set of questions about joined-up services - what emerges when the services interact, sometimes in complex ways? How do you impose constraints on the possible interactions? For example, there may be system-level requirements relating to privacy and trust.

One aspect of trust is that you only give data to people who aren't going to abuse it. Look at the current fuss in the US about data leakage and identity theft involving such agencies as ChoicePoint. The problem with Helland's notion of trust is that it deals with people who come into your systems to steal data, but doesn't deal with people who steal data elsewhere. So you have to start thinking about protecting data (e.g. by encryption) rather than protecting systems. (This is an aspect of what the Jericho Forum calls deperimeterization.) Even without traditional system integration, an SOA solution such as PSB will presumably still need complex trust relationships (to determine who gets the key to which data), but this won't look like the Helland picture.
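Protecting the data rather than the system might look something like this toy sketch, in which each record gets its own key and the trust model decides who receives which key. The XOR cipher here is a stand-in for a real authenticated cipher (such as AES-GCM) and must not be used for actual security; all names are illustrative.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Stand-in for real encryption - do NOT use XOR in practice.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

record = b"customer: Ann Example, allowance: free telephone"
key = secrets.token_bytes(32)
ciphertext = xor(record, key)

# The perimeter can now leak the ciphertext without exposing the
# record; only parties granted the key, per the trust model, can
# read it.
key_grants = {"revenue-agency": key}  # trusted partner receives key
recovered = xor(ciphertext, key_grants["revenue-agency"])
```

The complexity moves from guarding the system boundary to managing the key_grants table, which is exactly the "complex trust relationships" the paragraph above anticipates.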

A further difficulty comes with the semantics of the interactions. If we take away the assumption that everyone in the network uses the same reference model (universal ontology), then we have to allow for translation/conversion between local reference models. As Sean points out elsewhere, this is far from a trivial matter.
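At its very simplest, translation between local reference models is a field mapping; the agency names and field names below are hypothetical. Real mappings are rarely this clean, since units, granularity and meaning can all differ between models, which is why Sean calls it far from trivial.

```python
# Hypothetical record in agency A's local reference model
agency_a_person = {"surname": "Murphy", "dob": "1950-03-14"}

# Mapping from agency A's field names to agency B's
FIELD_MAP = {"surname": "family_name", "dob": "date_of_birth"}

def to_agency_b(record: dict) -> dict:
    return {FIELD_MAP[k]: v for k, v in record.items()}

to_agency_b(agency_a_person)
# {'family_name': 'Murphy', 'date_of_birth': '1950-03-14'}
```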


Technology Adoption

My expectation is that the deployment of technologies such as the Public Service Broker will meet organizational resistance to the extent that certain kinds of problem emerge out of the business requirements. In my role as a technology change consultant, I am particularly concerned with how SOA and related technologies are adopted, and how that adoption is aligned with the business strategy of the target organization. I invite readers of this blog to tell me (in confidence) about the adoption of SOA in their organizations, or their customers' organizations, and the specific barriers to adoption they have encountered.


See also

Collaboration and Context (January 2006)
Context and Purpose (February 2006)
Context and Presence (Category)