
Wednesday, July 29, 2020

Information Advantage (not necessarily) in Air and Space

Some good down-to-earth points from #ASPC20 @airpowerassn 's Air and Space Power Conference earlier this month. Although the material was aimed at a defence audience, much of the discussion is equally relevant to civilian and commercial organizations interested in information superiority (US) or information advantage (UK).

Professor Dame Angela McLean, who is the Chief Scientific Adviser to the MOD, defined information advantage thus:

The credible advantage gained through the continuous, decisive and resilient employment of information and information systems. It involves exploiting information of all kinds to improve every aspect of operations: understanding, decision-making, execution, assessment and resilience.

She noted the temptation for the strategy to jump straight to technology (technology push); the correct approach is to set out ambitious, enduring capability outcomes (capability pull), although this may be harder to communicate. Nevertheless, technology push may make sense in those areas where technologies could contribute to multiple outcomes.

She also insisted that it was not enough just to have good information, it was also necessary to use this information effectively, and she called for cultural change to drive improved evidence-based decision-making. (This chimes with what I've been arguing myself, including the need for intelligence to be actioned, not just actionable.)

In his discussion of multi-domain integration, General Sir Patrick Sanders reinforced some of the same points.
  • Superiority in information (is) critical to success
  • We are not able to capitalise on the vast amounts of data our platforms can deliver us, as they are not able to share, swap or integrate data at a speed that generates tempo and advantage
  • (we need) Faster and better decision making, rooted in deeper understanding from all sources and aided by data analytics and supporting technologies

See my previous post on Developing Data Strategy (December 2019) 


Professor Dame Angela McLean, Orienting Defence Research to anticipate and react to the challenges of a future information-dominated operational environment (Video)

General Sir Patrick Sanders, Cohering Joint Forces to deliver Multi Domain Integration (Air and Space Power Conference, 15 July 2020) (Video, Official Transcript)

For the full programme, see https://www.airpower.org.uk/air-space-power-conference-2020/programme/

Saturday, December 07, 2019

Developing Data Strategy

The concepts of net-centricity, information superiority and power to the edge emerged out of the US defence community about twenty years ago, thanks to some thought leadership from the Command and Control Research Program (CCRP). One of the routes of these ideas into the civilian world was through a company called Groove Networks, which was acquired by Microsoft in 2005 along with its founder, Ray Ozzie. The Software Engineering Institute (SEI) provided another route. And from the mid 2000s onwards, a few people were researching and writing on edge strategies, including Philip Boxer, John Hagel and myself.

Information superiority is based on the idea that the ability to collect, process, and disseminate an uninterrupted flow of information will give you operational and strategic advantage. The advantage comes not only from the quantity and quality of information at your disposal, but also from processing this information faster than your competitors and/or fast enough for your customers. TIBCO used to call this the Two-Second Advantage.

And by processing, I'm not just talking about moving terabytes around or running up large bills from your cloud provider. I'm talking about enterprise-wide human-in-the-loop organizational intelligence: sense-making (situation awareness, model-building), decision-making (evidence-based policy), rapid feedback (adaptive response and anticipation), organizational learning (knowledge and culture). For example, the OODA loop. That's my vision of a truly data-driven organization.
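
To make that loop concrete, here is a minimal sketch in Python (all names and thresholds are invented for illustration, not taken from any framework) of sense-making, decision-making, action and learning wired together as an explicit feedback cycle.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Observation:
    source: str
    payload: dict

@dataclass
class OODALoop:
    """Toy observe-orient-decide-act loop; illustrative only."""
    model: dict = field(default_factory=dict)          # shared situation model
    lessons: List[str] = field(default_factory=list)   # organizational learning

    def observe(self, observations: List[Observation]) -> List[Observation]:
        return [o for o in observations if o.payload]   # drop empty signals

    def orient(self, observations: List[Observation]) -> None:
        for o in observations:                           # sense-making: update the model
            self.model[o.source] = o.payload

    def decide(self, policy: Callable[[dict], str]) -> str:
        return policy(self.model)                        # evidence-based decision

    def act(self, decision: str) -> str:
        outcome = f"executed: {decision}"                # in reality, a business action
        self.lessons.append(outcome)                     # feed the outcome back into learning
        return outcome

# Usage: one pass round the loop
loop = OODALoop()
obs = [Observation("web", {"visits": 1200}), Observation("stores", {"footfall": 300})]
loop.orient(loop.observe(obs))
decision = loop.decide(lambda m: "increase stock" if m["web"]["visits"] > 1000 else "hold")
print(loop.act(decision))
```

The point of the sketch is that the loop, the shared situation model and the learning record are all explicit, human-in-the-loop artefacts, not just data pipelines.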

There are four dimensions of information superiority which need to be addressed in a data strategy: reach, richness, agility and assurance. I have discussed each of these dimensions in a separate post:
  • Data Strategy - Reach (December 2019)
  • Data Strategy - Richness (December 2019)
  • Data Strategy - Agility (December 2019)
  • Data Strategy - Assurance (December 2019)

Philip Boxer, Asymmetric Leadership: Power to the Edge

Leandro DalleMule and Thomas H. Davenport, What’s Your Data Strategy? (HBR, May–June 2017) 

John Hagel III and John Seely Brown, The Agile Dance of Architectures – Reframing IT Enabled Business Opportunities (Working Paper 2003)

Vivek Ranadivé and Kevin Maney, The Two-Second Advantage: How We Succeed by Anticipating the Future--Just Enough (Crown Books 2011). Ranadivé was the founder and former CEO of TIBCO.

Richard Veryard, Building Organizational Intelligence (LeanPub 2012)

Richard Veryard, Information Superiority and Customer Centricity (Cutter Business Technology Journal, 9 March 2017) (registration required)

Wikipedia: CCRP, OODA Loop, Power to the Edge

Related posts: Microsoft and Groove (March 2005), Power to the Edge (December 2005), Two-Second Advantage (May 2010), Enterprise OODA (April 2012), Reach Richness Agility and Assurance (August 2017)

Wednesday, December 04, 2019

Data Strategy - Assurance

This is one of a series of posts looking at the four key dimensions of data and information that must be addressed in a data strategy - reach, richness, agility and assurance.



In previous posts, I looked at Reach (the range of data sources and destinations), Richness (the complexity of data) and Agility (the speed and flexibility of response to new opportunities and changing requirements). Assurance is about Trust.

In 2002, Microsoft launched its Trustworthy Computing Initiative, which covered security, privacy, reliability and business integrity. If we look specifically at data, this means two things.
  1. Trustworthy data - the data are reliable and accurate.
  2. Trustworthy data management - the processor is a reliable and responsible custodian of the data, especially in regard to privacy and security.
Let's start by looking at trustworthy data. To understand why this is important (both in general and specifically to your organization), we can look at the behaviours that emerge in its absence. One very common symptom is the proliferation of local information. If decision-makers and customer-facing staff across the organization don't trust the corporate databases to be complete, up-to-date or sufficiently detailed, they will build private spreadsheets, to give them what they hope will be a closer version of the truth.

This is of course a data assurance nightmare - the data are out of control, and it may be easier for hackers to get the data out than it is for legitimate users. And good luck handling any data subject access request!

But in most organizations, you can't eliminate this behaviour simply by telling people they mustn't. If your data strategy is to address this issue properly, you need to look at the causes of the behaviour and understand what level of reliability and accessibility you have to give people before they will be willing to rely on your version of the truth rather than theirs.

DalleMule and Davenport have distinguished two types of data strategy, which they call offensive and defensive. Offensive strategies are primarily concerned with exploiting data for competitive advantage, while defensive strategies are primarily concerned with data governance, privacy and security, and regulatory compliance.

As a rough approximation then, assurance can provide a defensive counterbalance to the offensive opportunities offered by reach, richness and agility. But it's never quite as simple as that. A defensive data quality regime might install strict data validation, to prevent incomplete or inconsistent data from reaching the database. In contrast, an offensive data quality regime might install strict labelling, with provenance data and confidence ratings, to allow incomplete records to be properly managed, enriched if possible, and appropriately used. This is the basis for the NetCentric strategy of Post Before Processing.
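
As a rough sketch of the difference (Python, with hypothetical field names): the defensive regime rejects an incomplete record outright, while the offensive regime accepts it, labels it with provenance and a confidence rating, and leaves it available for later enrichment.

```python
from datetime import datetime, timezone

REQUIRED_FIELDS = {"customer_id", "amount", "bank_account"}   # hypothetical schema

def defensive_ingest(record: dict) -> dict:
    """Strict validation: incomplete records never reach the database."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"rejected, missing fields: {sorted(missing)}")
    return record

def offensive_ingest(record: dict, source: str) -> dict:
    """Post before processing: accept the record, label it, enrich it later."""
    missing = REQUIRED_FIELDS - record.keys()
    return {
        **record,
        "_provenance": {"source": source,
                        "ingested_at": datetime.now(timezone.utc).isoformat()},
        "_confidence": 1.0 - 0.25 * len(missing),   # crude confidence rating
        "_missing": sorted(missing),                # flagged for later enrichment
    }

partial = {"customer_id": "C42", "amount": 950_000}       # no bank_account yet
print(offensive_ingest(partial, source="branch-upload"))  # accepted and labelled
# defensive_ingest(partial)  # would raise ValueError and lose the record
```

The defensive version protects the transaction path; the offensive version protects the aggregate picture. Most organizations need both, applied to different flows.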

Because of course there isn't a single view of data quality. If you want to process a single financial transaction, you obviously need to have a complete, correct and confirmed set of bank details. But if you want aggregated information about upcoming financial transactions, you don't want any large transactions to be omitted from the total because of a few missing attributes. And if you are trying to learn something about your customers by running a survey, it's probably not a good idea to limit yourself to those customers who had the patience and loyalty to answer all the questions.

Besides data quality, your data strategy will need to have a convincing story about privacy and security. This may include certification (e.g. ISO 27001) as well as regulation (GDPR etc.) You will need to have proper processes in place for identifying risks, and ensuring that relevant data projects follow privacy-by-design and security-by-design principles. You may also need to look at the commercial and contractual relationships governing data sharing with other organizations.

All of this should add up to establishing trust in your data management - reassuring data subjects, business partners, regulators and other stakeholders that the data are in safe hands. And hopefully this means they will be happy for you to take your offensive data strategy up to the next level.

Next post: Developing Data Strategy



Leandro DalleMule and Thomas H. Davenport, What’s Your Data Strategy? (HBR, May–June 2017)

Richard Veryard, Microsoft's Trustworthy Computing (CBDI Journal, March 2003)

Wikipedia: Trustworthy Computing

Tuesday, December 03, 2019

Data Strategy - Agility

This is one of a series of posts looking at the four key dimensions of data and information that must be addressed in a data strategy - reach, richness, agility and assurance.



In previous posts, I looked at Reach, which is about the range of data sources and destinations, and Richness, which is about the complexity of data. Now let me turn to Agility - the speed and flexibility of response to new opportunities and changing requirements.

Not surprisingly, lots of people are talking about data agility, including some who want to persuade you that their products and technologies will help you to achieve it. Here are a few of them.
  • "Data agility is when your data can move at the speed of your business. For companies to achieve true data agility, they need to be able to access the data they need, when and where they need it." (Pinckney)
  • "Collecting first-party data across the customer lifecycle at speed and scale." (Jones)
  • "Keep up with an explosion of data. ... For many enterprises, their ability to collect data has surpassed their ability to organize it quickly enough for analysis and action." (Scott)
  • "How quickly and efficiently you can turn data into accurate insights." (Tuchen)
But before we look at technological solutions for data agility, we need to understand the requirements. The first thing is to empower, enable and encourage people and teams to operate at a good tempo when working with data and intelligence, with fast feedback and learning loops.

Under a trimodal approach, for example, pioneers are expected to operate at a faster tempo, setting up quick experiments, so they should not be put under the same kind of governance as settlers and town planners. Data scientists often operate in pioneer mode, experimenting with algorithms that might turn out to help the business, but often don't. Obviously that doesn't mean zero governance, but appropriate governance. People need to understand what kinds of risk-taking are accepted or even encouraged, and what should be avoided. In some organizations, this will mean a shift in culture.

Beyond trimodal, there is a push towards self-service ("citizen") data and intelligence. This means encouraging and enabling active participation from people who are not doing this on a full-time basis, and may have lower levels of specialist knowledge and skill.

Besides knowledge and skills, there are other important enablers that people need to work with data. They need to be able to navigate and interpret, and this calls for meaningful metadata, such as data dictionaries and catalogues. They also need proper tools and platforms. Above all, they need an awareness of what is possible, and how it might be useful.
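
Purely as an illustration (the fields are invented, not taken from any particular catalogue product), a minimal catalogue entry might carry just enough metadata for a self-service user to find, interpret and trust a dataset.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CatalogueEntry:
    """Minimal, hypothetical data-catalogue record for self-service users."""
    name: str
    description: str
    owner: str                                   # who to ask about the data
    refresh: str                                 # how current the data are
    fields: Dict[str, str] = field(default_factory=dict)   # column -> business meaning
    known_caveats: List[str] = field(default_factory=list)

orders = CatalogueEntry(
    name="retail.orders_daily",
    description="One row per order line, all channels",
    owner="sales-data-team@example.com",
    refresh="daily at 06:00 UTC",
    fields={"order_id": "unique order reference", "net_value": "GBP excl. VAT"},
    known_caveats=["returns arrive up to 14 days late"],
)
print(orders.name, "-", orders.description)
```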

Meanwhile, enabling people to work quickly and effectively with data is not just about giving them relevant information, along with decent tools and training. It's also about removing the obstacles.

Obstacles? What obstacles?

In most large organizations, there is some degree of duplication and fragmentation of data across enterprise systems. There are many reasons why this happens, and the effects may be felt in various areas of the business, degrading the performance and efficiency of various business functions, as well as compromising the quality and consistency of management information. System interoperability may be inadequate, resulting in complicated workflows and error-prone operations.

But perhaps the most important effect is on inhibiting innovation. Any new IT initiative will need either to plug into the available data stores or create new ones. If this is to be done without adding further to technical debt, then the data engineering (including integration and migration) can often be more laborious than building the new functionality the business wants.

Depending on whom you talk to, this challenge can be framed in various ways - data engineering, data integration and integrity, data quality, master data management. The MDM vendors will suggest one approach, the iPaaS vendors will suggest another approach, and so on. Before you get lured along a particular path, it might be as well to understand what your requirements actually are, and how these fit into your overall data strategy.

And of course your data strategy needs to allow for future growth and discovery. It's no good implementing a single source of truth or a universal API to meet your current view of CUSTOMER or PRODUCT, unless this solution is capable of evolving as your data requirements evolve, with ever-increasing reach and richness. As I've often discussed on this blog before, one approach to building in flexibility is to use appropriate architectural patterns, such as loose coupling and layering, which should give you some level of protection against future variation and changing requirements, and such patterns should probably feature somewhere in your data strategy.
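
Here is a minimal sketch of that layering idea, assuming a hypothetical internal Customer view: consumers depend on a stable interface, while adapters absorb the variation in the underlying sources, so a new source or a schema change doesn't ripple through every consumer.

```python
from dataclasses import dataclass
from typing import Optional, Protocol

@dataclass
class Customer:
    """The stable, consumer-facing view of a customer."""
    customer_id: str
    display_name: str
    email: Optional[str] = None

class CustomerSource(Protocol):
    """The seam that provides the loose coupling."""
    def get_customer(self, customer_id: str) -> Customer: ...

class CrmAdapter:
    """Adapts one (hypothetical) CRM schema to the stable Customer view."""
    def __init__(self, crm_rows: dict):
        self._rows = crm_rows

    def get_customer(self, customer_id: str) -> Customer:
        row = self._rows[customer_id]
        return Customer(customer_id, f"{row['first']} {row['last']}", row.get("mail"))

class LegacyBillingAdapter:
    """A second source with a different shape; consumers never see the difference."""
    def __init__(self, billing_rows: dict):
        self._rows = billing_rows

    def get_customer(self, customer_id: str) -> Customer:
        return Customer(customer_id, self._rows[customer_id]["account_name"])

def welcome(source: CustomerSource, customer_id: str) -> str:
    # Consumers are written against the stable interface, not against any one source
    return f"Hello {source.get_customer(customer_id).display_name}"

crm = CrmAdapter({"C1": {"first": "Ada", "last": "Lovelace", "mail": "ada@example.com"}})
billing = LegacyBillingAdapter({"C1": {"account_name": "A. Lovelace"}})
print(welcome(crm, "C1"), "/", welcome(billing, "C1"))
```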

Next post - Assurance


Richard Jones, Agility and Data: The Heart of a Digital Experience Strategy (WayIn, 22 November 2018)

Tom Pinckney, What's Data Agility Anyway (Braze Magazine, 25 March 2019)

Jim Scott, Why Data Agility is a Key Driver of Big Data Technology Development (24 March 2015)

Mike Tuchen, Do You Have the Data Agility Your Business Needs? (Talend, 14 June 2017)

Related posts: Enterprise OODA (April 2012), Beyond Trimodal: Citizens and Tourists (November 2019)

Sunday, December 01, 2019

Data Strategy - Richness

This is one of a series of posts looking at the four key dimensions of data and information that must be addressed in a data strategy - reach, richness, agility and assurance.



In my previous post, I looked at Reach, which is about the range of data sources and destinations. Richness of data addresses the complexity of data - in particular the detailed interconnections that can be determined or inferred across data from different sources.

For example, if a supermarket is tracking your movements around the store, it doesn't just know that you bought lemons and fish and gin; it knows whether you picked up the lemons from the basket next to the fish counter or from the display of cocktail ingredients. It can therefore guess how you are planning to use the lemons, leading to various forms of personalized insight and engagement.

Richness often means finer-grained data collection, possibly continuous streaming. It also means being able to synchronize data from different sources, possibly in real-time. For example, being able to correlate visits to your website with the screening of TV advertisements, which not only gives you insight and feedback on the effectiveness of your marketing, but also allows you to guess which TV programmes this customer is watching.
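
A toy sketch of that kind of synchronization (Python, with invented data): attributing website visits to a TV spot simply by checking whether each visit falls within a short window after a screening.

```python
from datetime import datetime, timedelta

# Hypothetical data: when each TV spot aired, and when each web visit arrived
screenings = [
    ("channel-4", datetime(2019, 12, 1, 20, 15)),
    ("itv",       datetime(2019, 12, 1, 21, 45)),
]
visits = [datetime(2019, 12, 1, 20, 17), datetime(2019, 12, 1, 20, 40),
          datetime(2019, 12, 1, 21, 48)]

WINDOW = timedelta(minutes=10)   # crude attribution window

def attribute(visits, screenings, window=WINDOW):
    """Pair each visit with the screening (if any) it most plausibly follows."""
    for visit in visits:
        matches = [(ch, aired) for ch, aired in screenings
                   if aired <= visit <= aired + window]
        yield visit, (matches[-1][0] if matches else None)

for visit, channel in attribute(visits, screenings):
    print(visit.time(), "->", channel or "unattributed")
```

A real marketing-attribution model would obviously be far more sophisticated, but the essential requirement is the same: fine-grained, timestamped data from both sources.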

Artificial intelligence and machine learning algorithms should help you manage this complexity, picking weak signals from a noisy data environment, as well as extracting meaningful data from unstructured content. From quantity to quality.

In the past, when data storage and processing was more expensive than today, it was a common practice to remove much of the data richness when passing data from the operational systems (which might contain detailed transactions from the past 24 hours) to the analytic systems (which might only contain aggregated information over a much longer period). Not long ago, I talked to a retail organization where only the basket and inventory totals reached the data warehouse. (Hopefully they've now fixed this.) So some organizations are still faced with the challenge of reinstating and preserving detailed operational data, and making it available for analysis and decision support.

Richness also means providing more subtle intelligence, instead of expecting simple answers or trying to apply one-size-fits all insight. So instead of a binary yes/no answer to an important business question, we might get a sense of confidence or uncertainty, and an ability to take provisional action while actively seeking confirming or disconfirming data. (If you can take corrective action quickly, then the overall risk should be reduced.)
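
For example (a sketch only, not a real decision engine), an answer could carry a confidence level, and the caller could decide whether to act, act provisionally while watching for disconfirming data, or collect more evidence first.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    verdict: bool        # the headline yes/no
    confidence: float    # how sure we are, 0.0 - 1.0
    evidence: int        # how many observations support it

def should_restock(sales_signals: list) -> Answer:
    """Toy scoring function: a richer answer than a plain yes/no."""
    if not sales_signals:
        return Answer(False, 0.0, 0)
    score = sum(sales_signals) / len(sales_signals)
    return Answer(score > 0.5, min(1.0, len(sales_signals) / 10), len(sales_signals))

answer = should_restock([0.7, 0.8, 0.6])
if answer.confidence >= 0.8:
    print("act now:", answer.verdict)
elif answer.confidence >= 0.3:
    print("provisional action, keep watching:", answer.verdict)   # corrective action stays cheap
else:
    print("collect more data before deciding")
```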

Next post: Agility

Data Strategy - Reach

This is one of a series of posts looking at the four key dimensions of Data and Information that must be addressed in a data strategy - reach, richness, agility and assurance.



Data strategy nowadays is dominated by the concept of big data, whatever that means. Every year our notions of bigness are being stretched further. So instead of trying to define big, let me talk about reach.

Firstly, this means reaching into more sources of data. Instead of just collecting data about the immediate transactions, enterprises now expect to have visibility up and down the supply chain, as well as visibility into the world of the customers and end-consumers. Data and information can be obtained from other organizations in your ecosystem, as well as picked up from external sources such as social media. And the technologies for monitoring (telemetrics, internet of things) and surveillance (face recognition, tracking, etc) are getting cheaper, and may be accurate enough for some purposes.

Obviously there are some ethical as well as commercial issues here. I'll come back to these.

Reach also means reaching more destinations. In a data-driven business, data and information need to get to where they can be useful, both inside the organization and across the ecosystem, to drive capabilities and processes, to support sense-making (also known as situation awareness), policy and decision-making, and intelligent action, as well as organizational learning. These are the elements of what I call organizational intelligence. Self-service (citizen) data and intelligence tools, available to casual as well as dedicated users, improve reach; and the tool vendors have their own reasons for encouraging this trend.

In many organizations, there is a cultural divide between the specialists in Head Office and the people at the edge of the organization. If an organization is serious about being customer-centric, it needs to make sure that relevant and up-to-date information and insight reaches those dealing with awkward customers and other immediate business challenges. This is the power-to-the-edge strategy.

Information and insight may also have value outside your organization - for example to your customers and suppliers, or other parties. Organizations may charge for access to this kind of information and insight (direct monetization), may bundle it with other products and services (indirect monetization), or may distribute it freely for the sake of wider ecosystem benefits.

And obviously there will be some data and intelligence that must not be shared, for security or other reasons. Many organizations will adopt a defensive data strategy, protecting all information unless there is a strong reason for sharing; others may adopt a more offensive data strategy, seeking competitive advantage from sharing and monetization except for those items that have been specifically classified as private or confidential.

How are your suppliers and partners thinking about these issues? To what extent are they motivated or obliged to share data with you, or to protect the data that you share with them? I've seen examples where organizations lack visibility of their own assets, because they have outsourced the maintenance of these assets to an external company, and the external company fails to provide sufficiently detailed or accurate information. (When implementing your data strategy, make sure your contractual agreements cover your information sharing requirements.)

Data protection introduces further requirements. Under GDPR, data controllers are supposed to inform data subjects how far their personal data will reach, although many of the privacy notices I've seen have been so vague and generic that they don't significantly constrain the data controller's ability to share personal data. Meanwhile, GDPR Article 28 specifies some of the aspects of data sharing that should be covered in contractual agreements between data controllers and data processors. But compliance with GDPR or other regulations doesn't fully address ethical concerns about the collection, sharing and use of personal data. So an ethical data strategy should be based on what the organization thinks is fair to data subjects, not merely what it can get away with.

There are various specific issues that may motivate an organization to improve the reach of data as part of its data strategy. For example:
  • Critical data belongs to third parties
  • Critical business decisions lacking robust data
  • I know the data is in there, but I can't get it out.
  • Lack of transparency – I can see the result, but I don’t know how it has been calculated.
  • Analytic insight narrowly controlled by a small group of experts – not easily available to general management
  • Data and/or insight would be worth a lot to our customers, if only we had a way of getting it to them.
In summary, your data strategy needs to explain how you are going to get data and intelligence
  • From a wide range of sources
  • Into a full range of business processes at all touchpoints
  • Delivered to the edge – where your organization engages with your customers


Next post Richness

Related posts

Power to the Edge (December 2005)
Reach, Richness, Agility and Assurance (August 2017)
Setting off towards the data-driven business (August 2019)
Beyond Trimodal - Citizens and Tourists (November 2019)

Wednesday, August 16, 2017

Digital Disruption, Delivery and Differentiation in Fast Food

What are the differentiating forces in the fast food sector? Stuart Lauchlan hears some contrasting opinions from a couple of industry leaders.

In the short term, those fast food outlets that offer digital experience and delivery may get some degree of competitive advantage by reaching more customers, with greater convenience. Denny Marie Post, CEO at Red Robin Gourmet Burgers, sees the expansion of third-party delivery services as a strategic priority. So from agility to reach.

But Lenny Comma, CEO of Jack in the Box, argues that this advantage will be short-lived. Longer-term competitive advantage will depend on the quality of the brand. So from assurance to richness.




Stuart Lauchlan, Digital and delivery – which ‘D’ matters most to the fast food industry? Two contrasting views (Diginomica, 16 August 2017)

Related post: Reach, Richness, Agility and Assurance (Aug 2017)


Tuesday, August 15, 2017

Reach, Richness, Agility and Assurance

The concept of TotalData™ (which I helped to develop when I worked for Reply) implements the four dimensions of data and information - reach, richness, assurance and agility. But where did these dimensions come from?



I first encountered these four dimensions in discussions of net-centricity, which spilled out from the US defence world into the commercial world over ten years ago. Trying to dig up the original material recently, I found a military version in a report written in 2005 by the Association for Enterprise Integration (AFEI) for the Net-Centric Operations Industry Forum (NCOIF).

Going further back, the first two dimensions - reach and richness - had been discussed by Evans and Wurster before the turn of the millennium. They argued that old technologies had forced you to choose (either/or) between reach and richness, whereas the new technologies emerging at that time allowed you to have both/and.

Source: Evans and Wurster 1997

The authors also introduced the concept of affiliation, by which they meant transparency of relationships - for example, knowing whether the intermediary agent is working for you or working for the other side. Or both. And knowing who really wrote all those "customer reviews".

According to the authors, it would be these three factors - reach, richness and affiliation - that would determine the success of e-commerce. Clearly some sectors would be more open to these factors than others - according to The Economist in February 2000, online trade was then dominated by business-to-business (B2B). The three factors identified some of the challenges facing other sectors, including professional services, in going online. As Duncan, Barton and McKellar argued for legal firms, "The Web provides Reach, but offering Richness and the sense of community required for creating and sustaining relationships with visitors could be difficult."

Meanwhile, new architectural thinking had shown ways of resolving the traditional trade-off between speed (agility) and quality (assurance). (A very early version of this was known as Bimodal IT. Some industry analysts are still pushing this idea.)

When agility and assurance were added to reach and richness to produce the four dimensions of net-centricity, affiliation appears to have been divided between community (reach) and trust (assurance). But the importance of affiliation was never entirely forgotten. As Commander Chakraborty observes, "organisational affiliations and culture ... play very significant roles in a networked environment."

So whatever happened to net-centricity? It has been replaced by data-centricity, which, as Dan Risacher argues, is probably a more accurate term anyway. Or, as they call it at Reply, TotalData™.




Notes and References

Much of the original material for the NCOW Reference Model is no longer available. This includes the pages referenced from Wikipedia: NCOW (retrieved 8 August 2017). Net-centric concepts were incorporated into DODAF Version 1.5 (April 2007). The UK Ministry of Defence developed something similar, which they called Network-Enabled Capability. The civilian version is known as Network-Centric Organization.

Define and Sell (Economist, 24 Feb 2000)

AFEI, Industry Best Practices in Achieving Service Oriented Architecture (SOA) (NCOIF, April 2005)

Devbrat Chakraborty, Net-Centricity to Ne(x)t-Centricity (SP's Naval Forces, Issue 4/2011)

Peter Duncan, Karen Barton and Patricia McKellar, Reach and Rich: the new economics of information and the provision of on-line legal services in the U.K. (16th Bileta Annual Conference, 2001)

Philip Evans and Thomas S. Wurster, Strategy and the New Economics of Information (Harvard Business Review, Sept-Oct 1997)

Philip Evans and Thomas Wurster, Blown to Bits - How the New Economics of Information Transforms Strategy (Boston Consulting Group, 2000) - excerpts. See also reviews by McRae and O'Keefe.

Hamish McRae, The business world: Three factors that lead to successful e-commerce (Independent, 17 November 1999) - review of Evans and Wurster (2000)

Jordan Moskowitz, Richness versus Reach (Service Channel, 29 Jan 2013)

Terry O'Keefe, The strategy of information: Richness and reach (Atlanta Business Journal, 1 November 1999) - review of Evans and Wurster (2000)

Dan Risacher, The Fundamentals of Net-Centricity (a little late) (4 February 2013), 10 years of Net-Centric Data Strategy (25 April 2013)



Related Posts: Beyond Bimodal (May 2016), New White Paper - TotalData™ (August 2016), Developing Data Strategy (December 2019)

TotalData™ is a registered trademark of Reply Ltd.

Tuesday, November 11, 2008

Post Before Processing

In Talk to the Hand, Saul Caganoff describes his experience of errors when entering his timesheet data into one of those time-recording systems many of us have to use. He goes on to draw some general lessons about error-handling in business process management (BPM). In Saul's account, this might sometimes necessitate suspending a business rule.

My own view of the problem starts further back - I think it stems from an incorrect conceptual model. Why should your perfectly reasonable data get labelled as error or invalid just because it is inconsistent with your project manager's data? This happens in a lot of old bureaucratic systems because they are designed on the implicit (hierarchical, top-down) assumption that the manager (or systems designer) is always right and the worker (or data entry clerk) is always the one that gets things wrong. It's also easier for the computer system to reject the new data items, rather than go back and question items (such as reference data) that have already been accepted into the database.

I prefer to label such inconsistencies as anomalies, because that doesn't imply anyone in particular being at fault.

It would be crazy to have a business rule saying that anomalies are not allowed. Anomalies happen. What makes sense is to have a business rule saying how and when anomalies are recognized (i.e. what counts as an anomaly) and resolved (i.e. what options are available to whom).

Then you never have to suspend the rule. It is just a different, more intelligent kind of rule.
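
A rough sketch of what such a rule could look like (Python, with hypothetical timesheet fields): the inconsistent entry is accepted and recorded as an anomaly, with explicit options for who may resolve it and how, rather than being rejected as an error.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Timesheet:
    worker: str
    project: str
    hours: float

@dataclass
class Anomaly:
    record: Timesheet
    reason: str
    resolution_options: tuple = ("worker amends", "manager amends", "accept both versions")
    resolved: bool = False

class TimesheetStore:
    """Accepts every entry; inconsistencies become anomalies, not rejections."""
    def __init__(self, approved_projects: set):
        self.approved = approved_projects
        self.records: List[Timesheet] = []
        self.anomalies: List[Anomaly] = []

    def submit(self, entry: Timesheet) -> None:
        self.records.append(entry)                 # post before processing
        if entry.project not in self.approved:     # recognition rule: what counts as an anomaly
            self.anomalies.append(
                Anomaly(entry, f"project {entry.project!r} not on manager's list"))

store = TimesheetStore(approved_projects={"alpha"})
store.submit(Timesheet("saul", "beta", 7.5))       # inconsistent, but not "invalid"
print([a.reason for a in store.anomalies])
```

Nobody's data has been thrown away, and the resolution rule says who gets to reconcile the two versions, not whose version is automatically wrong.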

One of my earliest experiences of systems analysis was designing order processing and book-keeping systems. When I visited the accounts department, I saw people with desks stacked with piles of paper. It turned out that these stacks were the transactions that the old computer system wouldn't accept, so the accounts clerks had developed a secondary manual system for keeping track of all these invalid transactions until they could be corrected and entered.

According to the original system designer, the book-keeping process had been successfully automated. But what had been automated was over 90% of the transactions - and less than 20% of the time and effort. So I said, why don't we build a computer system that supports the work the accounts clerks actually do? Let them put all these dodgy transactions into the database and then sort them out later.

But I was very junior and didn't know how things were done. And of course the accounts clerks had even less status than I did. The high priests who commanded the database didn't want mere users putting dodgy data in, so it didn't happen.


Many years later, I came across the concept of Post Before Processing, especially in military or medical systems. If you are trying to load or unload an airplane in a hostile environment, or trying to save the life of a patient, you are not going to devote much time or effort to getting the paperwork correct. So all sorts of incomplete and inaccurate data get shoved quickly into the computer, and then sorted out later. These systems are designed on the principle that it is better to have some data, however incomplete or inaccurate, than none at all. This was a key element of the DoD Net-Centric Data Strategy (2003).

The Post Before Processing paradigm also applies to intelligence. For example, here is a US Department of Defense ruling on the sharing of intelligence data.
In the past, intelligence producers and others have held information pending greater completeness and further interpretative processing by analysts. This approach denies users the opportunity to apply their own context to data, interpret it, and act early on to clarify and/or respond. Information producers, particularly those at large central facilities, cannot know even a small percentage of potential users' knowledge (some of which may exceed that held by a center) or circumstances (some of which may be dangerous in the extreme). Accordingly, it should be the policy of DoD organizations to publish data assets at the first possible moment after acquiring them, and to follow-up initial publications with amplification as available. (Net-Centric Enterprise Services Technical Guide)


See also

Saul Caganoff, Talk to the Hand (11 November 2008), Progressive Data Constraints (21 November 2008)

Jeff Jonas, Introducing the concept of network-centric warfare and post before processing (21 January 2006), The Next Generation of Network-Centric Warfare: Process at Posting or Post at Processing (Same thing) (31 January 2007)

Related Post: Progressive Design Constraints (November 2008)

Saturday, September 13, 2008

SOA Example - Total Asset Visibility

One of the potential applications of service-oriented architecture (SOA) is something called Total Asset Visibility. As we shall see, there are slightly different interpretations as to what this phrase actually means: what kind of visibility over what kind of assets; and does total refer to the assets (some visibility of all assets) or to the visibility (complete visibility of some assets)? However, SOA seems to be relevant to any of these (overlapping) meanings.

Supply Chain - Materiel

Update (20 July 2009):

The annual processes for verifying the location of certain fixed assets have revealed a significant increase in the levels of discrepancies being reported. In the case of the BOWMAN secure communications system (currently being used by Service personnel in Iraq and Afghanistan), some £155 million worth of BOWMAN assets reported in the accounts could not be fully accounted for, although the MOD estimates that a significant proportion of these are under repair. “At this time of high operational demand, it is more important than ever for the Ministry of Defence to have accurate records of where its assets are, and how much stock it has.” [National Audit Office, BBC News]

Our People Are Our Greatest Assets (Stalin)

I think what he actually said was something like "Our Cadres Are Our Greatest Wealth", but it comes to the same thing, doesn't it?

But perhaps Total Asset Visibility isn't just about materiel, but about people as well. In a post called Big Brother USA: Surveillance via "Tagging, Tracking and Locating", Laurel Federbush refers to the possibility of implanting RFID chips into American soldiers, allowing not only their location but also their physical and mental state to be tracked. Federbush refers to something called the Soldier Status Monitoring Project: this is presumably the same as Warfighter Physiological Status Monitoring, which was planned as long ago as 1997 [Army Science and Technology Master Plan (ASTMP 1997)]; current research now addresses predictive modeling.



Friday, August 01, 2008

Faithful representation

Systems people (including some SOA people and CEP people and BPM people) sometimes talk as if a system was supposed to be a faithful representation of the real world.

This mindset leads to a number of curious behaviours.

Firstly, ignoring the potential differences between the real world and its system representation, treating them as if they were one and the same thing. For example, people talking about "Customer Relationship Management" when they really mean "Management of Database Records Inaccurately and Incompletely Describing Customers". Or referring to any kind of system objects as "Business Objects". Or equating a system workflow with "The Business Process".

Secondly, asserting the primacy of some system ontology because "That's How the Real World Is Structured". For example, the real world is made up of "objects" or "processes" or "associations", therefore our system models ought to be made of the same things.

Thirdly, getting uptight about any detected differences between the real world and the system world, because there ought not to be any differences. Rigid data schemas and obsessive data cleansing, to make sure that the system always contains only a single version of the truth.

Fourthly, confusing the stability of the system world with the stability of the real world. The basic notion of "Customer" doesn't change (hum), so the basic schema of "Customer Data" shouldn't change either. (To eliminate this confusion you may need two separate information models - one of the "real world" and one of the system representation of the real world. There's an infinite regress there if you're not careful, but we won't go there right now.)

In the Complex Event world, Tim Bass and Opher Etzion have picked up on a simple situation model of complex events, in which events (including derived, composite and complex events) represent the "situation". [Correction: Tim's "simple model" differs from Opher's in some important respects. See his later post The Secret Sauce is the Situation Models, with my comment.] This is fine as a first approximation, but what neither Opher nor Tim mentions is something I regard as one of the more interesting complexities of event processing, namely that events sometimes lie, or at least fail to tell the whole truth. So our understanding of the situation is mediated through unreliable information, including unreliable events. (This is something that has troubled philosophers for centuries.)

From a system point of view, there is sometimes little to choose between unreliable information and basic uncertainty. If we are going to use complex event processing for fraud detection or anything like that, it would make sense to build a system that treated some class of incoming events with a certain amount of suspicion. You've "lost" your expensive camera have you Mr Customer? You've "found" weapons of mass destruction in Iraq have you Mr Vice-President?

One approach to unreliable input is some kind of quarantine and filtering. Dodgy events are recorded and analyzed, and then if they pass some test of plausibility and coherence they are accepted into the system. But this approach can produce some strange effects and anomalies. (This makes me think of perimeter security, as critiqued by the Jericho Forum. I guess we could call this approach "perimeter epistemology". The related phenomenon of Publication Bias refers to the distortion resulting from analysing data that pass some publication criterion while ignoring data that fail this criterion.)
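
One way to picture the alternative to quarantine-and-filter (a sketch only, with made-up event types and trust scores): every event is admitted, but carries a reliability score that downstream processing has to weigh, rather than being assumed to tell the truth.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    kind: str
    payload: dict
    reliability: float   # 0.0 = pure rumour, 1.0 = fully corroborated

# Hypothetical prior trust in different event sources
SOURCE_TRUST = {"sensor": 0.9, "customer_claim": 0.4, "press_report": 0.3}

def admit(kind: str, payload: dict, source: str) -> Event:
    """No perimeter filtering: everything comes in, tagged with how far we trust it."""
    return Event(kind, payload, SOURCE_TRUST.get(source, 0.2))

def assess_claim(events: List[Event], threshold: float = 0.7) -> str:
    """Downstream logic weighs the evidence instead of assuming events tell the truth."""
    weight = sum(e.reliability for e in events if e.kind == "camera_lost")
    return "pay claim" if weight >= threshold else "investigate further"

claim_only = [admit("camera_lost", {"value": 1200}, "customer_claim")]
corroborated = claim_only + [admit("camera_lost", {"value": 1200}, "sensor")]
print(assess_claim(claim_only))     # weight 0.4 -> investigate further
print(assess_claim(corroborated))   # weight 1.3 -> pay claim
```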

In some cases, we are going to have to unpack the simple homogeneous notion of "The Situation" into a complex situation awareness, where a situation is constructed from a pile of unreliable fragments. Tim has strong roots in the Net-Centric world, and I'm sure he could say more about this than me if he chose.

Sunday, September 17, 2006

Lessons from Zune

"Microsoft launches the Zune" reports Engadget this week.

There has been some discussion on the Internet (Kirk Biglione, Cory Doctorow, Bob Wyman, plus discussion on Digg) about the main feature apparently intended to differentiate the Zune from the iPod - the wireless share-with-a-friend feature.

  • Is this feature compatible with copyright law, or with a Creative Commons licence?
  • Is this feature compatible with a reasonably broad range of use-contexts?

Zune Insider (and Microsoft employee) Cesar Menendez reveals the thinking behind the design of this feature.
"I made a song. I own it. How come, when I wirelessly send it to a girl I want to impress, the song has 3 days/3 plays?" Good question. There currently isn't a way to sniff out what you are sending, so we wrap it all up in DRM. We can't tell if you are sending a song from a known band or your own home recording, so we default to the safety of encoding. And besides, she'll come see you three days later...
Now I certainly don't want to join in the criticism of Microsoft based on one unguarded remark by a Microsoft employee, and I don't know whether the final Zune will work exactly as Cesar describes. What I do want to talk about here is the importance of differentiated behaviour.

What Microsoft's critics are demanding is that the copying/sharing function of the Zune ought to be differentiated according to several factors, including
  • the original source (e.g. is this my own band, recorded via old-fashioned microphones and mixed on my own computer, or is it copied from a CD or downloaded from the internet)
  • the presence of some copyright or creative commons licence
  • the intentions of the copyright owner or licence-holder

What Cesar seems to be saying is that trusted information is not available to support this differentiation. Okay, something may be wrapped with a creative commons licence, but how do we know this can be trusted? Okay, you may have recorded this with your own microphone, but that doesn't prove you own the copyright. (Has your college lecturer given you permission to distribute his lectures?)
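
To illustrate what that differentiation could look like if trustworthy metadata were available (this is purely hypothetical, not how the Zune actually worked), a sharing function might select its wrapping rule from the provenance and licence attached to the track.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    title: str
    source: str                      # "own_recording", "ripped_cd", "store_download"
    licence: Optional[str] = None    # e.g. "creative-commons", "all-rights-reserved"

def share_policy(track: Track) -> str:
    """Hypothetical differentiated sharing rule, assuming the metadata can be trusted."""
    if track.licence == "creative-commons":
        return "share freely, keep attribution"
    if track.source == "own_recording" and track.licence is None:
        return "share freely (owner's own material)"
    return "wrap in DRM: 3 days / 3 plays"      # default to the safety of encoding

print(share_policy(Track("My demo", "own_recording")))
print(share_policy(Track("Hit single", "store_download", "all-rights-reserved")))
```

The whole difficulty, of course, is the assumption in the docstring: without trusted provenance, every track falls through to the default rule.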

So why is this Microsoft's problem? If Microsoft has come up with a solution that suits most of the people most of the time, then everyone else can go hang. After all, it's not as if consumers get much better from Apple or Sony. (The Sony MiniDisc contained an early copy-prevention mechanism called SCMS, making the consumer-grade devices largely unsuitable for amateur music producers.)

Maybe it isn't Microsoft's problem. But there remains a significant value-deficit in some use-contexts. There might be a niche opportunity for some specialist provider, but this would presumably require some degree of interoperability. (Can the Zune receive material from third-party devices, or only from other Zunes?)

On this blog, and elsewhere, I have consistently supported differentiated (context-aware) services. I believe that differentiation is the right way (in an increasingly complex world) to deliver the greatest value to the greatest number of users, and I see loosely-coupled service architectures as the right way to configure differentiated services, to balance the (economic) needs of the provider with the (increasingly diverse) needs of the consumer.

In situations like these, differentiated service requires rich, reliable and ubiquitous data. In other words, network-centric.

Meanwhile, I take some comfort from Cesar's word "currently". There currently isn't a way - but let's hope they are working towards a sufficiently robust ontology, with decent (not just supplier-centric) trust, to support a fair degree of differentiation.



Some earlier posts about services and devices for recorded music:

 

Thursday, December 29, 2005

Power to the Edge

Power to the edge is about changing the way individuals, organizations, and systems relate to one another and work.
  • empowerment of individuals at the edge of an organization
  • adoption of an edge organization, with greatly enhanced peer-to-peer interactions
  • moving senior personnel into roles that place them at the edge
Power to the edge is being presented in the military domain as the correct response to increased uncertainty, volatility, and complexity. Clearly these factors also apply to civilian enterprises, both commercial and public sector.

Military use of the term comes from a book by David S. Alberts and Richard E. Hayes, of the US Department of Defense Command and Control Research Program (DoD CCRP). See also presentation material by Dr Margaret Myers.

Groove (acquired by Microsoft in March 2005) always liked this concept – see blogs by Ray Ozzie and Michael Helfrich. See also blogs by Doug Simpson and Nathan Wallace.

Philip Boxer and I wrote a couple of articles for the Microsoft Architecture Journal on the implications for Service Oriented Architecture. Philip and Carole Eigen also applied the concept to the psychoanalytic study of organizations.

I found a weblog rant here to the effect that Power to the Edge is all about speeding up information flow, just another name for Reengineering. In my view, this is a fundamental misunderstanding. Obviously Power to the Edge may call for improved flow of information: quality and complexity as well as quantity and speed. But Power to the Edge is not the improved flow itself but what it enables – which is a fundamental transformation in the geometry of the organization away from a hierarchical command-and-control structure. And such structures are still as common in civilian/commercial organizations as in the military, if not more so.



David S. Alberts and Richard E. Hayes, Power to the Edge: Command … Control … in the Information Age (CCRP June 2003) PDF version available online (1.7 MB).

Philip Boxer, Taking power to the edge by empowering the edge role (Asymmetric Leadership, 24 January 2006)

Philip Boxer and Carole Eigen, Taking power to the edge of the organisation: re-forming role as praxis (ISPSO Symposium, Baltimore, June 2005) (abstract) (presentation) (paper)

Philip Boxer and Richard Veryard, Taking Governance to the Edge (Microsoft Architecture Journal 6, August 2006)

Margaret Myers, Power to the Edge Through Net Centricity – Transformation of the Global Information Grid (CHIPS July-September 2002) Slides (pdf).

John Stenbit, Moving Power to the Edge (CHIPS July-September 2003)

Richard Veryard and Philip Boxer, Metropolis and SOA Governance: Towards the Agile Metropolis (Microsoft Architecture Journal 5, July 2005)

Wikipedia: Power to the Edge

Cross-posted to AsymmetricLeadership blog

See also Demise of the Super Star (August 2004), Governance at the Edge (August 2004) Microsoft and Groove (March 2005), Developing Data Strategy (December 2019)

Updated 7 December 2019

Monday, October 03, 2005

SOA 2.0

There has been renewed discussion of Web 2.0 around the blogosphere this week. Tim O'Reilly posted a schematic diagram Web2MemeMap onto Flickr, and published a long article What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software on his website. Several of the blogs I read have linked to either or both of these, and some of them have copied the diagram.

Many of the elements of Web 2.0 are highly relevant to service-oriented architecture and the service economy. In this post, I want to extract two sets in particular.

Loosely coupled systems of systems
  • Small pieces loosely joined - web as components
  • Granular addressability of content
  • Software above the level of a single device
  • Radical decentralization

User-centric
  • Users control their own data
  • Rich user experience
  • Trust your users
  • Emergent - user behaviour not predetermined
  • Customer self-service - enabling the long tail
If we apply this kind of thinking to SOA, a distinction emerges between SOA 1.0 and SOA 2.0.

  • SOA 1.0: Supply-side oriented. SOA 2.0: Supply-demand collaboration.
  • SOA 1.0: Straight-through processing. SOA 2.0: Complex systems of systems.
  • SOA 1.0: Aggregating otherwise inert systems and providing some new communication channels. SOA 2.0: Frameworks, applications, agents and communication channels understanding each other more deeply; building a smarter stack and designing applications to take advantage of new constructs that (we hope) promote agility and simplicity (Erik Johnson via Dion Hinchcliffe).
  • SOA 1.0: Single directing mind. SOA 2.0: Collaborative composition.
  • SOA 1.0: Controlled reuse. SOA 2.0: Uncontrolled reuse (see my earlier posts on Controlling Content and Shrinkwrap or Secondhand).
  • SOA 1.0: Endo-interoperability (within a single enterprise or closed collaborative system). SOA 2.0: Exo-interoperability. (I am currently preparing a longer paper on interoperability and risk; see my recent posts on Efficiency and Robustness.)
  • SOA 1.0: Cost savings. SOA 2.0: Improved user experience. (This is one of the areas where SOA starts to get interesting for the business and not just for the technologists.)

"An emerging network-centric platform to support distributed, collaborative and cumulative creation by its users" (John Hagel)

There are some other elements of SOA 2.0 that I intend to discuss in subsequent posts.

Sunday, June 26, 2005

Network-Centric Presence

Since Microsoft acquired Groove, and especially following some mysterious hints from ex-Groover and network-centric blogger Michael Helfrich, I've been wondering what Michael was up to. His latest blog posting on Network-Centric Products reveals that he is now with Jabber.

At first sight, Jabber appears to be merely selling a set of technical communication products for Enterprise Instant Messaging. There is a lot of technical material available via the Jabber Software Foundation. See also this Jabber Presence world map.

But now I am aware of Michael's involvement in Jabber, so this prompts me to take a closer interest in new opportunities created by the technology of presence. Jabber defines presence as "the rich suite of changing characteristics that describe the state of a user, device, or application".

I see this notion of presence as highly relevant to service-oriented architecture (SOA). In my work with the CBDI Forum, I have talked a lot about Differentiated Service - services whose operating characteristics vary according to a range of context indicators. I have argued that this is a key principle for achieving the SOA goal of network-centric adaptability.
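
A toy sketch of a differentiated service (Python, with invented context indicators rather than Jabber's actual presence schema): the same logical operation varies its behaviour according to the consumer's current presence and context.

```python
from dataclasses import dataclass

@dataclass
class Presence:
    """A few invented context indicators, loosely inspired by 'presence'."""
    available: bool
    device: str          # "desktop", "mobile", "field-radio"
    bandwidth_kbps: int

def deliver_report(report: dict, presence: Presence) -> str:
    """One logical service; its operating characteristics vary with the consumer's context."""
    if not presence.available:
        return "queued for later delivery"
    if presence.device == "mobile" or presence.bandwidth_kbps < 256:
        return f"summary only: {report['headline']}"
    return f"full report ({len(report['body'])} chars) with attachments"

report = {"headline": "Stock running low in region North", "body": "..." * 500}
print(deliver_report(report, Presence(True, "desktop", 2000)))
print(deliver_report(report, Presence(True, "mobile", 128)))
print(deliver_report(report, Presence(False, "desktop", 2000)))
```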

Michael will be present (presenting? establishing a presence?) at the 4th Annual Government Symposium on Information Sharing and Homeland Security and his blog focuses on asymmetry and network-centricity in mitigating security threats.

As I have written in this blog and elsewhere, I see asymmetry and network-centricity as key issues for business strategy and business/IT alignment, and I think the commercial world can learn a lot from some of the latest military thinking. I look forward to seeing how Michael's next-generation product can be used to support business SOA (business-oriented architecture? architecture-oriented business?).

Technorati Tags

Sunday, May 01, 2005

Service-Oriented Business Strategy

A service-oriented modelling approach helps us to identify alternative business strategies, involving the creation and deployment of added-value services.
  • Identify value-added business services that can be seen (by customers) as more relevant to the context of use. 
  • Identify value-added business services that are flexible / reusable (by customers) in multiple use-contexts. 
  • Compose value-added business services in an efficient and reliable manner from internal and external capabilities. 
  • Provide a service platform to support customers in composing our business services to solve their problems. 
I am in the process of defining an extended example from the pharmaceutical sector. The first part will now be published by CBDI in May 2005.

Why Pharma?

We selected the pharmaceutical industry for a worked example because it provides a good example of a complex information supply chain. A drug company (in collaboration with a distributed network of research and testing organizations) produces a drug, together with lots of information relating to the drug. There are several different categories of information user: the patient who takes the drug, the medical practitioner who prescribes and/or dispenses the drug, the health service or insurer that pays for the drug, and the regulator who monitors the safety of the drug.

Pharma appears to illustrate some general characteristics of complex service networks, so this example should be of relevance to other industry sectors.

Above all, for SOA illustration purposes, pharma has two advantages. Firstly, it isn't one of the same old boring examples everyone else is using (finance, travel, retail). And secondly, it isn't military.

In the past, drug companies have been able to make substantial profits from an essentially drug-centric process, getting high sales volumes for their blockbuster drugs from a largely undifferentiated mass of patients with a given condition. This business model treated the physician or clinic as pretty much equivalent to a retail outlet, and did not involve any relationship with the end-customer (the drug consumer). But this business model is subject to major challenge from several directions.

Approach

We model a business as an open system, whose viability depends on robust and appropriate interactions with a dynamic environment. (This contrasts with the closed system approach adopted by many traditional business modeling methods, whose focus is on producing a complete and coherent account of some internal configuration of processes and services, against a fixed view of the environment.)

We model a service-oriented business as a system of systems. Services here may include tasks automated in software (typically but not necessarily rendered as web services) as well as human tasks.

SOA is not just about decomposition – producing fine-grained services with maximum decoupling. Equally important is to think about composition – how these services can be integrated in many different ways to support a wide variety of demand.

In general terms, there are a number of distinct categories of stakeholder, each performing fairly complex functions in relation to the pharma value chain. A key SOA challenge for a drug company is to provide services to all these different stakeholders in a consistent and coordinated yet flexible way. In order to meet this challenge, we need to produce a series of models, from different stakeholder perspectives, showing how the services can be composed in various contexts of use.

In our view, the essential shift for service-oriented modeling is to view the services, not from the provider's perspective, but from the customer's perspective, and in the customer's context of use.
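
As a schematic illustration (hypothetical service names, not the CBDI worked example itself), the same underlying drug-information services might be composed quite differently for a patient, a physician and a regulator.

```python
# Hypothetical fine-grained services exposed by the drug company (invented names)
def dosage_guidance(drug: str) -> str:
    return f"{drug}: one tablet daily"

def interaction_check(drug: str, other: str) -> str:
    return f"{drug} + {other}: no known interaction"

def adverse_event_stats(drug: str) -> str:
    return f"{drug}: 3 reports per 100,000 patients"

# The same services, composed differently for each stakeholder's context of use
def patient_view(drug: str) -> list:
    return [dosage_guidance(drug)]

def physician_view(drug: str, current_medication: str) -> list:
    return [dosage_guidance(drug), interaction_check(drug, current_medication)]

def regulator_view(drug: str) -> list:
    return [adverse_event_stats(drug)]

print(patient_view("examplium"))
print(physician_view("examplium", "aspirin"))
print(regulator_view("examplium"))
```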
 

Strategic Reframe

From the perspective of business strategy, what we are looking at here is a strategic reframe of the pharma business. What is the drug company actually selling, and to whom? How do information and services become an integral component of the overall product offering?

The ultimate source of value for the patient is defined in terms of health. The provision of health to patients is based on the deployment of complex medical knowledge by a physician, and this in turn relies on information about particular drugs and combinations of drugs from the drug company and elsewhere. This essentially defines a value ladder, in which the value of the drugs contributes to the value of the healthcare.

While this kind of strategic reframing is widely discussed, what service-oriented modeling provides is a systematic way of determining and analyzing the strategic options, in terms of the value ladders that can be supported.

Can the drug company solve all the problems of healthcare? Can the physician solve all the problems of healthcare? The answer in both cases is of course NO. There is huge complexity involved, and the design goal for SOA is to define a reasonable separation of concerns between the physician and the drug company, that allows each of them to manage an appropriate part of the complexity.



Previous Post: SOA Pharma

Friday, April 01, 2005

Deconfliction and Interoperability

Deconfliction

Deconfliction is an important type of decoupling. In October 2001, a Time Magazine cover story (Facing the Fury) used the term.

Bush's gambit — filling the skies with bullets and bread — is also a gamble, Pentagon officials concede. The humanitarian mission will to some degree complicate war planning. What the brass calls "deconfliction" — making sure warplanes and relief planes don't confuse one another — is now a major focus of Pentagon strategy. "Trying to fight and trying to feed at the same time is something new for us," says an Air Force general. "We're not sure precisely how it's going to work out."

The military take interference very seriously - it's a life and death issue. Deconfliction means organizing operations in a way that minimizes the potential risk of interference and internal conflict, so that separate units or activities can be operated independently and asynchronously.

But deconfliction is often a costly trade-off. Resources are duplicated, and potentially conflicting operations are deliberately inhibited.

As communications become more sophisticated and reliable, it becomes possible to reintroduce some degree of synchronization, to allow units and activities to be orchestrated in more powerful ways. This is the motivation for network-centric warfare, which brings increased power to the edge.

Although the word isn't often used in commercial and administrative organizations, a similar form of deconfliction can be inferred from the way hierarchical organizations are managed, and in the traditional accounting structure of budgets and cost centres. This is known to be inflexible and inefficient. Whenever we hear the terms "joined-up management" or "joined-up government", this is a clue that excessive deconfliction has occurred.

Interoperability

Deconfliction leads us towards a negative notion of pseudo-interoperability: X and Y are pseudo-interoperable if they can operate side-by-side without mutual interference.

But there is also a positive notion of real interoperability: X and Y are interoperable if there is some active coordination between them. This forces us to go beyond deconfliction, back towards synchronization.
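
A toy contrast (invented scenario, not military doctrine): deconfliction partitions the resource so the two parties never need to talk, while real interoperability has them actively coordinate over a shared view.

```python
from typing import Dict

# Deconfliction: partition the hours so relief and combat flights cannot interfere
DECONFLICTED = {"relief": range(0, 12), "combat": range(12, 24)}   # hours of the day

def deconflicted_slot(unit: str, hour: int) -> bool:
    return hour in DECONFLICTED[unit]          # no communication needed, but rigid

# Real interoperability: both units coordinate through a shared, dynamic schedule
class SharedSchedule:
    def __init__(self):
        self.bookings: Dict[int, str] = {}

    def request(self, unit: str, hour: int) -> bool:
        if hour in self.bookings:              # active coordination: negotiate, don't collide
            return False
        self.bookings[hour] = unit
        return True

schedule = SharedSchedule()
print(deconflicted_slot("relief", 14))         # False: outside the fixed partition
print(schedule.request("relief", 14))          # True: slot granted dynamically
print(schedule.request("combat", 14))          # False: already taken, units must coordinate
```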

General Schoomaker: "We've gone from deconfliction of joint capability to interoperability to actually interdependence where we've become more dependent upon each other's capabilities to give us what we need." (CSA Interview, Oct 2004).

Philip Boxer writes: "The traditional way of managing interoperability is through establishing forms of vertical transparency consistent with the way in which the constituent activities have been deconflicted. The new forms of edge role require new forms of horizontal transparency that are consistent with the horizontal forms of linkage needed across enterprise silos to support them. Horizontal transparency enables different forms of accountability to be used that take power to the edge, but which in turn require asymmetric forms of governance." (Double Challenge, March 2006)

Relevance to Service-Oriented Architecture (SOA)

It is sometimes supposed that the SOA agenda is all about decoupling. Requirements models are used to drive decomposition - the identification of services that will not interfere with each other. These services are then designed for maximum reuse, producing low-level economies of scale.

Clearly there are some systems that are excessively rigid, and will benefit from a bit of loosening up.

But this is only one side of the story. While some systems are excessively rigid, there are many others that are hopelessly fragmented. The full potential of SOA comes from decomposition and recomposition.


Further Reading
For more on Architecture, Data and Intelligence, please subscribe to this blog.

