
Sunday, November 08, 2020

Business Science and its Enemies

#FollowingTheScience As politicians around the world struggle to contain and master the Covid-19 pandemic, the complex role of science in guiding decisions and policy has been brought into view. This includes not only the potential tension between science and policy, but also the tension between different branches of science. (For example, medical science versus behavioural science.)

In this post, I want to look at the role of science in guiding business decisions and policies. Professor Donoho traces the idea of data science back to a paper by John Tukey in the early 1960s, and the idea of management science, which Stafford Beer described as the business use of operations research, is at least as old as that. More recently, people have started talking about business science. These sciences are all described as interdisciplinary.

Operations research itself is even older. It was originally established during the second world war as a multi-disciplinary exercise, perhaps similar to what is now being called business science, but it lost its way in the years after the war and was eventually reduced to a set of computer programming techniques with no real impact on organization and culture. 

In a recent webinar on Business Science, Steve Fox asked what business science enabled leaders to do better, and identified three main areas. 

Firstly, system-related - to anticipate requirements and resources, identify issues (including risk and compliance issues), and fix problems. 

And secondly people-related - to tell the story, influence stakeholders and negotiate improvements. Focusing on message and communications to the various audiences we need to influence is a key part of business science. 

And thirdly, thinking-related. When business science is applied correctly, it changes the way we think. 

I agree with these three, but I'd like to add a fourth: organizational learning and agility. This is an essential component of my approach to organizational intelligence, which is based on the premise that serious business challenges require a combination of human/social intelligence and machine intelligence.


Steve Fox also stated that the biggest obstacles to creating a data-driven business aren't technical; they're cultural and behavioural. So in this post, I also want to look at some of the obstacles to following the science in the context of business and organizational management.

  • Poor Data - Inadequate Measurement and Testing - Ignoring Critical Signals
  • Too Much Data - Overreliance on Technology - Abdication
  • Silo Culture - Someone Else's Problem
  • Linear Thinking - Denial of Complexity
  • Premature Attempts to Eliminate Uncertainty
  • Quantity becomes Quality

After I had initially drawn up this list, I went back to Tukey's original paper and found many of them clearly stated there. 



Poor Data - Inadequate Measurement and Testing - Ignoring Critical Signals

Empirical science relies heavily on a combination of observation, experiment and measurement. 


Too Much Data - Overreliance on Technology - Abdication

Tukey: Danger only comes from mathematical optimizing when the results are taken too seriously.

Adrian Chiles reminds us that all the data in the world is no use to you if you don’t know what to do with it. He quotes Aron F. Sørensen (via Chris Baraniuk): Maybe today there’s a bit of a fixation on instruments.

And in many situations, people over-rely on algorithms. For example, judges relying on algorithms to decide probation or sentencing, without applying any of their own judgement or common sense. If a judge doesn't bother doing any actual judging, and lets the algorithm do all the work, what exactly are we paying them for?


Linear Thinking - Denial of Complexity

Tukey: If it were generally understood that the great virtue of neatness was that it made it easier to make things complex again, there would be little to say against a desire for neatness.

One of the best-known examples of linear thinking was a false theory about the vulnerabilities of aircraft during the second world war, based on the location of holes in planes that returned to base. People assumed that the vulnerabilities were where the holes were, and this led to efforts to reinforce planes at those points.

Non-linear thinking turns this theory on its head. If a plane makes it back to base with a hole at a particular location, this should be taken as evidence that the plane was NOT vulnerable at that point. What you really want to know is the location of the holes in the planes that did NOT make it back to base.

In 1979, C West Churchman wrote a book called The Systems Approach and its Enemies, about how people and organizations resist the ways of thinking that Churchman and others were championing. Among other things, he noted the way people often preferred to latch onto simplistic one-dimensional/linear solutions rather than thinking holistically.



Chris Baraniuk, Why it’s not surprising that ship collisions still happen (BBC News 22nd August 2017)

Christa Case Bryant and Story Hinckley, In a polarized world, what does follow the science mean? (Christian Science Monitor, 12 August 2020)

Adrian Chiles, In a data-obsessed world, the power of observation must not be forgotten (The Guardian, 5 November 2020)

C West Churchman, The Systems Approach and its Enemies (1979)

David Donoho, 50 years of Data Science (18 September 2015)

John Dupré, Following the science in the COVID-19 pandemic (Nuffield Council of Bioethics, 29 April 2020)

Faye Flam, Follow the Science Isn’t a Covid-19 Strategy: Policy makers can follow the same facts to different conclusions (Bloomberg, 10 September 2020)

Steve Fox, A better framework is needed: From Data Science to Business Science (Consider.Biz, 17 September 2020) via YouTube

Matt Mathers, Ministers using following the science defence to justify decision-making during pandemic, says Prof Brian Cox (Independent, 19 May 2020) 

Megan Rosen, Fighting the COVID-19 Pandemic Through Testing (Howard Hughes Medical Institute, 18 June 2020)

John Tukey, The future of data analysis (Annals of Mathematical Statistics, 33:1, 1962)

Wikipedia: Data Science, Management Science 


Related posts: Enemies of Intelligence (May 2010), Changing how we think (May 2010), Data-Driven Reasoning - COVID (April 2022)

Saturday, December 07, 2019

Developing Data Strategy

The concepts of net-centricity, information superiority and power to the edge emerged out of the US defence community about twenty years ago, thanks to some thought leadership from the Command and Control Research Program (CCRP). One of the routes of these ideas into the civilian world was through a company called Groove Networks, which was acquired by Microsoft in 2005 along with its founder, Ray Ozzie. The Software Engineering Institute (SEI) provided another route. And from the mid 2000s onwards, a few people were researching and writing on edge strategies, including Philip Boxer, John Hagel and myself.

Information superiority is based on the idea that the ability to collect, process, and disseminate an uninterrupted flow of information will give you operational and strategic advantage. The advantage comes not only from the quantity and quality of information at your disposal, but also from processing this information faster than your competitors and/or fast enough for your customers. TIBCO used to call this the Two-Second Advantage.

And by processing, I'm not just talking about moving terabytes around or running up large bills from your cloud provider. I'm talking about enterprise-wide human-in-the-loop organizational intelligence: sense-making (situation awareness, model-building), decision-making (evidence-based policy), rapid feedback (adaptive response and anticipation), organizational learning (knowledge and culture). For example, the OODA loop. That's my vision of a truly data-driven organization.

There are four dimensions of information superiority which need to be addressed in a data strategy: reach, richness, agility and assurance. I have discussed each of these dimensions in a separate post:





Philip Boxer, Asymmetric Leadership: Power to the Edge

Leandro DalleMule and Thomas H. Davenport, What’s Your Data Strategy? (HBR, May–June 2017) 

John Hagel III and John Seely Brown, The Agile Dance of Architectures – Reframing IT Enabled Business Opportunities (Working Paper 2003)

Vivek Ranadivé and Kevin Maney, The Two-Second Advantage: How We Succeed by Anticipating the Future--Just Enough (Crown Books 2011). Ranadivé was the founder and former CEO of TIBCO.

Richard Veryard, Building Organizational Intelligence (LeanPub 2012)

Richard Veryard, Information Superiority and Customer Centricity (Cutter Business Technology Journal, 9 March 2017) (registration required)

Wikipedia: CCRP, OODA Loop, Power to the Edge

Related posts: Microsoft and Groove (March 2005), Power to the Edge (December 2005), Two-Second Advantage (May 2010), Enterprise OODA (April 2012), Reach Richness Agility and Assurance (August 2017)

Wednesday, August 07, 2019

Process Automation and Intelligence

What kinds of automation are there, and is there a natural progression from basic to advanced? Do the terms intelligent automation and cognitive automation actually mean anything useful, or are they merely vendor hype? In this blogpost, I shall attempt an answer.


Robotic Automation

The simplest form of automation is known as robotic automation or robotic process automation (RPA). The word robot (from the Czech word for forced labour, robota) implies a pre-programmed response to a set of incoming events. The incoming events are represented as structured data, and may be held in a traditional database. The RPA tools also include the connectivity and workflow technology to receive incoming data, interrogate databases and drive action, based on a set of rules.
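By way of illustration, here is a minimal rule-based sketch of that pattern in Python (not based on any particular RPA product): structured incoming events are matched against a set of rules, and the first matching rule drives a pre-programmed action.

    # Minimal rule-based automation sketch (illustrative, not a real RPA tool).
    # Each rule pairs a condition on a structured event with a pre-programmed action.

    def flag_for_review(event):
        print(f"Flagging invoice {event['id']} for manual review")

    def auto_approve(event):
        print(f"Auto-approving invoice {event['id']}")

    RULES = [
        # (condition, action) - evaluated in order, first match wins
        (lambda e: e["type"] == "invoice" and e["amount"] > 10_000, flag_for_review),
        (lambda e: e["type"] == "invoice", auto_approve),
    ]

    def handle(event):
        for condition, action in RULES:
            if condition(event):
                action(event)
                return
        print(f"No rule matched event {event['id']}")

    # Incoming structured events, e.g. read from a queue or database
    for event in [{"id": "INV-1", "type": "invoice", "amount": 25_000},
                  {"id": "INV-2", "type": "invoice", "amount": 300}]:
        handle(event)

Real RPA tools add connectors, screen automation and workflow management around this core, but the decision logic remains a pre-programmed rule set of this kind.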




Cognitive Automation

People talk about cognitive technology or cognitive computing, but what exactly does this mean? In its marketing material, IBM uses these terms to describe whatever features of IBM Watson they want to draw our attention to – including adaptability, interactivity and persistence – but IBM’s usage of these terms is not universally accepted.

I understand cognition to be all about perceiving and making sense of the world, and we are now seeing man-made components that can achieve some degree of this, sometimes called Cognitive Agents.

Cognitive agents can also be used to detect patterns in vast volumes of structured and unstructured data and interpret their meaning. This is known as Cognitive Insight, which Thomas Davenport and Rajeev Ronanki refer to as “analytics on steroids”. The general form of the cognitive agent is as follows.



Cognitive agents can be wrapped as a service and presented via an API, in which case they are known as Cognitive Services. The major cloud platforms (AWS, Google Cloud, Microsoft Azure) provide a range of these services, including textual sentiment analysis.
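To make the idea concrete, here is a toy sketch of a sentiment scorer wrapped as a single function (purely illustrative - real cognitive services use trained models rather than word lists, and are exposed over HTTP rather than called in-process).

    # Toy sentiment "cognitive service" (illustrative only).
    POSITIVE = {"good", "great", "excellent", "happy", "love"}
    NEGATIVE = {"bad", "poor", "terrible", "unhappy", "hate"}

    def sentiment(text: str) -> dict:
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
        # A production service would also return a calibrated confidence estimate
        return {"label": label, "score": score}

    print(sentiment("The checkout was great but the delivery was terrible"))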

At the current state of the art, cognitive services may be of variable quality. Image recognition may be misled by shadows, and even old-fashioned OCR may struggle to generate meaningful text from poor-resolution images – but of course human cognition is also fallible.


Intelligent Automation

Meanwhile, one of the key characteristics of intelligence is adaptability – being able to respond flexibly to different conditions. Intelligence is developed and sustained by feedback loops – detecting outcomes and adjusting behaviour to achieve goals. Intelligent automation therefore includes a feedback loop, typically involving some kind of machine learning.
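Here is a toy sketch of such a loop (all numbers invented): the automation acts, observes the outcome of its own behaviour, and adjusts itself to keep the outcome near a goal.

    import random

    # Sketch of a single feedback loop: act, observe the outcome, adjust behaviour.
    # The "behaviour" is an alert threshold; the (hypothetical) goal is to keep
    # the false-alarm rate near a target. All numbers are invented.

    TARGET_FALSE_ALARM_RATE = 0.05
    threshold = 0.5

    for cycle in range(10):
        # Act and observe: simulate the false-alarm rate this threshold produces
        # (in this toy model, a higher threshold means fewer false alarms)
        observed_rate = max(0.0, 0.5 - 0.5 * threshold) + random.uniform(-0.02, 0.02)

        # Learn: raise the threshold if we alarm too often, lower it if we don't
        threshold += 0.5 * (observed_rate - TARGET_FALSE_ALARM_RATE)
        print(f"cycle={cycle} false_alarm_rate={observed_rate:.2f} threshold={threshold:.2f}")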



Complex systems and processes may require multiple feedback loops (Double-Loop or Triple-Loop Learning). 


Edge Computing

If we embed this automation into the Internet of Things, we can use sensors to perform the information gathering, and actuators to carry out the actions.



Closed-Loop Automation

Now what happens if we put all these elements together?



This fits into a more general framework of human-computer intelligence, in which intelligence is broken down into six interoperating capabilities.



I know that some people will disagree with me as to which parts of this framework are called "cognitive" and which parts "intelligent". Ultimately, this is just a matter of semantics. The real point is to understand how all the pieces of cognitive-intelligent automation work together.


The Limits of Machine Intelligence

There are clear limits to what machines can do – but this doesn’t stop us getting them to perform useful work, in collaboration with humans where necessary. (Collaborative robots are sometimes called cobots.) A well-designed collaboration between human and machine can achieve higher levels of productivity and quality than either human or machine alone. Our framework allows us to identify several areas where human abilities and artificial intelligence can usefully combine.

In the area of perception and cognition, there are big differences in the way that humans and machines view things, and therefore significant differences in the kinds of cognitive mistakes they are prone to. Machines may spot or interpret things that humans might miss, and vice versa. There is good evidence for this effect in medical diagnosis, where a collaboration between human medic and AI can often produce higher accuracy than either can achieve alone.

In the area of decision-making, robots can make simple decisions much faster, but may be unreliable with more complex or borderline decisions, so a hybrid “human-in-the-loop” solution may be appropriate. 

Decisions that affect real people are subject to particular concern – GDPR specifically regulates any automated decision-making or profiling that is made without human intervention, because of the potential impact on people’s rights and freedoms. In such cases, the “human-in-the-loop” solution reduces the perceived privacy risk.

In the area of communication and collaboration, robots can help orchestrate complex interactions between multiple human experts, and allow human observations to be combined with automatic data gathering. Meanwhile, sophisticated chatbots are enabling more complex interactions between people and machines.

Finally there is the core capability of intelligence – learning. Machines learn by processing vast datasets of historical data – but that is also their limitation. So learning may involve fast corrective action by the robot (using machine learning), with a slower cycle of adjustment and recalibration by human operators (such as Data Scientists). This would be an example of Double-Loop learning.
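As a sketch of that structure (invented numbers, not a production system): the inner loop adjusts behaviour against a target using an assumed model of the process, while the outer loop periodically recalibrates the model itself - the part of double-loop learning that questions the machine's own assumptions.

    # Double-loop learning sketch. Fast loop (machine): adjust a setting towards
    # a target using an assumed process model. Slow loop (human/data scientist):
    # periodically re-estimate the model from recent history. Numbers invented.

    TRUE_GAIN = 8.0        # how the real process responds (unknown to the machine)
    assumed_gain = 5.0     # the machine's current model of the process
    target, setting = 100.0, 10.0
    history = []

    for step in range(1, 21):
        observed = setting * TRUE_GAIN                     # what actually happened
        history.append((setting, observed))

        # Fast loop: corrective action based on the assumed model
        setting += 0.5 * (target - observed) / assumed_gain

        # Slow loop: every 5 steps, recalibrate the assumed model from history
        if step % 5 == 0:
            recent = history[-5:]
            assumed_gain = sum(o for _, o in recent) / sum(s for s, _ in recent)
            print(f"step={step}: recalibrated assumed_gain to {assumed_gain:.1f}")

    print(f"final setting={setting:.2f}, observed={setting * TRUE_GAIN:.1f} (target {target})")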


Automation Roadmap

Some of the elements of this automation framework are already fairly well developed, with cost-effective components available from the technology vendors. So there are some modes of automation that are available for rapid deployment. Other elements are technologically immature, and may require a more cautious or experimental approach.

Your roadmap will need to align the growing maturity of your organization with the growing maturity of the technology, exploiting quick wins today while preparing the groundwork to be in a position to take advantage of emerging tools and techniques in the medium term.




Thomas Davenport and Rajeev Ronanki, Artificial Intelligence for the Real World (Harvard Business Review, January–February 2018)

Related posts: Automation Ethics (August 2019), RPA - Real Value or Painful Experimentation? (August 2019)

Saturday, August 03, 2019

Towards the Data-Driven Business

If we want to build a data-driven business, we need to appreciate the various roles that data and intelligence can play in the business - whether improving a single business service, capability or process, or improving the business as a whole. The examples in this post are mainly from retail, but a similar approach can easily be applied to other sectors.


Sense-Making and Decision Support

The traditional role of analytics and business intelligence is helping the business interpret and respond to what is going on.

Once upon a time, business intelligence always operated with some delay. Data had to be loaded from the operational systems into the data warehouse before they could be processed and analysed. I remember working with systems that generated management information based on yesterday's data, or even last month's data. Of course, such systems don't exist any more (!?), because people expect real-time insight, based on streamed data.

Management information systems are supposed to support individual and collective decision-making. People often talk about actionable intelligence, but of course it doesn't create any value for the business until it is actioned. Creating a fancy report or dashboard isn't the real goal, it's just a means to an end.

Analytics can also be used to calculate complicated chains of effects on a what-if basis. For example, if we change the price of this product by this much, what effect is this predicted to have on the demand for other products, what are the possible responses from our competitors, how does the overall change in customer spending affect supply chain logistics, do we need to rearrange the shelf displays, and so on. How sensitive is Y to changes in X, and what is the optimal level of Z?
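A toy example of the sensitivity question (hypothetical demand model, invented numbers): vary the price of one product and look at the predicted effect on demand and revenue.

    # What-if sketch: how sensitive is demand (and revenue) to a price change?
    # The elasticity and baseline figures are invented for illustration.

    BASE_PRICE = 2.00      # current price of the product
    BASE_DEMAND = 1000     # units sold per week at the current price
    ELASTICITY = -1.5      # % change in demand per % change in price

    def predicted_demand(price):
        price_change = (price - BASE_PRICE) / BASE_PRICE
        return BASE_DEMAND * (1 + ELASTICITY * price_change)

    for price in (1.80, 2.00, 2.20, 2.40):
        demand = predicted_demand(price)
        print(f"price={price:.2f} demand={demand:.0f} revenue={price * demand:.2f}")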

Analytics can also be used to support large-scale optimization - for example, solving complicated scheduling problems.

 
Automated Action

Increasingly, we are looking at the direct actioning of intelligence, possibly in real-time. The intelligence drives automated decisions within operational business processes, often without a human-in-the-loop, where human supervision and control may be remote or retrospective. A good example of this is dynamic retail pricing, where an algorithm adjusts the prices of goods and services according to some model of supply and demand. In some cases, optimized plans and schedules can be implemented without a human in the loop.
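A minimal sketch of that closed loop (invented figures, not a real pricing engine): the algorithm nudges the price up when stock is selling faster than planned and down when it is selling slower, with no human in the loop.

    # Dynamic pricing sketch: adjust price automatically from a demand signal.
    # Rates, bounds and step sizes are invented for illustration.

    TARGET_SALES_RATE = 50          # units per day assumed by the plan
    MIN_PRICE, MAX_PRICE = 5.0, 15.0

    def reprice(current_price, observed_sales_rate):
        """Raise the price if demand exceeds plan, lower it if demand falls short."""
        gap = (observed_sales_rate - TARGET_SALES_RATE) / TARGET_SALES_RATE
        new_price = current_price * (1 + 0.1 * gap)
        return max(MIN_PRICE, min(MAX_PRICE, new_price))

    price = 10.0
    for day, sales in enumerate([80, 70, 55, 40, 35], start=1):
        price = reprice(price, sales)
        print(f"day={day} observed_sales={sales} new_price={price:.2f}")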

So the data doesn't just flow from the operational systems into the data warehouse, but there is a control flow back into the operational systems. We can call this closed loop intelligence.

(If it takes too much time to process the data and generate the action, the action may no longer be appropriate. A few years ago, one of my clients wanted to use transaction data from the data warehouse to generate emails to customers - but with their existing architecture there would have been a 48 hour delay from the transaction to the email, so we needed to find a way to bypass this.)


Managing Complexity

If you have millions of customers buying hundreds of thousands of products, you need ways of aggregating the data in order to manage the business effectively. Customers can be grouped into segments, products can be grouped into categories, and many organizations use these groupings as a basis for dividing responsibilities between individuals and teams. However, these groupings are typically inflexible and sometimes seem perverse.

For example, in a large supermarket, after failing to find maple syrup next to the honey as I expected, I was told I should find it next to the custard. There may well be a logical reason for this grouping, but this logic was not apparent to me as a customer.

But the fact that maple syrup is in the same product category as custard doesn't just affect the shelf layout, it may also mean that it is automatically included in decisions affecting the custard category and excluded from decisions affecting the honey category. For example, pricing and promotion decisions.

A data-driven business is able to group things dynamically, based on affinity or association, and then allows simple and powerful decisions to be made for this dynamic group, at the right level of aggregation.
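To illustrate the idea of dynamic grouping (toy data, not a real affinity engine), products can be grouped by how often they appear in the same basket rather than by a fixed category tree.

    from collections import Counter
    from itertools import combinations

    # Affinity sketch: group products by co-purchase rather than fixed category.
    # The baskets are invented; a retailer would use millions of transactions.

    baskets = [
        {"maple syrup", "pancake mix", "eggs"},
        {"maple syrup", "pancake mix", "butter"},
        {"custard", "sponge pudding"},
        {"honey", "lemon", "ginger"},
        {"maple syrup", "pancake mix"},
    ]

    pair_counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(basket), 2):
            pair_counts[pair] += 1

    # Products that co-occur at least twice form a dynamic group
    for (a, b), count in pair_counts.items():
        if count >= 2:
            print(f"{a} + {b}: bought together {count} times")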

Automation can then be used to cascade the action to all affected products, making the necessary price, logistical and other adjustments for each product. This means that a broad plan can be quickly and consistently implemented across thousands of products.


Experimentation and Learning

In a data-driven business, every activity is designed for learning as well as doing. Feedback is used in the cybernetic sense - collecting and interpreting data to control and refine business rules and algorithms.

In a dynamic world, it is necessary to experiment constantly. A supermarket or online business is a permanent laboratory for testing the behaviour of its customers. For example, A/B testing, where alternatives are presented to different customers on different occasions to test which one gets the best response. As I mentioned in an earlier post, Netflix declares itself "addicted" to the methodology of A/B testing.

In a simple controlled experiment, you change one variable and leave everything else the same. But in a complex business world, everything is changing. So you need advanced statistics and machine learning, not only to interpret the data, but also to design experiments that will produce useful data.
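For a single controlled comparison, the arithmetic is straightforward; here is a bare-bones sketch with invented counts (real experimentation platforms deal with many interacting changes at once).

    from math import sqrt
    from statistics import NormalDist

    # A/B test sketch: compare conversion rates of two variants (invented counts).
    conversions_a, visitors_a = 120, 2400   # variant A
    conversions_b, visitors_b = 150, 2380   # variant B

    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)

    # Two-proportion z-test
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))

    print(f"A: {p_a:.3%}  B: {p_b:.3%}  z={z:.2f}  p-value={p_value:.3f}")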


Managing Organization

A traditional command-and-control organization likes to keep the intelligence and insight in the head office, close to top management. An intelligent organization on the other hand likes to mobilize the intelligence and insight of all its people, and encourage (some) local flexibility (while maintaining global consistency). With advanced data and intelligence tools, power can be driven to the edge of the organization, allowing for different models of delegation and collaboration. For example, retail management may feel able to give greater autonomy to store managers, but only if the systems provide faster feedback and more effective support. 


Transparency

Related to the previous point, data and intelligence can provide clarity and governance to the business, and to a range of other stakeholders. This has ethical as well as regulatory implications.

Among other things, transparent data and intelligence reveal their provenance and derivation. (This isn't the same thing as explanation, but it probably helps.)




Obviously most organizations already have many of the pieces of this, but there are typically major challenges with legacy systems and data - especially master data management. Moving onto the cloud, and adopting advanced integration and robotic automation tools may help with some of these challenges, but it is clearly not the whole story.

Some organizations may be lopsided or disconnected in their use of data and intelligence. They may have very sophisticated analytic systems in some areas, while other areas are comparatively neglected. There can be a tendency to over-value the data and insight you've already got, instead of thinking about the data and insight that you ought to have.

Making an organization more data-driven doesn't always entail a large transformation programme, but it does require a clarity of vision and pragmatic joined-up thinking.


Related posts: Rhyme or Reason: The Logic of Netflix (June 2017), Setting off towards the Data-Driven Business (August 2019)


Updated 13 September 2019

Tuesday, November 06, 2018

Big Data and Organizational Intelligence

Ten years ago, the editor of Wired Magazine published an article claiming the end of theory. With enough data, the numbers speak for themselves.

The idea that data (or facts) speak for themselves, with no need for interpretation or analysis, is a common trope. It is sometimes associated with a legal doctrine known as Res Ipsa Loquitur - the thing speaks for itself. However this legal doctrine isn't about truth but about responsibility: if a surgeon leaves a scalpel inside the patient, this fact alone is enough to establish the surgeon's negligence.

Legal doctrine aside, perhaps the world speaks for itself. The world, someone once asserted, is all that is the case, the totality of facts, not of things. Paradoxically, big data often means very large quantities of very small (atomic) data.

But data, however big, does not provide a reliable source of objective truth. This is one of the six myths of big data identified by Kate Crawford, who points out, data and data sets are not objective; they are creations of human design. In other words, we don't just build models from data, we also use models to obtain data. This is linked to Piaget's account of how children learn to make sense of the world in terms of assimilation and accommodation. (Piaget called this Genetic Epistemology.)

Data also cannot provide explanation or understanding. Data can reveal correlation but not causation. Which is one of the reasons why we need models. As Kate Crawford also observes, we get a much richer sense of the world when we ask people the why and the how not just the how many. And Bernard Stiegler links the end of theory glorified by Anderson with a loss of reason (2019, p8).

In the traditional world of data management, there is much emphasis on the single source of truth. Michael Brodie (who knows a thing or two about databases), while acknowledging the importance of this doctrine for transaction systems such as banking, argues that it is not appropriate everywhere. In science, as in life, understanding of a phenomenon may be enriched by observing the phenomenon from multiple perspectives (models). ... Database products do not support multiple models, i.e., the reality of science and life in general. One approach Brodie talks about to address this difficulty is ensemble modelling: running several different analytical models and comparing or aggregating the results. (I referred to this idea in my post on the Shelf-Life of Algorithms).
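As a minimal sketch of the ensemble idea (toy models standing in for real analytical models): run several different models over the same data and combine their predictions.

    from statistics import mean

    # Ensemble sketch: several simple models predict next month's demand,
    # and the ensemble combines their answers. Models and data are toy examples.

    history = [100, 104, 101, 108, 112, 115]

    def last_value(series):         # naive persistence model
        return series[-1]

    def moving_average(series):     # smooths out recent noise
        return mean(series[-3:])

    def linear_trend(series):       # extrapolates the average step between points
        step = (series[-1] - series[0]) / (len(series) - 1)
        return series[-1] + step

    models = [last_value, moving_average, linear_trend]
    predictions = [m(history) for m in models]

    print("individual predictions:", [round(p, 1) for p in predictions])
    print("ensemble (mean):", round(mean(predictions), 1))

Comparing the individual predictions, rather than just taking the aggregate, is what gives the multiple-perspective benefit Brodie describes.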

Along with the illusion that what the data tells you is true, we can identify two further illusions: that what the data tells you is important, and that what the data doesn't tell you is not important. These are not just illusions of big data of course - any monitoring system or dashboard can foster them. The panopticon affects not only the watched but also the watcher.

From the perspective of organizational intelligence, the important point is that data collection, sensemaking, decision-making, learning and memory form a recursive loop - each inextricably based on the others. An organization only perceives what it wants to perceive, and this depends on the conceptual models it already has - whether these are explicitly articulated or unconsciously embedded in the culture. Which is why real diversity - in other words, genuine difference of perspective, not just bureaucratic profiling - is so important, because it provides the organizational counterpart to the ensemble modelling mentioned above.


https://xkcd.com/552/


Each day seems like a natural fact
And what we think changes how we act



Chris Anderson, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete (Wired, 23 June 2008)

Michael L Brodie, Why understanding of truth is important in Data Science? (KD Nuggets, January 2018)

Kate Crawford, The Hidden Biases in Big Data (HBR, 1 April 2013)

Kate Crawford, The Anxiety of Big Data (New Inquiry, 30 May 2014)

Bruno Gransche, The Oracle of Big Data – Prophecies without Prophets (International Review of Information Ethics, Vol. 24, May 2016)

Kevin Kelly, The Google Way of Science (The Technium, 28 June 2008)

Thomas McMullan, What does the panopticon mean in the age of digital surveillance? (Guardian, 23 July 2015)

Evelyn Ruppert, Engin Isin and Didier Bigo, Data politics (Big Data and Society, July–December 2017: 1–7)

Ian Steadman, Big Data and the Death of the Theorist (Wired, 25 January 2013)

Bernard Stiegler, The Age of Disruption: Technology and Madness in Computational Capitalism (English translation, Polity Press 2019)

Ludwig Wittgenstein, Tractatus Logico-Philosophicus (1922)


Related posts

Information Algebra (March 2008), How Dashboards Work (November 2009), Conceptual Modelling - Why Theory (November 2011), Co-Production of Data and Knowledge (November 2012), Real Criticism - The Subject Supposed to Know (January 2013), The Purpose of Diversity (December 2014), The Shelf-Life of Algorithms (October 2016), The Transparency of Algorithms (October 2016), Algorithms and Governmentality (July 2019), Mapping out the entire world of objects (July 2020)


Wikipedia: Ensemble Learning, Genetic Epistemology, Panopticism, Res ipsa loquitur (the thing speaks for itself)

Stanford Encyclopedia of Philosophy: Kant and Hume on Causality


For more on Organizational Intelligence, please read my eBook.
https://leanpub.com/orgintelligence/

Thursday, December 14, 2017

Expert Systems

Is there a fundamental flaw in AI implementation, as @jrossCISR suggests in her latest article for Sloan Management Review? She and her colleagues have been studying how companies insert value-adding AI algorithms into their processes. A critical success factor for the effective use of AI algorithms (or what we used to call expert systems) is the ability to partner smart machines with smart people, and this calls for changes in working practices and human skills.

As an example of helping people to use probabilistic output to guide business actions, Ross uses the example of smart recruitment.
But what’s the next step when a recruiter learns from an AI application that a job candidate has a 50% likelihood of being a good fit for a particular opening?

Let's unpack this. The AI application indicates that at this point in the process, given the information we currently have about the candidate, we have a low confidence in predicting the performance of this candidate on the job. Unless we just toss a coin and hope for the best, obviously the next step is to try and obtain more information and insight about the candidate.

But which information is most relevant? An AI application (guided by expert recruiters) should be able to identify the most efficient path to reaching the desired level of confidence. What are the main reasons for our uncertainty about this candidate, and what extra information would make the most difference?

Simplistic decision support assumes you only have one shot at making a decision. The expert system makes a prognostication, and then the human accepts or overrules its advice.

But in the real world, decision-making is often a more extended process. So the recruiter should be able to ask the AI application some follow-up questions. What if we bring the candidate in for another interview? What if we run some aptitude tests? How much difference would each of these options make to our confidence level?
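As a rough sketch of how the algorithm might answer that question (all probabilities invented): treat the 50% as a prior, and compare how much each follow-up step is expected to shift our confidence using a simple Bayesian update.

    # Value-of-information sketch for recruitment (all probabilities invented).
    # Prior: 50% chance the candidate is a good fit. Each follow-up step has a
    # probability of a "positive" result depending on whether the fit is real.

    def posterior(prior, p_pos_given_fit, p_pos_given_no_fit, result_positive):
        """Bayes update of P(good fit) after observing one result."""
        if result_positive:
            num = p_pos_given_fit * prior
            den = num + p_pos_given_no_fit * (1 - prior)
        else:
            num = (1 - p_pos_given_fit) * prior
            den = num + (1 - p_pos_given_no_fit) * (1 - prior)
        return num / den

    prior = 0.5
    options = {
        # option: (P(positive | good fit), P(positive | not a good fit))
        "second interview": (0.70, 0.40),
        "aptitude test":    (0.85, 0.25),
    }

    for name, (tp, fp) in options.items():
        p_positive = tp * prior + fp * (1 - prior)
        shift = (p_positive * abs(posterior(prior, tp, fp, True) - prior)
                 + (1 - p_positive) * abs(posterior(prior, tp, fp, False) - prior))
        print(f"{name}: expected shift in confidence = {shift:.2f}")

On these made-up numbers the aptitude test moves the confidence level further, so that is the follow-up step the application would recommend first.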

When recruiting people for a given job, it is not just that the recruiters don't know enough about the candidate, they also may not have much detail about the requirements of the job. Exactly what challenges will the successful candidate face, and how will they interact with the rest of the team? So instead of shortlisting the candidates that score most highly on a given set of measures, it may be more helpful to shortlist candidates with a range of different strengths and weaknesses, as this will allow interviewers to creatively imagine how each will perform. So there are a lot more probabilistic calculations we could get the algorithms to perform, if we can feed enough historical data into the machine learning hopper.

Ross sees the true value of machine learning applications to be augmenting intelligence - helping people accomplish something. This means an effective collaboration between one or more people and one or more algorithms. Or what I call organizational intelligence.


Postscript (18 December 2017)

In his comment on Twitter, @AidanWard3 extends the analysis to multiple stakeholders.
This broader view brings some of the ethical issues into focus, including asymmetric information and algorithmic transparency.


Jeanne Ross, The Fundamental Flaw in AI Implementation (Sloan Management Review, 14 July 2017)

Monday, August 08, 2016

Single Point of Failure (Airlines)

Large business-critical systems can be brought down by power failure. Who knew?

In July 2016, Southwest Airlines suffered a major disruption to service, which lasted several days. It blamed the failure on "lingering disruptions following performance issues across multiple technology systems", apparently triggered by a power outage.
In August 2016 it was Delta's turn.

Then there were major problems at British Airways (Sept 2016) and United (Oct 2016).



The concept of "single point of failure" is widely known and understood. And the airline industry is rightly obsessed by safety. They wouldn't fly a plane without backup power for all systems. So what idiot runs a whole company without backup power?

We might speculate what degree of complacency or technical debt can account for this pattern of adverse incidents. I haven't worked with any of these organizations myself. However, my guess is that some people within the organization were aware of the vulnerability, but somehow this awareness didn't penetrate the management hierarchy. (In terms of orgintelligence, a short-sighted board of directors becomes the single point of failure!) I'm also guessing it's not quite as simple and straightforward as the press reports and public statements imply, but that's no excuse. Management is paid (among other things) to manage complexity. (Hopefully with the help of system architects.)

If you are the boss of one of the many airlines not mentioned in this post, you might want to schedule a conversation with a system architect. Just a suggestion.


American Airlines Gradually Restores Service After Yesterday's Power Outage (PR Newswire, 15 August 2003)

British Airways computer outage causes flight delays (Guardian, 6 Sept 2016)

Delta: ‘Large-scale cancellations’ after crippling power outage (CNN Wire, 8 August 2016)

Gatwick Airport Christmas Eve chaos a 'wake-up call' (BBC News, 11 April 2014)

Simon Calder, Dozens of flights worldwide delayed by computer systems meltdown (Independent, 14 October 2016)

Jon Cox, Ask the Captain: Do vital functions on planes have backup power? (USA Today, 6 May 2013)

Jad Mouawad, American Airlines Resumes Flights After a Computer Problem (New York Times, 16 April 2013)

 Marni Pyke, Southwest Airlines apologizes for delays as it rebounds from outage (Daily Herald, 20 July 2016)

Alexandra Zaslow, Outdated Technology Likely Culprit in Southwest Airlines Outage (NBC News, Oct 12 2015)


Related posts: Single Point of Failure (Comms) (September 2016), The Cruel World of Paper (September 2016), When the Single Version of Truth Kills People (April 2019)


Updated 14 October 2016. Link added 26 April 2019

Saturday, April 20, 2013

From information architecture to evidence-based practice

@bengoldacre has produced a report for the UK Department for Education, suggesting some lessons that education can learn from medicine, and calling for a coherent “information architecture” that supports evidence based practice. Dr Goldacre notes that in the highest performing education systems, such as Singapore, “it is almost impossible to rise up the career ladder of teaching, without also doing some work on research in education.”

Here are some of his key recommendations. Clearly these recommendations would be relevant to many other corporate environments, especially those where there is strong demand for innovation, performance and value-for-money.

  • a simple infrastructure that supports evidence-based practice
  • teachers should be empowered to participate in research
  • the results of research should be disseminated more efficiently
  • resources on research should be available to teachers, enabling them to be critical and thoughtful consumers of evidence
  • barriers between teachers and researchers should be removed
  • teachers should be driving the research agenda, by identifying questions that need to be answered.

Clearly it is not enough merely to create an information architecture or knowledge infrastructure. The challenge is to make sure they are aligned with an inquiring culture.

to be continued ...


Ben Goldacre, Teachers! What would evidence based practice look like? (Bad Science, March 2013)

Saturday, February 09, 2013

Complexity is not a problem

There is a common view in the enterprise architecture world that complexity is a big problem, perhaps the biggest problem, and that the primary task of enterprise architecture is to deal with this complexity.

  • "Enterprises are instances of complex adaptive systems having many interacting subcomponents whose interactions yield complex behaviors.  Enterprise Architecture is a way of understanding and managing such complexity." (Beryl Bellman Managing Organizational Complexity pdf FEAC Oct 2009)

Indeed, I'm sure I've said things like this myself. But if complexity is a problem, whose problem is it? I am not seeing a huge rush of businessmen hiring enterprise architects just to deal with The Complexity Problem. Usually they have much more practical problems that they want addressing.


Complexity is a problem amplifier


So here's the thing. Apart from architects, people generally don't see complexity as their problem. What they do often acknowledge, however, is that complexity makes their problems worse. Furthermore, complexity may be one of the reasons they can't solve their problems for themselves. So complexity is a relevant factor, it's just not the problem itself.


Complexity as a lifestyle choice


And where does the complexity come from? Ironically, complexity is often created by failed attempts to reduce or eliminate complexity. In a post about the AntiFragile Enterprise (Jan 2013), Alan Hakami warns against "a tendency to build over-governed solutions that try to 'manage' complexity or uncertainty".

And where is the motivation to eliminate complexity? Suppose you go to the doctor with a headache, and the doctor says the reason you keep getting these headaches is you don't get enough fresh air and exercise. The pills just give you a temporary relief from the symptoms. You know she's right, but somehow you can't manage to get up early enough to go for a run before work. So you continue to get the headaches and you continue to need the pills. Indeed, if the pills work, you may start to exercise even less than you did before.

According to this analogy, an organization chooses the level of complexity it is comfortable with, and may well resist attempts to shift to a higher or lower level.

Complexity as a smokescreen


In my post on Devious Management and Investment Risk (January 2004), I suggested that complexity can sometimes be a deliberate tactic to conceal something, and therefore serves as a clue that something is being concealed. For example, a tangle of complicated transactions. Obviously those responsible for the concealment will resist any attempt to strip away this kind of complexity. So there is no point in attacking the complexity directly, you need to identify what is behind it.

Complexity is an opportunity amplifier


If one organization is better at handling complexity than its competitors, as a consequence of superior agility and/or intelligence, then it can use complexity as a weapon. And many organizations use this weapon against customers or regulators. As @JackGavigan says, complexity is only a problem for those who lack the brain power to deal with and exploit it.


So that gives the enterprise architect a rather different perspective on complexity, doesn't it?



Related posts: Complexity-Based Pricing (June 2008), On the Causes of Business Complexity (October 2012)

Updated 27 June 2020

Sunday, November 11, 2012

Co-Production of Data and Knowledge

Following Russell Ackoff, systems thinkers like to equate wisdom with systems thinking. As Nikhil Sharma points out,

"the choice between Information and Knowledge is based on what the particular profession believes to be manageable".

When this is described as a hierarchy, this is essentially a status ranking. Wisdom (which is what I happen to have) is clearly superior to mere knowledge (which is what the rest of you might have, if you're lucky).


Here's an analogy for the so-called hierarchy of Data, Information, Knowledge and Wisdom (DIKW).

  • Data = Flour
  • Information = Bread
  • Knowledge = A Recipe for Bread-and-Butter Pudding
  • Wisdom = Only Eating A Small Portion

Note that Information isn't made solely from Data, Knowledge isn't made solely from Information, and Wisdom isn't made solely from Knowledge. See also my post on the Wisdom of the Tomato.



That's enough analogies. Let me now explain what I think is wrong with this so-called hierarchy.

Firstly, the term hierarchy seems to imply that there are three similar relationships.
  • between Data and Information
  • between Information and Knowledge
  • and between Knowledge and Wisdom
 as well as implying some logical or chronological sequence
  • Data before Information
  • Information before Knowledge
  • Knowledge before Wisdom
while the pyramid shape implies some quantitative relationships
  • Much more data than information
  • Much more information than knowledge
  • Tiny amounts of wisdom


But my objection to DIKW is not just that it isn't a valid hierarchy or pyramid, but that it isn't even a valid schema. It encourages people to regard Data-Information-Knowledge-Wisdom as a fairly rigid classification scheme, and to enter into debates as to whether something counts as information or knowledge. For example, people often argue that something only counts as knowledge if it is in someone's head. I regard these debates as unhelpful and unproductive.

A number of writers attack the hierarchical DIKW schema, and propose alternative ways of configuring the four elements. For example, Dave Snowden correctly notes that knowledge is the means by which we create information out of data. Meanwhile Tom Graves suggests we regard DIKW not as layers but as distinct dimensions in a concept-space.


But merely rearranging DIKW fails to address the most fundamental difficulty of DIKW, which is a naive epistemology that has been discredited since the Enlightenment. You don't simply build knowledge out of data. Knowledge develops through Judgement (Kant), Circular Epistemology and Dialectic (Hegel), Assimilation and Accommodation (Piaget), Conjecture and Refutation (Popper), Proof and Refutation (Lakatos), Languaging and Orientation (Maturana), and/or Mind (Bateson).

These thinkers share two things: firstly the rejection of the Aristotelian idea of one-way traffic from data to knowledge, and secondly an insistence that data must be framed by knowledge. Thus we may validate knowledge by appealing to empirical evidence (data), but we only pick up data in the first place in accordance with our preconceptions and observation practices (knowledge). Meanwhile John Durham Peters suggests that knowledge is not the gathering but the throwing away of information (Marvellous Clouds, p 318).

Among other things, this explains why organizations struggle to accommodate (and respond effectively to) weak signals, and why they persistently fail to connect the dots.

And if architects and engineers persist in trying to build information systems and knowledge management systems according to the DIKW schema, they will continue to fall short of supporting organizational intelligence properly.




For a longer and more thorough critique, see Ivo Velitchkov, Do We Still Worship The Knowledge Pyramid (May 2017)

Many other critiques are available ...

Gene Bellinger, Durval Castro and Anthony Mills, Data, Information, Knowledge, and Wisdom (Systems Thinking Wiki, 2004)

Tom Graves, Rethinking the DIKW Hierarchy (Nov 2012)

Patrick Lambe, From Data With Love (Feb 2010)

Nikhil Sharma, The Origins of the DIKW Hierarchy (23 Feb 2005)

Kathy Sierra, Moving up the wisdom hierarchy (23 April 2006)

Dave Snowden, Sense-making and Path-finding (March 2007)

Gordon Vala-Webb, The DIKW Pyramid Must Die (KM World, Oct 2012) - as reported by V Mary Abraham

David Weinberger, The Problem with the Data-Information-Knowledge-Wisdom Hierarchy (HBR, 2 February 2010)

DIKW Model (KM4dev Wiki)


Related posts: Connecting the Dots (January 2010), Too Much Information (April 2010), Seeing is not observing (November 2012), Big Data and Organizational Intelligence (November 2018), An Alternative to the DIKW Pyramid (February 2020)



Updated 8 December 2012
More links added 01 March 2020
Also merging in some material originally written in May 2006.

Saturday, April 07, 2012

From Design Thinking to Creative Intelligence

Some fans of Design Thinking got very excited about the references to design thinking in the US Army Field Manual 5-0: The Operations Process (pdf), published in March 2010, where design is described as a kind of sensemaking or orientation process - an intelligent front-end to the real business of decision making and action.

the importance of understanding complex problems more fully before we seek to solve them through our traditional planning processes ... applying design to understand before entering the visualize, describe, direct, lead, and assess cycle.

Thus design is part of the intelligence loop (rather than the other way around). See also @EllenNaylor on Design Thinking for Strategic Competitive Advantage.

In April 2011, Bruce Nussbaum, described as "one of Design Thinking’s biggest advocates", posted a blog entitled Design Thinking Is A Failed Experiment. So What’s Next? His answer: Creative Intelligence, the ability to frame problems in new ways and to make original solutions.

On the one hand, Nussbaum dreams that his godchild will win admission to a top university on the strength not only of her IQ but also her creative intelligence - in other words, seeing creative intelligence as an attribute of individual genius. On the other hand, he wants to frame creative intelligence not in terms of a psychological approach of development stages but a sociological approach in which creativity emerges from group activity - in other words, seeing creative intelligence as an attribute of a group or organization, not just the individuals within it.

See further commentary by Tom Berno, Cameron D Norman, Erica Schlaikjer.


The draft of my book on Organizational Intelligence is now available on LeanPub http://leanpub.com/orgintelligence. Please support this development by subscribing and commenting. Thanks.

Friday, April 06, 2012

Intelligent Business Operations

@opheretzion reposts a healthcare example from @jimsinur concerning resource allocation in surgeries.

The loop is as follows.

  • Decision / Planning: Scheduling and resource allocation for the following day, using simulation and optimization tools.
  • Information Gathering: Real-time tracking of selected "things" (physicians, nurses, equipment; monitoring of procedure duration and status) using a range of devices (sensors and cameras).
  • Sensemaking: Detecting deviations from plan - "things going wrong".
  • Decision / Planning: Revising the plan in "near real time".

Obviously this is an impressive use of the relevant technologies, and it may well deliver substantial benefits in terms of supply-side cost-effectiveness as well as a safer and better experience for the patient. This is essentially a goal-directed feedback loop.
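The skeleton of such a goal-directed loop might look something like this (a simplified sketch, not the actual system described by Sinur or Etzion).

    import random

    # Skeleton of a goal-directed feedback loop for a daily theatre schedule
    # (simplified illustration, not the system described in the source).

    def plan_day(cases):
        """Decision/planning: assign each case an expected duration (minutes)."""
        return {case: 60 for case in cases}

    def track(case):
        """Information gathering: observe the actual duration (simulated here)."""
        return 60 + random.choice([-10, 0, 20, 45])

    def deviates(planned, actual, tolerance=15):
        """Sensemaking: is this far enough off plan to matter?"""
        return actual - planned > tolerance

    def replan(case, overrun):
        """Decision/planning: revise the rest of the day's schedule."""
        print(f"{case} overran by {overrun} min - shifting later cases")

    schedule = plan_day(["case-1", "case-2", "case-3"])
    for case, planned in schedule.items():
        actual = track(case)
        if deviates(planned, actual):
            replan(case, actual - planned)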

However, we may note the following limitations of this loop.

1. Decision / Planning is apparently based on a fixed pre-existing normative model of operations - in other words a standard "solution" that should fit most patients' needs. This may be a reasonable assumption for some forms of routine surgery, but doesn't seem to allow for the always-present possibility of surprise when you cut the patient open.

2. Information Gathering is based on a fixed set of things-to-be-monitored. Opher calls this "real-time tracking of everything" - but of course this is a huge exaggeration. Perhaps the most important piece of information that cannot be included in this rapid feedback loop is the patient outcome. We might think that this cannot be determined conclusively until much later, but there may be some predictive metrics (perhaps the size of the incision?) that may be strongly correlated with patient outcomes.

3. Sensemaking is extremely limited - there is no time to understand what is going wrong, or to carry out deeper root-cause analysis and learning. All we can do is to react according to previously established "best practice".
4. Replanning is limited to detecting and quick-fixing deviations from the plan. See my post on Real-Time Events.



Full organizational intelligence needs to integrate this kind of rapid goal-directed feedback loop with a series of deeper analytic sensemaking and learning loops. For example, we might want to monitor how a given surgical procedure fits into a broader care pathway for the patient. Real-time monitoring is then useful not only for near-real-time operational intelligence but also for longer-term innovation.


Jim Sinur Success Snippet (January 2012)

Opher Etzion Medical Use Case (January 2012)

And please see my draft Organizational Intelligence Primer, now available on LeanPub.

Monday, April 02, 2012

Enterprise OODA

In response to @Griff0Jones, I promised to beef up the coverage of OODA in my book on Organizational Intelligence (draft now available via LeanPub). I should welcome any comments and suggestions on the following, as well as pointers to any practical examples.



The choices we make at the personal level are influenced by our experiences and our environment. We are not always fully aware of these influences, and may need someone else to point them out to us. The same is true of the strategic and operational choices made by organizations.

In a rapidly changing environment, we need a feedback loop that continuously aligns our behaviour to these changes. This is an important aspect of agility. And in a competitive situation, success depends on being more agile than our competitors - in other words, having a faster and more accurate loop.

A good model for this is the OODA (Observe, Orient, Decide, Act) loop created by John Boyd. Some people confuse this with the PDCA (Plan, Do, Check, Act) loop popularized by Shewhart and Deming. However, the key difference between PDCA and OODA is the explicit inclusion of sense-making, which Boyd calls Orientation. Boyd himself produced a second model, IOHAI, which is largely an expansion on the sense-making area.
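Expressed as code, the bare structure of the loop might look like the sketch below (purely illustrative, with a trivially simple orientation step); the real difficulty is of course what goes on inside each step.

    # Bare skeleton of an OODA loop, with Orient (sense-making) as an explicit
    # step between observation and decision. Purely illustrative.

    def observe(environment):
        return dict(environment)                     # gather raw signals

    def orient(observations, mental_model):
        # Sense-making: interpret observations through the current model;
        # in Boyd's account this step can also revise the model itself.
        return {"undercut": observations["competitor_price"] < mental_model["our_price"]}

    def decide(situation):
        return "cut_price" if situation["undercut"] else "hold_price"

    def act(decision, environment):
        if decision == "cut_price":
            environment["our_price"] = environment["competitor_price"] - 1
        return environment

    environment = {"our_price": 100, "competitor_price": 95}
    mental_model = {"our_price": environment["our_price"]}

    for cycle in range(3):
        situation = orient(observe(environment), mental_model)
        decision = decide(situation)
        environment = act(decision, environment)
        mental_model["our_price"] = environment["our_price"]
        print(f"cycle={cycle} decision={decision} state={environment}")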



In order for the OODA loop to produce real agility, there needs to be agility in each of its parts.

Agile Observation. One criticism that has been levelled at the OODA loop (for example Benson and Rotkoff) is the tendency for what you observe to be narrowed to just those things that seem to help with decision making. (David Murphy describes this tendency as inevitable.)

Simon Thornington raises a related issue in a comment to David Murphy's blog. So much of what is observed is via instrumentation (or alternatively the output of models). Without the observer having prior knowledge of the construction and assumptions of the models, he or she cannot orient effectively based on the observations. Simon suggests that this was a factor in the recent Airbus crash, various space program mishaps and throughout finance.


Agile Orientation. David Murphy points out that really big risks are often not acted upon because we are oriented so that we cannot decide, and reminds us what happened to those people who tried to act against the firm’s orientation at MF Global, or Enron or HBOS.

Discussing strategic planning through the OODA lens, Henrik Mårtensson points out the importance of orientation. If we only know one strategic paradigm, and choose a strategic method from within the range of options provided by the paradigm, we loose (sic) the ability to improve beyond what the paradigm allows. ... Boyd believed it is absolutely necessary to be able to switch paradigms at will.


Agile Decision, Agile Action. Henrik argues that operating with a crippled OODA loop and a strategic model that separates strategy and action may not kill you, but the faster the environment changes, the more hampered your organization will be by its own strategic model. Henrik recommends William Dettmer's Strategic Navigation, which in his opinion combines the principles of Maneuver Warfare with the analysis and planning tools of The Logical Thinking Process. The result is fast high quality strategic planning, and seamless integration between planning and execution.



Sources and further reading


Kevin Benson and Steven Rotkoff, Goodbye OODA Loop (Armed Forces Journal, October 2011)

Blogs by Henrik Mårtensson, David Murphy, Chet Richards, and Spartan Cops.

Saturday, March 31, 2012

Organizational Intelligence eBook

#orgintelligence I have put a draft of my Organizational Intelligence book onto the @LeanPub platform. This is now available in PDF, ePUB and MOBI.

http://leanpub.com/orgintelligence

I'm also hoping to get a draft of my Business Architecture book up soon, but this is going to take a bit longer because of the large number of diagrams.

If you work for a large company and can claim the book on expenses, please consider paying the full price. Otherwise, early readers can get the book for $15, paid via PayPal. LeanPub takes a small cut and I get the rest.

The idea of the LeanPub platform is to provide financial and moral support to authors during the development process. A variable pricing scheme encourages readers to subscribe to a book at an early stage and get full access to the book as it develops. I'm also hoping to get a lot of detailed questions, comments and suggestions.

Many thanks to Tom Graves for introducing me to LeanPub. As he states in his blog on Publishing Tetradian e-Books via LeanPub, there are a few constraints and minor bugs in the platform at present; so if you are thinking of publishing something yourself, please feel free to talk to Tom or myself.

Saturday, March 10, 2012

Structure Follows Strategy?

#entarch In his talk to the BCS Enterprise Architecture Group this week, Patrick Hoverstadt suggested that traditional enterprise architecture obeyed Alfred Chandler's principle: Structure Follows Strategy. In other words, first the leadership defines a strategy, and then enterprise architecture helps to create a structure (of sociotechnical systems) to support the strategy.

Chandler's principle was published in 1962, and is generally regarded nowadays as much too simplistic. In 1980, Hall and Saias published a paper asserting the converse principle Strategy Follows Structure! (pdf), and most modern writers now follow Henry Mintzberg in regarding the relationship between strategy and structure as reciprocal.

What are the implications of this for enterprise architecture? Patrick offered us a simple syllogism: if enterprise architects determine structure, and if structure determines strategy, then enterprise architects are (consciously or unconsciously) determining strategy. In particular, the strategies that are available to the enterprise are limited by the information systems that the enterprise uses (a) to understand what is going on both internally and externally, and (b) to anticipate future developments. In many situations, the information isn't readily accessible in the form that managers would need to mobilize a strategic response to the complexity of the demand ecosystem.

Of course it isn't as simple as this. The de facto structure (including its information systems) is hardly ever as directed by enterprise architecture, but is created by countless acts of improvisation by managers and workers just trying to get things done. (The late Claudio Ciborra wrote brilliantly about this.) People somehow get most of the information they need, not thanks to the formal information systems but despite them. Thus the emergent structures are a lot more powerful and rich than the official structures that enterprise architects and others are mandated to produce. Patrick cites the example of a wallpaper factory where productivity was markedly reduced after smoking was banned; a plausible explanation for this was that smoking had provided a pretext for informal communication between groups.

Meanwhile, Mintzberg drew our attention to a potential gulf between the official strategy and the de facto emergent strategy. (I have always especially liked his example of the Canadian Film Board, published in HBR July-Aug 1987.)

Nevertheless, Patrick's mission (which I endorse) is to connect enterprise architects with the strategy processes in an enterprise. He is a strong advocate of Stafford Beer's Viable Systems Model (VSM). Using his approach, which he encourages enterprise architects to adopt, VSM provides a unique lens for viewing the structure of enterprise, and for recognizing some common structural errors, which he calls pathological; I encourage enterprise architects to read his book on The Fractal Organization.


Related post: Co-Production of Strategy and Execution (December 2012)

Saturday, October 22, 2011

Smart Content

There are several characteristic features of so-called smart content.
  • Content enhanced to be fit-to-purpose ... content that is organized and structured for customer tasks and needs, not just for the production, packaging and distribution of physical documents. (Mirko Minnich, ex Elsevier)
  • Self-organizing and transparent content, organizing itself automatically depending on your context, goals, and workflow, and allowing you to see why it's doing what it's doing. (Mark Stefik, Xerox PARC)
  • Granular at the appropriate level, semantically rich, useful across applications, and meaningful for collaborative interaction. (Gilbane Group)
  • Has good metadata (not lots), fit for purpose, uses classifications to provide context and aid discoverability (Madi Solomon, Pearson)

And there are several characteristic technologies that are supposed to facilitate smart content, among other things. Some of these technologies are linked to Sir Tim Berners-Lee. Come on Tim!

  • Semantic technologies to cross-reference and cross-pollinate with other kinds of content. (Madi Solomon, Pearson)

As Natasha Fogel pointed out, "smart content is in the eye of the beholder" - in other words, the perceived smartness of content is relative to its context of use.
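
As a concrete (and entirely hypothetical) illustration of several of these features - granularity, task-oriented metadata, classification for discoverability, content that asks "who needs me?" - here is a minimal sketch in Python. The class, field names and example records are my own invention and are not taken from any of the vendors quoted above.

```python
from dataclasses import dataclass, field

@dataclass
class ContentFragment:
    """A granular unit of content plus the metadata that makes it 'smart'."""
    fragment_id: str
    body: str
    topics: list = field(default_factory=list)     # classification terms, for discoverability
    tasks: list = field(default_factory=list)      # customer tasks this fragment supports
    audiences: list = field(default_factory=list)  # who the fragment is meant for

def fragments_for(context, fragments):
    """Turn the question around: instead of the user asking 'what content do I need?',
    each fragment effectively asks 'who needs me?' by matching itself against the
    user's current task and role."""
    return [f.fragment_id for f in fragments
            if context["task"] in f.tasks and context["audience"] in f.audiences]

# Hypothetical usage: a clinician looking for prescribing guidance
library = [
    ContentFragment("dosage-table", "Paracetamol dosing by body weight ...",
                    topics=["analgesics"], tasks=["prescribing"], audiences=["clinician"]),
    ContentFragment("aspirin-history", "A short history of aspirin ...",
                    topics=["analgesics"], tasks=["background-reading"], audiences=["student"]),
]
print(fragments_for({"task": "prescribing", "audience": "clinician"}, library))
# ['dosage-table']
```

The design point is that the metadata travels with the fragment, so the same content can be reassembled for different tasks and audiences rather than being locked into one document.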

But in this post, I don't want to talk about the technologies themselves but about the emerging value propositions that may be supported by smart content. Last year, when he was a SVP at scientific and technical publisher Elsevier, Mirko Minnich talked about two key enablers for smart content. Firstly a value-adding process - transmuting content into scientific data, and transmuting scientific data into solutions. And secondly what he calls a product bridge, not only linking content with data but also linking the content business with the data analytics business. The product bridge appears to be a kind of platform, and Mirko was using the term "Smart Content" to refer to the platform itself as well as the content delivered on the platform.

Mirko's strategy at Elsevier represented a strong drive towards asymmetric design - in other words, recognizing that in order to deliver indirect value into a complex ecosystem you have to move away from a traditional product-based business model (in Elsevier's case, selling scientific journals) towards regarding your business as a multi-sided platform.


Mark Stefik (Xerox PARC) puts smart content into an organizational intelligence frame - the intelligence is now located (reified) in the content as well as in the people producing and consuming the content. Instead of the user asking "what content do I need", Mark wants the content to ask "who needs me?" Madi Solomon (Pearson) seems to be suggesting the exact opposite when he mentions the Big Shift from Push to Pull in his recent presentation on Smart Content. We can resolve this apparent contradiction only by understanding the intelligence as the property of the whole system rather than trying to locate it in one place - see my material on organizational intelligence.


Sources

Seth Grimes, Six definitions of smart content (Information Week, Sept 2010). Several of the quotes above come from this article.


Technology in Publishing (Editors Update, Elsevier, Jan 2011)
Next Generation Clinical Decision Support (Elsevier Press Release, Feb 2011)

Madi Solomon, Making Information Pay (April 2011)

Monday, October 17, 2011

Five Views of Business Architecture (OMG)

#OMG #BAWG The Object Management Group Business Architecture Working Group has identified five key views of a business.
  1. the Business Strategy view ("What the Business Wants")
  2. the Business Capabilities view ("What the Business Does")
  3. the Value Stream view ("How the Business Does")
  4. the Business Knowledge view ("What the Business Knows", "How the Business Thinks")
  5. the Organizational view ("What the Business Is")

While working with the CBDI Forum between 2002 and 2009, I developed an approach to business modelling for SOA, which included an emphasis on decoupling the WHAT (what the business does, what the business knows) from the HOW (how the business does it). We felt that this decoupling provided the best basis for managing differentiation and integration across complex enterprises, and for achieving appropriate economies of scale and scope.

In my more recent work on Organizational Intelligence, I have sought to further decouple "What the Business Knows" from "How the Business Thinks". Different enterprises operating in the same ecosystem may be able to share a lot of common knowledge and information, but may each arrive at different judgements about What-Is-Going-On.
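
As a rough sketch of this double decoupling - what the business knows kept separate from how the business thinks - consider the following Python fragment. The figures and decision rules are invented purely for illustration and are not taken from the OMG or CBDI material.

```python
# The same shared knowledge (WHAT the business knows) combined with different
# sense-making styles (HOW the business thinks) yields different judgements
# about What-Is-Going-On. All values and rules are hypothetical.

shared_knowledge = {
    "market_growth": 0.02,    # industry-wide figures both firms can see
    "new_entrants": 3,
    "customer_churn": 0.15,
}

def cautious_thinker(knowledge):
    """One firm's sense-making style: treat churn above 10% as the dominant fact."""
    return "defend existing customers" if knowledge["customer_churn"] > 0.10 else "hold course"

def aggressive_thinker(knowledge):
    """Another firm's style: new entrants signal a market worth attacking."""
    return "invest in growth" if knowledge["new_entrants"] >= 2 else "hold course"

for firm, thinker in [("Firm A", cautious_thinker), ("Firm B", aggressive_thinker)]:
    print(firm, "->", thinker(shared_knowledge))
```

The point is that two enterprises can share exactly the same knowledge and still reach different judgements, because the sense-making step is a separate concern.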

I shall be presenting the latest version of this schema in my Business Architecture Bootcamp and Organizational Intelligence Workshop, and exploring how this schema helps to address a range of practical business problems.



 book now  Business Architecture Bootcamp (November 22-23, 2011)
 book now  Workshop: Organizational Intelligence (November 24th, 2011)

Sunday, October 16, 2011

Intelligence Failure at Kodak

@mkplantes sees the demise of Kodak as an intelligence failure.

Put yourself in Kodak leaders’ chairs for a moment and consider the four expectations of a leadership team and, more importantly, consider the speed with which they had to work through all of the expectations:

Sense what’s going on around you? (“Digital is coming!”)
Make sense of what you see, hear, and feel (“Film is dying, but we can’t kill it now. It’s too important!”)
Decide on a course of action (“OMG! Nothing is as big as film is now. Let’s think about this and be careful.”)
Act on your decisions (“Well, this is a big ship! Hard to change course overnight!”)
Kay Plantes, A sad “Kodak moment” business model failure, WTN News, 7 October 2011


This is effectively an OODA loop (Observe, Orient, Decide, Act). Dr Plantes identifies a number of possible errors in this loop; a minimal sketch of the loop itself follows the list below.

1. Incorrect estimate of the pace of change. "Successful companies often underestimate the speed of industry evolution."

2. Incorrect understanding of the value proposition from the customers' perspective. "People don’t buy film, they use film to capture the pictures they want."

3. Incorrect optimization of the basis of competition - commodity wars.
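
To make the loop idea concrete, here is a minimal sketch of a sense / make-sense / decide / act cycle in Python. The signals, thresholds and actions are entirely invented for illustration and are not a model of Kodak's actual decision process; the point is simply that the output of each pass (the updated belief) feeds the next pass, which is what distinguishes a genuine loop from mere repetition.

```python
# A toy sense / make-sense / decide / act cycle with feedback.
# All signals, thresholds and actions are invented for illustration only.

def sense(environment):
    return environment["digital_share"]            # e.g. share of photos captured digitally

def make_sense(signal, belief):
    # Orientation: update the belief about the pace of change, rather than ignoring the signal
    return 0.5 * belief + 0.5 * signal

def decide(belief):
    return "shift to digital" if belief > 0.3 else "defend film"

def act(decision, environment):
    if decision == "shift to digital":
        environment["digital_capability"] += 0.1
    return environment

environment = {"digital_share": 0.1, "digital_capability": 0.0}
belief = 0.0
for year in range(5):
    signal = sense(environment)
    belief = make_sense(signal, belief)
    environment = act(decide(belief), environment)
    environment["digital_share"] += 0.15           # the world keeps moving regardless
    print(year, round(belief, 2), decide(belief))
```

An organization that underestimates the pace of change is, in these terms, one whose belief update lags the signal so badly that the decision flips too late to matter.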

If it was a strategic error for Kodak to get caught up in a dogfight with Fuji, we should also ask how Fuji is faring. Has Fuji committed the same errors as Kodak, and is it suffering the same fate? Meanwhile, Stuart Henshall compares Kodak with HP: two inventive companies, who "failed time and time again to find a more agile footing". (HP - What's Your strategy? August 2011).

Dr Plantes complains that Kodak focused on the product rather than on the value received by its customers - in other words, that it failed to think in terms of a platform strategy. But Kodak has been trying to shift its business model from product to a service-oriented platform for at least five years. In November 2006, an article in BusinessWeek described this transformation, and outlined some of the big challenges then facing Kodak (Mistakes made on the road to innovation, BusinessWeek November 2006). In February 2007, Clayton Christensen and Scott D. Anthony saw the Kodak strategy as an ambitious attempt to implement Christensen's concept of disruptive innovation (Will Kodak's New Strategy Work? Forbes February 2007).


Antonio Perez (who spent much of his career at HP) has been the CEO throughout this period, and has watched the Kodak share price drop from around $25 to less than $1. We may infer that Kodak has failed to overcome the challenges identified by BusinessWeek and Christensen. But why?


What's missing from Dr Plantes' analysis is any appreciation of whether these four steps operated as an effective OODA loop, with feedback and learning, rather than as mere repetition. In a detailed analysis of Kodak's strategy, George Mendes concludes:
Kodak is an example of repeat strategic failure – it was unable to grasp the future of digital quickly enough, and even when it did so, it was implemented too slowly under a continuous change strategy and ultimately it did not fit coherently as a core competency.

George Mendes, What went wrong at Eastman Kodak (pdf), TheStrategyTank


There is a great deal on the Internet about Kodak's social media strategy - but it seems to be largely about Kodak marketing communications. Journalist Courtney Boyd Myers (@CBM) invites us to Meet the brilliant and beautiful woman behind Kodak’s social media strategy (September 2011). The woman in question is extremely photogenic and obviously good at self-promotion, but there is nothing strategic in the article. The big strategic error here is to regard social media and content management as a marketing issue, separate from the business model itself. This seems to suggest a lack of joined-up thinking - and ultimately a failure of organizational intelligence.


In 2007, Jacob McNulty thought that instilling the elements of a learning organization would have strongly contributed to a different story for Kodak’s recent years.
A learning organization is one that learns from its mistakes and successes, spots trends in the market and acts on them by being nimble enough to do so.  A culture of learning rewards knowledge sharing which reduces the chances that you’ll be blindsided by something like digital in 2007. Kodak could have presented themselves as a picture company many years ago - whether those pictures are on film or in a file it shouldn’t matter.  Part of making that transition would require a company that is ready to learn and develop.

Jacob McNulty, Not a Kodak Moment (2007)

Other sources claim that Kodak is a learning organization. In which case, why has it failed to learn the things that matter?


 book now  Business Architecture Bootcamp (November 22-23, 2011)
 book now  Workshop: Organizational Intelligence (November 24th, 2011)

Monday, October 10, 2011

Towards a VPEC-T analysis of Google

#entarch Enterprise architects need to understand values and policies. VPEC-T is an approach that is particularly useful for situations where there are multiple conflicting values and policies, or multiple interpretations of What-Is-Going-On.

In this post, I want to look at Google. Can we infer its values and policies from its observed behaviour over time?


We may start by asking what events we think Google is paying attention to. Here are some of the events that are available to Google.

1. You search for "XYZ"
2. You skip over the first few items, and click on the third item on the second page.
3. You look at a webpage and then come back to continue your search.
4. You rephrase and clarify your enquiry.

Google is pretty coy about its exact use of these events, but most Google-watchers assume that these events have an influence on its search algorithms and/or its advertising algorithms. In other words, we may presume that Google is generating valuable content from these events.
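
As a rough sketch of what generating value from such events might look like, the fragment below aggregates a handful of hypothetical interaction events into a crude relevance signal. The event fields, weights and thresholds are my own invention; Google does not publish how it actually uses this data.

```python
from collections import defaultdict

# Hypothetical interaction events of the kinds listed above; the field names
# and weights are invented for illustration, not Google's actual signals.
events = [
    {"query": "XYZ", "action": "click",    "result": "example.org/page3", "rank": 13},
    {"query": "XYZ", "action": "return",   "result": "example.org/page3"},  # came back, kept searching
    {"query": "XYZ", "action": "rephrase", "new_query": "XYZ standards"},
]

def relevance_signal(events):
    """Aggregate raw behaviour into a crude per-result score: a click on a
    low-ranked result suggests the ranking undervalued it; an immediate return
    suggests the page did not satisfy the query."""
    scores = defaultdict(float)
    for e in events:
        if e["action"] == "click" and e.get("rank", 0) > 10:
            scores[e["result"]] += 1.0
        if e["action"] == "return":
            scores[e["result"]] -= 0.5
    return dict(scores)

print(relevance_signal(events))
```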

Google has indulged in a wide range of initiatives over the years, many of which have no obvious line to revenue. But all of them have the potential to generate vast amounts of rich content - much of it related to the observed behaviour of internet users. On this interpretation of Google's strategy, initiatives are dropped, not because they fail to generate revenue but because they fail to generate enough of the desired kind of content. Google is betting its future on building and maintaining this content through powerful positive feedback.

Google's strategy is therefore surprisingly traditional - it involves capturing some territory and defending it against its competitors. Here's an example - Google provides the Android platform to mobile device manufacturers. When Motorola wanted to use Skyhook's location technology instead of Google's, Google forced it to fall into line. Daniel Soar argues that this was not because Google executives feared losing revenue but because they feared losing access to an important source of content. As Soar puts it, "Google faced the unfamiliar problem of the negative feedback loop: the fewer people that used its product, the less information it would have and the worse the product would get." (Google has since announced the acquisition of Motorola Mobility, which presumably resolves some of the trust issues as well.)

Daniel Soar, You can't get away from Google, London Review of Books, Vol 33 No 19, 6 October 2011
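
Soar's point about the feedback loop lends itself to a back-of-the-envelope simulation: in a data-driven product, usage and quality reinforce each other, so losing a data source does not just cost a fixed amount of information, it compounds over time. The coefficients below are invented; only the shape of the dynamic matters.

```python
# A toy model of the feedback loop Soar describes: product quality depends on
# accumulated usage data, and usage depends on quality. Coefficients are invented.

def simulate(users, quality, data_share, steps=5):
    history = []
    for _ in range(steps):
        data = users * data_share                 # behavioural data actually captured
        quality = 0.7 * quality + 0.3 * data      # quality tracks the data we can learn from
        users = users * (0.8 + 0.25 * quality)    # a better product attracts and retains users
        history.append(round(users, 2))
    return history

print("full access:", simulate(users=1.0, quality=1.0, data_share=1.0))
print("lost source:", simulate(users=1.0, quality=1.0, data_share=0.5))
```

With full access to the event stream the user base grows; with the source halved, the same dynamic runs in reverse, which is exactly the "unfamiliar problem" Soar describes.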


Can we understand Google's phenomenal collection and use of data as an example of organizational intelligence? Google is certainly seeking to differentiate each Internet user's experience, as well as integrate across multiple domains (web browsing, email, blogging, voice, video, satnav, and so on). Google already has an army of brilliant engineers as well as an alarmingly large carbon footprint. There is lots of evidence of Google's integrating these resources into one of the most innovative sociotechnical systems on the planet.

(By the way, when I asked Google itself about its carbon footprint, it recommended I look at a recent story in the Guardian (8 September 2011). I can see that Google has been asked this question many times before, because it pops up so quickly as an expected search term. But why should I trust Google's recommendation, and how can I ever discover what newspapers would be recommended to a browser with a different browsing history to mine?)

But a lot of this learning looks suspiciously like first-order learning. So the content gets better, based on better capture of events, but to what extent is there any systematic evolution of policies or questioning of values? There may well be some second-order or third-order learning, but it's not easy to see from the outside. There is also an important question about the relationship between Google's own ability to learn from its accumulated content, and Google's ability or willingness to provide a rich platform for learning by others in its ecosystem - in other words, a broader notion of collective intelligence.
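
The distinction can be made concrete with a toy contrast: first-order (single-loop) learning tunes a parameter inside a fixed policy, while second-order (double-loop) learning steps back and questions the policy itself. Everything in the sketch below is invented for illustration.

```python
# First-order learning: adjust a parameter inside a fixed policy.
# Second-order learning: notice that the policy itself may be optimizing the wrong thing.

def first_order_update(threshold, error):
    """Tune the operating parameter to reduce the observed error; the policy is never questioned."""
    return threshold - 0.1 * error

def second_order_review(history, policy):
    """Step back and ask whether the policy is still the right one, given repeated failure."""
    if all(abs(err) > 0.5 for err in history[-3:]):
        return "revise policy: " + policy + " may be optimizing the wrong thing"
    return policy

threshold, policy, history = 1.0, "maximize clicks", []
for err in [0.9, 0.8, 0.9]:          # persistent errors despite first-order tuning
    threshold = first_order_update(threshold, err)
    history.append(err)

print(round(threshold, 2), "|", second_order_review(history, policy))
```

From the outside, all we can usually observe is the first-order tuning; whether the second-order review ever happens is exactly the question raised above.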

I wonder if there are any lessons for other organizations? Sometimes firms like Amazon, Apple, Facebook and Google (Eric Schmidt's Gang of Four) seem pretty far removed from most other organizations, but their platform strategies and operating patterns will surely become increasingly relevant in other sectors. A traditional retailer may now collect and analyse a much larger quantity of data about its customers' behaviour than ever before, even if this is still several orders of magnitude less than what Google does. A traditional telecoms or media company may now see itself as a platform business in a multisided market. Therefore instead of seeing Eric Schmidt's Gang of Four as impossibly remote and mysterious organizations, populated by unbelievably talented and creative engineers, we should start to think of them as harbingers of the enterprise of the future.



See also my post Google as a Platform (not)



 book now  Business Architecture Bootcamp (November 22-23, 2011)
 book now  Workshop: Organizational Intelligence (November 24th, 2011)