Monday, April 22, 2019

When the Single Version of Truth Kills People

@Greg_Travis has written an article on the Boeing 737 Max Disaster, which @jjn1 describes as "one of the best pieces of technical writing I’ve seen in ages". Travis explains why normal airplane design includes redundant sensors ...

"There are two sets of angle-of-attack sensors and two sets of pitot tubes, one set on either side of the fuselage. Normal usage is to have the set on the pilot’s side feed the instruments on the pilot’s side and the set on the copilot’s side feed the instruments on the copilot’s side. That gives a state of natural redundancy in instrumentation that can be easily cross-checked by either pilot. If the copilot thinks his airspeed indicator is acting up, he can look over to the pilot’s airspeed indicator and see if it agrees. If not, both pilot and copilot engage in a bit of triage to determine which instrument is profane and which is sacred."

and redundant processors, to guard against a Single Point of Failure (SPOF).

"On the 737, Boeing not only included the requisite redundancy in instrumentation and sensors, it also included redundant flight computers—one on the pilot’s side, the other on the copilot’s side. The flight computers do a lot of things, but their main job is to fly the plane when commanded to do so and to make sure the human pilots don’t do anything wrong when they’re flying it. The latter is called 'envelope protection'."

But ...

"In the 737 Max, only one of the flight management computers is active at a time—either the pilot’s computer or the copilot’s computer. And the active computer takes inputs only from the sensors on its own side of the aircraft."

As a result of this design error, 346 people are dead. Travis doesn't pull his punches.

"It is astounding that no one who wrote the MCAS software for the 737 Max seems even to have raised the possibility of using multiple inputs, including the opposite angle-of-attack sensor, in the computer’s determination of an impending stall. As a lifetime member of the software development fraternity, I don’t know what toxic combination of inexperience, hubris, or lack of cultural understanding led to this mistake."

He may not know what led to this specific mistake, but he can certainly see some of the systemic issues that made this mistake possible. Among other things, the widespread idea that software provides a cheaper and quicker fix than getting the hardware right, together with what he calls cultural laziness.

"Less thought is now given to getting a design correct and simple up front because it’s so easy to fix what you didn’t get right later."

Agile, huh?



How a Single Point of Failure (SPOF) in the MCAS software could have caused the Boeing 737 Max crash in Ethiopia (DMD Solutions, 5 April 2019) - provides a simple explanation of Fault Tree Analysis (FTA) as a technique to identify SPOF.

By Mike Baker and Dominic Gates, Lack of redundancies on Boeing 737 MAX system baffles some involved in developing the jet (Seattle Times 26 March 2019)

George Leopold, Boeing 737 Max: Another Instance of 'Go Fever'? (29 March 2019)

Gregory Travis, How the Boeing 737 Max Disaster Looks to a Software Developer (IEEE Spectrum, 18 April 2019) HT @jjn1 @ruskin147

And see my other posts on the Single Source of Truth.

Sunday, April 21, 2019

How Many Ethical Principles?

Although ethical principles have been put forward by philosophers through the ages, the first person to articulate ethical principles for information technology was Norbert Wiener. In his book The Human Use of Human Beings, first published in 1950, Wiener based his computer ethics on what he called four great principles of justice.
  • Freedom. Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”
  • Equality. Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.”
  • Benevolence. Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”
  • Minimum Infringement of Freedom. “What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom.”

Meanwhile, Isaac Asimov's Three Laws of Robotics were developed in a series of short stories in the 1940s, around the same time that Wiener was developing his ideas about cybernetics. Many writers on technology ethics argue that robots (or any other form of technology) should be governed by principles, and this idea is often credited to Asimov. But as far as I can recall, every Asimov story that mentions the Three Laws of Robotics produces some counter-example to demonstrate that the Three Laws don't actually work as intended. I have therefore always regarded Asimov's work as satirical rather than prescriptive. (Much the same could be said of J.K. Rowling's descriptions of the unsuccessful attempts by wizard civil servants to regulate the use of magical technologies.)

So for several decades, the Wiener approach to ethics prevailed, and discussion of computer ethics was focused on a common set of human values: life, health, security, happiness, freedom, knowledge, resources, power and opportunity. (Source: SEP: Computer and Information Ethics)

But these principles were essentially no different to the principles one would find in any other ethical domain. For many years, scholars disagreed as to whether computer technology introduced an entirely new set of ethical issues, and therefore called for a new set of principles. The turning point was at the ETHICOMP1995 conference in March 1995 (just two months before Bill Gates' Internet Tidal Wave memo), with important presentations from Walter Maner (who had been arguing this point for years) and Krystyna Górniak-Kocikowska. From this point onwards, computer ethics would have to address some additional challenges, including the global reach of the technology - beyond the control of any single national regulator - and the vast proliferation of actors and stakeholders. Terrell Bynum calls this the Górniak hypothesis.

Picking up the challenge, Luciano Floridi started to look at the ethical issues raised by autonomous and interactive agents in cyberspace. In a 2001 paper on Artificial Evil with Jeff Sanders, he stated "It is clear that something similar to Asimov's Laws of Robotics will need to be enforced for the digital environment (the infosphere) to be kept safe."

Floridi's work on Information Ethics (IE) represented an attempt to get away from the prevailing anthropocentric worldview. "IE suggests that there is something even more elemental than life, namely being – that is, the existence and flourishing of all entities and their global environment – and something more fundamental than suffering, namely entropy." He therefore articulated a set of principles concerning ontological equality (any instance of information/being enjoys a minimal, initial, overridable, equal right to exist and develop in a way which is appropriate to its nature) and information entropy (which ought not to be caused in the infosphere, ought to be prevented, ought to be removed). (Floridi 2006)

In the past couple of years, there has been a flood of ethical principles to choose from. In his latest blogpost, @Alan_Winfield lists over twenty sets of principles for robotics and AI published between January 2017 and April 2019, while AlgorithmWatch lists over fifty. Of particular interest may be the principles published by some of the technology giants, as well as the absence of such principles from some of the others. Meanwhile, Professor Floridi's more recent work on ethical principles appears to be more conventionally anthropocentric.

The impression one gets from all these overlapping sets of principles is of lots of experts and industry bodies competing to express much the same ideas in slightly different terms, in the hope that their version will be adopted by everyone else.

But what would "adopted" actually mean? One possible answer is that these principles might feed into what I call Upstream Ethics, contributing to a framework for both regulation and action. However some commentators have expressed scepticism as to the value of these principles. For example, @InternetDaniel thinks that these lists of ethical principles are "too vague to be effective", and suggests that this may even be intentional, these efforts being "largely designed to fail". And @EricNewcomer says "we're in a golden age for hollow corporate statements sold as high-minded ethical treatises".

As I wrote in an earlier piece on principles:
In business and engineering, as well as politics, it is customary to appeal to "principles" to justify some business model, some technical solution, or some policy. But these principles are usually so vague that they provide very little concrete guidance. Profitability, productivity, efficiency, which can mean almost anything you want them to mean. And when principles interfere with what we really want to do, we simply come up with a new interpretation of the principle, or another overriding principle, which allows us to do exactly what we want while dressing up the justification in terms of "principles". (January 2011)

The key question is about governance - how will these principles be applied and enforced, and by whom? What many people forget about Asimov's Three Laws of Robotics is that they weren't enforced by roving technology regulators but were designed into the robots themselves, thanks to the fact that one corporation (U.S. Robots and Mechanical Men, Inc.) had control over the relevant patents and therefore exercised a monopoly over the manufacture of robots. No doubt Google, IBM and Microsoft would like us to believe that they can be trusted to produce ethically safe products, but clearly this doesn't address the broader problem.

Following the Górniak hypothesis, if these principles are to mean anything, they need to be taken seriously not only by millions of engineers but also by billions of technology users. And I think this in turn entails something like what Julia Black calls Decentred Regulation, which I shall try to summarize in a future post. Hopefully this won't be just what Professor Floridi calls Soft Ethics.

Update: My post on Decentred Regulation and Responsible Technology is now available.



Algorithm Watch, AI Ethics Guidelines Global Inventory

Terrell Bynum, Computer and Information Ethics (Stanford Encyclopedia of Philosophy)

Luciano Floridi, Information Ethics - Its Nature and Scope (ACM SIGCAS Computers and Society · September 2006)

Luciano Floridi and Tim (Lord) Clement-Jones, The five principles key to any ethical framework for AI (New Statesman, 20 March 2019)

Luciano Floridi and J.W. Sanders, Artificial evil and the foundation of computer ethics (Ethics and Information Technology 3: 55–66, 2001)

Eric Newcomer, What Google's AI Principles Left Out (Bloomberg 8 June 2018)

Daniel Susser, Ethics Alone Can’t Fix Big Tech (Slate, 17 April 2019)

Alan Winfield, An Updated Round Up of Ethical Principles of Robotics and AI (18 April 2019)


Wikipedia: Laws of Robotics, Three Laws of Robotics


Related posts: The Power of Principles (Not) (January 2011), Data and Intelligence Principles from Major Players (June 2018), Ethics Soft and Hard (February 2019), Upstream Ethics (March 2019), Ethics Committee Raises Alarm (April 2019), Decentred Regulation and Responsible Technology (April 2019)

Link corrected 26 April 2019

Tuesday, April 16, 2019

Is there a Single Version of Truth about Statins?

@bengoldacre provides some useful commentary on a BBC news item about statins. In particular, he notes a detail from the original research paper that didn't make it into the BBC news item - namely the remarkable lack of agreement between GPs and hospitals as to whether a given patient had experienced a cardiovascular event.

This is not a new observation: it was analysed in a 2013 paper by Emily Herrett and others. Dr Goldacre advised a previous Health Minister that "different data sources within the NHS were wildly discrepant wrt to the question of something as simple as whether a patient had had a heart attack". The minister asked which source was right - in other words, asking for a single source of truth. But the point is that there isn't one.

Data quality issues can be traced to a number of causes. While some of the issues may be caused by administrative or technical errors and omissions, others are caused by the way the data are recorded in the first place. This is why the comparison of health data between different countries is often misleading - because despite international efforts to standardize classification, different healthcare regimes still code things differently. And despite the huge amounts of NHS money thrown at IT projects to standardize medical records (as documented by @tonyrcollins), the fact remains that primary and secondary healthcare view the patient completely differently.
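To make the point about discrepant sources concrete, here is a hypothetical sketch of the kind of cross-source comparison involved. The data, column names and sources are invented for illustration; the real analysis is in the Herrett paper cited below.

```python
# Hypothetical illustration: do primary care and hospital records agree on
# whether each patient had a myocardial infarction? All values are invented.
import pandas as pd

gp = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                   "mi_recorded": [True, False, True, False]})
hospital = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                         "mi_recorded": [True, True, False, False]})

merged = gp.merge(hospital, on="patient_id", suffixes=("_gp", "_hospital"))
merged["sources_agree"] = merged["mi_recorded_gp"] == merged["mi_recorded_hospital"]

print(merged)
print("Agreement rate:", merged["sources_agree"].mean())
# Nothing in this output tells you which source is right:
# there is no single source of truth to fall back on.
```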

See my previous blogposts on Single Source of Truth


Tony Collins, Another NPfIT IT scandal in the making? (Campaign4Change, 9 February 2016)

Emily Herrett et al, Completeness and diagnostic validity of recording acute myocardial infarction events in primary care, hospital care, disease registry, and national mortality records: cohort study (BMJ 21 May 2013)

Michelle Roberts, Statins 'don't work well for one in two people' (BBC News, 15 April 2019)

Benoît Salanave et al, Classification differences and maternal mortality: a European study (International Journal of Epidemiology 28, 1999) pp 64–69

Saturday, February 02, 2019

What is a framework?

The term "framework" is much abused. Here's a statement picked at random from the Internet:
"By bringing these key aspects together on a process-oriented strategy, organizations are able to acquire agility and improve the ability to engage on a much more effective way. This translates into a solid, resilient and innovative framework that drives the ability to continuously improve and assure sustainability."
Does that actually mean anything? What does it take for a "framework" to be simultaneously solid and resilient, let alone innovative?

If you take a random set of concepts - social, mobile, analytics, cloud, internet of things - you can string the initial letters into a slightly suggestive acronym - SMACIT. But have you got a framework?

And if you take a random set of ideas and arrange them artistically, you can create a nice diagram. But have you got a framework, or is this just Methodology by PowerPoint?


I just did an image search for "framework" on Google. Here are the top ten examples. Pretty typical of the genre.


In 1987, John Zachman published an article in the IBM Systems Journal, entitled "A Framework for Information Systems Architecture", hypothesizing a set of architectural representations for information systems, arranged in a table. The international standard ISO/IEC 42010 defines an architectural framework as any set of architectural descriptions or viewpoints satisfying certain conditions, and the Zachman Framework is usually cited as a canonical example of this, along with RM-ODP and a selection of AFs (MOD, DOD, TOG, etc.).

But there are a lot of so-called frameworks that don't satisfy this definition. Sometimes there is just a fancy diagram, which makes less sense the more you look at it. Let's look at the top example more closely.



The webpage asserts that "the summary graphic illustrates the main relationships between each heading and relevant sub-headings". Sorry, it doesn't. What does this tell me about the relationship between Knowledge and Governance? If Advocacy is about promoting things, does that mean that Knowledge is about preventing things? And if Prevent, Protect and Promote are verbs, does this mean that People is also a verb? I'm sure there is a great deal of insight and good intentions behind this diagram, but the diagram itself owes more to graphic design than systems thinking. All I can see in this "systems framework" is (1) some objectives (2) a summary diagram and (3) a list. If that's really all there is, I can't see how such a framework "can be used as a flexible assessment, planning and evaluation tool for policy-making".

For clarification, I think there is an important difference between a framework and a lens. A lens provides a viewpoint - a set of things that someone thinks you should pay attention to - but its value doesn't depend on its containing everything. VPEC-T is a great lens, but is Wikipedia right to characterize it as a thinking framework?



Commonwealth Health Hub, A systems framework for healthy policy (31 October 2016)

Filipe Janela, The 3 cornerstones of digital transformation (Processware, 30 June 2017)

Anders Jensen-Waud, On Enterprise Taxonomy Completeness (9 April 2010)

John Zachman, A Framework for Information Systems Architecture (IBM Systems Journal, Vol 26 No 3, 1987).

Wikipedia: ISO/IEC 42010, VPEC-T

Related posts: What's Missing from VPEC-T (September 2009), Evolving the Enterprise Architecture Body of Knowledge (October 2012), Arguing with Mendeleev (March 2013)


Updated 28 February 2019 
Added @ricphillips' tweet. See discussion following. I also remembered an old discussion with Anders Jensen-Waud.

Tuesday, November 06, 2018

Big Data and Organizational Intelligence

Ten years ago, the editor of Wired Magazine published an article claiming the end of theory. "With enough data, the numbers speak for themselves."

The idea that data (or facts) speak for themselves, with no need for interpretation or analysis, is a common trope. It is sometimes associated with a legal doctrine known as Res Ipsa Loquitur - the thing speaks for itself. However this legal doctrine isn't about truth but about responsibility: if a surgeon leaves a scalpel inside the patient, this fact alone is enough to establish the surgeon's negligence.

Or even that the world speaks for itself. The world, someone once asserted, is all that is the case, the totality of facts, not of things. Paradoxically, big data often means very large quantities of very small (atomic) data.

But data, however big, does not provide a reliable source of objective truth. This is one of the six myths of big data identified by Kate Crawford, who points out, "data and data sets are not objective; they are creations of human design". In other words, we don't just build models from data, we also use models to obtain data. This is linked to Piaget's account of how children learn to make sense of the world in terms of assimilation and accommodation. (Piaget called this Genetic Epistemology.)

Data also cannot provide explanation or understanding. Data can reveal correlation but not causation. Which is one of the reasons why we need models. As Kate Crawford also observes, "we get a much richer sense of the world when we ask people the why and the how not just the how many".

In the traditional world of data management, there is much emphasis on the single source of truth. Michael Brodie (who knows a thing or two about databases), while acknowledging the importance of this doctrine for transaction systems such as banking, argues that it is not appropriate everywhere. "In science, as in life, understanding of a phenomenon may be enriched by observing the phenomenon from multiple perspectives (models). ... Database products do not support multiple models, i.e., the reality of science and life in general.". One approach Brodie talks about to address this difficulty is ensemble modelling: running several different analytical models and comparing or aggregating the results. (I referred to this idea in my post on the Shelf-Life of Algorithms).
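To make the ensemble idea concrete, here is a minimal sketch using scikit-learn. The models and data are placeholders; the point is simply that several different models are fitted and their predictions aggregated and compared, rather than relying on a single model.

```python
# Minimal sketch of ensemble modelling: fit several different models and
# aggregate their predictions, instead of trusting any single model.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

ensemble = VotingRegressor([
    ("linear", LinearRegression()),
    ("forest", RandomForestRegressor(n_estimators=50, random_state=0)),
    ("knn", KNeighborsRegressor(n_neighbors=5)),
])
ensemble.fit(X, y)

# Where the individual models disagree is itself useful information,
# not just noise to be averaged away.
for name, model in ensemble.named_estimators_.items():
    print(name, model.predict(X[:3]))
print("ensemble", ensemble.predict(X[:3]))
```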

Along with the illusion that what the data tells you is true, we can identify two further illusions: that what the data tells you is important, and that what the data doesn't tell you is not important. These are not just illusions of big data of course - any monitoring system or dashboard can foster them. The panopticon affects not only the watched but also the watcher.

From the perspective of organizational intelligence, the important point is that data collection, sensemaking, decision-making, learning and memory form a recursive loop - each inextricably based on the others. An organization only perceives what it wants to perceive, and this depends on the conceptual models it already has - whether these are explicitly articulated or unconsciously embedded in the culture. Which is why real diversity - in other words, genuine difference of perspective, not just bureaucratic profiling - is so important, because it provides the organizational counterpart to the ensemble modelling mentioned above.





Chris Anderson, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete (Wired, 23 June 2008)

Michael L Brodie, Why understanding of truth is important in Data Science? (KD Nuggets, January 2018)

Kate Crawford, The Hidden Biases in Big Data (HBR, 1 April 2013)

Kate Crawford, The Anxiety of Big Data (New Inquiry, 30 May 2014)

Bruno Gransche, The Oracle of Big Data – Prophecies without Prophets (International Review of Information Ethics, Vol. 24, May 2016)

Thomas McMullan, What does the panopticon mean in the age of digital surveillance? (Guardian, 23 July 2015)

Evelyn Ruppert, Engin Isin and Didier Bigo, Data politics (Big Data and Society, July–December 2017: 1–7)

Ian Steadman, Big Data and the Death of the Theorist (Wired, 25 January 2013)

Ludwig Wittgenstein, Tractatus Logico-Philosophicus (1922)


Related posts

Information Algebra (March 2008)
How Dashboards Work (November 2009)
Co-Production of Data and Knowledge (November 2012)
Real Criticism - The Subject Supposed to Know (January 2013)
The Purpose of Diversity (December 2014)
The Shelf-Life of Algorithms (October 2016)
The Transparency of Algorithms (October 2016)


Wikipedia: Ensemble Learning, Genetic Epistemology, Panopticism, Res ipsa loquitur (the thing speaks for itself)

Stanford Encyclopedia of Philosophy: Kant and Hume on Causality


For more on Organizational Intelligence, please read my eBook.
https://leanpub.com/orgintelligence/

Sunday, November 04, 2018

On Repurposing AI

With great power, as they say, comes great responsibility.


Speaking at Microsoft's Future Decoded event in London this week, Satya Nadella asserted - according to reporter @richard_speed of @TheRegister - that using an AI trained for one purpose for a different purpose was "an unethical use".

If Microsoft really believes this, it would certainly be a radical move. In April this year Mark Russinovich, Azure CTO, gave a presentation at the RSA Conference on Transfer Learning: Repurposing ML Algorithms from Different Domains to Cloud Defense.

Repurposing data and intelligence - using AI for a different purpose to its original intent - may certainly have ethical consequences. This doesn't necessarily mean it's wrong, simply that the ethics must be reexamined. Responsibility by design (like privacy by design, from which it inherits some critical ideas) considers a design project in relation to a specific purpose and use-context. So if the purpose and context change, it is necessary to reiterate the responsibility-by-design process.

A good analogy would be the off-label use of medical drugs. There is considerable discussion on the ethical implications of this very common practice. For example, Furey and Wilkins argue that off-label prescribing imposes additional responsibilities on a medical practitioner, including weighing the available evidence and proper disclosure to the patient.

There are often strong arguments in favour of off-label prescribing (in medicine) or transfer learning (in AI). Where a technology provides some benefit to some group of people, there may be good reasons for extending these benefits. For example, Rachel Silver argues that transfer learning has democratized machine learning, lowered the barriers to entry, thus promoting innovation. Interestingly, there seem to be some good examples of transfer learning in AI for medical purposes.
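As a rough illustration of what transfer learning involves (not the specific work cited here), the sketch below reuses a network pre-trained on ImageNet as a frozen feature extractor and trains only a small new classification head for a new task. The dataset, class count and training call are placeholders.

```python
# Illustrative transfer-learning sketch with Keras: reuse knowledge from one
# domain (ImageNet) for a different purpose. Class count and data are placeholders.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,      # drop the original ImageNet classifier
    weights="imagenet",
    pooling="avg",
)
base.trainable = False      # freeze the transferred layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(3, activation="softmax"),  # hypothetical new task with 3 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(new_domain_images, new_domain_labels, epochs=5)  # data for the new purpose
```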

However, transfer learning in AI raises some ethical concerns. Not only the potential consequences on people affected by the repurposed algorithms, but also potential sources of error. For example, Wang and others identify a potential vulnerability to misclassification attacks.

There are also some questions of knowledge ownership and privacy that were relevant to older modes of knowledge transfer (see for example Baskerville and Dulipovici).



By the way, if you thought the opening quote was a reference to Spiderman, Quote Investigator has traced a version of it to the French Revolution. Other versions from various statesmen including Churchill and Roosevelt.


Richard Baskerville and Alina Dulipovici, The Ethics of Knowledge Transfers and Conversions: Property or Privacy Rights? (HICSS'06: Proceedings of the 39th Annual Hawaii International Conference on System Sciences, 2006)

Katrina Furey and Kirsten Wilkins, Prescribing “Off-Label”: What Should a Physician Disclose? (AMA Journal of Ethics, June 2016)

Marian McHugh, Microsoft makes things personal at this year's Future Decoded (Channel Web, 2 November 2018)

Rachel Silver, The Secret Behind the New AI Spring: Transfer Learning (TDWI, 24 August 2018)

Richard Speed, 'Privacy is a human right': Big cheese Sat-Nad lays out Microsoft's stall at Future Decoded (The Register, 1 November 2018)

Bolun Wang et al, With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning (Proceedings of the 27th USENIX Security Symposium, August 2018)


See also Off-Label (March 2005)

Thursday, October 18, 2018

Why Responsibility by Design now?

Excellent article by @riptari, providing some context for Gartner's current position on ethics and privacy.

Gartner has been talking about digital ethics for a while now - for example, it got a brief mention on the Gartner website last year. But now digital ethics and privacy has been elevated to the Top Ten Strategic Trends, along with (surprise, surprise) Blockchain.

Progress of a sort, says @riptari, as people are increasingly concerned about privacy.

The key point is really the strategic obfuscation of issues that people do in fact care an awful lot about, via the selective and non-transparent application of various behind-the-scenes technologies up to now — as engineers have gone about collecting and using people’s data without telling them how, why and what they’re actually doing with it. 
Therefore, the key issue is about the abuse of trust that has been an inherent and seemingly foundational principle of the application of far too much cutting edge technology up to now. Especially, of course, in the adtech sphere. 
And which, as Gartner now notes, is coming home to roost for the industry — via people’s “growing concern” about what’s being done to them via their data. (For “individuals, organisations and governments” you can really just substitute ‘society’ in general.) 
Technology development done in a vacuum with little or no consideration for societal impacts is therefore itself the catalyst for the accelerated concern about digital ethics and privacy that Gartner is here identifying rising into strategic view.

Over the past year or two, some of the major players have declared ethics policies for data and intelligence, including IBM (January 2017), Microsoft (January 2018) and Google (June 2018). @EricNewcomer reckons we're in a "golden age for hollow corporate statements sold as high-minded ethical treatises".

According to the Magic Sorting Hat, high-minded vision can get organizations into the Ravenclaw or Slytherin quadrants (depending on the sincerity of the intention behind the vision). But to get into the Hufflepuff or Gryffindor quadrants, organizations need the ability to execute. So it's not enough for Gartner simply to lecture organizations on the importance of building trust.

Here we go round the prickly pear
Prickly pear prickly pear
Here we go round the prickly pear
At five o'clock in the morning.




Natasha Lomas (@riptari), Gartner picks digital ethics and privacy as a strategic trend for 2019 (TechCrunch, 16 October 2018)

Sony Shetty, Getting Digital Ethics Right (Gartner, 6 June 2017)


Related posts (with further links)

Data and Intelligence Principles from Major Players (June 2018)
Practical Ethics (June 2018)
Responsibility by Design (June 2018)
What is Responsibility by Design (October 2018)

What is Responsibility by Design

Responsibility by design (RbD) represents a logical extension of Security by Design and Privacy by Design, as I stated in my previous post. But what does that actually mean?

X by design is essentially a form of governance that addresses a specific concern or set of concerns - security, privacy, responsibility or whatever. At a minimum, it needs to answer the following questions (a rough sketch of how the answers might be recorded follows the list).

  • What. A set of concerns that we want to pay attention to, supported by principles, guidelines, best practices, patterns and anti-patterns.
  • Why. A set of positive outcomes that we want to attain and/or a set of negative outcomes that we want to avoid.
  • When. What triggers this governance activity? Does it occur at a fixed point in a standard process or only when specific concerns are raised? Is it embedded in a standard operational or delivery model?
  • For Whom. How are the interests of stakeholders and expert opinions properly considered? To whom should this governance process be visible?
  • Who. Does this governance require specialist input or independent review, or can it usually be done by the designers themselves?
  • How. Does this governance include some degree of formal verification, independent audit or external certification, or is an informal review acceptable? How much documentation is needed?
  • How Much. Design typically involves a trade-off between different requirements, so this is about the weight given to X relative to anything else.
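One way to make such a checklist actionable is to write it down as structured data that can travel with a design project. The sketch below is purely hypothetical - the field names follow the list above, and the privacy example is illustrative, not a recommended policy.

```python
# Hypothetical sketch: an "X by design" governance checklist as structured data,
# so the answers can be recorded and reviewed alongside the design itself.
from dataclasses import dataclass
from typing import List

@dataclass
class GovernanceConcern:
    what: List[str]       # concerns, principles, patterns and anti-patterns
    why: List[str]        # outcomes to attain or avoid
    when: str             # trigger: fixed checkpoint, raised concern, or embedded in delivery
    for_whom: List[str]   # stakeholders whose interests must be considered
    who: str              # designers themselves, specialist input, or independent review
    how: str              # informal review, formal verification, audit or certification
    how_much: str         # weight of this concern relative to other requirements

privacy_by_design = GovernanceConcern(
    what=["data minimisation", "purpose limitation"],
    why=["avoid unnecessary collection and retention of personal data"],
    when="at each design review, and whenever the purpose or context changes",
    for_whom=["data subjects", "regulators"],
    who="designers, with independent review for higher-risk changes",
    how="documented review against the stated principles",
    how_much="privacy weighed against functionality and cost, with reasons recorded",
)
```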


#JustAnEngineer
Check out @katecrawford talking at the Royal Society in London this summer. Just an Engineer.



Related posts

Practical Ethics (June 2018), Responsibility by Design (June 2018)

Friday, July 27, 2018

Standardizing Processes Worldwide

September 2015
Lidl is looking to press ahead with standardizing processes worldwide and chose SAP ERP Retail powered by SAP HANA to do the job (PressBox 2, September 2015)

November 2016
Lidl rolls out SAP for Retail powered by SAP HANA with KPS (Retail Times, 9 November 2016)

July 2018
Lidl stops million-dollar SAP project for inventory management (CIO, in German, 18 July 2018)

Lidl cancels SAP introduction after spending 500M Euro and seven years (An Oracle Executive, via Linked-In, 20 July 2018) 
Lidl software disaster another example of Germany’s digital failure (Handelsblatt Global, 30 July 2018)

I don't have any inside information about this project, but I have seen other large programmes fail because of the challenges of process standardization. When you are spending so much money on the technology, people across the organization may start to think of this as primarily a technology project. Sometimes it is as if the knowledge of how to run the business is no longer grounded in the organization and its culture but (by some form of transference) is located in the software. To be clear, I don't know if this is what happened in this case.

Also to be clear, some organizations have been very successful at process standardization. This is probably more to do with management style and organizational culture than technology choices alone.

Writing in Handelsblatt Global, Florian Kolf and Christof Kerkmann suggest that Lidl's core mentality was "but this is how we always do it". Alexander Posselt refers to Schicksalsgemeinschaften (literally "communities of fate"), a kind of collective wilful blindness. Kolf and Kerkmann also make a point related to the notion of shearing layers.
Altering existing software is like changing a prefab house, IT experts say — you can put the kitchen cupboards in a different place, but when you start moving the walls, there’s no stability.
But at least with a prefab house, it is reasonably clear what counts as Cupboard and what counts as Wall. Whereas with COTS software, people may have widely different perceptions about which elements are flexible and which elements need to be stable. So the IT experts may imagine it's cheaper to change the business process than the software, while the business imagines it's easier and quicker to change the software than the business process.

What will Lidl do now? Apparently it plans to fall back on its old ERP system, at least in the short term. It's hard to imagine that Lidl is going to be in a hurry to burn that amount of cash on another solution straightaway. (Sorry Oracle!) But the frustrations with the old system are surely going to get greater over time, and Lidl can't afford to spend another seven years tinkering around the edges. So what's the answer? Organic planning perhaps?


Thanks to @EnterprisingA for drawing this story to my attention.

Slideshare: Organic Planning (September 2008), Next Generation Enterprise Architecture (September 2011)

Related Posts: SOA and Holism (January 2009), Differentiation and Integration (May 2010), EA Effectiveness and Process Standardization (August 2012), Agile and Wilful Blindness (April 2015).


Updated 31 August 2018

Tuesday, July 24, 2018

Evidence-Based Planning

Everybody's favourite internet-book-retailer-cum-cloud-computing-giant is planning for a wide range of outcomes after Brexit.
"Like any business, we consider a wide range of scenarios in planning discussions so that we’re prepared to continue serving customers and small businesses who count on Amazon, even if those scenarios are very unlikely," a spokesperson said.

However, a Government spokesperson dismissed speculation about civil unrest, saying
"Where is the evidence to suggest that would happen?"

To which one might counter

"Where is the evidence to suggest that wouldn't happen?"



There is a methodological gulf between these two positions. One is planning for things you can't prove won't happen. The other is NOT planning for things you can't prove WILL happen.

The political problem with planning for things that might not happen is that people may criticize you for wasting time and money on something that didn't happen. Whereas if you fail to plan for something that is unlikely to happen, and then it does happen, you can appeal to bad luck. Or the wrong kind of snow.

As with other modes of decision-making, planning simply to avoid censure is not necessarily conducive to good outcomes.


Gareth Corfield, I predict a riot: Amazon UK chief foresees 'civil unrest' for no-deal Brexit (The Register, 23 July 2018)

Rob Davies, No-deal Brexit risks 'civil unrest', warns Amazon's UK boss (The Guardian, 23 July 2018)

Related Post: Decision-Making Models (March 2017)