
Sunday, February 23, 2020

An Alternative to the DIKW Pyramid

My 2012 post on the Co-Production of Data and Knowledge offered a critique of the #DIKW pyramid. When challenged recently to propose an alternative schema, I drew something quickly on the wall, against a past-present-future timeline. Here is a cleaned-up version.





Data is always given from the past – even if only a fraction of a second into the past.

We use our (accumulated) knowledge (or memory) to convert data into information – telling us what is going on right now. Without prior knowledge, we would be unable to do this. As Dave Snowden puts it, knowledge is the means by which we create information out of data. 

We then use this information to make various kinds of judgement into the future. In his book The Art of Judgment, Vickers identifies three types. We predict what will happen if we do nothing, we work out how to achieve what we want to happen, and we put these into an ethical frame.
 
Intelligence is about the smooth flow towards judgement, as well as effective feedback and learning back into the creation of new knowledge, or the revision/reinforcement of old knowledge.

And finally, wisdom is about maintaining a good balance between all of these elements - respecting data and knowledge without being trapped by them.

What the schema above doesn't show are the feedback and learning loops. Dave Snowden invokes the OODA loop, but a more elaborate schema would include many nested loops - double-loop learning and so on - which would make the diagram a lot more complex.
 
And although the schema roughly indicates the relationship between the various concepts, what it doesn't show is the fuzzy boundary between the concepts. I'm really not interested in discussing the exact criteria by which the content of a document can be classified as data or information or knowledge or whatever.

Update (July 2020): I have posted an alternative schema that identifies Three Types of Data. And for an alternative view of Wisdom, see my post on Reframing (February 2009).

Note: As an alternative to the word data (données), Bernard Stiegler (2019, p7) suggests the Husserlian word retention, and associates its opposite (protention) with desire, expectation, volition, will, etc. This lumps data and memory together, and similarly lumps together Vickers' three types of judgement.



Dave Snowden, Sense-making and Path-finding (March 2007)

Bernard Stiegler, The Age of Disruption: Technology and Madness in Computational Capitalism (English translation, Polity Press 2019)

Geoffrey Vickers, The Art of Judgment: A Study of Policy-Making (1965)

Wikipedia: Retention and Protention

Related posts: Wisdom of the Tomato (March 2011), Co-Production of Data and Knowledge (November 2012), Three Types of Data (July 2020)

Wednesday, August 07, 2019

Process Automation and Intelligence

What kinds of automation are there, and is there a natural progression from basic to advanced? Do the terms intelligent automation and cognitive automation actually mean anything useful, or are they merely vendor hype? In this blogpost, I shall attempt an answer.


Robotic Automation

The simplest form of automation is known as robotic automation or robotic process automation (RPA). The word robot (from the Czech word for forced labour, robota) implies a pre-programmed response to a set of incoming events. The incoming events are represented as structured data, and may be held in a traditional database. The RPA tools also include the connectivity and workflow technology to receive incoming data, interrogate databases and drive action, based on a set of rules.
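
A minimal sketch of this pattern in Python (the invoice fields, thresholds and actions below are invented purely for illustration, not taken from any particular RPA product):

    # A minimal illustration of robotic process automation:
    # structured events are matched against pre-programmed rules,
    # and each match triggers a fixed action.

    invoices = [
        {"id": "INV-001", "amount": 420.00, "po_match": True},
        {"id": "INV-002", "amount": 18000.00, "po_match": False},
    ]

    def route_invoice(invoice):
        # The rules are fixed in advance; the robot never deviates from them.
        if invoice["po_match"] and invoice["amount"] < 10000:
            return "auto-approve"
        if not invoice["po_match"]:
            return "return-to-supplier"
        return "escalate-to-finance"

    for invoice in invoices:
        print(invoice["id"], "->", route_invoice(invoice))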




Cognitive Automation

People talk about cognitive technology or cognitive computing, but what exactly does this mean? In its marketing material, IBM uses these terms to describe whatever features of IBM Watson they want to draw our attention to – including adaptability, interactivity and persistence – but IBM’s usage of these terms is not universally accepted.

I understand cognition to be all about perceiving and making sense of the world, and we are now seeing man-made components that can achieve some degree of this; these are sometimes called Cognitive Agents.

Cognitive agents can also be used to detect patterns in vast volumes of structured and unstructured data and interpret their meaning. This is known as Cognitive Insight, which Thomas Davenport and Rajeev Ronanki refer to as “analytics on steroids”. The general form of the cognitive agent is as follows.



Cognitive agents can be wrapped as a service and presented via an API, in which case they are known as Cognitive Services. The major cloud platforms (AWS, Google Cloud, Microsoft Azure) provide a range of these services, including textual sentiment analysis.
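
For example, a sentiment analysis service might be invoked along the following lines. This is only a sketch, assuming the AWS Comprehend detect_sentiment operation and suitably configured credentials; other cloud platforms offer broadly similar APIs.

    # Sketch: calling a cloud-hosted cognitive service for sentiment analysis.
    # Assumes the boto3 library is installed and AWS credentials are configured.
    import boto3

    comprehend = boto3.client("comprehend", region_name="eu-west-1")

    response = comprehend.detect_sentiment(
        Text="The delivery was late and the packaging was damaged.",
        LanguageCode="en",
    )

    # The service returns an overall label plus per-label confidence scores.
    print(response["Sentiment"])       # e.g. NEGATIVE
    print(response["SentimentScore"])  # e.g. {"Positive": ..., "Negative": ...}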

At the current state of the art, cognitive services may be of variable quality. Image recognition may be misled by shadows, and even old-fashioned OCR may struggle to generate meaningful text from poor-resolution images – but of course human cognition is also fallible.


Intelligent Automation

Meanwhile, one of the key characteristics of intelligence is adaptability – being able to respond flexibly to different conditions. Intelligence is developed and sustained by feedback loops – detecting outcomes and adjusting behaviour to achieve goals. Intelligent automation therefore includes a feedback loop, typically involving some kind of machine learning.
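
A toy sketch of such a loop (the threshold-adjustment rule is invented for illustration; a real system would use a proper machine learning model):

    # Toy feedback loop: the automation adjusts its own decision threshold
    # according to the outcomes of its earlier decisions.

    threshold = 0.5  # initial cut-off for flagging a transaction for review

    def decide(risk_score):
        return "review" if risk_score >= threshold else "approve"

    def learn(decision, turned_out_to_be_fraud):
        # Adapt behaviour in the light of observed outcomes.
        global threshold
        if decision == "approve" and turned_out_to_be_fraud:
            threshold = max(0.1, threshold - 0.05)  # we were too lenient
        elif decision == "review" and not turned_out_to_be_fraud:
            threshold = min(0.9, threshold + 0.01)  # we were too strict

    for score, fraud in [(0.4, True), (0.6, False), (0.3, False)]:
        outcome = decide(score)
        learn(outcome, fraud)
        print(score, outcome, round(threshold, 2))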



Complex systems and processes may require multiple feedback loops (Double-Loop or Triple-Loop Learning). 


Edge Computing

If we embed this automation into the Internet of Things, we can use sensors to perform the information gathering, and actuators to carry out the actions.



Closed-Loop Automation

Now what happens if we put all these elements together?



This fits into a more general framework of human-computer intelligence, in which intelligence is broken down into six interoperating capabilities.



I know that some people will disagree with me as to which parts of this framework are called "cognitive" and which parts "intelligent". Ultimately, this is just a matter of semantics. The real point is to understand how all the pieces of cognitive-intelligent automation work together.


The Limits of Machine Intelligence

There are clear limits to what machines can do – but this doesn’t stop us getting them to perform useful work, in collaboration with humans where necessary. (Collaborative robots are sometimes called cobots.) A well-designed collaboration between human and machine can achieve higher levels of productivity and quality than either human or machine alone. Our framework allows us to identify several areas where human abilities and artificial intelligence can usefully combine.

In the area of perception and cognition, there are big differences in the way that humans and machines view things, and therefore significant differences in the kinds of cognitive mistakes they are prone to. Machines may spot or interpret things that humans might miss, and vice versa. There is good evidence for this effect in medical diagnosis, where a collaboration between human medic and AI can often produce higher accuracy than either can achieve alone.

In the area of decision-making, robots can make simple decisions much faster, but may be unreliable with more complex or borderline decisions, so a hybrid “human-in-the-loop” solution may be appropriate. 
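
One common way of implementing this hybrid pattern is to route low-confidence cases to a human reviewer. A sketch (the cut-off value and case data are hypothetical):

    # Sketch of a "human-in-the-loop" decision flow: the machine handles
    # clear-cut cases and refers borderline ones to a human expert.

    CONFIDENCE_CUTOFF = 0.85  # hypothetical threshold, tuned per use case

    def triage(case_id, machine_decision, confidence):
        if confidence >= CONFIDENCE_CUTOFF:
            return f"{case_id}: automated -> {machine_decision}"
        return f"{case_id}: referred to human reviewer (confidence {confidence:.2f})"

    print(triage("C-101", "approve", 0.97))
    print(triage("C-102", "reject", 0.62))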

Decisions that affect real people are subject to particular concern – GDPR specifically regulates any automated decision-making or profiling that is made without human intervention, because of the potential impact on people's rights and freedoms. In such cases, the "human-in-the-loop" solution reduces the perceived privacy risk.

In the area of communication and collaboration, robots can help orchestrate complex interactions between multiple human experts, and allow human observations to be combined with automatic data gathering. Meanwhile, sophisticated chatbots are enabling more complex interactions between people and machines.

Finally there is the core capability of intelligence – learning. Machines learn by processing vast datasets of historical data – but that is also their limitation. So learning may involve fast corrective action by the robot (using machine learning), with a slower cycle of adjustment and recalibration by human operators (such as Data Scientists). This would be an example of Double-Loop learning.
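
In code terms, the two loops might look roughly like this. This is only a sketch; the inner correction and the outer recalibration stand in for whatever the machine learning pipeline and the data scientists actually do.

    # Sketch of double-loop learning in an automated system.
    # Inner loop: the model corrects itself quickly from each new outcome.
    # Outer loop: humans periodically question and reset the model's framing.

    model = {"weight": 1.0, "assumed_error_rate": 0.05}

    def inner_loop(prediction_error):
        # Fast, automatic correction (stands in for online machine learning).
        model["weight"] -= 0.1 * prediction_error

    def outer_loop(observed_error_rate):
        # Slow, human-driven recalibration: revisit the governing assumptions.
        if observed_error_rate > 2 * model["assumed_error_rate"]:
            model["weight"] = 1.0  # retrain or reset the model
            model["assumed_error_rate"] = observed_error_rate

    for error in [0.2, -0.1, 0.4]:
        inner_loop(error)
    print(model)

    outer_loop(observed_error_rate=0.15)  # e.g. a quarterly review by data scientists
    print(model)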


Automation Roadmap

Some of the elements of this automation framework are already fairly well developed, with cost-effective components available from the technology vendors. So there are some modes of automation that are available for rapid deployment. Other elements are technologically immature, and may require a more cautious or experimental approach.

Your roadmap will need to align the growing maturity of your organization with the growing maturity of the technology, exploiting quick wins today while preparing the groundwork to be in a position to take advantage of emerging tools and techniques in the medium term.




Thomas Davenport and Rajeev Ronanki, Artificial Intelligence for the Real World (January–February 2018)

Related posts: Automation Ethics (August 2019), RPA - Real Value or Painful Experimentation? (August 2019)

Tuesday, March 13, 2012

Perspective in Depth

As the systems thinking pioneer Gregory Bateson pointed out, there is an important difference between seeing with one eye and seeing with two. With monocular vision even the much prized Big Picture can be merely a flat and featureless panorama. Binocular vision affords a sense of depth and contour, and enhances the view.

When I wrote recently about the "What the Business Wants" viewpoint, Nick Gall challenged me to state whether I was referring to nominal purpose or defacto purpose (POSIWID). My answer was that the "What the Business Wants" viewpoint gave us a vantage point from which we could view both nominal purposes and defacto purposes at the same time, and appreciate the rich dependencies and contradictions between them.

So what happens when I apply the same thinking to the other five viewpoints? Each viewpoint has a monocular version (simple, linear) and a binocular version (rich, multi-dimensional). Here are a few key differences.





(Columns: Flat / Rounded / Links)

Strategic View (What the Business Wants)
  Flat: Nominal purpose; nominal strategy
  Rounded: Defacto purpose; emergent strategy
  Links: Enterprise POSIWID

Capability View (How the Business Does)
  Flat: Operational capability; hard dependencies; top-down leadership
  Rounded: Sociotechnical capability and competency; soft dependencies; edge leadership

Activity View (What the Business Does)
  Flat: Linear synchronous process (value chain)
  Rounded: Asynchronous collaboration (value network)
  Links: Changing Conceptions of Business Process

Knowledge View (What the Business Knows)
  Flat: Formal information systems
  Rounded: Informal information systems; sensemaking; appreciative systems

Management View (How the Business Thinks)
  Flat: Goal-directed behaviour; management by objectives; single-loop learning; first-order cybernetics (VSM)
  Rounded: Second-order cybernetics (Bateson/Maturana); double-loop and deutero learning
  Links: Organizations as Brains

Organization View (What the Business Is)
  Flat: Enterprise
  Rounded: Business-as-a-Platform; ecosystem


But how should I label the two columns? Should I succumb to the temptation to label the first column "traditional"? Any suggestions?

Saturday, October 30, 2010

Organizations as Brains

This post is based on Chapter 3 of Gareth Morgan's classic book Images of Organization (Sage 1986), which opens with the following question: "Is it possible to design organizations so that they have the capacity to be as flexible, resilient, and inventive as the functioning of a brain?"

To start with, Morgan makes two important distinctions. The first distinction is between two different notions of rationality, and the second involves two contrasting uses of the "brain" metaphor.

Mechanistic or bureaucratic organizations rely on what Morgan (following Karl Mannheim) calls "instrumental rationality", where people are valued for their ability to fit in and contribute to the efficient operation of a predetermined structure. Morgan contrasts this with "substantial rationality", where elements of organization are able to question the appropriateness of what they are doing and to modify their action to take account of new situations. Morgan states that the human brain possesses higher degrees of substantial rationality than any man-made system. (pp78-79)

Morgan also observes a common trend to use the term "brain" metaphorically to refer to a centralized planning or management function within an organization, the brain "of" the firm. Instead, Morgan wants to talk about brain-like capabilities distributed throughout the organization, the brain "as" the firm. Using the brain metaphor in this way leads to two important ideas. Firstly, that organizations are information processing systems, potentially capable of learning to learn. And secondly, that organizations may be holographic systems, in the sense that any part represents and can stand in for the whole. (p 80)

The first of these two ideas, organizations as information processing systems, goes back to the work of James March and Herbert Simon in the 1940s and 1950s. Simon's theory of decision-making leads us to understand organizations as kinds of institutionalized brains that fragment, routinize and bound the decision-making process in order to make it manageable. (p 81) According to this theory, the  organization chart does not merely define a structure of work activity, it also creates a structure of attention, interpretation and decision-making. (p 81) Later organization design theorists such as Jay Galbraith showed how this kind of decision-making structure coped with uncertainty and information overload, either by reducing the need for information or by increasing the capacity to process information. (pp 82-83)

Nowadays, of course, much of this information processing capacity is provided by man-made systems. Writing in the mid 1980s, Morgan could already see the emergence of the virtual organization, embedded not in human activity but in computer networks. If it wasn't already, the organization-as-brain is now indisputably a sociotechnical system. The really big question, Morgan asks, is whether such organizations will also become more intelligent. (p84)

The problem here is that man-made systems (bureaucratic as well as automatic) tend towards instrumental rationality rather than substantial rationality. Such systems can produce goal-directed behaviour under four conditions. (p87)
  1. The capacity to sense, monitor and scan significant aspects of their environment
  2. The ability to relate this information to the operating norms that guide system behaviour
  3. Ability to detect significant deviations from these norms
  4. Ability to initiate corrective action when discrepancies are detected.
But this is merely single-loop learning, whereas true learning-to-learn calls for double-loop learning.  Morgan identifies three factors that inhibit double-loop learning. (pp89-90)
  1. Division of responsibilities causes a fragmentation of knowledge and attention.
  2. Bureaucratic accountability and asymmetric information produce ethical problems such as deception. (This is a form of the principal-agent problem.) 
  3. Organizations also suffer from various forms of collective self-deception, resulting in a gap between "espoused theory" and "theory-in-use".
and he goes on to identify four design principles that may facilitate double-loop learning. (pp 91-95)
  1. Encourage and value openness and reflectivity. Accept error and uncertainty.
  2. Recognize the importance of exploring different viewpoints. 
  3. Avoid imposing structures of action. Allow intelligence and direction to emerge.
  4. Create organizational structures that help to implement these principles.
The flexible, self-organizing capacities of a brain depend on four further design principles, which help to instantiate the notion of the "holographic" organization. (pp 98-103)

  1. Redundancy of function - each individual or team has a broader range of knowledge and skills than is required for the immediate task-at-hand, thus building flexibility into the organization.
  2. Requisite variety - the internal diversity must match the challenges posed by the environment. All elements of an organization should embody critical dimensions of the environment.
  3. Minimal critical specification - allow each system to find its own form.
  4. Learning to learn - use autonomous intelligence and emergent connectivity to find novel and progressive solutions to complex problems.
In conclusion, innovative organizations must be designed as learning systems that place primary emphasis on being open to enquiry and self-criticism. The innovative attitudes and abilities of the whole must be enfolded in the parts. (p 105) Morgan identifies two major obstacles to implementing this ideal.
  1. The realities of power and control. (p 108)
  2. The inertia stemming from existing assumptions and beliefs. (p 109)
Morgan says he favours the brain metaphor because of the fundamental challenge it presents to the bureaucratic mode of organization. (pp 382-3) Writing in the mid 1980s, Morgan noted that computing facilities were often used to increase centralization, and to reinforce bureaucratic principles and top-down hierarchical control, and expressed a hope that this was a consequence of the limited vision of system designers rather than a necessary consequence of the new technologies. "The principles of cybernetics, organizational learning, and holographic self-organization provide valuable guidelines regarding the direction [technology] change might take." (p 108) A quarter of a century later, let's hope we're finally starting to move in the right direction.

Friday, March 26, 2010

Business rules are forking

Another good post from @tetradian on business rules in response to James Taylor’s recent piece reporting Gartner's view that Business rules are king.

Tom agrees with James Taylor about the importance of discipline around business rules, but objects to an interpretation of business rules that goes back to Taylor's namesake Frederick Winslow Taylor.

Tom distinguishes between decision-support (where the decision is made by a collaboration between an automated system and human judgement) and automated decision-making (where there is no space for human judgement). This distinction is not always clear-cut - if the human actants within a complex business process lack the information, intelligence, attention or confidence to overrule the computer's suggested answer, then a decision-support system becomes defacto a decision-making system.

We should also distinguish between simple rules (binary logic) and more complex rules (modal logic, probabilistic logic). As Tom points out, much of the rule industry appears to assume simple composition of simple binary rules, which are inadequate for most interesting business problems (three quarters of his context space mapping framework).
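
The difference can be seen in a toy example (the rules and weights below are invented for illustration): a crisp binary rule either fires or it doesn't, whereas a probabilistic rule merely shifts our degree of belief.

    # A crisp binary rule: the customer either qualifies or not.
    def qualifies_for_credit(income, existing_debt):
        return income > 30000 and existing_debt < 5000

    # A probabilistic "rule": evidence adjusts a degree of belief rather than
    # producing a hard yes/no answer.
    def probability_of_default(income, existing_debt, missed_payments):
        score = 0.05
        if income < 20000:
            score += 0.10
        if existing_debt > 10000:
            score += 0.15
        score += 0.05 * missed_payments
        return min(score, 1.0)

    print(qualifies_for_credit(35000, 2000))        # True or False
    print(probability_of_default(18000, 12000, 2))  # a number between 0 and 1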

Tom makes three important points:

  • We should not assume that the business rules are sufficient, invariant, accurate and complete, especially if they are derived from the people who run the existing processes. Therefore the identification and codification of business-rules generally leaves something to be desired. (One way of putting this is that the Real resists symbolization.)
  • There needs to be a very strong emphasis on rule-maintenance, otherwise placing all the business-rules into an automated system will lead to a fit and forget attitude. (One way of putting this is a demand for double-loop or deutero-learning.)
  • The viability of using automation for decision-making is dependent on the context.


    In his new book Obliquity, John Kay discusses the example of waiting for a bus. According to the timetable, a bus should come every ten minutes. There are two rules that should help you decide whether to wait for the bus or walk - except that these two rules contradict each other.

    Rule One says the longer you wait for the bus, the more likely it is to arrive soon. So if you have waited for nine minutes, it is practically certain to arrive in the next minute.

    Rule Two says that the longer you wait for the bus, the more likely it is that Rule One is incorrect. So if you have waited more than nine minutes for the bus, it is starting to look as if the bus will never come at all.

    If you have waited more than half-an-hour for the bus, then common sense suggests that Rule Two is in force. But as Kay points out, many people (including those who drove banks into bankruptcy) appear incapable of shifting from Rule One to Rule Two.
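
    A toy sketch of the shift between the two rules (the half-hour cut-off is the common-sense figure mentioned above; in practice the judgement is much fuzzier):

        # Toy model of Kay's bus example. Under Rule One, the longer you have
        # waited, the sooner the bus should come; beyond some point, Rule Two
        # says the timetable itself should no longer be believed.

        TIMETABLE_INTERVAL = 10  # minutes between buses, according to the timetable
        LOSS_OF_FAITH = 30       # after this long, assume the timetable is wrong

        def advice(minutes_waited):
            if minutes_waited >= LOSS_OF_FAITH:
                return "Rule Two: the bus is probably not coming - start walking"
            remaining = max(TIMETABLE_INTERVAL - minutes_waited, 1)
            return f"Rule One: keep waiting, bus expected within {remaining} minute(s)"

        for waited in [3, 9, 35]:
            print(waited, "->", advice(waited))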

    In real business situations, there is always a balance between following a rule and questioning the rule. It is not just automated systems that may fail to strike this balance; many people with significant business responsibility lack common sense. This is an important aspect of the context for decision-making.


Update September 2020

Prompted by the current interest in RPA, I've been looking over my old blogposts on Business Rules.

I'm not sure I agree with Tom's point that humans are good at modal logic - there are whole schools of psychotherapy devoted to unpacking common misuses of modal operators, for example reframing "always" and "never" into "sometimes". But individuals may be better than machines (whether computer machines or bureaucratic machines).

James Taylor's post quotes Jim Sinur recommending a focus on high volatility rules - which would seem to tell against the "fit and forget" attitude Tom warns us about. But jumping forward ten years, we find RPA vendors recommending a focus on using bots for the low volatility rules. Meanwhile "mutant algorithms" try to tackle probabilistic decision-making. The issues raised in his post are clearly relevant to this new technology landscape.