
Wednesday, August 07, 2019

Process Automation and Intelligence

What kinds of automation are there, and is there a natural progression from basic to advanced? Do the terms intelligent automation and cognitive automation actually mean anything useful, or are they merely vendor hype? In this blogpost, I shall attempt an answer.


Robotic Automation

The simplest form of automation is known as robotic automation or robotic process automation (RPA). The word robot (from the Czech word for forced labour, robota) implies a pre-programmed response to a set of incoming events. The incoming events are represented as structured data, and may be held in a traditional database. The RPA tools also include the connectivity and workflow technology to receive incoming data, interrogate databases and drive action, based on a set of rules.
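To make the rule-driven pattern concrete, here is a minimal sketch in Python (not any particular RPA product): incoming events arrive as structured records, a lookup table stands in for the database, and a fixed set of rules decides the action. The event fields, the reference data and the actions are all invented for illustration.

```python
# Minimal rule-driven automation sketch (illustrative only, not a real RPA tool).
# Each incoming event is a structured record; rules map conditions to actions.

ACCOUNT_STATUS = {"A-100": "active", "A-200": "suspended"}  # stand-in for a database lookup

def decide(event):
    """Apply pre-programmed rules to one incoming event and return an action."""
    status = ACCOUNT_STATUS.get(event["account"], "unknown")
    if status == "suspended":
        return "route_to_collections"
    if event["amount"] > 10_000:
        return "escalate_for_review"
    return "auto_approve"

events = [
    {"account": "A-100", "amount": 250},
    {"account": "A-100", "amount": 50_000},
    {"account": "A-200", "amount": 90},
]

for e in events:
    print(e, "->", decide(e))
```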




Cognitive Automation

People talk about cognitive technology or cognitive computing, but what exactly does this mean? In its marketing material, IBM uses these terms to describe whatever features of IBM Watson they want to draw our attention to – including adaptability, interactivity and persistence – but IBM’s usage of these terms is not universally accepted.

I understand cognition to be all about perceiving and making sense of the world, and we are now seeing man-made components that can achieve some degree of this, sometimes called Cognitive Agents.

Cognitive agents can also be used to detect patterns in vast volumes of structured and unstructured data and interpret their meaning. This is known as Cognitive Insight, which Thomas Davenport and Rajeev Ronanki refer to as “analytics on steroids”. The general form of the cognitive agent is as follows.



Cognitive agents can be wrapped as a service and presented via an API, in which case they are known as Cognitive Services. The major cloud platforms (AWS, Google Cloud, Microsoft Azure) provide a range of these services, including textual sentiment analysis.
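As a rough illustration of the "wrapped as a service" idea, the sketch below hides a sentiment scorer behind a single function. The keyword-based scorer is only a stand-in for whatever model a cloud provider would actually run, and the function name and response shape are invented for the example, not taken from any vendor's API.

```python
# Sketch of a cognitive capability wrapped behind a simple service interface.
# The keyword scorer is a placeholder for a real sentiment model behind a cloud API.

POSITIVE = {"good", "great", "excellent", "happy"}
NEGATIVE = {"bad", "poor", "terrible", "unhappy"}

def analyse_sentiment(text: str) -> dict:
    """Return a sentiment label and a crude score for one piece of text."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"label": label, "score": score}

print(analyse_sentiment("The service was excellent and the staff were happy"))
print(analyse_sentiment("Terrible delivery, poor packaging"))
```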

At the current state of the art, cognitive services may be of variable quality. Image recognition may be misled by shadows, and even old-fashioned OCR may struggle to generate meaningful text from poor resolution images – but of course human cognition is also fallible.


Intelligent Automation

Meanwhile, one of the key characteristics of intelligence is adaptability – being able to respond flexibly to different conditions. Intelligence is developed and sustained by feedback loops – detecting outcomes and adjusting behaviour to achieve goals. Intelligent automation therefore includes a feedback loop, typically involving some kind of machine learning.
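Here is a toy version of such a feedback loop: the automation keeps one adjustable threshold, observes the outcome of each decision, and nudges the threshold to reduce errors. A real system would use proper machine learning; the update rule and the numbers below are purely illustrative.

```python
# Toy feedback loop: adjust a decision threshold based on observed outcomes.

threshold = 0.5          # initial decision boundary
learning_rate = 0.05     # how strongly each outcome adjusts behaviour

def decide(risk_score: float) -> bool:
    """Approve the case automatically if its risk score is below the threshold."""
    return risk_score < threshold

# (risk_score, turned_out_bad) pairs standing in for outcomes observed later,
# e.g. from cases that were subsequently reviewed by humans
history = [(0.40, False), (0.48, True), (0.30, False), (0.55, False), (0.45, True)]

for risk, turned_out_bad in history:
    approved = decide(risk)
    if approved and turned_out_bad:
        threshold -= learning_rate   # too lenient; tighten the behaviour
    elif not approved and not turned_out_bad:
        threshold += learning_rate   # too strict; relax the behaviour
    print(f"risk={risk:.2f} approved={approved} bad={turned_out_bad} threshold={threshold:.2f}")
```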



Complex systems and processes may require multiple feedback loops (Double-Loop or Triple-Loop Learning). 


Edge Computing

If we embed this automation into the Internet of Things, we can use sensors to perform the information gathering, and actuators to carry out the actions.



Closed-Loop Automation

Now what happens if we put all these elements together?



This fits into a more general framework of human-computer intelligence, in which intelligence is broken down into six interoperating capabilities.



I know that some people will disagree with me as to which parts of this framework are called "cognitive" and which parts "intelligent". Ultimately, this is just a matter of semantics. The real point is to understand how all the pieces of cognitive-intelligent automation work together.


The Limits of Machine Intelligence

There are clear limits to what machines can do – but this doesn’t stop us getting them to perform useful work, in collaboration with humans where necessary. (Collaborative robots are sometimes called cobots.) A well-designed collaboration between human and machine can achieve higher levels of productivity and quality than either human or machine alone. Our framework allows us to identify several areas where human abilities and artificial intelligence can usefully combine.

In the area of perception and cognition, there are big differences in the way that humans and machines view things, and therefore significant differences in the kinds of cognitive mistakes they are prone to. Machines may spot or interpret things that humans might miss, and vice versa. There is good evidence for this effect in medical diagnosis, where a collaboration between human medic and AI can often produce higher accuracy than either can achieve alone.

In the area of decision-making, robots can make simple decisions much faster, but may be unreliable with more complex or borderline decisions, so a hybrid “human-in-the-loop” solution may be appropriate. 
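One common way to arrange the hybrid is a simple confidence test: clear-cut cases are decided automatically, and borderline cases are queued for a person. The sketch below shows only that routing; the threshold and case data are arbitrary.

```python
# Sketch of human-in-the-loop routing: confident decisions are automated,
# borderline ones are queued for a human reviewer.

AUTO_THRESHOLD = 0.90   # arbitrary cut-off for this illustration

human_queue = []

def route(case_id: str, decision: str, confidence: float) -> str:
    """Return the automated decision, or park the case for human review."""
    if confidence >= AUTO_THRESHOLD:
        return f"{case_id}: automated -> {decision}"
    human_queue.append(case_id)
    return f"{case_id}: referred to human reviewer"

print(route("C1", "approve", 0.97))
print(route("C2", "decline", 0.62))
print("awaiting review:", human_queue)
```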

Decisions that affect real people are subject to particular concern – GDPR specifically regulates any automated decision-making or profiling that is made without human intervention, because of the potential impact on people’s rights and freedoms. In such cases, the “human-in-the-loop” solution reduces the perceived privacy risk.

In the area of communication and collaboration, robots can help orchestrate complex interactions between multiple human experts, and allow human observations to be combined with automatic data gathering. Meanwhile, sophisticated chatbots are enabling more complex interactions between people and machines.

Finally there is the core capability of intelligence – learning. Machines learn by processing vast datasets of historical data – but that is also their limitation. So learning may involve fast corrective action by the robot (using machine learning), with a slower cycle of adjustment and recalibration by human operators (such as Data Scientists). This would be an example of Double-Loop learning.


Automation Roadmap

Some of the elements of this automation framework are already fairly well developed, with cost-effective components available from the technology vendors. So there are some modes of automation that are available for rapid deployment. Other elements are technologically immature, and may require a more cautious or experimental approach.

Your roadmap will need to align the growing maturity of your organization with the growing maturity of the technology, exploiting quick wins today while preparing the groundwork to be in a position to take advantage of emerging tools and techniques in the medium term.




Thomas Davenport and Rajeev Ronanki, Artificial Intelligence for the Real World (Harvard Business Review, January–February 2018)

Related posts: Automation Ethics (August 2019), RPA - Real Value or Painful Experimentation? (August 2019)

Friday, July 27, 2018

Standardizing Processes Worldwide

September 2015
Lidl is looking to press ahead with standardizing processes worldwide and chose SAP ERP Retail powered by SAP HANA to do the job (PressBox 2, September 2015)

November 2016
Lidl rolls out SAP for Retail powered by SAP HANA with KPS (Retail Times, 9 November 2016)

July 2018
Lidl stops million-dollar SAP project for inventory management (CIO, in German, 18 July 2018)

Lidl cancels SAP introduction after spending 500M Euro and seven years (An Oracle Executive, via Linked-In, 20 July 2018) 
Lidl software disaster another example of Germany’s digital failure (Handelsblatt Global, 30 July 2018)

I don't have any inside information about this project, but I have seen other large programmes fail because of the challenges of process standardization. When you are spending so much money on the technology, people across the organization may start to think of this as primarily a technology project. Sometimes it is as if the knowledge of how to run the business is no longer grounded in the organization and its culture but (by some form of transference) is located in the software. To be clear, I don't know if this is what happened in this case.

Also to be clear, some organizations have been very successful at process standardization. This is probably more to do with management style and organizational culture than technology choices alone.

Writing in Handelsblatt Global, Florian Kolf and Christof Kerkmann suggest that Lidl's core mentality was "but this is how we always do it". Alexander Posselt refers to Schicksalsgemeinschaften, which can be roughly translated as collective wilful blindness. Kolf and Kerkmann also make a point related to the notion of shearing layers.
Altering existing software is like changing a prefab house, IT experts say — you can put the kitchen cupboards in a different place, but when you start moving the walls, there’s no stability.
But at least with a prefab house, it is reasonably clear what counts as Cupboard and what counts as Wall. Whereas with COTS software, people may have widely different perceptions about which elements are flexible and which elements need to be stable. So the IT experts may imagine it's cheaper to change the business process than the software, while the business imagines it's easier and quicker to change the software than the business process.

What will Lidl do now? Apparently it plans to fall back on its old ERP system, at least in the short term. It's hard to imagine that Lidl is going to be in a hurry to burn that amount of cash on another solution straightaway. (Sorry Oracle!) But the frustrations with the old system are surely going to get greater over time, and Lidl can't afford to spend another seven years tinkering around the edges. So what's the answer? Organic planning perhaps?


Thanks to @EnterprisingA for drawing this story to my attention.

Slideshare: Organic Planning (September 2008), Next Generation Enterprise Architecture (September 2011)

Related Posts: SOA and Holism (January 2009), Differentiation and Integration (May 2010), EA Effectiveness and Process Standardization (August 2012), Agile and Wilful Blindness (April 2015).


Updated 31 August 2018

Friday, September 07, 2012

The Meaning of Iteration

There is a curious ambiguity about the word "iteration". Sometimes it means doing exactly the same thing, over and over again; sometimes it means doing something slightly different each time.

When a process is drawn as a loop, we need to understand what exactly this means. Which aspects of the process are the same for each execution of the process, and which aspects are different? What is the nature of the feedback, allowing each iteration to improve on previous iterations, and what is the scope for learning and development?

For example, a business process architecture may include a product development cycle. We usually understand this to mean not that the products themselves are recycled, but that there is some accumulation of knowledge and experience (trial and error, learning by doing) that allows each product and each product-related activity to learn from and improve upon previous iterations.

Therefore a process model that is drawn as a loop or cycle cannot be interpreted in the same way as a process model that is drawn as a simple production line or value chain, because there is something else important going on. This is one of the reasons I distinguish the Cybernetic View (which specifically looks at feedback and its effects) from the Activity View (which looks at the work itself). The Cybernetic View allows the architect to pay attention to the effectiveness and efficiency of the feedback, which is not the same thing as the effectiveness and efficiency of the underlying activities but is at a different logical level.

One process that is attracting a lot of interest at the moment is the commissioning process. Commissioning is an important component of a healthcare system, and I also heard Jennifer Saunders recently talking about commissioning at the BBC. I wanted to see how people were depicting the commissioning process, so I did an internet search for images associated with the term "commissioning". The top six diagrams (ignoring one diagram that wasn't really about commissioning but about paying commission) all showed commissioning as some kind of cycle.


[Images: Bristol Compact commissioning cycle diagram, a collaborative commissioning cycle, and other graphical representations of the commissioning cycle]


This selection of diagrams reflects a common agreement that commissioning can be thought of as a cycle. However, although the wide variety of diagram styles might be regarded as merely a fashion statement, it could be a sign of a more fundamental methodological issue. It is not clear that all these sources share the same understanding of what it means for commissioning to be regarded as a cycle. What do these diagrams tell us about how the commissioning process develops and evolves and hopefully improves over time? Are there different categories of improvement, with different cycle times?


There is also the question of responsibility. Are the same people responsible for both executing and improving the commissioning process, or does improvement call for a different kind of distributed intelligence?

So there are lots of important and interesting questions here, which a traditional process model doesn't help much with.

Thursday, August 23, 2012

EA Effectiveness and Process Standardization

@tdegennaro of @Forrester spotted an interesting correlation in his company's 2009 survey. "Survey respondents who reported a high degree of business and IT process standardization also reported that EA was more effective and more influential within the organization."

DeGennaro suggests "there must be something that standardization does to an organization — a window or door that it creates — that enables IT functions such as EA to get better at what they do".

Is EA Effectiveness At The Mercy Of Process Standardization? (July 2010)

But there is another possible explanation for this correlation. According to DeGennaro, Forrester has convinced most of its clients that process standardization is a keystone to effectiveness across all areas of IT. If many EA practitioners are single-minded about process standardization, it is hardly surprising if these practitioners will be less successful in those organizations where process standardization isn't appropriate. Ross, Weill and Robertson ("Enterprise Architecture as Strategy") are among those who argue against a one-size-fits-all approach to EA. See my post on Differentiation and Integration (May 2010).

Correlations arising from surveys need to be carefully interpreted, to avoid circular reasoning masquerading as objective fact. I am always wary of industry analysts who are over-dependent on opinion surveys from their own customers and think that something must be true because their customers have swallowed it.

Friday, May 15, 2009

Customer Orientation

@adamshostack objects to something I quoted from Clayton Christensen: Understanding your customer isn't enough.

What Christensen is saying, and I absolutely agree with this, is that understanding the customer is the wrong level of analysis (granularity). What we need to focus on is the job the customers are trying to get done when they use your product or service.

Adam thinks this is "fascinating & wrong. W/o understanding customer orientation, you can't from job" (via Twitter).

I think pharmaceutical sales and marketing provides an excellent example of why Christensen is correct. As I have explained before, a typical twentieth-century business model involved drug company representatives making personal visits to doctors. To support this kind of model, you collected lots of information about the doctor - not just professional (size of practice, specialization, and so on) but also personal (ethnic group, sexual preference, ages of children, golf or squash). The drug company employed a range of representatives of different types, and selected the appropriate rep to visit a white gay squash-playing doctor.

The trouble with this business model is that it is not aligned with the products and services of the drug company. Doctors increasingly regard these kind of sales visits as a complete waste of time; even if they accept the hospitality of the drug companies, they have learned to be resistant to the sales messages.

What the drug company needs to focus on is how to provide more value to the doctor. For this purpose, you don't need information about the doctor, you need information about what the doctor actually does. In particular, you have to understand the decision process in which the doctor thinks about prescribing particular drugs to a particular patient, as well as the collaborative process in which the doctor and other healthcare practitioners discuss alternative courses of treatment with a patient.

Understanding the doctor is missing the point. The opportunity to create business value comes from understanding the work of the doctor. Different doctors may have different styles and habits, and may approach similar cases in different ways (although this is increasingly constrained by procedure and protocol imposed by health authorities or health insurance companies). That's the right level of analysis.

One technique we use for this analysis is business process modelling. But we're not interested in the company's own processes, we're interested in the customers' processes, and in the variations in these processes. Business survival depends on providing products and services that add value to these customer processes, so it's the customer processes we need to understand. Not the customer as a passive and indivisible entity (as in many CRM systems) but as an active bundle of behaviour and capability and purpose.


See also Clayton Christensen on jobs needing to get done (via Anders Sundelin).

Related post

Misunderstanding CRM and big data (November 2014)

Update: I have removed links to tweets by Chris Lawer and Gunnar Peterson, who were part of the original conversation, because the URLs in their tweets now point to irrelevant and NSFW content.

Sunday, May 10, 2009

Semi-structured processes 2

@jonerp (Jon Reed) kindly sends me a link to an article on the SAP community network website, Value Scenarios and the BPM continuum, discussing the requirement for semi-structured processes. 

 

The author of the article, one Dan Woods of Evolved Media, places "processes automated and supported by BPM" at the "loosely-structured" point of the continuum, but he also talks about "more advanced and structured process design using BPMN and other such techniques"; I don't fully understand his taxonomy of processes, but he certainly seems to understand the need to support semi-structured processes as well as the fully structured processes described by Ann Rosenberg. 

Dan's starting point is the SAP book written by Ann Rosenberg and others, but his own analysis goes a lot further. He picks up on a reference to the Geoffrey Moore core/context distinction (which CBDI Forum members will be familiar with - see link below), and takes this distinction to its logical conclusion. 

Dan also makes clear his opinion that SAP doesn't currently support this continuum. "SAP's integrated story about BPM, SOA, and Value Scenarios will have to be extended to include more unstructured collaboration." Dan is not an official spokesman for SAP, although it's worth noting that his opinion article is published on an SAP website.

 


 

CBDI Forum, SOA Fundamentals (2009) - key extract from page 222

Richard Veryard, Towards the Agile Business Process (CBDI Journal July/August 2007) 

Related posts: Activity-Based Computing (March 2006), Semi-Structured Processes (April 2009)

Tuesday, April 28, 2009

Semi-structured processes

A lot of the focus in the BPM world is on highly repeatable, controlled processes. These processes follow a strict management lifecycle.

  1. analysed and designed by process experts on behalf of process owners,
  2. implemented by process engineers using sophisticated BPM tools and workflow engines
  3. performed by armies of obedient employees or compliant customers
  4. monitored and calibrated by statisticians using sophisticated analytical tools
This is essentially the BPM lifecycle presented by Ann Rosenberg of SAP at the London TOGAF conference today.

In this lifecycle, we can achieve process agility and innovation through rapid redesign and recalibration. If the design and control phases can be made more efficient and effective (for example by being supported by tools), then the set-up costs are reduced, and this improves the economics of scope as well as the economics of scale.

At the other extreme, there may be some processes in an enterprise that are completely ad hoc, with no repeatable structure. For example, a decision to merge with a competitor may involve a number of highly complex steps (investigation, evaluation, due diligence and so on) but without any systematic process structure.

Is there something in between these two extremes? Well there certainly should be. In my previous post on Activity-Based Computing, I mentioned four important examples.

  • Sales. The sales team produces a detailed proposal in response to a complex request from a customer or prospect.
  • Customer Complaints. Each complaint follows a different path, which depends on the nature and content of the complaint.
  • Problem solving.
  • Medical intervention.
All of these processes are semi-structured. The people doing these jobs are given some degrees of freedom to customize the process to their own understanding of the requirements of a particular customer or patient or situation. However, they are still accountable and their performance is measured; and therefore there must be some underlying process structure.
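One way to picture this combination of structure and freedom is a state model that constrains which steps may follow which, while leaving the choice (and repetition) of steps to the person handling the case. The sketch below uses an invented complaint-handling example; the states and transitions are not taken from any real system.

```python
# Sketch of a semi-structured process: the structure says which steps are allowed
# next, but the participant chooses the actual path (complaint-handling example).

ALLOWED_NEXT = {
    "received":           {"investigate", "request_more_info"},
    "request_more_info":  {"investigate"},
    "investigate":        {"investigate", "propose_resolution"},   # may loop
    "propose_resolution": {"close", "investigate"},
    "close":              set(),
}

def perform(history, step):
    """Record a step if the process structure allows it from the current state."""
    current = history[-1]
    if step not in ALLOWED_NEXT[current]:
        raise ValueError(f"{step!r} not allowed after {current!r}")
    history.append(step)
    return history

case = ["received"]
for step in ["investigate", "investigate", "propose_resolution", "close"]:
    perform(case, step)
print(case)   # the path chosen for this particular complaint
```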

Who is doing the process management? All the participants in the process are participating in the management of the process, and should have access to all the management tools (not just BPM but also BAM and Business Intelligence). Then the job of the process architect is not to lay down the process but to create a collaborative process space in which everyone can be productive and appropriately innovative, with the right balance of variation and standardization, freedom and control.

So the big question for the process architect is not optimizing one process, or even optimizing lots of different processes, but managing the process continuum. In this way, the process architect becomes the enterprise architect.

Update: see Semi-structured processes 2

Thursday, April 23, 2009

Decision-Making Services

James Taylor (ebizQ) explains why we need a separation between process and decision.

One of the reasons for this separation is that decisions are subject to business policies or rules, and such policies are often subject to change over time and/or variation between organizational units. If you code the decisions into process logic, perhaps using BPML to generate process services, then you will have to modify the code or BPML script whenever the policy changes, and implement different versions for different parts of the organization.

But there are some more flexible ways of architecting decision services.
  • General-purpose rule engine, driven by a standard policy language.
  • Clusters of related business decisions (for example planning and scheduling) can be wrapped into a capability service.
The advantage of these approaches is that they help to separate the generic parts of the process from the specific parts, and thus increase reuse and sharing of the generic parts, while increasing accuracy and differentiation in the specific parts.
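A minimal sketch of the separation, assuming a simple discount policy invented for the example: the process logic calls a decision service through a stable interface, and the externalized rules can vary by organizational unit and change without the process code being touched.

```python
# Sketch of separating process logic from decision logic.
# The process calls a decision service; the rules can change (or vary by business
# unit) without the process code being modified.

DISCOUNT_POLICY = {          # externalized rules, editable without changing the process
    "UK": {"gold": 0.15, "standard": 0.05},
    "DE": {"gold": 0.10, "standard": 0.00},
}

def decide_discount(unit: str, customer_tier: str) -> float:
    """Decision service: policy lookup, isolated behind a stable interface."""
    return DISCOUNT_POLICY[unit][customer_tier]

def quote_order(unit: str, customer_tier: str, list_price: float) -> float:
    """Process logic: unchanged however often the discount policy changes."""
    discount = decide_discount(unit, customer_tier)
    return round(list_price * (1 - discount), 2)

print(quote_order("UK", "gold", 100.0))      # 85.0
print(quote_order("DE", "standard", 100.0))  # 100.0
```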

In his post on business processes and decision services, James discusses some of the technical issues that arise with this separation. It is certainly true that some technical optimization may be desirable in some situations, and this may mean implementing and deploying the process and decision services together as a fairly tightly coupled unit of software (which CBDI SAE calls an Automation Unit). But at the specification level, the logical services should still remain distinct.

For more examples of policy separation see my articles on Business Modelling for the CBDI Journal, which are archived on the Everware-CBDI website http://everware-cbdi.com/cbdi-journal-archive.

Tuesday, November 11, 2008

Post Before Processing

In Talk to the Hand, Saul Caganoff describes his experience of errors when entering his timesheet data into one of those time-recording systems many of us have to use. He goes on to draw some general lessons about error-handling in business process management (BPM). In Saul's account, this might sometimes necessitate suspending a business rule.

My own view of the problem starts further back - I think it stems from an incorrect conceptual model. Why should your perfectly reasonable data get labelled as error or invalid just because it is inconsistent with your project manager's data? This happens in a lot of old bureaucratic systems because they are designed on the implicit (hierarchical, top-down) assumption that the manager (or systems designer) is always right and the worker (or data entry clerk) is always the one that gets things wrong. It's also easier for the computer system to reject the new data items, rather than go back and question items (such as reference data) that have already been accepted into the database.

I prefer to label such inconsistencies as anomalies, because that doesn't imply anyone in particular being at fault.

It would be crazy to have a business rule saying that anomalies are not allowed. Anomalies happen. What makes sense is to have a business rule saying how and when anomalies are recognized (i.e. what counts as an anomaly) and resolved (i.e. what options are available to whom).

Then you never have to suspend the rule. It is just a different, more intelligent kind of rule.

One of my earliest experiences of systems analysis was designing order processing and book-keeping systems. When I visited the accounts department, I saw people with desks stacked with piles of paper. It turned out that these stacks were the transactions that the old computer system wouldn't accept, so the accounts clerks had developed a secondary manual system for keeping track of all these invalid transactions until they could be corrected and entered.

According to the original system designer, the book-keeping process had been successfully automated. But what had been automated was over 90% of the transactions - representing less than 20% of the time and effort. So I said, why don't we build a computer system that supports the work that the accounts clerks actually do? Let them put all these dodgy transactions into the database and then sort them out later.

But I was very junior and didn't know how things were done. And of course the accounts clerks had even less status than I did. The high priests who commanded the database didn't want mere users putting dodgy data in, so it didn't happen.


Many years later, I came across the concept of Post Before Processing, especially in military or medical systems. If you are trying to load or unload an airplane in a hostile environment, or trying to save the life of a patient, you are not going to devote much time or effort to getting the paperwork correct. So all sorts of incomplete and inaccurate data get shoved quickly into the computer, and then sorted out later. These systems are designed on the principle that it is better to have some data, however incomplete or inaccurate, than none at all. This was a key element of the DoD Net-Centric Data Strategy (2003).
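A minimal sketch of the posting-first idea: every record is accepted immediately, inconsistencies are flagged as anomalies rather than rejected, and a separate resolution step works through the flags later. The field names and the single consistency check are invented.

```python
# Sketch of "post before processing": accept every record immediately, flag
# anomalies for later resolution instead of rejecting the data at entry.

ledger = []          # everything that was posted, consistent or not
anomalies = []       # entries that need later resolution

APPROVED_PROJECTS = {"P-001", "P-002"}   # stand-in reference data

def post(entry: dict) -> None:
    ledger.append(entry)
    if entry["project"] not in APPROVED_PROJECTS:
        anomalies.append(entry)          # recognized as an anomaly, not rejected

post({"project": "P-001", "hours": 6})
post({"project": "P-999", "hours": 8})   # inconsistent with reference data, still kept

print("posted:", len(ledger), "anomalies awaiting resolution:", len(anomalies))
```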

The Post Before Processing paradigm also applies to intelligence. For example, here is a US Department of Defense ruling on the sharing of intelligence data.
In the past, intelligence producers and others have held information pending greater completeness and further interpretative processing by analysts. This approach denies users the opportunity to apply their own context to data, interpret it, and act early on to clarify and/or respond. Information producers, particularly those at large central facilities, cannot know even a small percentage of potential users' knowledge (some of which may exceed that held by a center) or circumstances (some of which may be dangerous in the extreme). Accordingly, it should be the policy of DoD organizations to publish data assets at the first possible moment after acquiring them, and to follow-up initial publications with amplification as available. Net-Centric Enterprise Services Technical Guide


See also

Saul Caganoff, Talk to the Hand (11 November 2008), Progressive Data Constraints (21 November 2008)

Jeff Jonas, Introducing the concept of network-centric warfare and post before processing (21 January 2006), The Next Generation of Network-Centric Warfare: Process at Posting or Post at Processing (Same thing) (31 January 2007)

Related Post: Progressive Design Constraints (November 2008)

Saturday, January 19, 2008

Technological Perfecta

There are several technologies that might work well together, indeed they certainly should work well together. At various times in this blog, I've talked about the potential synergies between (i) SOA and Business Intelligence, (ii) SOA and Business Process Management, and (iii) SOA/EDA and Complex Event Processing. The third of these synergies is currently getting some attention, following some enthusiastic remarks by Jerry Cuomo, WebSphere CTO (see Rich Seeley and Joe McKendrick).

All four together would be amazing, but a lot of organizations aren't ready for this. Moreover each technology has its own set of tools and platforms, and its own set of disciplines and disciples.

In Betting on the SOA Horse, Tim Bass describes this potential synergy using the language of gambling - exacta and trifecta. I'm not very familiar with this language, but what I think this means is that you only win the bet if the horses pass the post in the correct sequence. Tim writes:
"Betting on horses is a risky business. Exactas and trifecta have enormous payouts, but the odds are remote."

In On Trifecta and Event Processing, Opher Etzion disagrees with this metaphor. He argues that these technologies are mutually independent (he calls them "orthogonal"). If he is correct, this would have three consequences: (i) flexibility of deployment - you can implement and exploit them in any sequence; (ii) flexibility of benefit - you can get business benefits from any of them in isolation, and then additional benefits if and when they are all deployed together; and therefore (iii) considerably lower risk.

My position on this is closer to Opher's. I think there are some mutual dependencies between these technologies, but they are what I call soft dependencies. P has a hard dependency on Q if Q is necessary for P, whereas P has a soft dependency on Q if Q is merely desirable for P.

In planning a technology change programme, it is very useful to recognize soft dependencies, because it permits some deconfliction between different elements. Deconfliction here means forced decoupling, understanding that the results may be sub-optimal (at least initially), but accepting this in the interests of getting things done.

In a perfect world, we might want to deploy all four technologies together, or in a precisely defined sequence. But pragmatism suggests we don't bet on the impossible or highly improbable. The challenge for the technology architect is to organize a technology portfolio to get the best balance of risk and reward. This is not primarily about comparing the features of different products, but about understanding the fundamental structural principles that allow these technologies to be deployed in a flexible and efficient manner.

Discussion continues: Technological Perfecta 2

Thursday, May 31, 2007

Business Process

An interesting debate behind the scenes at Wikipedia about the nature of the Business Process [article, discussion] involving Keith Swenson (Fujitsu and WfMC), Kai Simon (Gartner) and myself. 

Kai had contributed a definition of Business Process to Wikipedia: "A business process is a set of linked activities that create value by transforming an input into a more valuable output." 

Keith objected that this definition assumed an "information system" view of the world, and was not necessarily valid for office work. How do activities such as document approval or answering the telephone fit into this definition? What happens if two people approve a document for different purposes? 

Document approval and answering the telephone are presumably useful to the business - otherwise why would the business employ people to perform them? Perhaps they create value in some way - an approved document is worth more than an unapproved document; an answered telephone more than an unanswered telephone. 

But Keith is correct to point out the potential complexities of this way of viewing business process. If there are n people who may approve a document, then there are potentially 2-to-the-power-n approval states of the document. (In this case it might be easier to regard the approval instead as a separate entity.) And once the document has been approved (by manager A), does a second act of approval (by manager B) confer any additional value? 

Understanding the business process as a chain of value-adding activities is a popular and useful view, often attributed to Michael Porter and not only found in IT. But this view (modelling perspective) has some limitations, and it is certainly not the only way of understanding the business process. It is particularly problematic with management processes, such as command and control or strategic planning, whose value depends on some indirect calculation. 

For developing service-oriented architectures and systems, it is often useful to think of the business process more abstractly in terms of events and capabilities, rather than a particular value chain. Even within IT this view provides a useful alternative to the value-chain perspective, and allows us to analyse the requirements for management systems (including business intelligence) as well as transaction systems.

Friday, February 16, 2007

BPM and SOA 4

An interesting discussion this week with some of the top BPM people at IBM, on the nature of the relationship between BPM and SOA.

One of the critical elements of this relationship is the way the business case for BPM is connected with the business case for SOA. One of the ways to get the business interested in SOA is to promise BPM-related benefits.

But why do we need SOA to deliver BPM benefits? It may be technically possible to deliver many of these BPM benefits without using full SOA. Indeed, there may be people who claim to have produced similar benefits in the past, before SOA technology was available. So the business case for SOA in this context reduces to a technical argument about the greater efficiency or flexibility of SOA in delivering BPM benefits.

If an organization has a broken business process, which requires a single one-off effort to fix, then the case for using SOA is based on a simple comparison of two alternative development projects. But if an organization has a volatile business process, which requires constant modification, then the case for using SOA is based on an expectation of frequent modification. We would normally expect the business case for SOA to become stronger as the volatility increases.

We can use a simple spreadsheet model to compare costs and timescales and risks between two technical alternatives. We can produce graphs that show how the cost-benefit curve shifts as we vary the assumptions about volatility.

But the spreadsheet models, useful though they are, only tell half the story. The critical element of the business case is the architectural dependency between BPM and SOA. How much does BPM need SOA? In order to get the spreadsheet to calculate the synergy between BPM and SOA, we need to be able to quantify the architectural dependency - pick a number that is greater than zero and less than one.

But where does the number come from? There are three possibilities.
  1. Guesswork. If you don't know any better, one number is as good as another.
  2. Statistics. If you have a large quantity of historical data, you may be able to demonstrate some degree of correlation between SOA expenditure and BPM expenditure.
  3. Algebraic reasoning. Infer the level of synergy from the structural characteristics of the architecture.
The business case as a whole is extremely sensitive to this number, and we need to develop better ways of determining it.
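A back-of-the-envelope version of such a spreadsheet model might look like the sketch below: it compares the cost of delivering a stream of process changes with and without SOA, with the architectural dependency expressed as a synergy factor between zero and one. All the figures are placeholders; the point is only how sensitive the comparison is to that one number.

```python
# Back-of-the-envelope business case: how the BPM-on-SOA comparison shifts with
# the assumed synergy factor (all monetary figures are placeholders).

def total_cost(changes_per_year, years, change_cost, setup_cost, synergy=0.0):
    """Setup cost plus the cost of every process change, reduced by synergy."""
    per_change = change_cost * (1 - synergy)
    return setup_cost + changes_per_year * years * per_change

for synergy in (0.1, 0.3, 0.5, 0.7):
    without_soa = total_cost(changes_per_year=6, years=3, change_cost=100, setup_cost=0)
    with_soa    = total_cost(changes_per_year=6, years=3, change_cost=100,
                             setup_cost=400, synergy=synergy)
    print(f"synergy={synergy:.1f}  without SOA={without_soa}  with SOA={with_soa}")
```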


Monday, July 10, 2006

BPM and SOA 3

In my previous post BPM and SOA 2, I identified two propositions.
  1. BPM and SOA are not tightly coupled. It is possible to have BPM without SOA; and it is possible to have SOA without BPM.
  2. There is synergy between BPM and SOA. There are some potential advantages from combining BPM with SOA, which are not available from BPM alone or SOA alone.
One way of analysing these two propositions is to identify the dependencies between BPM capabilities and SOA capabilities, expressed within the respective capability maturity models.

BPM Maturity and SOA Maturity

What this picture shows is that there is a fair amount of latitude - an organization may choose to push ahead with SOA first or push ahead with BPM first. But ultimately, the BPM adoption programme needs to converge with the SOA adoption programme.

[Sources for SOA Capability/Maturity]
CBDI Forum SOA Adoption Roadmap

[Selected discussion of BPM Capability/Maturity]
Peter Abrahams (Bloor) quoting Howard Smith (CSC), Jesper Joergensen (BEA), Jim Sinur (Gartner) with comments by Sandy Kemsley and Bruce Silver.

Previous Posts: BPM and SOA, BPM and SOA 2, SOA Maturity Models

Tuesday, July 04, 2006

BPM and SOA 2

Lots of interesting debate about the relationship between BPM and SOA: Phil Gilbert (Lombardi), Jesper Joergensen (BEA), Steve Jones, Mike Rosen (BPTrends, pdf), (via Sandy Kemsley). And loads by Bruce Silver: Phoney War, Still the Exception, Will Deliver. See also Sandy Kemsley's BPM lens.

So what is the relationship between BPM and SOA? In a series of articles entitled "BPM inside the belly of the SOA whale" (1, 2, 3), Colleen Frye of SearchWebServices collects a range of ideas from different experts:
  • BPM is putting a business face on SOA
  • BPM and SOA as ... two sides of the same coin
  • BPM and SOA are ... orthogonal
  • BPM sits on top
  • BPM is one of the main entry points for the business side of SOA
  • BPM ... hand-in-hand with SOA
Do any of these metaphors have a precise meaning? If so, do they contradict one another? And how can we tell? (This is a prevailing problem in the IT industry, where experts commonly try to explain abstract structural relationships using vague metaphors. I confess: I have done it myself sometimes.)

And what about the "Inside the Whale" metaphor used in the title of the articles? This metaphor is commonly associated with George Orwell's essay Inside the Whale, where it was used to denote (and criticize) a disengagement or decoupling between the writer (inside) and the political environment (outside). Salman Rushdie wrote an article called Outside the Whale, which called for a reengagement (alignment, tight coupling) between the writer and the political environment.

The separation between Inside/Outside (or Private/Public) is of course a crucial element of the component/service story. The belly of the whale is a metaphor for encapsulation. In Frye's version, BPM is on the inside and SOA is on the outside. From a business perspective we might have expected this to be the other way around. (There is a story here about business/IT alignment that I have blogged about separately. Update: URL added.)

So I can identify two propositions that I think many people would agree with.
  1. BPM and SOA are not tightly coupled. It is possible to have BPM without SOA; and it is possible to have SOA without BPM.
  2. There is synergy between BPM and SOA. There are some potential advantages from combining BPM with SOA, which are not available from BPM alone or SOA alone.
Previous Post: BPM and SOA

Tuesday, March 07, 2006

BPM and SOA

At the BCS Business Information Systems SIG last night to hear a talk by Howard Smith of CSC. Meanwhile, I've been listening to an ACM Queue podcast with Mike Vizard and Edwin Khodabakchian (formerly Collaxa, now Oracle). 

I have spoken to Edwin and his (present and former) colleagues in the past. I hadn't met Howard before, but I was already familiar with his work and I am in contact with some of his associates in the BPMI world. 

The following post is something I've been thinking about for a while. I am happy to acknowledge the influence of Howard and Edwin, who both know a lot more about the BPM side than I do. But there are some things I'm saying here that I haven't heard either of them say, so blame me (not them) if you don't understand or agree.


What is the relationship between Business Process Management and SOA?

A good place to start is with the idea that SOA gives you the decomposition of functionality into (standardized) services, while BPM gives you the assembly of these services into (flexible) business solutions.
  1. We can decouple the specification of the process from the specification of the units of work making up the process. (In my Business Modelling for SOA material, I describe this as separating WHAT from HOW.)
  2. We can put the specification of the process into a standard process language. (The preferred choice here is BPEL, for various reasons, although there are some known issues.)
  3. We can automate and distribute the assembly of the process. (With appropriate tools, the assembly of the process or the selection of an appropriate process variant can be done either in real-time or dynamically at the point of need.)
  4. We can then take power to the edge - delegating process management to the people working at the edge of the organization.
  5. Ultimately, not just the process but the process management becomes event-driven. This gives us requisite variety at the process level.
  6. Process architects now need to work at a higher level of abstraction. Instead of specifying a standard one-size-fits-all process, they need to produce what we might call a metaprocess - frameworks and patterns that support process management. They need to pay attention to architectural process issues, such as coordination and interoperability risk.
These ideas are probably some way from mainstream adoption; but there seem to be enough early adopters experimenting with some of these ideas to keep the BPM market moving forward.
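The first two points can be illustrated by keeping the process definition as data, separate from the services that do the work. A real implementation would express the process in BPEL or another process language and execute it with an engine; the plain Python sketch below, with invented service names, shows only the separation of WHAT from HOW.

```python
# Sketch of separating the process specification (WHAT) from the units of work (HOW).
# In practice the specification would be BPEL or similar, executed by an engine;
# here it is just a list of step names interpreted by a trivial loop.

def check_credit(order):
    order["credit_ok"] = True
    return order

def reserve_stock(order):
    order["reserved"] = True
    return order

def confirm_order(order):
    order["confirmed"] = True
    return order

SERVICES = {"check_credit": check_credit,
            "reserve_stock": reserve_stock,
            "confirm_order": confirm_order}

# The process specification is data, not code, so it can be varied or replaced
# without touching the services above.
STANDARD_PROCESS = ["check_credit", "reserve_stock", "confirm_order"]

def run(process_spec, order):
    for step in process_spec:
        order = SERVICES[step](order)
    return order

print(run(STANDARD_PROCESS, {"id": "O-42"}))
```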

Business Process Innovation

The second part of Howard's talk was about Business Process Innovation. If we imagine that BPM plus SOA gives us a reasonable way of automating the business process, then the next challenge is that of automating business process change. 

Howard's working hypothesis is that business process change is driven by problems. CSC has adopted a Russian problem-solving methodology called TRIZ, and has been working on a process-oriented version of TRIZ called P-TRIZ. 

Most of the audience (including myself) were not familiar with TRIZ, and there was a lively discussion about its strengths and weaknesses. My first impression is that TRIZ would not be suitable for all types of business problem, as it seems to lack adequate constructs for modelling complex dynamic systems. However, it does have some interesting characteristics that may make it particularly amenable to automation. Firstly, there is a systematic approach to problem decomposition. Secondly, there is a complete enumeration of solution strategies, and a useful collection of abstract solution patterns. 

Howard showed us a stand-alone prototype tool that supported a simple version of TRIZ. What would be really interesting (and this is presumably CSC's plan) would be to have this functionality integrated into the BPM tools. Then we would be able to have local process-oriented problem-solving, with process architects able to concentrate on the emergent properties of the whole. 

This looks like a significant contribution to the overall BPM/SOA vision outlined above.

Thursday, September 15, 2005

SOA Chaos?

Britton Manasco (ZDNet) picks up a story from

"The advantage is the disadvantage. You can break business processes down to their most granular, logical elements; focus your development efforts on where you can provide the most differentiation; and let someone else handle the overflow or the low-profit transactions. But you open yourself up to a management challenge the likes of which you've never seen." 

Britton adds: "SOA and BPO (business process outsourcing) aren't a natural match, apparently. Some think the two together can even unleash the forces of chaos." 

But the challenge of SOA has always been a dual one: not just the decomposition of business processes into separate (loosely coupled, reusable and outsourceable) services, but also the composition of these services back into a coherent (integrated) business process. In the project described by Howard, at least in the first iteration, it looks as if lots of effort went into the decomposition, but not enough thought went into the composition. That sounds like a recipe for disaster whether you mix in BPO or not. 

So is BPO irrelevant to this story? Not quite. With BPO it's not so easy to fudge the service-orientation, and the composition failure is more obvious. Perhaps that's not such a bad thing. 

And Howard's story has a happy ending. The company was able to redesign the way messages were handled to ensure that the system-management application only logged the most critical ones. In other words, the service decomposition apparently performed okay once they got the composition right; the solution didn't perform properly at first but could be adapted until it did. Perhaps this is evidence that SOA works after all. 

 

Related posts: SOA Chaos 2, SOA Stupidity

Tuesday, June 07, 2005

Workflow

Nick Malik (Microsoft) asks for feedback on a workflow component he is working on. Are groups responsible for business processes?

If you are assuming that one instance of the workflow engine handles the entire business process, then perhaps you can afford to be prescriptive. But if you want to allow federation or interoperability between workflow engines, then you probably want to allow some flexibility about the handover from one engine (instance) to another. For example, this might mean that engine A regards the entire group as responsible for something, while engine B manages intra-group delegation and accountability. And this also seems to reflect more accurately the way management processes often work in large organizations.

Thursday, November 11, 2004

BPEL Usage Patterns

I am currently writing a series of articles on BPEL for the CBDI Journal. In the first of these articles, Orchestration Patterns Using BPEL, I discuss the following six ways in which a company can get benefits from BPEL.

  • Process Separation (e.g. EAI/Workflow): Process flow logic is contained in a separate layer, which is developed and maintained separately. This may correspond to a technical architecture in which the workflow is executed by a separate software engine.
  • Process Instrumentation (e.g. Business Activity Management): An interface is provided into the orchestration layer, exposing it for direct monitoring and intervention. Now the process flow can be defined and managed by the process manager, often without IT involvement.
  • Process Componentization (e.g. grid-like solutions): BPEL can be used for the orchestration of informal processes, where process components can be assembled rapidly or dynamically as required. This might also support a form of grid-like solution, where process components provide elementary integration scripts.
  • Process Generalization (e.g. application packages): BPEL is used to increase the flexibility of an application package, thus allowing it to be implemented in a wider range of organizations with greatly reduced customization.
  • Process Standardization (e.g. external services and exchanges): Service-based interfaces using BPEL processes.
  • Process Flexibility (e.g. dynamic outsourcing): A BPEL-based process allows dynamic changes to orchestration, for example late binding with your service providers, allowing process steps to be easily moved to and fro across the organizational boundary.

Sunday, October 10, 2004

Urgent Process Change

Once we articulate the business process in a separate layer of the SOA, the potential business benefits include crisis management and business continuity.

A business may need to roll out an amended process very quickly, to deal with some crisis. Subject to appropriate controls, we can envisage a company defining and testing a small change to the BPEL script, and then making the new service available for use around the enterprise. The elapsed time from detecting the problem to operating the amended BPEL script might be measured in hours. (Please let me know of any actual benchmarks.)

Of course, an urgent crisis might call for an even faster response than this. There may be a sudden change in the environment, making the current way of doing business uneconomic. There may be a serious security breach. The company is haemorrhaging value, but the prospect of totally shutting down the business process until the problem is fixed is not an appealing one. With BPEL, the process manager should be able to switch over to a previously defined (and thoroughly tested) safe version of the business process (with reduced functionality, increased security, perhaps increased manual intervention, and hopefully reduced vulnerability) to maintain some form of business continuity, in the same manner as reverting to an alternative service.

There is a general point here that we have made before in other contexts. If you want to be confident of flexibility, you need to regularly exercise that flexibility. Don’t wait until you have a crisis before you discover whether your process management is flexible enough. Run the safe version of the process for (say) an hour every month, to make sure it still works.
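A sketch of the switch-over mechanism, with invented process variants: both variants are registered in advance, the normal one is active, and reverting to the safe variant is a single reversible operation that can also be exercised routinely.

```python
# Sketch of keeping a pre-tested "safe" process variant alongside the normal one,
# so that switching over in a crisis is a single reversible operation.

PROCESS_VARIANTS = {
    "normal": ["validate", "auto_approve", "fulfil"],
    "safe":   ["validate", "manual_review", "fulfil"],   # reduced function, more checks
}

active_variant = "normal"

def switch_to(variant: str) -> None:
    global active_variant
    if variant not in PROCESS_VARIANTS:
        raise ValueError(f"unknown process variant: {variant}")
    active_variant = variant

def current_process() -> list[str]:
    return PROCESS_VARIANTS[active_variant]

print("running:", current_process())
switch_to("safe")                 # e.g. in response to a security incident
print("running:", current_process())
switch_to("normal")               # and back again, as part of a monthly exercise
```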

Saturday, January 31, 2004

Changing Conceptions of Business Process


Traditional View: Business Process as Production Line
  • Linear – designed as a series of steps
  • Chronological – steps executed in time-sequence
  • Cumulative – adding value at each step
  • Synchronous – each step dependent and waiting upon the previous steps
  • Transforming raw materials and components into finished product

Network View: Business Process as Service Network
  • Non-Linear – designed as a set of services
  • Logical – services put together in logical combinations
  • Modulative – services modulating one another
  • Asynchronous – services executed independently
  • Transmuting input services into output services

Originally published in the CBDI Journal, January 2004.