Monday, May 23, 2022

Risk Algebra

In this post, I want to explore some important synergies between architectural thinking and risk management. 


The first point is that if we want to have an enterprise-wide understanding of risk, then it helps to have an enterprise-wide view of how the business is configured to deliver against its strategy. Enterprise architecture should provide a unified set of answers to the following questions.

  • What capabilities, delivering what business outcomes?
  • Delivering what services to what customers?
  • What information, processes and resources to support these?
  • What organizations, systems, technologies and external partnerships to support these?
  • Who is accountable for what?
  • And how is all of this monitored, controlled and governed?

Enterprise architecture should also provide an understanding of the dependencies between these, and which ones are business-critical or time-critical. For example, there may be some components of the business that are important in the long-term, but could easily be unavailable for a few weeks before anyone really noticed. But there are other components (people, systems, processes) where any failure would have an immediate impact on the business and its customers, so major issues have to be fixed urgently to maintain business continuity. For some critical elements of the business, appropriate contingency plans and backup arrangements will need to be in place.
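To make this distinction concrete, here is a minimal sketch in Python. The component names, tolerances and threshold are all invented for illustration; the point is simply that each component can be tagged with how long it could be unavailable before the business is damaged, and then sorted and flagged accordingly.

```python
# Illustrative sketch: classifying business components by how long
# they could be unavailable before the business really feels it.
# All names and numbers below are invented.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    max_tolerable_outage_hours: float   # time before failure hurts
    has_contingency_plan: bool

components = [
    Component("payment processing", 0.5, False),
    Component("store replenishment", 24.0, True),
    Component("loyalty-card analytics", 24 * 14, False),  # weeks of slack
]

TIME_CRITICAL_HOURS = 4.0   # assumed cut-off for "fix urgently"

for c in sorted(components, key=lambda c: c.max_tolerable_outage_hours):
    band = ("time-critical" if c.max_tolerable_outage_hours <= TIME_CRITICAL_HOURS
            else "deferrable")
    flag = "" if c.has_contingency_plan else " - no contingency plan!"
    print(f"{c.name}: {band}{flag}")
```

A real enterprise architecture repository would hold far richer metadata than this, but even a crude classification helps to focus contingency planning on the components where failure bites immediately.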

Risk assessment can then look systematically across this landscape, reviewing the risks associated with assets of various kinds, activities of various kinds (processes, projects, etc), as well as other intangibles (motivation, brand image, reputation). Risk assessment can also review how risks are shared between the organization and its business partners, both officially (as embedded in contractual agreements) and in actual practice.

 

Architects have an important concept, which should also be of great interest for enterprise risk management - the idea of a single point of failure (SPOF). When this exists, it is often the result of a poor design, or an over-zealous attempt to standardize and strip out complexity. But sometimes this is the result of what I call Creeping Business Dependency - in other words, not noticing that we have become increasingly reliant on something outside our control.
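One way to make the search for SPOFs systematic is to model the business as a dependency graph and look for articulation points - nodes whose removal disconnects the graph. A minimal sketch, using the networkx library and an invented dependency network:

```python
# Illustrative sketch: hunting for single points of failure by
# treating them as articulation points in a dependency graph.
# The dependencies below are invented.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("web shop", "payment gateway"),
    ("call centre", "payment gateway"),
    ("payment gateway", "core banking partner"),
    ("web shop", "product catalogue"),
    ("call centre", "product catalogue"),
])

# An articulation point is a candidate single point of failure.
for node in nx.articulation_points(g):
    print(f"Potential SPOF: {node}")   # here: payment gateway
```

Note how the external partner hangs off a single internal node - exactly the kind of creeping dependency that only becomes visible when the landscape is mapped as a whole.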


There are also important questions of scale and aggregation. Some years ago, I did some risk management consultancy for a large supermarket chain. One of the topics we were looking at was fridge failure. Obviously the supermarket had thousands and thousands of fridges, some in the customer-facing parts of the stores, some at the back, and some in the warehouses.

Fridges fail all the time, and there is a constant process of inspecting, maintaining and replacing fridges. So a single fridge failing is not regarded as a business risk. But if several thousand fridges were all to fail at the same time, presumably for the same reason, that would cause a significant disruption to the business.

So this raised some interesting questions. Could we define a cut-off point? How many fridges would have to fail before we found ourselves outside business-as-usual territory? What kind of management signals or dashboard could be put in place to get early warning of such problems, or to trigger a switch to a "safety" mode of operation?
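Here is a minimal sketch of how such a cut-off might be derived. Assume (purely for illustration) an estate of 5,000 fridges, each failing independently on any given day with some small probability under business-as-usual conditions; a day with more failures than the threshold is then statistically surprising, and may signal a common cause.

```python
# Illustrative sketch: deriving an early-warning threshold for
# fridge failures from a business-as-usual model of independent
# failures. All numbers are invented.
from math import comb

N = 5000        # fridges in the estate (assumed)
p = 0.001       # daily failure probability per fridge (assumed)
ALPHA = 1e-4    # how surprising a day must be to raise the alarm

def prob_at_least(k: int) -> float:
    """P(k or more failures in one day) under the independence model."""
    return 1.0 - sum(comb(N, i) * p**i * (1 - p)**(N - i) for i in range(k))

threshold = next(k for k in range(N + 1) if prob_at_least(k) < ALPHA)
print(f"Expected failures per day: {N * p:.1f}")
print(f"Investigate if {threshold} or more fridges fail in one day")
```

With these invented numbers we expect about five failures a day, and anything in the mid-teens or above is a strong hint that something systematic is going on - the moment to switch into "safety" mode rather than treat each fridge as a routine repair.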

Obviously these questions aren't only relevant to fridges, but can apply to any category of resource, including people. During the pandemic, some organizations had similar issues in relation to staff absences.


Aggregation is also relevant when we look beyond a single firm to the whole ecosystem. Suppose we have a market or ecosystem with n players, where player i carries risk R(i). Then what is the aggregate risk of the market or ecosystem as a whole?

If we assume complete independence between the risks of each player, then we may assume that there is a significant probability of a few players failing, but a very small probability of a large number of players failing - the so-called Black Swan event. Unfortunately, the assumption of independence can be flawed, as we have seen in financial markets, where there may be a tangled knot of interdependence between players. Regulators often think they can regulate markets by imposing rules on individual players. And while this might sometimes work, it is easy to see why it doesn’t always work. In some cases, the regulator draws a line in the sand (for example defining a minimum capital ratio) and then checks that nobody crosses the line. But then if everyone trades as close as possible to this line, how much capacity does the market as a whole have for absorbing unexpected shocks?
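The point is easy to demonstrate with a toy simulation. A minimal sketch in Python, with invented numbers: each of 100 players normally fails with probability 0.05, but in the correlated scenario a rare common shock occasionally takes most of them down together.

```python
# Illustrative sketch: independent vs correlated failures.
# Independence makes mass failure astronomically unlikely; a rare
# common shock puts real weight in the tail. Numbers are invented.
import random

random.seed(1)
n, p = 100, 0.05                     # players, business-as-usual failure rate
SHOCK_PROB, SHOCK_RATE = 0.01, 0.8   # rare shock, devastating effect
TRIALS = 20_000

def failures(correlated: bool) -> int:
    shock = correlated and random.random() < SHOCK_PROB
    rate = SHOCK_RATE if shock else p
    return sum(random.random() < rate for _ in range(n))

for label, corr in (("independent", False), ("correlated", True)):
    tail = sum(failures(corr) >= n // 2 for _ in range(TRIALS)) / TRIALS
    print(f"{label}: P(at least half the players fail) ≈ {tail:.4f}")
```

Under independence the probability of half the market failing is effectively zero; with the common shock it is roughly the probability of the shock itself - about one in a hundred. Regulation aimed at each player separately does nothing about the shock.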

 

Both here and in the fridge example, there is a question of standardization versus diversity. On the one hand, it's a lot simpler for the supermarket if all the fridges are the same type, with a common set of spare parts. But on the other hand, having more than one type of fridge helps to mitigate the risk of them all failing at the same time. It also gives some space for experimentation, thus addressing the longer term risk of getting stuck with an out-of-date fridge estate. The fridge example also highlights the importance of redundancy - in other words, having spare fridges. 

So there are some important trade-offs here between pure economic optimization and a more balanced approach to enterprise risk.

Saturday, May 21, 2022

Uber and the Economics of Disruption

In a previous post on Uber Mathematics, I asked how the magic of digital would allow a centralized company with international overheads (Uber or Lyft) to provide a service more cheaply and cost-effectively than local cab companies.

Uber once promised it would achieve economies of scale, thanks to network effects. But then, as Ali Griswold comments,

What is scale if not a company that operates in 72 countries and more than 10,500 cities, which last year had 118 million active users every month and completed 6.3 billion rides/trips/deliveries? Uber is the definition of scale, yet it is still nowhere near consistent and reliable profitability.

Uber appears to retain an advantage over local cab companies in terms of consumer convenience, which we can frame in terms of the economics of alignment. But as Henry Grabar points out, it remains to be seen how much consumers are willing to pay for this convenience when they are no longer being heavily subsidized by over-optimistic investors, and Uber is no longer the cheapest option.

Another point Grabar makes about Uber is that it has disrupted alternative modes of transport by a combination of predatory pricing and political influence, and that the regulatory environment in many cities has been altered in Uber's favour.

There are processes here that operate at different tempos. There is a demand tempo - getting sufficient numbers of drivers to a specific location at a specific time to satisfy peaks of demand. And an acquisition tempo - how long it takes to increase the number of available drivers, given the necessary investment in vehicles and equipment, driver training, and a whole range of safety issues.

Because there's a big difference between these two tempos, the challenge for a healthy ecosystem is in the intermediate tempo, which we call the integration tempo. What are the options for cities to restore requisite variety to local transportation?



Aaron Gordon, Uber And Lyft Don't Have A Right To Exist (Jalopnik, 30 August 2019), Uber and Lyft Are Out of Ideas, Jacking Up Prices in Desperation for Profit (Vice, 27 May 2022)

Henry Grabar, The Decade of Cheap Rides Is Over (Slate, 18 May 2022)

Ali Griswold, Uber wants to show you the money (Oversharing, 10 May 2022)

Thursday, May 19, 2022

Enterprise as Noun

@tetradian spots a small, subtle yet very welcome sentence in the latest version of TOGAF. 

"An enterprise may include partners, suppliers, and customers as well as internal business units."

Although similar statements can be found in earlier versions of TOGAF, and the Open Group (which produces TOGAF) has been talking about the "extended enterprise" and "boundarylessness" for over twenty years, these ideas have often seemed to take a back seat. So let's hope that the launch of TOGAF 10 will trigger a new emphasis on the "extended enterprise". 

But why is this so difficult? It seems people are much more comfortable thinking about "the enterprise" or "the organization" as a fixed thing with a clearly defined perimeter. An enterprise is assumed to be a legal entity, with a dedicated workforce, one or more bank accounts and perhaps even a stock exchange listing. 

Among other things, this encourages a perimeter-based approach to risk and security, where the primary focus is to control events crossing the perimeter in order to protect and manage what is inside. Companies often seek to export risk and uncertainty onto their customers or suppliers.

Alternatives to the fortress model of security are known as de-perimeterization. The aptly named Jericho Forum was set up in 2004 to promote this idea, and has since been merged into the Open Group's Security Forum.

Instead of enterprise as noun, what about enterprise as verb, thinking of enterprise simply as a way of being enterprising? This would allow us to consider transient collaborations as well as permanent legal entities. Businesses are sometimes described as "going concerns", and this phrase shifts our attention from the identity of the business to questions of activity, viability and intentionality.

  • Activity - What is going on?
  • Viability - Is it going? Is it going to go on going?
  • Intentionality - Where is it supposed to be going? Whose concern is it?

With this way of talking about enterprise, does it still make sense to talk about inside and outside, what belongs to the organization and what doesn't? I think it probably does, as long as we are careful not to take these terms too literally. Enterprises are assemblages of knowledge, power, responsibility, etc, and these elements are never going to be distributed uniformly. So even if we don't have a strict line dividing the inside from the outside, we may be able to detect a gradient from inwards to outwards, and there may be opportunities to alter the gradient and make some strategic improvements, which we might frame in terms of agility or requisite variety. Hence Power to the Edge.

 


Related posts: Dividing Risk (April 2005), Jericho (May 2005), Power to the Edge (December 2005), Boundary, Perimeter, Edge (March 2007), Outside-In Architecture (August 2007), Ecosystem SOA (June 2010)

Tuesday, April 19, 2022

Data-Driven Reasoning (COVID)

Following the data is all very well, but how should we decide which data to follow?

 

Nate Silver argues that the total data are more important than the marginal data. There is more virus transmission in restaurants than in aeroplanes.

 

Several people have challenged this, including Carl Bergstrom.


The first point I want to make is about risk. When calculating the risk of transmission in aeroplanes, the level of risk somewhere else is not really relevant. As Bergstrom points out, the calculation should be based simply on the costs and benefits of wearing masks in that particular setting.

The problem here, of course, is determining the cost of wearing masks. If some people regard mask-wearing as a minor inconvenience, while others regard it as a major infringement of their human rights, what sort of data would be relevant to this calculation? Or if mask-wearing impedes communication to some extent, can we quantify this effect?

The broader question is whether we should be looking at total costs and benefits, or marginal costs and benefits. In the case of mask-wearing, if we assume that most people possess a usable mask, then the marginal cost of wearing one might simply include the increased frequency of washing or replacing it, plus whatever psychological, physiological or social costs we can determine.
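For the sake of illustration only, here is a minimal sketch of what a per-setting marginal calculation might look like. Every number is hypothetical - the interesting part is the structure of the sum, not the values.

```python
# Illustrative sketch of per-setting cost-benefit reasoning about
# mask mandates. All figures are hypothetical placeholders.
settings = {
    # name: (hours of exposure, transmissions averted per 1000
    #        person-hours of masking, marginal cost of masking per
    #        hour, harm per transmission) - all invented
    "aeroplane":  (5.0, 0.2, 1.0, 5000.0),
    "restaurant": (1.5, 0.8, 1.0, 5000.0),
}

for name, (hours, averted_per_1k, cost_per_hour, harm) in settings.items():
    benefit = hours * (averted_per_1k / 1000) * harm
    cost = hours * cost_per_hour
    print(f"{name}: benefit {benefit:.2f} vs cost {cost:.2f} "
          f"-> net {benefit - cost:+.2f} (arbitrary units)")
```

The difficulty, of course, is that almost every input to this calculation is contested, as the previous paragraph suggests.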

But for some people, opposition to mask-wearing is much more fundamental than that. It is not about the costs and benefits of mask-wearing in a particular setting, but an overall objection to living in the kind of society where mask-wearing is mandated at all. Some people even object (violently) to the sight of other people wearing masks. So any kind of mask-wearing may cause social division, therefore incurring a sociopolitical cost, and this needs to be set against the overall benefits of controlling the transmission of disease.

Nate Silver acknowledges that a mask-wearing mandate may be useful at the margin, but questions its overall value. Presumably people aren't going to wear masks in restaurants while they are eating. So his tweet seems to be asking: what is the point of making them wear masks on planes?

This then brings us onto a question about policy, and the extent to which policy can be evidence-based. Is it better to have a consistent policy on mask-wearing across different settings, or to focus policy on those areas that are regarded as the highest risk? And should policies be optimized for effectiveness (what will have the greatest effect on suppressing transmission of the virus) or social acceptability (what will most people accept as reasonable)? Nate Silver's data may be relevant to this calculation, at least to the extent that they influence people's opinions.


While there are some particularly emotive features of this example, it raises some important general points about data-driven reasoning, especially in relation to cost-benefit calculations and evidence-based policy.



Saturday, March 19, 2022

Christopher Alexander 1936-2022

The architectural theorist Christopher Alexander, who has died at the age of 85, had a major influence on software architecture as well as buildings.

His first book Notes on the Synthesis of Form (1964) was referenced by the structured methods gurus of the 1970s (Ed Yourdon, Tom DeMarco). This was followed by A Pattern Language (1977), which was adopted as a core text by the Object Oriented school of software engineering.

In 1987, Alexander and a few colleagues published an account of a design experiment under the title A New Theory of Urban Design, showing how order can be created without top-down planning, by a governance process that imposes some simple structural principles on all design activity. I believe I was one of the first to apply this approach within the software world, and he remains a major influence on my thinking on SOA and enterprise architecture. See for example the following posts.

I also drew on Alexander in my books Component-Based Business (Springer 2001) and Next Practice Enterprise Architecture (LeanPub 2013). 

In 2004, Pat Helland, who was then a senior architect at Microsoft, wrote an article entitled Metropolis, in which he suggested computer system architects could learn something from city planning. Philip Boxer and I wrote a reply, mentioning Christopher Alexander. Microsoft subsequently flew me over to the USA for a closed workshop (SPARK), sending all attendees copies of The Timeless Way of Building in advance. However, although Microsoft had recognized Alexander's importance, I'm not sure all the other attendees at the workshop did, and I don't recall much discussion of his work. I think we spent more time discussing Stewart Brand's notion of shearing layers / pace layering.

My observation was that it had taken around 15 years for the ideas in Notes on the Synthesis of Form to be taken up in the software world in the late 1970s, and another 15 years from the publication of A Pattern Language to the popularity of pattern thinking for software in the early 1990s. So in the early 2000s, I was publishing ideas taken from A New Theory of Urban Design, in the hope that the software world would now be ready for them. Unfortunately it wasn't. For many people in the software world, Alexander continued, and continues, to be seen as the man who invented Design Patterns. Alexander's material from this period is also credited as having been an inspiration for The Sims game.

Perhaps the same is true of people in the architecture world. A book was published last year called Relational Theories of Urban Form, in which Alexander was represented by an extract from A Pattern Language, plus a few pages of Timeless Way.

Alexander initially expressed some ambivalence and suspicion about the use of his work by software engineers. Later he was persuaded to make occasional keynote speeches at software conferences and to write prefaces for software books. It is possible that some software practitioners understood his work better than most architects. However, I believe he remained concerned that software practitioners mostly only picked up fragments of his work, and ignored the holistic aspects of his thinking that he regarded as crucial.

I was once privileged to observe Christopher Alexander teaching first year students at the Prince of Wales Institute of Architecture in London. An amazing experience. Click here for the full story >> Christopher Alexander as Teacher.

 

Most of the tributes on Twitter mention Notes on the Synthesis of Form and A Pattern Language. Some mention The Timeless Way of Building. But the culmination of his work was the four-volume Nature of Order. I'm not sure how much I understood when it first came out, but I'm now re-reading. More than 15 years have passed since its publication, so maybe its time has come?



Good sources of material on Christopher Alexander include Architecture's New Scientific Foundations (Architexturez) and New Science, New Urbanism, New Architecture (Katarxis 3, September 2004)

David Brussat, Christopher Alexander RIP (Architecture Here and There, 20 March 2022)

Howard Davis, Christopher Alexander Obituary (The Guardian, 29 March 2022)

Pat Helland, Metropolis (Microsoft Architecture Journal, April 2004)

Daniel Kiss and Simon Kretz (eds) Relational Theories of Urban Form (Birkhäuser, March 2021) Review by John Hill (9 July 2021)

Mae-Wan Ho, The Architect of Life (Science and Society, 4/11/2003)

Bin Jiang and Nikos Salingaros (eds), New Applications and Development of Christopher Alexander’s The Nature of Order (Urban Science Special Issue, 3/1, 2019)

Ben Sledge, Home comforts: how The Sims let millennials live out a distant dream (The Guardian, 5 February 2020)

Richard Veryard and Philip Boxer, Metropolis and SOA Governance (Microsoft Architecture Journal, July 2005). See also Philip Boxer and Richard Veryard, Taking Governance to the Edge (Microsoft Architecture Journal, August 2006)

updated 29 March 2022

Monday, February 28, 2022

Does the Algorithm have the Last Word?

In my post on the performativity of data (August 2021), I looked at some of the ways in which data and information can make something true. In this post, I want to go further. What if an algorithm can make something final?

I've just read a very interesting paper by the Canadian sociologist Arthur Frank, which traces the curious history of a character called Devushkin - from a story by Gogol via another story by Dostoevsky into some literary analysis by Bakhtin.

In Dostoevsky's version, Devushkin complained that Gogol's account of him was complete and final, leaving him no room for change or development, hopelessly determined and finished off, as if he were already quite dead.

For Bakhtin, all that is unethical begins and ends when one human being claims to determine all that another is and can be; when one person claims that the other has not, cannot, and will not change, that she or he will die just as she or he always has been. (Frank)

But that's pretty much what many algorithms do. Machine learning algorithms extrapolate from historical data, captured and coded in ways that reinforce the past, while more traditionally programmed algorithms simply betray the opinions and assumptions of their developers. For example, we see recruitment algorithms that select men with a certain profile, while rejecting women with equal or superior qualifications. Because that's what's happened in the past, and the algorithm has no way of escaping from this.
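The mechanism is simple enough to show in a few lines. In this minimal sketch (with synthetic data), qualification scores are drawn identically for two groups, but the historical hiring decisions used as training labels favoured group 0 - and any learner trained on those labels simply inherits the discrimination.

```python
# Illustrative sketch: a learner trained on biased historical hiring
# decisions reproduces the bias. The data are synthetic: both groups
# have identical score distributions, but past decisions set a higher
# bar for group 1.
import random

random.seed(0)

def past_decision(group: int, score: float) -> int:
    # The historical bias: group 0 hired above 0.5, group 1 only
    # above 0.8, despite equal qualifications.
    return int(score > (0.5 if group == 0 else 0.8))

data = [(g, random.random()) for g in (0, 1) for _ in range(500)]
labels = [past_decision(g, s) for g, s in data]

# Even a trivial "learner" - recovering the threshold implicit in the
# labels for each group - bakes the discrimination in.
for group in (0, 1):
    hired_scores = [s for (g, s), y in zip(data, labels) if g == group and y]
    print(f"group {group}: learned hiring bar ≈ {min(hired_scores):.2f}")
```

A more sophisticated model would hide the threshold inside millions of weights, but the logic is the same: the past is encoded in the labels, and the algorithm has nowhere else to look.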

 


The inbuilt bias of algorithms has been widely studied. See for example Safiya Noble and Cathy O'Neil.

David Beer makes two points in relation to the performativity of algorithms. Firstly through their material interventions.

Algorithms might be understood to create truths around things like riskiness, taste, choice, lifestyle, health and so on. The search for truth becomes then conflated with the perfect algorithmic design – which is to say the search for an algorithm that is seen to make the perfect material intervention.

And secondly through what he calls discursive interventions.

The notion of the algorithm is part of a wider vocabulary, a vocabulary that we might see deployed to promote a certain rationality, a rationality based upon the virtues of calculation, competition, efficiency, objectivity and the need to be strategic. As such, the notion of the algorithm can be powerful in shaping decisions, influencing behaviour and ushering in certain approaches and ideals.

As Massimo Airoldi argues, both of these fall under what Bourdieu calls Habitus - which means an inbuilt bias towards the status quo. And once the algorithm has decided your fate, what chance do you have of breaking free?



Massimo Airoldi, Machine Habitus: Towards a sociology of algorithms (Polity Press, 2022)

David Beer, The social power of algorithms (Information, Communication & Society, 20:1, 2017) 1-13, DOI: 10.1080/1369118X.2016.1216147

Safiya Noble, Algorithms of Oppression (New York University Press, 2018)

Cathy O'Neil, Weapons of Math Destruction (2016)

Arthur W. Frank, What Is Dialogical Research, and Why Should We Do It? (Qual Health Res 2005; 15; 964) DOI: 10.1177/1049732305279078

Carissa Véliz, If AI Is Predicting Your Future, Are You Still Free? (Wired, 27 December 2021)

Related posts: Could we switch the algorithms off? (July 2017), Algorithms and Governmentality (July 2019), Algorithmic Bias (March 2021), On the performativity of data (August 2021)

Sunday, February 06, 2022

Network Effects and the Decline of Facebook

Let me start with a story. You have a friend, let's call him Paul, who used to work for a big shot consultancy. A few years ago, the consultancy was undergoing a periodic cost-cutting exercise; Paul happened to be on the bench and was let go. He didn't immediately find another full-time job, but he found a series of short-term contracts to pay the bills until something else turned up. And of course he changed his profile on Linked-In.

Three years later, he's still in the same position. You receive a notification from Linked-In, asking if you would like to congratulate Paul on the anniversary of his losing his job. Doesn't really seem appropriate, does it?

One of the problems of social networks is maintaining a reasonable signal-to-noise ratio. When most of the notifications on Linked-In are to tell you that someone you barely remember is attending some random event, it's tempting to switch off notifications altogether.

And Facebook notifications can be even more annoying. Many years ago, Bill Gates quit Facebook because of the volume of friend requests he was receiving. Taylor Buley cites this as an illustration of Beckstrom's Law. 

The best-known model of network effects is Metcalfe's Law, which states that the value of a network is proportional to the square of the number of users. Beckstrom's Law is a more advanced model, which includes the net value of the transactions on the network, thus subtracting the aggregated costs of belonging to it, including all those bothersome requests and notifications.
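The contrast between the two models is easy to state in code. A minimal sketch, with invented numbers:

```python
# Illustrative sketch: Metcalfe values a network on headcount alone;
# Beckstrom sums the net value (benefit minus cost) of what actually
# happens on it, so noise subtracts from the total. Numbers invented.

def metcalfe_value(n_users: int, k: float = 1.0) -> float:
    """Value proportional to the square of the number of users."""
    return k * n_users ** 2

def beckstrom_value(transactions: list[tuple[float, float]]) -> float:
    """Sum of (benefit - cost) over all transactions on the network."""
    return sum(benefit - cost for benefit, cost in transactions)

# Ten genuinely useful interactions, drowned in two hundred
# bothersome notifications that each cost a little attention:
transactions = [(5.0, 0.5)] * 10 + [(0.0, 0.1)] * 200
print(metcalfe_value(1000))           # 1000000.0 - blind to usage quality
print(beckstrom_value(transactions))  # 25.0 - the noise drags it down
```

On Metcalfe's model, every extra user adds value; on Beckstrom's, a network that mostly generates unwanted notifications can be worth very little, however many users it has.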

This week, the Facebook share price crashed. In his analysis of this event, John Naughton suggests that the value of Facebook depends not on the absolute number of users, but on the rate of change, and this is what explains Mark Zuckerberg's obsession with growth (as shown in Ben Grosser's montage "Order of Magnitude"). If user numbers start to decline, then the virtuous circle suddenly turns vicious, leading to a downward spiral.

What is also clear from this event is the extent to which Facebook's commercial success is dependent on other players in the ecosystem. Access to Facebook services is almost entirely channelled through devices running software from three other tech giants - Apple, Google and Microsoft.

And with Microsoft's recent acquisition of Activision Blizzard, we may expect more shifts in the balance of power between them. According to Microsoft's announcement (18 January 2022)

This acquisition will accelerate the growth in Microsoft’s gaming business across mobile, PC, console and cloud and will provide building blocks for the metaverse.

 

Your move I think, Zuck.

 


Dina Bass, Five Reasons Microsoft Is Making Activision Blizzard Its Biggest Deal Ever (Bloomberg, 18 January 2022)

Taylor Buley, How To Value Your Networks (Forbes, 31 July 2009) 

Michael Hiltzik, Is this the beginning of Facebook’s downfall? (LA Times, 4 February 2022)

Natasha Lomas (@riptari), On Meta’s regulatory headwinds and adtech’s privacy reckoning (TechCrunch, 4 February 2022)

John Naughton, For the first time in its history, Facebook is in decline. Has the tech giant begun to crumble? (The Guardian, 6 February 2022)

Wikipedia: Beckstrom's Law, Metcalfe's Law

Related posts: What makes a platform open or closed? (May 2012), Making the World more Open and Connected (March 2018), Metrication and Demetrication (August 2021)