Thursday, May 30, 2019

Responsibility by Design - Activity View

In my ongoing work on #TechnologyEthics, I have identified Five Elements of Responsibility by Design. One of these elements is what I'm calling the Activity View - defining effective and appropriate action at different points in the lifecycle of a technological innovation or product - who does what when. (Others may wish to call it the Process View.)

So in this post, I shall sketch some of the things that may need to be done at each of the following points: planning and requirements; risk assessment; design; verification, validation and test; deployment and operation; incident management; decommissioning. For the time being, I shall assume that these points can be interpreted within any likely development or devops lifecycle, be it sequential ("waterfall"), parallel, iterative, spiral, agile, double diamond or whatever.

Please note that this is an incomplete sketch, and I shall continue to flesh this out.


Planning and Requirements

This means working out what you are going to do, how you are going to do it, who is going to do it, who is going to pay for it, and who is going to benefit from it. What is the problem or opportunity you are addressing, and what kind of solution / output are you expecting to produce? It also means looking at the wider context - for example, exploring potential synergies with other initiatives.

The most obvious ethical question here is to do with the desirability of the solution. What is the likely impact of the solution on different stakeholders, and can this be justified? This is often seen in terms of an ethical veto - should we do this at all - but it is perhaps equally valid to think of it in more positive terms - could we do more?

But who gets to decide on desirability - in other words, whose notion of desirability counts - is itself an ethical question. So ethical planning includes working out who shall have a voice in this initiative and how this voice shall be heard, making sure the stakeholders are properly identified and given a genuine stake. This was always a key element of participative design methodologies such as Enid Mumford's ETHICS method.

Planning also involves questions of scope and interoperability - how is the problem space divided up between multiple separate initiatives, to what extent do these separate initiatives need to be coordinated, and are there any deficiencies in coverage or resource allocation? See my post on the Ethics of Interoperability.

For example, an ethical review might question why medical devices were being developed for certain conditions and not others, or why technologies developed for the police were concentrated on certain categories of crime, and what the social implications of this might be. Perhaps the ethical judgement could be that a solution proposed for condition/crime X can be developed provided that there is a commitment to develop similar solutions for Y and Z. In other words, a full ethical review should look at what is omitted from the plan as well as what is included.

There may also be ethical implications of organization and method, especially on large, complicated developments involving different teams in different jurisdictions - Chinese Walls, Separation of Concerns, and so on.

In an ethical plan, responsibility will be clear and not diffused. Just saying "we are all responsible" is naive and unhelpful. We all know what this looks like: an individual engineer raises an issue to get it off her conscience, a busy project manager marks the issue as "non-critical", the product owner regards the issue as a minor technicality, and so on. I can't even be bothered to explain what's wrong with this; I'll let you look it up on Wikipedia, because it's Somebody Else's Problem.
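By way of illustration, here is a minimal sketch (in Python, with entirely hypothetical field names) of what an undiffused issue record might look like: every ethical issue has exactly one named owner, and cannot be closed without an explicit decision and a named decision-maker.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class EthicsIssue:
    """Hypothetical record for an ethical concern raised on a project.

    Every issue has exactly one named owner, and cannot be closed without
    a recorded decision and decision-maker - so responsibility cannot
    quietly diffuse into "somebody else's problem".
    """
    summary: str
    raised_by: str
    owner: str                      # a single accountable person, never a group alias
    raised_on: date = field(default_factory=date.today)
    decision: Optional[str] = None  # e.g. "accepted risk", "design change agreed"
    decided_by: Optional[str] = None

    def close(self, decision: str, decided_by: str) -> None:
        """Closing an issue requires an explicit decision and decision-maker."""
        if not decision or not decided_by:
            raise ValueError("cannot close without a decision and a named decision-maker")
        self.decision = decision
        self.decided_by = decided_by
```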

Regardless of the development methodology, most projects start with a high-level plan, filling in the details as they go along, and renegotiating with the sponsors and other stakeholders for significant changes in scope, budget or timescale. However, some projects are saddled with commercial, contractual or political constraints that make the plans inflexible, and this inflexibility typically generates unethical behaviours (such as denial or passing the buck).

In short, ethical planning is about making sure you are doing the right things, and doing them right. 


Risk Assessment

The risk assessment and impact analysis may often be done at the same time as the planning, but I'm going to regard it as a logically distinct activity. As with planning, it may be appropriate to revisit the risk assessment from time to time: our knowledge and understanding of risks may evolve, new risks may become apparent, and other risks may be discounted.

There are some standards for risk assessment in particular domains. For example, Data Protection Impact Assessment (DPIA) is mandated by GDPR, Information Security Risk Assessment is included in ISO 27001, and risk/hazard assessment for robotics is covered by BS 8611.

The first ethical question here is How Much. It is clearly important that the risk assessment is done with sufficient care and attention, and the results taken seriously. But there is no ethical argument under the sun that says that one should never take any risks at all, or that risk assessment should be taken to such extremes that it becomes paralysing. In some situations (think Climate Change), risk-averse procrastination may be the position that is hardest to justify ethically.

We also need to think about Scope and Perspective. Which categories of harm/hazard/risk are relevant, whose risk is it (in other words, who would be harmed), and from whose point of view? The voice of the stakeholder needs to be heard here as well.
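To make these questions concrete, here is a minimal sketch of a risk register entry (hypothetical names and scales) that records not just the hazard, but whose risk it is and from whose point of view it was assessed.

```python
from dataclasses import dataclass
from enum import Enum

class HarmCategory(Enum):
    PRIVACY = "privacy"
    SAFETY = "safety"
    SECURITY = "security"
    DISCRIMINATION = "discrimination"
    ENVIRONMENTAL = "environmental"

@dataclass
class RiskEntry:
    """One line in a hypothetical risk register."""
    description: str
    category: HarmCategory
    affected_stakeholders: list[str]   # whose risk is it - who would be harmed?
    assessed_from: str                 # from whose point of view was it assessed?
    likelihood: int                    # e.g. 1 (rare) to 5 (almost certain)
    severity: int                      # e.g. 1 (negligible) to 5 (catastrophic)
    mitigation: str = "none identified yet"

    @property
    def score(self) -> int:
        # simple likelihood x severity score, for prioritization only
        return self.likelihood * self.severity

# Illustrative entry
noise_risk = RiskEntry(
    description="Robot audio alerts too loud for warehouse staff",
    category=HarmCategory.SAFETY,
    affected_stakeholders=["warehouse operatives", "visitors"],
    assessed_from="worker representatives",
    likelihood=3,
    severity=2,
)
```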


Design

Responsible design takes care of all the requirements, risks and other stakeholder concerns already identified, as well as giving stakeholders full opportunity to identify additional concerns as the design takes shape.

Among other things, the design will need to incorporate any mechanisms and controls that have been agreed as appropriate for the assessed risks - for example, security controls, safety controls and privacy locks. It also means designing in mechanisms to support responsible operations - for example, monitoring and transparency.
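As a hedged sketch of what "designing in monitoring and transparency" might mean in practice, the following decorator (names are illustrative, not taken from any particular system) writes an audit record for every automated decision, so that later questions about behaviour can be answered from the log rather than from memory.

```python
import json
import logging
from datetime import datetime, timezone
from functools import wraps

audit_log = logging.getLogger("audit")

def audited(decision_name):
    """Record every call to a decision function, so monitoring and
    transparency are designed in rather than bolted on afterwards."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            audit_log.info(json.dumps({
                "decision": decision_name,
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "inputs": {"args": repr(args), "kwargs": repr(kwargs)},
                "outcome": repr(result),
            }))
            return result
        return wrapper
    return decorator

@audited("loan_approval")
def approve_loan(applicant_id: str, score: float) -> bool:
    # hypothetical decision logic, standing in for the real thing
    return score >= 0.7
```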

There is an important balance between Separation of Concerns and Somebody Else's Problem. So while you shouldn't expect every designer on the team to worry about every detail of the design, you do need to ensure that the pieces fit together and that whole-system properties (safety, robustness, etc.) are designed in. So you may have a Solution Architecture role (one person or a whole team, depending on scale and complexity) responsible for overall design integrity.

And when I say whole system, I mean whole system. In general, an IoT device isn't a whole system, it's a component of a larger system. A responsible designer doesn't just design a sensor that collects a load of data and sends it into the ether; she thinks about the destination and possible uses and abuses of the data. Likewise, a responsible designer doesn't just design a robot to whizz around in a warehouse; she thinks about the humans who have to work with the robot - the whole sociotechnical system.

(How far does this argument extend? That's an ethical question as well: as J.P. Eberhard wrote in a classic paper, we ought to know the difference.)
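Returning to the sensor example, here is one possible (purely illustrative) way of making the data's destination and purpose explicit in the design: each reading carries a declared purpose, a named destination and a retention limit, rather than being sent "into the ether".

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class SensorReading:
    """Hypothetical IoT sensor reading that carries its own declared purpose
    and retention limit, so the designer must state where the data is going
    and why before it is sent anywhere."""
    value: float
    unit: str
    captured_at: datetime
    declared_purpose: str   # e.g. "warehouse temperature monitoring"
    destination: str        # the named downstream system, not "the ether"
    retention_days: int     # how long the downstream system may keep it

reading = SensorReading(
    value=21.4,
    unit="celsius",
    captured_at=datetime.now(timezone.utc),
    declared_purpose="warehouse temperature monitoring",
    destination="facilities-dashboard",
    retention_days=30,
)
```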


Verification, Validation and Testing

This is where we check that the solution actually works reliably and safely, that it is accessible and acceptable to all the possible users in a broad range of use contexts, and that the mechanisms and controls are effective in eliminating unnecessary risks and hazards.

See separate post on Responsible Beta Testing.

These checks don't only apply to the technical system, but also to the organizational and institutional arrangements, including any necessary contractual agreements, certificates, licences, etc. Is the correct user documentation available, and have the privacy notices been updated? Of course, some of these checks may need to take place even before beta testing can start.
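Here is a minimal sketch (hypothetical checks, not an exhaustive list) of a release-readiness checklist that treats the organizational and institutional checks - documentation, privacy notices, licences - as first-class blockers alongside the technical tests.

```python
from dataclasses import dataclass

@dataclass
class ReleaseChecklist:
    """Hypothetical pre-release checklist; each flag would be set
    by the reviewer responsible for that check."""
    functional_tests_passed: bool = False
    safety_controls_verified: bool = False
    accessibility_reviewed: bool = False
    user_documentation_current: bool = False
    privacy_notices_updated: bool = False
    licences_and_certificates_in_place: bool = False

    def outstanding(self) -> list[str]:
        """Return the checks that still block release."""
        return [name for name, done in vars(self).items() if not done]

checklist = ReleaseChecklist(functional_tests_passed=True)
if checklist.outstanding():
    print("Not ready to ship:", ", ".join(checklist.outstanding()))
```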


Deployment and Operation

As the solution is rolled out, and during its operation, monitoring is required to ensure that the solution is working properly, and that all the controls are effective.

Regulated industries typically have some form of market surveillance or vigilance, whereby the regulator keeps an eye on what is going on. This may include regular inspections and audits. But of course this doesn't diminish the responsibility of the producer or distributor to be aware of how the technology is being used, and its effects. (Including unplanned or "off-label" uses.)

(And if the actual usage of the technology differs significantly from its designed purpose, it may be necessary to loop back through the risk assessment and the design. See my post On Repurposing AI.)

There should also be some mechanism for detecting unusual and unforeseen events. For example, the MHRA, the UK regulator for medicines and medical devices, operates a Yellow Card scheme, which allows any interested party (not just healthcare professionals) to report any unusual event. This is significantly more inclusive than the vigilance maintained by regulators in other industries, because it can pick up previously unknown hazards (such as previously undetected adverse reactions) as well as collecting statistics on known side-effects.
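A Yellow-Card-style reporting mechanism might be sketched like this (hypothetical fields and known-issue list): any interested party can file a report, and reports that don't match a known issue are flagged so that previously unknown hazards surface quickly.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical list of known side-effects for a given product
KNOWN_ISSUES = {"battery overheating", "false positive alert"}

@dataclass
class EventReport:
    """Hypothetical Yellow-Card-style report, open to any interested party,
    not just professionals or the producer's own support staff."""
    product: str
    description: str
    reported_by: str          # could be a user, bystander, clinician, etc.
    reported_on: date = field(default_factory=date.today)

    @property
    def previously_unknown(self) -> bool:
        """Flag reports that don't match any known issue, so new hazards surface."""
        return not any(issue in self.description.lower() for issue in KNOWN_ISSUES)
```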


Incident Management

In some domains, there are established procedures for investigating incidents such as vehicle accidents, security breaches, and so on. There may also be specialist agencies and accident investigators.

One of the challenges here is that there is typically a fundamental asymmetry of information. Someone who believes they may have suffered harm may be unable to invoke these procedures until they can conclusively demonstrate the harm, and so the burden of proof lies unfairly on the victim.



Decommissioning

Finally, we need to think about taking the solution out of service or replacing it with something better. Some technologies (such as blockchain) are designed on the assumption of eternity and immutability, and we are stuck for good or ill with our original design choices, as @moniquebachner pointed out at a FinTech event I attended last year. With robots, people always worry whether we can ever switch the things off.

Other technologies may be just as sticky. Consider the QWERTY keyboard, which was designed to slow the typist down to prevent the letters on a manual typewriter from jamming. The laptop computer on which I am writing this paragraph has a QWERTY keyboard.

Just as the responsible design of physical products needs to consider the end of use, and the recycling or disposal of the materials, so technological solutions need graceful termination.

Note that decommissioning doesn't necessarily remove the need for continued monitoring and investigation. If a drug is withdrawn following safety concerns, the people who took the drug will still need to be monitored; similar considerations may apply for other technological innovations as well.


Final Remarks

As already indicated, this is just an outline (plan). The detailed design may include checklists and simple tools, standards and guidelines, illustrations and instructions, as well as customized versions for different development methodologies and different classes of product. And I am hoping to find some opportunities to pilot the approach.

Some standards already exist or are under development to address specific areas here. For example, I have seen some specific proposals circulating for accident investigation, with suggested mechanisms to provide transparency to accident investigators. Hopefully the activity framework outlined here will provide a useful context for these standards.

Comments and suggestions for improving this framework always welcome.


Notes and References

For my use of the term Activity Viewpoint, see my blogpost Six Views of Business Architecture, and my eBook Business Architecture Viewpoints.


John P. Eberhard, "We Ought to Know the Difference," Emerging Methods in Environmental Design and Planning, Gary T. Moore, ed. (MIT Press, 1970) pp 364-365. See my blogpost We Ought To Know The Difference (April 2013)

Amany Elbanna and Mike Newman, The rise and decline of the ETHICS methodology of systems implementation: lessons for IS research (Journal of Information Technology 28, 2013) pp 124–136

Regulator Links: What is a DPIA? (ICO), Yellow Card Scheme (MHRA)

Wikipedia: Diffusion of Responsibility, Separation of Concerns, Somebody Else's Problem 
