Saturday, June 15, 2019

The Road Less Travelled

Are algorithms trustworthy? asks @NizanGP.
"Many of us routinely - and even blindly - rely on the advice of algorithms in all aspects of our lives, from choosing the fastest route to the airport to deciding how to invest our retirement savings. But should we trust them as much as we do?"

Dr Packin's main point is about the fallibility of algorithms, and the excessive confidence people place in them. @AnnCavoukian reinforces this point.


But there is another reason to be wary of the advice of the algorithm, summed up by the question: Whom does the algorithm serve?

Because the algorithm is not working for you alone. There are many people trying to get to the airport, and if they all use the same route they may all miss their flights. If the algorithm is any good, it will be advising different people to use different routes. (Most well-planned cities have more than one route to the airport, to avoid a single point of failure.) So how can you trust the algorithm to give you the fastest route? However much you may be paying for the navigation service, someone else may be paying a lot more. For the road less travelled.
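
To make the point concrete, here is a minimal sketch (purely hypothetical - no navigation vendor publishes its assignment logic) of an algorithm that deliberately spreads drivers across routes in proportion to spare capacity. Different users get different advice, which may be good for the system as a whole, but is not the same as giving you, individually, the fastest route.

```python
# Hypothetical sketch only: spreading drivers across routes in proportion to
# spare capacity, so that no single route becomes congested.
import random

def assign_route(routes):
    """routes: dict of route name -> (capacity, current_load)."""
    spare = {name: max(cap - load, 0) for name, (cap, load) in routes.items()}
    total = sum(spare.values())
    if total == 0:
        return random.choice(list(routes))      # everything is congested anyway
    # weighted choice: the more spare capacity a route has, the more drivers are sent there
    return random.choices(list(spare), weights=list(spare.values()), k=1)[0]

routes = {"motorway": (2000, 1900), "ring road": (1500, 600), "back roads": (400, 100)}
print(assign_route(routes))   # most drivers are sent to the ring road, not "the" fastest route
```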

The same is obviously true of investment advice. The best time to buy a stock is just before everyone else buys, and the best time to sell a stock is just after everyone else buys. Which means that there are massive opportunities for unethical behaviour when advising people where / when to invest their retirement savings, and there is no reason to suppose that the people programming the algorithms are immune from this temptation, or that regulators are able to protect investors properly.

So remember the Weasley Doctrine: "Never trust anything that can think for itself if you can't see where it keeps its brain."



Nizan Geslevich Packin, Why Investors Should Be Wary of Automated Advice (Wall Street Journal, 14 June 2019)


Related posts: Towards Chatbot Ethics (May 2019), Whom does the technology serve? (May 2019)

Thursday, May 30, 2019

Responsibility by Design - Activity View

In my ongoing work on #TechnologyEthics, I have identified Five Elements of Responsibility by Design. One of these elements is what I'm calling the Activity View - defining effective and appropriate action at different points in the lifecycle of a technological innovation or product - who does what when. (Others may wish to call it the Process View.)

So in this post, I shall sketch some of the things that may need to be done at each of the following points: planning and requirements; risk assessment; design; verification, validation and test; deployment and operation; incident management; decommissioning. For the time being, I shall assume that these points can be interpreted within any likely development or devops lifecycle, be it sequential ("waterfall"), parallel, iterative, spiral, agile, double diamond or whatever.

Please note that this is an incomplete sketch, and I shall continue to flesh this out.


Planning and Requirements

This means working out what you are going to do, how you are going to do it, who is going to do it, who is going to pay for it, and who is going to benefit from it. What is the problem or opportunity you are addressing, and what kind of solution / output are you expecting to produce? It also means looking at the wider context - for example, exploring potential synergies with other initiatives.

The most obvious ethical question here is to do with the desirability of the solution. What is the likely impact of the solution on different stakeholders, and can this be justified? This is often seen in terms of an ethical veto - should we do this at all - but it is perhaps equally valid to think of it in more positive terms - could we do more?

But who gets to decide on desirability - in other words, whose notion of desirability counts - is itself an ethical question. So ethical planning includes working out who shall have a voice in this initiative and how that voice shall be heard, making sure the stakeholders are properly identified and given a genuine stake. This was always a key element of participative design methodologies such as Enid Mumford's ETHICS method.

There may be ethical implications of organization and method, especially on large, complicated developments involving different teams in different jurisdictions - for example, Chinese Walls and Separation of Concerns.

In an ethical plan, responsibility will be clear and not diffused. Just saying "we are all responsible" is naive and unhelpful. We all know what this looks like: an individual engineer raises an issue to get it off her conscience, a busy project manager marks the issue as "non-critical", the product owner regards the issue as a minor technicality, and so on. I can't even be bothered to explain what's wrong with this - I'll let you look it up in Wikipedia, because it's Somebody Else's Problem.
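
One way to make responsibility explicit rather than diffused is to insist that no issue can be closed without a single named accountable owner and a recorded decision. The following sketch is purely illustrative - the class and field names are my own invention, not any particular tool.

```python
# Illustrative sketch: an issue record that cannot be closed without a named
# accountable owner and a recorded decision, making diffusion of responsibility visible.
from dataclasses import dataclass

@dataclass
class EthicalIssue:
    raised_by: str
    description: str
    accountable_owner: str = ""      # exactly one named person, not "the team"
    decision: str = ""               # e.g. "fix before release", "risk accepted"
    decision_rationale: str = ""

    def close(self):
        missing = [f for f in ("accountable_owner", "decision", "decision_rationale")
                   if not getattr(self, f)]
        if missing:
            raise ValueError(f"cannot close issue, missing: {missing}")
        return f"closed by {self.accountable_owner}: {self.decision}"

issue = EthicalIssue(raised_by="engineer A",
                     description="training data excludes wheelchair users")
# issue.close() would raise here, because nobody has yet taken responsibility
```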

Regardless of the development methodology, most projects start with a high-level plan, filling in the details as they go along, and renegotiating with the sponsors and other stakeholders for significant changes in scope, budget or timescale. However, some projects are saddled with commercial, contractual or political constraints that make the plans inflexible, and this inflexibility typically generates unethical behaviours (such as denial or passing the buck).

In short, ethical planning is about making sure you are doing the right things, and doing them right. 


Risk Assessment

The risk assessment and impact analysis may often be done at the same time as the planning, but I'm going to regard it as a logically distinct activity. As with planning, it may be appropriate to revisit the risk assessment from time to time: our knowledge and understanding of risks may evolve, new risks may become apparent, while other risks can be discounted.

There are some standards for risk assessment in particular domains. For example, Data Protection Impact Assessment (DPIA) is mandated by GDPR, Information Security Risk Assessment is included in ISO 27001, and risk/hazard assessment for robotics is covered by BS 8611.

The first ethical question here is How Much. It is clearly important that the risk assessment is done with sufficient care and attention, and the results taken seriously. But there is no ethical argument under the sun that says that one should never take any risks at all, or that risk assessment should be taken to such extremes that it becomes paralysing. In some situations (think Climate Change), risk-averse procrastination may be the position that is hardest to justify ethically.

We also need to think about Scope and Perspective. Which categories of harm/hazard/risk are relevant, whose risk is it (in other words, who would be harmed), and from whose point of view? The voice of the stakeholder needs to be heard here as well.
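
As a concrete illustration of Scope and Perspective, a risk register entry might be forced to record which category of harm is involved, who would be harmed, and from whose point of view the assessment is made. This is only a sketch; the field names are illustrative and not taken from DPIA guidance, ISO 27001 or BS 8611.

```python
# Illustrative risk register entry: the structure makes the "whose risk, from
# whose point of view" questions unavoidable.
from dataclasses import dataclass
from typing import List

@dataclass
class RiskEntry:
    hazard: str                        # what could go wrong
    category: str                      # e.g. privacy, safety, security, fairness
    affected_stakeholders: List[str]   # whose risk is it - who would be harmed
    assessed_from: str                 # from whose point of view
    likelihood: str                    # e.g. rare / possible / likely
    severity: str                      # e.g. minor / serious / catastrophic
    mitigation: str                    # agreed control, or "accepted" with justification
    review_date: str                   # assessments are revisited, not done once

register = [
    RiskEntry(hazard="care robot blocks fire exit", category="safety",
              affected_stakeholders=["residents", "care staff"],
              assessed_from="resident", likelihood="possible", severity="serious",
              mitigation="proximity sensing plus manual override",
              review_date="2019-09-01"),
]
```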


Design

Responsible design takes care of all the requirements, risks and other stakeholder concerns already identified, as well as giving stakeholders full opportunity to identify additional concerns as the design takes shape.

Among other things, the design will need to incorporate any mechanisms and controls that have been agreed as appropriate for the assessed risks - for example, security controls, safety controls and privacy locks. It also means designing in mechanisms to support responsible operations, such as monitoring and transparency.
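
For example, designing in transparency might mean that every automated decision is logged with its inputs and the version of the model that made it, so that later monitoring, audit or incident investigation has something to work with. A minimal sketch, with hypothetical function and field names:

```python
# Hypothetical sketch: wrap a decision function so that every call is recorded
# for later monitoring and audit.
import json, time

def audited(decide, log_path="decisions.log"):
    def wrapper(case):
        outcome = decide(case)
        record = {"timestamp": time.time(), "model_version": "1.0.0",
                  "inputs": case, "outcome": outcome}
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return outcome
    return wrapper

@audited
def decide(case):                     # stand-in for the real decision logic
    return "approve" if case.get("score", 0) > 0.5 else "refer to a human"

decide({"applicant_id": "12345", "score": 0.3})
```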

There is an important balance between Separation of Concerns and Somebody Else's Problem. So while you shouldn't expect every designer on the team to worry about every detail of the design, you do need to ensure that the pieces fit together and that whole-system properties (safety, robustness, etc.) are designed in. So you may have a Solution Architecture role (one person or a whole team, depending on scale and complexity) responsible for overall design integrity.

And when I say whole system, I mean whole system. In general, an IoT device isn't a whole system, it's a component of a larger system. A responsible designer doesn't just design a sensor that collects a load of data and sends it into the ether, she thinks about the destination and possible uses and abuses of the data. Likewise, a responsible designer doesn't just design a robot to whizz around in a warehouse, she thinks about the humans who have to work with the robot - the whole sociotechnical system.

(How far does this argument extend? That's an ethical question as well: as J.P. Eberhard wrote in a classic paper, we ought to know the difference.)


Verification, Validation and Testing

This is where we check that the solution actually works reliably and safely, is accessible by and acceptable to all the possible users in a broad range of use contexts, and that the mechanisms and controls are effective in eliminating unnecessary risks and hazards.

See separate post on Responsible Beta Testing.

These checks don't only apply to the technical system, but also to the organizational and institutional arrangements, including any necessary contractual agreements, certificates, licences, etc. Is the correct user documentation available, and have the privacy notices been updated? Of course, some of these checks may need to take place even before beta testing can start.
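
One simple way to operationalize these checks is a release gate that blocks deployment while organizational items remain outstanding, alongside the technical tests. The checklist below is only a sketch using items mentioned above, not a complete or authoritative list.

```python
# Illustrative release gate: organizational as well as technical items must be
# done before deployment can proceed.
RELEASE_CHECKLIST = {
    "beta test report reviewed": True,
    "user documentation available": True,
    "privacy notices updated": False,
    "contracts, certificates and licences in place": True,
}

def ready_to_deploy(checklist):
    outstanding = [item for item, done in checklist.items() if not done]
    if outstanding:
        raise RuntimeError(f"deployment blocked, outstanding items: {outstanding}")
    return True

# ready_to_deploy(RELEASE_CHECKLIST) raises until the privacy notices are updated
```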


Deployment and Operation

As the solution is rolled out, and during its operation, monitoring is required to ensure that the solution is working properly, and that all the controls are effective.

Regulated industries typically have some form of market surveillance or vigilance, whereby the regulator keeps an eye on what is going on. This may include regular inspections and audits. But of course this doesn't diminish the responsibility of the producer or distributor to be aware of how the technology is being used, and its effects. (Including unplanned or "off-label" uses.)

(And if the actual usage of the technology differs significantly from its designed purpose, it may be necessary to loop back through the risk assessment and the design. See my post On Repurposing AI).

There should also be some mechanism for detecting unusual and unforeseen events. For example, the MHRA, the UK regulator for medicines and medical devices, operates a Yellow Card scheme, which allows any interested party (not just healthcare professionals) to report any unusual event. This is significantly more inclusive than the vigilance maintained by regulators in other industries, because it can pick up previously unknown hazards (such as previously undetected adverse reactions) as well as collecting statistics on known side-effects.
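
The shape of such a mechanism might be as simple as the sketch below: anyone can report, and events that don't match a known issue are flagged for investigation rather than silently binned. This is an illustration of the principle, not the MHRA's actual scheme.

```python
# Illustrative reporting channel in the spirit of the Yellow Card scheme.
KNOWN_ISSUES = {"battery overheating", "false alarm"}
reports = []

def report_event(reporter, description, suspected_issue=None):
    event = {"reporter": reporter,     # deliberately not restricted to professionals
             "description": description,
             "known_issue": suspected_issue in KNOWN_ISSUES,
             "needs_investigation": suspected_issue not in KNOWN_ISSUES}
    reports.append(event)
    return event

report_event("family member", "robot refused to release medication")
```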


Incident Management

In some domains, there are established procedures for investigating incidents such as vehicle accidents, security breaches, and so on. There may also be specialist agencies and accident investigators.

One of the challenges here is that there is typically a fundamental asymmetry of information. Someone who believes they may have suffered harm may be unable to invoke these procedures until they can conclusively demonstrate the harm, and so the burden of proof lies unfairly on the victim.



Decommissioning

Finally, we need to think about taking the solution out of service or replacing it with something better. Some technologies (such as blockchain) are designed on the assumption of eternity and immutability, and we are stuck for good or ill with our original design choices, as @moniquebachner pointed out at a FinTech event I attended last year. With robots, people always worry whether we can ever switch the things off.

Other technologies may be just as sticky. Consider the QWERTY keyboard, which was designed to slow the typist down to prevent the letters on a manual typewriter from jamming. The laptop computer on which I am writing this paragraph has a QWERTY keyboard.

Just as the responsible design of physical products needs to consider the end of use, and the recycling or disposal of the materials, so technological solutions need graceful termination.

Note that decommissioning doesn't necessarily remove the need for continued monitoring and investigation. If a drug is withdrawn following safety concerns, the people who took the drug will still need to be monitored; similar considerations may apply for other technological innovations as well.


Final Remarks

As already indicated, this is just an outline (plan). The detailed design may include checklists and simple tools, standards and guidelines, illustrations and instructions, as well as customized versions for different development methodologies and different classes of product. And I am hoping to find some opportunities to pilot the approach.

There are already some standards existing or under development to address specific areas here. For example, I have seen some specific proposals circulating for accident investigation, with suggested mechanisms to provide transparency to accident investigators. Hopefully the activity framework outlined here will provide a useful context for these standards.

Comments and suggestions for improving this framework always welcome.


Notes and References

For my use of the term Activity Viewpoint, see my blogpost Six Views of Business Architecture, and my eBook Business Architecture Viewpoints.


John P. Eberhard, "We Ought to Know the Difference," Emerging Methods in Environmental Design and Planning, Gary T. Moore, ed. (MIT Press, 1970) pp 364-365. See my blogpost We Ought To Know The Difference (April 2013)

Amany Elbanna and Mike Newman, The rise and decline of the ETHICS methodology of systems implementation: lessons for IS research (Journal of Information Technology 28, 2013) pp 124–136

Regulator Links: What is a DPIA? (ICO), Yellow Card Scheme (MHRA)

Wikipedia: Diffusion of Responsibility, Separation of Concerns, Somebody Else's Problem 

Wednesday, May 29, 2019

Responsible Beta Testing

Professor Luciano Floridi has recently made clear his opposition to irresponsible beta testing, by which he means "trying something without exactly knowing what one is doing, to see what happens, on people who may not have volunteered to be the subjects of the test at all".

In a private communication, Professor Floridi indicates that he would regard some of the recent experiments in facial recognition as examples of irresponsible beta testing. Obviously if the police are going to arrest people who decline to participate, these are not exactly willing volunteers.

Some of the tech giants have got into the habit of releasing unreliable software to willing volunteers, and calling this a "beta programme". There are also voices in favour of something called permanent beta, which some writers regard as a recipe, not just for technology but also for living in a volatile world. So the semantics of "beta" has become thoroughly unclear.

However, I think this kind of activity does not represent the original purpose of beta testing, which was the testing of a product without the development team being present. Let me call this responsible beta testing. While it is understood that beta testing cannot be guaranteed to find all the problems in a new product, it typically uncovers problems that other forms of verification and validation have missed, so it is generally regarded as a useful approach for many classes of system, and probably essential for safety-critical systems.

This is how this might work in a robotic context. Let's suppose you are building a care robot for the elderly. Before you put the robot into production, you are going to want to test it thoroughly - perhaps first with able-bodied volunteers, then with some elderly volunteers. During the testing, the robot will be surrounded with engineers and other observers, who will be able to intervene to protect the volunteer in the event of any unexpected or inappropriate behaviour on the part of the robot. Initially, this testing may take place in the lab with the active participation of the development team, but at least some of this testing would need to take place in a real care home without the development team being present. This may be called beta-testing or field testing. It certainly cannot be regarded as merely "trial and error".
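
In software terms, the discipline described above might look something like the following sketch, which assumes a robot object with step() and stop() methods (hypothetical): a human observer can halt the trial at any point, and every unexpected behaviour is recorded for the absent development team.

```python
# Sketch of a supervised field-test session with a human-in-the-loop veto.
def run_beta_session(robot, observer_approves, max_steps=1000):
    """robot.step() returns a dict describing the action; robot.stop() halts it."""
    incident_log = []
    for step in range(max_steps):
        action = robot.step()
        if not observer_approves(action):              # observer can veto at any point
            robot.stop()
            incident_log.append((step, action, "halted by observer"))
            break
        if action.get("unexpected"):
            incident_log.append((step, action, "logged, trial continued"))
    return incident_log                                # reported back to the developers
```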

For medical devices, this kind of testing is called a clinical trial, and there are strict regulations about how this should be done, including consent from those taking part in the trial based on adequate information and explanation, proper reporting of the results and any unexpected or unwanted effects, and with the ability to halt the trial early if necessary. It might be possible to establish similar codes of practice or even regulations for testing other classes of technology, including robotics.


(In the same communication, Professor Floridi fully agrees with the distinction made here, and affirms the crucial importance of responsible beta testing.)


Links

Lizzie Dearden, Police stop people for covering their faces from facial recognition camera then fine man £90 after he protested (Independent, 31 January 2019)

This blogpost forms part of an ongoing project to articulate Responsibility by Design. See Responsibility by Design - Activity View (May 2019)


Friday, May 10, 2019

The Ethics of Interoperability

In many contexts (such as healthcare) interoperability is considered to be a Good Thing. Johns and Stead argue that "we have an ethical obligation to develop and implement plug-and-play clinical devices and information technology systems", while Olaronke and Olusola point out some of the ethical challenges produced by such interoperability, including "data privacy, confidentiality, control of access to patients’ information, the commercialization of de-identified patients’ information and ownership of patients’ information". All these authors agree that interoperability should be regarded as an ethical issue.

Citing interoperability as an example of a technical standard, Alan Winfield argues that "all standards can ... be thought of as implicit ethical standards". He also includes standards that promote "shared ways of doing things", "expressing the values of cooperation and harmonization".

But should cooperation and harmonization be local or global? American standards differ from European standards in so many ways - voltages and plugs, paper sizes, writing the date the wrong way. Is the local/global question also an ethical one?

One problem with interoperability is that it is often easy to find places where additional interoperability would deliver some benefits to some stakeholders. However, if we keep adding more interoperability, we may end up with a hyperconnected system of systems that is vulnerable to unpredictable global shocks, and where fundamental structural change becomes almost impossible. The global financial systems may be a good example of this.

So the ethics of interoperability is linked with the ethics of other whole-system properties, including complexity and stability. See my posts on Efficiency and Robustness. Each of these whole-system properties may be the subject of an architectural principle. And, following Alan's argument, an (implicitly) ethical standard.

With interoperability, there are questions of degree as well as of kind. We often distinguish between tight coupling and loose coupling, and there are important whole-system properties that may depend on the degree of coupling. (This is a subject that I have covered extensively elsewhere.)
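
A toy example of the difference in degree of coupling: in the tightly coupled version, the consumer depends on the internals of one specific system; in the loosely coupled version, both sides depend only on a small agreed interface, so either can change independently. The class and method names are invented for illustration.

```python
# Invented example: tight versus loose coupling between two healthcare systems.
class HospitalSystemV2:
    def fetch_patient_record_v2(self, nhs_number):
        return {"nhs_number": nhs_number, "events": ["MI 2013-05-21"]}

# Tight coupling: the consumer calls a version-specific method of one specific system.
def gp_view_tight(hospital: HospitalSystemV2, nhs_number):
    return hospital.fetch_patient_record_v2(nhs_number)

# Loose coupling: both sides agree only on a minimal interface.
class PatientRecordSource:
    def get_record(self, nhs_number): ...

class HospitalAdapter(PatientRecordSource):
    def __init__(self, hospital):
        self.hospital = hospital
    def get_record(self, nhs_number):
        return self.hospital.fetch_patient_record_v2(nhs_number)

def gp_view_loose(source: PatientRecordSource, nhs_number):
    return source.get_record(nhs_number)

print(gp_view_loose(HospitalAdapter(HospitalSystemV2()), "123 456 7890"))
```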

What if we apply the ethics of interoperability to complex systems of systems involving multiple robots? Clearly, coordination between robots might be necessary in some situations to avoid harm to humans. So the ethics of interoperability should include the potential communication or interference between heterogeneous robots, and this raises the topic of deconfliction - not just for airborne robots (drones) but for any autonomous vehicles. Clearly deconfliction is another (implicitly) ethical issue.
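
To illustrate what deconfliction might involve at the simplest level, here is a hypothetical sketch in which two autonomous vehicles publish their planned waypoints in a shared format (itself an interoperability question) and the lower-priority vehicle yields when the plans come too close. Real deconfliction (see Castillo-Effen and Visnevski) is of course far more sophisticated.

```python
# Toy deconfliction check between two vehicles sharing plans in a common format.
def conflicts(plan_a, plan_b, min_separation=5.0, time_window=1.0):
    """Each plan is a list of (time, x, y) waypoints."""
    for (ta, xa, ya) in plan_a:
        for (tb, xb, yb) in plan_b:
            close_in_time = abs(ta - tb) < time_window
            close_in_space = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 < min_separation
            if close_in_time and close_in_space:
                return True
    return False

def deconflict(plan_a, priority_a, plan_b, priority_b, delay=2.0):
    if not conflicts(plan_a, plan_b):
        return plan_a, plan_b
    if priority_a >= priority_b:                  # the lower-priority vehicle yields
        return plan_a, [(t + delay, x, y) for (t, x, y) in plan_b]
    return [(t + delay, x, y) for (t, x, y) in plan_a], plan_b
```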



Note: for a more detailed account of the relationship between interoperability and deconfliction, see my paper with Philip Boxer on Taking Governance to the Edge (Microsoft Architecture Journal, August 2006). For deconfliction in relation to Self-Driving Cars, see my post Whom does the technology serve? (May 2019).



Mauricio Castillo-Effen and Nikita Visnevski, Analysis of autonomous deconfliction in Unmanned Aircraft Systems for Testing and Evaluation (IEEE Aerospace conference 2009)

Michael M. E. Johns and William Stead, Interoperability is an ethical issue (Becker's Hospital Review, 15 July 2015)

Milecia Matthews, Girish Chowdhary and Emily Kieson, Intent Communication between Autonomous Vehicles and Pedestrians (2017)

Iroju Olaronke and Olaleke Janet Olusola, Ethical Issues in Interoperability of Electronic Healthcare Systems (Communications on Applied Electronics, Vol 1 No 8, May 2015)

Richard Veryard, Component-Based Business (Springer 2001)

Richard Veryard, Business Adaptability and Adaptation in SOA (CBDI Journal, February 2004)

Alan Winfield, Ethical standards in robotics and AI (Nature Electronics Vol 2, February 2019) pp 46-48



Related posts: Deconfliction and Interoperability (April 2005), Loose Coupling (July 2005), Efficiency and Robustness (September 2005), Making the world more open and connected (March 2018), Whom does the technology serve? (May 2019)

Monday, April 22, 2019

When the Single Version of Truth Kills People

@Greg_Travis has written an article on the Boeing 737 Max Disaster, which @jjn1 describes as "one of the best pieces of technical writing I’ve seen in ages". He explains why normal airplane design includes redundant sensors.

"There are two sets of angle-of-attack sensors and two sets of pitot tubes, one set on either side of the fuselage. Normal usage is to have the set on the pilot’s side feed the instruments on the pilot’s side and the set on the copilot’s side feed the instruments on the copilot’s side. That gives a state of natural redundancy in instrumentation that can be easily cross-checked by either pilot. If the copilot thinks his airspeed indicator is acting up, he can look over to the pilot’s airspeed indicator and see if it agrees. If not, both pilot and copilot engage in a bit of triage to determine which instrument is profane and which is sacred."

and redundant processors, to guard against a Single Point of Failure (SPOF).

"On the 737, Boeing not only included the requisite redundancy in instrumentation and sensors, it also included redundant flight computers—one on the pilot’s side, the other on the copilot’s side. The flight computers do a lot of things, but their main job is to fly the plane when commanded to do so and to make sure the human pilots don’t do anything wrong when they’re flying it. The latter is called 'envelope protection'."

But ...

"In the 737 Max, only one of the flight management computers is active at a time—either the pilot’s computer or the copilot’s computer. And the active computer takes inputs only from the sensors on its own side of the aircraft."

As a result of this design error, 346 people are dead. Travis doesn't pull his punches.

"It is astounding that no one who wrote the MCAS software for the 737 Max seems even to have raised the possibility of using multiple inputs, including the opposite angle-of-attack sensor, in the computer’s determination of an impending stall. As a lifetime member of the software development fraternity, I don’t know what toxic combination of inexperience, hubris, or lack of cultural understanding led to this mistake."

He may not know what led to this specific mistake, but he can certainly see some of the systemic issues that made this mistake possible. Among other things, the widespread idea that software provides a cheaper and quicker fix than getting the hardware right, together with what he calls cultural laziness.

"Less thought is now given to getting a design correct and simple up front because it’s so easy to fix what you didn’t get right later."

Agile, huh?


Update: CNN finds an unnamed Boeing spokesman to defend the design.

"Single sources of data are considered acceptable in such cases by our industry".

OMG, does that mean that there are more examples of SSOT elsewhere in the Boeing design!?




How a Single Point of Failure (SPOF) in the MCAS software could have caused the Boeing 737 Max crash in Ethiopia (DMD Solutions, 5 April 2019) - provides a simple explanation of Fault Tree Analysis (FTA) as a technique to identify SPOF.

Mike Baker and Dominic Gates, Lack of redundancies on Boeing 737 MAX system baffles some involved in developing the jet (Seattle Times 26 March 2019)

Curt Devine and Drew Griffin, Boeing relied on single sensor for 737 Max that had been flagged 216 times to FAA (CNN, 1 May 2019) HT @marcusjenkins

George Leopold, Boeing 737 Max: Another Instance of 'Go Fever'? (29 March 2019)

Mary Poppendieck, What If Your Team Wrote the Code for the 737 MCAS System? (4 April 2019) HT @CharlesTBetz with reply from @jpaulreed

Gregory Travis, How the Boeing 737 Max Disaster Looks to a Software Developer (IEEE Spectrum, 18 April 2019) HT @jjn1 @ruskin147

And see my other posts on the Single Source of Truth.


Updated 2 May 2019

Sunday, April 21, 2019

How Many Ethical Principles?

Although ethical principles have been put forward by philosophers through the ages, the first person to articulate ethical principles for information technology was Norbert Wiener. In his book The Human Use of Human Beings, first published in 1950, Wiener based his computer ethics on what he called four great principles of justice.
Freedom. Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”  
Equality. Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.” 
Benevolence. Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”  
Minimum Infringement of Freedom. “What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom.”

Meanwhile, Isaac Asimov's Three Laws of Robotics were developed in a series of short stories in the 1940s, so this was around the same time that Wiener was developing his ideas about cybernetics. Many writers on technology ethics argue that robots (or any other form of technology) should be governed by principles, and this idea is often credited to Asimov. But as far as I can recall, in every Asimov story that mentions the Three Laws of Robotics, some counter-example is produced to demonstrate that the Three Laws don't actually work as intended. I have therefore always regarded Asimov's work as being satirical rather than prescriptive. (Similarly J.K. Rowling's descriptions of the unsuccessful attempts by wizard civil servants to regulate the use of magical technologies.)

So for several decades, the Wiener approach to ethics prevailed, and discussion of computer ethics was focused on a common set of human values: life, health, security, happiness, freedom, knowledge, resources, power and opportunity. (Source: SEP: Computer and Information Ethics)

But these principles were essentially no different to the principles one would find in any other ethical domain. For many years, scholars disagreed as to whether computer technology introduced an entirely new set of ethical issues, and therefore called for a new set of principles. The turning point was at the ETHICOMP1995 conference in March 1995 (just two months before Bill Gates' Internet Tidal Wave memo), with important presentations from Walter Maner (who had been arguing this point for years) and Krystyna Górniak-Kocikowska. From this point onwards, computer ethics would have to address some additional challenges, including the global reach of the technology - beyond the control of any single national regulator - and the vast proliferation of actors and stakeholders. Terrell Bynum calls this the Górniak hypothesis.

Picking up the challenge, Luciano Floridi started to look at the ethical issues raised by autonomous and interactive agents in cyberspace. In a 2001 paper on Artificial Evil with Jeff Sanders, he stated "It is clear that something similar to Asimov's Laws of Robotics will need to be enforced for the digital environment (the infosphere) to be kept safe."

Floridi's work on Information Ethics (IE) represented an attempt to get away from the prevailing anthropocentric worldview. "IE suggests that there is something even more elemental than life, namely being – that is, the existence and flourishing of all entities and their global environment – and something more fundamental than suffering, namely entropy." He therefore articulated a set of principles concerning ontological equality (any instance of information/being enjoys a minimal, initial, overridable, equal right to exist and develop in a way which is appropriate to its nature) and information entropy (which ought not to be caused in the infosphere, ought to be prevented, ought to be removed). (Floridi 2006)

In the past couple of years, there has been a flood of ethical principles to choose from. In his latest blogpost, @Alan_Winfield lists over twenty sets of principles for robotics and AI published between January 2017 and April 2019, while AlgorithmWatch lists over fifty. Of particular interest may be the principles published by some of the technology giants, as well as the absence of such principles from some of the others. Meanwhile, Professor Floridi's more recent work on ethical principles appears to be more conventionally anthropocentric.

The impression one gets from all these overlapping sets of principles is of lots of experts and industry bodies competing to express much the same ideas in slightly different terms, in the hope that their version will be adopted by everyone else.

But what would "adopted" actually mean? One possible answer is that these principles might feed into what I call Upstream Ethics, contributing to a framework for both regulation and action. However some commentators have expressed scepticism as to the value of these principles. For example, @InternetDaniel thinks that these lists of ethical principles are "too vague to be effective", and suggests that this may even be intentional, these efforts being "largely designed to fail". And @EricNewcomer says "we're in a golden age for hollow corporate statements sold as high-minded ethical treatises".

As I wrote in an earlier piece on principles:
In business and engineering, as well as politics, it is customary to appeal to "principles" to justify some business model, some technical solution, or some policy. But these principles are usually so vague that they provide very little concrete guidance. Profitability, productivity, efficiency, which can mean almost anything you want them to mean. And when principles interfere with what we really want to do, we simply come up with a new interpretation of the principle, or another overriding principle, which allows us to do exactly what we want while dressing up the justification in terms of "principles". (January 2011)

The key question is about governance - how will these principles be applied and enforced, and by whom? What many people forget about Asimov's Three Laws of Robotics is that they weren't enforced by roving technology regulators, but were designed into the robots themselves, thanks to the fact that one corporation (U.S. Robots and Mechanical Men, Inc) had control over the relevant patents and therefore exercised a monopoly over the manufacture of robots. No doubt Google, IBM and Microsoft would like us to believe that they can be trusted to produce ethically safe products, but clearly this doesn't address the broader problem.

Following the Górniak hypothesis, if these principles are to mean anything, they need to be taken seriously not only by millions of engineers but also by billions of technology users. And I think this in turn entails something like what Julia Black calls Decentred Regulation, which I shall try to summarize in a future post. Hopefully this won't be just what Professor Floridi calls Soft Ethics.

Update: My post on Decentred Regulation and Responsible Technology is now available.



Algorithm Watch, AI Ethics Guidelines Global Inventory

Terrell Bynum, Computer and Information Ethics (Stanford Encyclopedia of Philosophy)

Luciano Floridi, Information Ethics - Its Nature and Scope (ACM SIGCAS Computers and Society, September 2006)

Luciano Floridi and Tim (Lord) Clement-Jones, The five principles key to any ethical framework for AI (New Statesman, 20 March 2019)

Luciano Floridi and J.W. Sanders, Artificial evil and the foundation of computer ethics (Ethics and Information Technology 3: 55–66, 2001)

Eric Newcomer, What Google's AI Principles Left Out (Bloomberg 8 June 2018)

Daniel Susser, Ethics Alone Can’t Fix Big Tech (Slate, 17 April 2019)

Alan Winfield, An Updated Round Up of Ethical Principles of Robotics and AI (18 April 2019)


Wikipedia: Laws of Robotics, Three Laws of Robotics


Related posts: The Power of Principles (Not) (January 2011), Data and Intelligence Principles from Major Players (June 2018), Ethics Soft and Hard (February 2019), Upstream Ethics (March 2019), Ethics Committee Raises Alarm (April 2019), Decentred Regulation and Responsible Technology (April 2019)

Link corrected 26 April 2019

Tuesday, April 16, 2019

Is there a Single Version of Truth about Statins?

@bengoldacre provides some useful commentary on a BBC news item about statins. In particular, he notes a detail from the original research paper that didn't make it into the BBC news item - namely the remarkable lack of agreement between GPs and hospitals as to whether a given patient had experienced a cardiovascular event.

This is not a new observation: it was analysed in a 2013 paper by Emily Herrett and others. Dr Goldacre advised a previous Health Minister that "different data sources within the NHS were wildly discrepant wrt to the question of something as simple as whether a patient had had a heart attack". The minister asked which source was right - in other words, asking for a single source of truth. But the point is that there isn't one.

Data quality issues can be traced to a number of causes. While some of the issues may be caused by administrative or technical errors and omissions, others are caused by the way the data are recorded in the first place. This is why the comparison of health data between different countries is often misleading - because despite international efforts to standardize classification, different healthcare regimes still code things differently. And despite the huge amounts of NHS money thrown at IT projects to standardize medical records (as documented by @tonyrcollins), the fact remains that primary and secondary healthcare view the patient completely differently.
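
An alternative to electing one source as the truth is to treat the sources as evidence to be compared, surfacing the discrepancies rather than resolving them by decree. A minimal sketch, with invented source names and record format:

```python
# Illustrative comparison of multiple record sources for one patient.
def compare_sources(patient_id, sources):
    """sources: dict of source name -> set of recorded events for this patient."""
    all_events = set().union(*sources.values())
    report = {}
    for event in sorted(all_events):
        report[event] = [name for name, events in sources.items() if event in events]
    return report   # discrepancies are surfaced, not resolved by fiat

print(compare_sources("patient-123", {
    "GP record": {"MI 2013-05-21"},
    "hospital record": set(),
    "disease registry": {"MI 2013-05-21"},
}))
```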

See my previous blogposts on Single Source of Truth


Tony Collins, Another NPfIT IT scandal in the making? (Campaign4Change, 9 February 2016)

Emily Herrett et al, Completeness and diagnostic validity of recording acute myocardial infarction events in primary care, hospital care, disease registry, and national mortality records: cohort study (BMJ 21 May 2013)

Michelle Roberts, Statins 'don't work well for one in two people' (BBC News, 15 April 2019)

Benoît Salanave et al, Classification differences and maternal mortality: a European study (International Journal of Epidemiology 28, 1999) pp 64–69

Saturday, February 02, 2019

What is a framework?

The term "framework" is much abused. Here's a statement picked at random from the Internet:
"By bringing these key aspects together on a process-oriented strategy, organizations are able to acquire agility and improve the ability to engage on a much more effective way. This translates into a solid, resilient and innovative framework that drives the ability to continuously improve and assure sustainability."
Does that actually mean anything? What does it take for a "framework" to be simultaneously solid and resilient, let alone innovative?

If you take a random set of concepts - social, mobile, analytics, cloud, internet of things - you can string the initial letters into a slightly suggestive acronym - SMACIT. But have you got a framework?

And if you take a random set of ideas and arrange them artistically, you can create a nice diagram. But have you got a framework, or is this just Methodology by PowerPoint?


I just did an image search for "framework" on Google. Here are the top ten examples. Pretty typical of the genre.


In 1987, John Zachman published an article in the IBM Systems Journal, entitled "A Framework for Information Systems Architecture", hypothesizing a set of architectural representations for information systems, arranged in a table. The international standard ISO/IEC 42010 defines an architectural framework as any set of architectural descriptions or viewpoints satisfying certain conditions, and the Zachman Framework is usually cited as a canonical example of this, along with RM-ODP and a selection of AFs (MOD, DOD, TOG, etc.).

But there are a lot of so-called frameworks that don't satisfy this definition. Sometimes there is just a fancy diagram, which makes less sense the more you look at it. Let's look at the top example more closely.



The webpage asserts that "the summary graphic illustrates the main relationships between each heading and relevant sub-headings". Sorry, it doesn't. What does this tell me about the relationship between Knowledge and Governance? If Advocacy is about promoting things, does that mean that Knowledge is about preventing things? And if Prevent, Protect and Promote are verbs, does this mean that People is also a verb? I'm sure there is a great deal of insight and good intentions behind this diagram, but the diagram itself owes more to graphic design than systems thinking. All I can see in this "systems framework" is (1) some objectives (2) a summary diagram and (3) a list. If that's really all there is, I can't see how such a framework "can be used as a flexible assessment, planning and evaluation tool for policy-making".

For clarification, I think there is an important difference between a framework and a lens. A lens provides a viewpoint - a set of things that someone thinks you should pay attention to - but its value doesn't depend on its containing everything. VPEC-T is a great lens, but is Wikipedia right in characterizing it as a thinking framework?



Commonwealth Health Hub, A systems framework for healthy policy (31 October 2016)

Filipe Janela, The 3 cornerstones of digital transformation (Processware, 30 June 2017)

Anders Jensen-Waud, On Enterprise Taxonomy Completeness (9 April 2010)

John Zachman, A Framework for Information Systems Architecture (IBM Systems Journal, Vol 26 No 3, 1987).

Wikipedia: ISO/IEC 42010, VPEC-T

Related posts: What's Missing from VPEC-T (September 2009), Evolving the Enterprise Architecture Body of Knowledge (October 2012), Arguing with Mendeleev (March 2013)


Updated 28 February 2019 
Added @ricphillips' tweet. See discussion following. I also remembered an old discussion with Anders Jensen-Waud.

Tuesday, November 06, 2018

Big Data and Organizational Intelligence

Ten years ago, the editor of Wired Magazine published an article claiming the end of theory. "With enough data, the numbers speak for themselves."

The idea that data (or facts) speak for themselves, with no need for interpretation or analysis, is a common trope. It is sometimes associated with a legal doctrine known as Res Ipsa Loquitur - the thing speaks for itself. However this legal doctrine isn't about truth but about responsibility: if a surgeon leaves a scalpel inside the patient, this fact alone is enough to establish the surgeon's negligence.

Or even that the world speaks for itself. The world, someone once asserted, is all that is the case, the totality of facts, not of things. Paradoxically, big data often means very large quantities of very small (atomic) data.

But data, however big, does not provide a reliable source of objective truth. This is one of the six myths of big data identified by Kate Crawford, who points out, "data and data sets are not objective; they are creations of human design". In other words, we don't just build models from data, we also use models to obtain data. This is linked to Piaget's account of how children learn to make sense of the world in terms of assimilation and accommodation. (Piaget called this Genetic Epistemology.)

Data also cannot provide explanation or understanding. Data can reveal correlation but not causation. Which is one of the reasons why we need models. As Kate Crawford also observes, "we get a much richer sense of the world when we ask people the why and the how not just the how many".

In the traditional world of data management, there is much emphasis on the single source of truth. Michael Brodie (who knows a thing or two about databases), while acknowledging the importance of this doctrine for transaction systems such as banking, argues that it is not appropriate everywhere. "In science, as in life, understanding of a phenomenon may be enriched by observing the phenomenon from multiple perspectives (models). ... Database products do not support multiple models, i.e., the reality of science and life in general." One approach Brodie talks about to address this difficulty is ensemble modelling: running several different analytical models and comparing or aggregating the results. (I referred to this idea in my post on the Shelf-Life of Algorithms.)
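
Here is a minimal sketch of ensemble modelling, using scikit-learn (assumed available): several different models are trained on the same data, their predictions are aggregated by majority vote, and the cases where they disagree are surfaced rather than hidden.

```python
# Minimal ensemble modelling sketch: run several different models and compare.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, y_train, X_test = X[:400], y[:400], X[400:]

models = [LogisticRegression(max_iter=1000),
          DecisionTreeClassifier(max_depth=5),
          RandomForestClassifier(n_estimators=50, random_state=0)]

predictions = []
for model in models:
    model.fit(X_train, y_train)
    predictions.append(model.predict(X_test))

# Aggregate by majority vote, and make the disagreements visible.
for i, votes in enumerate(zip(*predictions)):
    verdict = 1 if sum(votes) > len(models) / 2 else 0
    if len(set(votes)) > 1:
        print(f"case {i}: models disagree {votes}, majority says {verdict}")
```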

Along with the illusion that what the data tells you is true, we can identify two further illusions: that what the data tells you is important, and that what the data doesn't tell you is not important. These are not just illusions of big data of course - any monitoring system or dashboard can foster them. The panopticon affects not only the watched but also the watcher.

From the perspective of organizational intelligence, the important point is that data collection, sensemaking, decision-making, learning and memory form a recursive loop - each inextricably based on the others. An organization only perceives what it wants to perceive, and this depends on the conceptual models it already has - whether these are explicitly articulated or unconsciously embedded in the culture. Which is why real diversity - in other words, genuine difference of perspective, not just bureaucratic profiling - is so important, because it provides the organizational counterpart to the ensemble modelling mentioned above.





Chris Anderson, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete (Wired, 23 June 2008)

Michael L Brodie, Why understanding of truth is important in Data Science? (KD Nuggets, January 2018)

Kate Crawford, The Hidden Biases in Big Data (HBR, 1 April 2013)

Kate Crawford, The Anxiety of Big Data (New Inquiry, 30 May 2014)

Bruno Gransche, The Oracle of Big Data – Prophecies without Prophets (International Review of Information Ethics, Vol. 24, May 2016)

Thomas McMullan, What does the panopticon mean in the age of digital surveillance? (Guardian, 23 July 2015)

Evelyn Ruppert, Engin Isin and Didier Bigo, Data politics (Big Data and Society, July–December 2017: 1–7)

Ian Steadman, Big Data and the Death of the Theorist (Wired, 25 January 2013)

Ludwig Wittgenstein, Tractatus Logico-Philosophicus (1922)


Related posts

Information Algebra (March 2008)
How Dashboards Work (November 2009)
Co-Production of Data and Knowledge (November 2012)
Real Criticism - The Subject Supposed to Know (January 2013)
The Purpose of Diversity (December 2014)
The Shelf-Life of Algorithms (October 2016)
The Transparency of Algorithms (October 2016)


Wikipedia: Ensemble Learning, Genetic Epistemology, Panopticism, Res ipsa loquitur (the thing speaks for itself)

Stanford Encyclopedia of Philosophy: Kant and Hume on Causality


For more on Organizational Intelligence, please read my eBook.
https://leanpub.com/orgintelligence/

Sunday, November 04, 2018

On Repurposing AI

With great power, as they say, comes great responsibility.


In London this week for Microsoft's Future Decoded event, according to reporter @richard_speed of @TheRegister, Satya Nadella asserted that an AI trained for one purpose being used for another was "an unethical use".

If Microsoft really believes this, it would certainly be a radical move. In April this year Mark Russinovich, Azure CTO, gave a presentation at the RSA Conference on Transfer Learning: Repurposing ML Algorithms from Different Domains to Cloud Defense.

Repurposing data and intelligence - using AI for a different purpose to its original intent - may certainly have ethical consequences. This doesn't necessarily mean it's wrong, simply that the ethics must be reexamined. Responsibility by design (like privacy by design, from which it inherits some critical ideas) considers a design project in relation to a specific purpose and use-context. So if the purpose and context change, it is necessary to reiterate the responsibility-by-design process.

A good analogy would be the off-label use of medical drugs. There is considerable discussion on the ethical implications of this very common practice. For example, Furey and Wilkins argue that off-label prescribing imposes additional responsibilities on a medical practitioner, including weighing the available evidence and proper disclosure to the patient.

There are often strong arguments in favour of off-label prescribing (in medicine) or transfer learning (in AI). Where a technology provides some benefit to some group of people, there may be good reasons for extending these benefits. For example, Rachel Silver argues that transfer learning has democratized machine learning, lowered the barriers to entry, thus promoting innovation. Interestingly, there seem to be some good examples of transfer learning in AI for medical purposes.
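
For readers unfamiliar with the mechanics, here is a hedged sketch of transfer learning using Keras (assumed available): a network trained for one purpose (ImageNet classification) is reused as a feature extractor for a different task. The point, from a responsibility-by-design perspective, is that the new purpose and context need their own risk assessment, even though most of the model is inherited.

```python
# Sketch of transfer learning: reuse a pretrained network for a new purpose.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False                     # keep the features learned for the old purpose

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # new head for the new purpose
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(new_task_images, new_task_labels, epochs=5)   # data from the new context
```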

However, transfer learning in AI raises some ethical concerns: not only the potential consequences for people affected by the repurposed algorithms, but also potential sources of error. For example, Wang and others identify a potential vulnerability to misclassification attacks.

There are also some questions of knowledge ownership and privacy that were relevant to older modes of knowledge transfer (see for example Baskerville and Dulipovici).



By the way, if you thought the opening quote was a reference to Spiderman, Quote Investigator has traced a version of it to the French Revolution. Other versions have been attributed to various statesmen, including Churchill and Roosevelt.


Richard Baskerville and Alina Dulipovici, The Ethics of Knowledge Transfers and Conversions: Property or Privacy Rights? (HICSS'06: Proceedings of the 39th Annual Hawaii International Conference on System Sciences, 2006)

Katrina Furey and Kirsten Wilkins, Prescribing “Off-Label”: What Should a Physician Disclose? (AMA Journal of Ethics, June 2016)

Marian McHugh, Microsoft makes things personal at this year's Future Decoded (Channel Web, 2 November 2018)

Rachel Silver, The Secret Behind the New AI Spring: Transfer Learning (TDWI, 24 August 2018)

Richard Speed, 'Privacy is a human right': Big cheese Sat-Nad lays out Microsoft's stall at Future Decoded (The Register, 1 November 2018)

Bolun Wang et al, With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning (Proceedings of the 27th USENIX Security Symposium, August 2018)


See also Off-Label (March 2005)