Showing posts with label ResponsibilityByDesign. Show all posts

Sunday, July 14, 2019

Trial by Ordeal

Some people think that ethical principles only apply to implemented systems, and that experimental projects (trials, proofs of concept, and so on) don't need the same level of transparency and accountability.

Last year, Google employees (as well as US senators from both parties) expressed concern about Google's Dragonfly project, which appeared to collude with the Chinese government in censorship and suppression of human rights. A secondary concern was that Dragonfly was conducted in secrecy, without involving Google's privacy team.  

Google's official position (led by CEO Sundar Pichai) was that Dragonfly was "just an experiment". Jack Poulson, who left Google last year over this issue and has now started a nonprofit organization called Tech Inquiry, has also seen this pattern in other technology projects.
"I spoke to coworkers and they said 'don’t worry, by the time the thing launches, we'll have had a thorough privacy review'. When you do R and D, there's this idea that you can cut corners and have the privacy team fix it later." (via Alex Hern)
A few years ago, Microsoft Research ran an experiment on "emotional eating", which involved four female employees wearing smart bras. "Showing an almost shocking lack of sensitivity for gender stereotyping", wrote Sebastian Anthony. While I assume that the four subjects willingly volunteered to participate in this experiment, and I hope the privacy of their emotional data was properly protected, it does seem to reflect the same pattern - that you can get away with things in the R and D stage that would be highly problematic in a live product.

Poulson's position is that the engineers working on these projects bear some responsibility for the outcomes, and that they need to see that the ethical principles are respected. He therefore demands transparency to avoid workers being misled. He also notes that if the ethical considerations are deferred to a late stage of a project, with the bulk of the development costs already incurred and many stakeholders now personally invested in the success of the project, the pressure to proceed quickly to launch may be too strong to resist.




Sebastian Anthony, Microsoft’s new smart bra stops you from emotionally overeating (Extreme Tech, 9 December 2013)

Erin Carroll et al, Food and Mood: Just-in-Time Support for Emotional Eating (Humaine Association Conference on Affective Computing and Intelligent Interaction, 2013)

Ryan Gallagher, Google’s Secret China Project “Effectively Ended” After Internal Confrontation (The Intercept, 17 December 2018)

Alex Hern, Google whistleblower launches project to keep tech ethical (Guardian, 13 July 2019)

Casey Michel, Google’s secret ‘Dragonfly’ project is a major threat to human rights (Think Progress, 11 Dec 2018)

Iain Thomson, Microsoft researchers build 'smart bra' to stop women's stress eating (The Register, 6 Dec 2013) 

 

Related posts: Have you got Big Data in your Underwear? (December 2014), Affective Computing (March 2019)

Thursday, May 30, 2019

Responsibility by Design - Activity View

In my ongoing work on #TechnologyEthics, I have identified Five Elements of Responsibility by Design. One of these elements is what I'm calling the Activity View - defining effective and appropriate action at different points in the lifecycle of a technological innovation or product - who does what when. (Others may wish to call it the Process View.)

So in this post, I shall sketch some of the things that may need to be done at each of the following points: planning and requirements; risk assessment; design; verification, validation and test; deployment and operation; incident management; decommissioning. For the time being, I shall assume that these points can be interpreted within any likely development or devops lifecycle, be it sequential ("waterfall"), parallel, iterative, spiral, agile, double diamond or whatever.

Please note that this is an incomplete sketch, and I shall continue to flesh this out.


Planning and Requirements

This means working out what you are going to do, how you are going to do it, who is going to do it, who is going to pay for it, and who is going to benefit from it. What is the problem or opportunity you are addressing, and what kind of solution / output are you expecting to produce? It also means looking at the wider context - for example, exploring potential synergies with other initiatives.

The most obvious ethical question here is to do with the desirability of the solution. What is the likely impact of the solution on different stakeholders, and can this be justified? This is often seen in terms of an ethical veto - should we do this at all - but it is perhaps equally valid to think of it in more positive terms - could we do more?

But who gets to decide on desirability - in other words, whose notion of desirability counts - is itself an ethical question. So ethical planning includes working out who shall have a voice in this initiative, and how this voice shall be heard, making sure the stakeholders are properly identified and given a genuine stake. This was always a key element of participative design methodologies such as Enid Mumford's ETHICS method.

Planning also involves questions of scope and interoperability - how is the problem space divided up between multiple separate initiatives, to what extent do these separate initiatives need to be coordinated, and are there any deficiencies in coverage or resource allocation? See my post on the Ethics of Interoperability.

For example, an ethical review might question why medical devices were being developed for certain conditions and not others, or why technologies developed for the police were concentrated on certain categories of crime, and what the social implications of this might be. Perhaps the ethical judgement could be that a solution proposed for condition/crime X can be developed provided that there is a commitment to develop similar solutions for Y and Z. In other words, a full ethical review should look at what is omitted from the plan as well as what is included.

There may be ethical implications of organization and method, especially on large complicated developments involving different teams in different jurisdictions. Chinese Walls, Separation of Concerns, etc.

In an ethical plan, responsibility will be clear and not diffused. Just saying "we are all responsible" is naive and unhelpful. We all know what this looks like: an individual engineer raises an issue to get it off her conscience, a busy project manager marks the issue as "non-critical", the product owner regards the issue as a minor technicality, and so on. I can't even be bothered to explain what's wrong with this, I'll let you look it up in Wikipedia, because it's Somebody Else's Problem.

Regardless of the development methodology, most projects start with a high-level plan, filling in the details as they go along, and renegotiating with the sponsors and other stakeholders for significant changes in scope, budget or timescale. However, some projects are saddled with commercial, contractual or political constraints that make the plans inflexible, and this inflexibility typically generates unethical behaviours (such as denial or passing the buck).

In short, ethical planning is about making sure you are doing the right things, and doing them right. 


Risk Assessment

The risk assessment and impact analysis may often be done at the same time as the planning, but I'm going to regard it as a logically distinct activity. Like planning, it may be appropriate to revisit the risk assessment from time to time: our knowledge and understanding of risks may evolve, new risks may become apparent, while other risks can be discounted.

There are some standards for risk assessment in particular domains. For example, Data Protection Impact Assessment (DPIA) is mandated by GDPR, Information Security Risk Assessment is included in ISO 27001, and risk/hazard assessment for robotics is covered by BS 8611.

The first ethical question here is How Much. It is clearly important that the risk assessment is done with sufficient care and attention, and the results taken seriously. But there is no ethical argument under the sun that says that one should never take any risks at all, or that risk assessment should be taken to such extremes that it becomes paralysing. In some situations (think Climate Change), risk-averse procrastination may be the position that is hardest to justify ethically.

We also need to think about Scope and Perspective. Which categories of harm/hazard/risk are relevant, whose risk is it (in other words, who would be harmed), and from whose point of view? The voice of the stakeholder needs to be heard here as well.


Design

Responsible design takes care of all the requirements, risks and other stakeholder concerns already identified, as well as giving stakeholders full opportunity to identify additional concerns as the design takes shape.

Among other things, the design will need to incorporate any mechanisms and controls that have been agreed as appropriate for the assessed risks. For example, security controls, safety controls, privacy locks. Also designing in mechanisms to support responsible operations - for example, monitoring and transparency.

There is an important balance between Separation of Concerns and Somebody Else's Problem. So while you shouldn't expect every designer on the team to worry about every detail of the design, you do need to ensure that the pieces fit together and that whole system properties (safety, robustness, etc.) are designed in. So you may have a Solution Architecture role (one person or a whole team, depending on scale and complexity) responsible for overall design integrity.

And when I say whole system, I mean whole system. In general, an IoT device isn't a whole system, it's a component of a larger system. A responsible designer doesn't just design a sensor that collects a load of data and sends it into the ether, she thinks about the destination and possible uses and abuses of the data. Likewise, a responsible designer doesn't just design a robot to whizz around in a warehouse, she thinks about the humans who have to work with the robot - the whole sociotechnical system.

(How far does this argument extend? That's an ethical question as well: as J.P. Eberhard wrote in a classic paper, we ought to know the difference.)


Verification, Validation and Testing

This is where we check that the solution actually works reliably and safely, is accessible by and acceptable to all the possible users in a broad range of use contexts, and that the mechanisms and controls are effective in eliminating unnecessary risks and hazards.

See separate post on Responsible Beta Testing.

These checks don't only apply to the technical system, but also to the organizational and institutional arrangements, including any necessary contractual agreements, certificates, licences, etc. Is the correct user documentation available, and have the privacy notices been updated? Of course, some of these checks may need to take place even before beta testing can start.


Deployment and Operation

As the solution is rolled out, and during its operation, monitoring is required to ensure that the solution is working properly, and that all the controls are effective.

Regulated industries typically have some form of market surveillance or vigilance, whereby the regulator keeps an eye on what is going on. This may include regular inspections and audits. But of course this doesn't diminish the responsibility of the producer or distributor to be aware of how the technology is being used, and its effects. (Including unplanned or "off-label" uses.)

(And if the actual usage of the technology differs significantly from its designed purpose, it may be necessary to loop back through the risk assessment and the design. See my post On Repurposing AI).

There should also be some mechanism for detecting unusual and unforeseen events. For example, the MHRA, the UK regulator for medicines and medical devices, operates a Yellow Card scheme, which allows any interested party (not just healthcare professionals) to report any unusual event. This is significantly more inclusive than the vigilance maintained by regulators in other industries, because it can pick up previously unknown hazards (such as previously undetected adverse reactions) as well as collecting statistics on known side-effects.


Incident Management

In some domains, there are established procedures for investigating incidents such as vehicle accidents, security breaches, and so on. There may also be specialist agencies and accident investigators.

One of the challenges here is that there is typically a fundamental asymmetry of information. Someone who believes they may have suffered harm may be unable to invoke these procedures until they can conclusively demonstrate the harm, and so the burden of proof lies unfairly on the victim.



Decommissioning

Finally, we need to think about taking the solution out of service or replacing it with something better. Some technologies (such as blockchain) are designed on the assumption of eternity and immutability, and we are stuck for good or ill with our original design choices, as @moniquebachner pointed out at a FinTech event I attended last year. With robots, people always worry whether we can ever switch the things off.

Other technologies may be just as sticky. Consider the QWERTY keyboard, which was designed to slow the typist down to prevent the letters on a manual typewriter from jamming. The laptop computer on which I am writing this paragraph has a QWERTY keyboard.

Just as the responsible design of physical products needs to consider the end of use, and the recycling or disposal of the materials, so technological solutions need graceful termination.

Note that decommissioning doesn't necessarily remove the need for continued monitoring and investigation. If a drug is withdrawn following safety concerns, the people who took the drug will still need to be monitored; similar considerations may apply for other technological innovations as well.


Final Remarks

As already indicated, this is just an outline (plan). The detailed design may include checklists and simple tools, standards and guidelines, illustrations and instructions, as well as customized versions for different development methodologies and different classes of product. And I am hoping to find some opportunities to pilot the approach.

There are already some standards existing or under development to address specific areas here. For example, I have seen some specific proposals circulating for accident investigation, with suggested mechanisms to provide transparency to accident investigators. Hopefully the activity framework outlined here will provide a useful context for these standards.

Comments and suggestions for improving this framework always welcome.


Notes and References

For my use of the term Activity Viewpoint, see my blogpost Six Views of Business Architecture, and my eBook Business Architecture Viewpoints.


John P. Eberhard, "We Ought to Know the Difference," Emerging Methods in Environmental Design and Planning, Gary T. Moore, ed. (MIT Press, 1970) pp 364-365. See my blogpost We Ought To Know The Difference (April 2013)

Amany Elbanna and Mike Newman, The rise and decline of the ETHICS methodology of systems implementation: lessons for IS research (Journal of Information Technology 28, 2013) pp 124–136

Regulator Links: What is a DPIA? (ICO), Yellow Card Scheme (MHRA)

Wikipedia: Diffusion of Responsibility, Separation of Concerns, Somebody Else's Problem 

Wednesday, May 29, 2019

Responsible Beta Testing

Professor Luciano Floridi has recently made clear his opposition to irresponsible beta testing, by which he means "trying something without exactly knowing what one is doing, to see what happens, on people who may not have volunteered to be the subjects of the test at all".

In a private communication, Professor Floridi indicates that he would regard some of the recent experiments in facial recognition as examples of irresponsible beta testing. Obviously if the police are going to arrest people who decline to participate, these are not exactly willing volunteers.

Some of the tech giants have got into the habit of releasing unreliable software to willing volunteers, and calling this a "beta programme". There are also voices in favour of something called permanent beta, which some writers regard as a recipe, not just for technology but also for living in a volatile world. So the semantics of "beta" has become thoroughly unclear.

However, I think this kind of activity does not represent the original purpose of beta testing, which was the testing of a product without the development team being present. Let me call this responsible beta testing. While it is understood that beta testing cannot be guaranteed to find all the problems in a new product, it typically uncovers problems that other forms of verification and validation have missed, so it is generally regarded as a useful approach for many classes of system, and probably essential for safety-critical systems.

This is how this might work in a robotic context. Let's suppose you are building a care robot for the elderly. Before you put the robot into production, you are going to want to test it thoroughly - perhaps first with able-bodied volunteers, then with some elderly volunteers. During the testing, the robot will be surrounded with engineers and other observers, who will be able to intervene to protect the volunteer in the event of any unexpected or inappropriate behaviour on the part of the robot. Initially, this testing may take place in the lab with the active participation of the development team, but at least some of this testing would need to take place in a real care home without the development team being present. This may be called beta-testing or field testing. It certainly cannot be regarded as merely "trial and error".

For medical devices, this kind of testing is called a clinical trial, and there are strict regulations about how this should be done, including consent from those taking part in the trial based on adequate information and explanation, proper reporting of the results and any unexpected or unwanted effects, and with the ability to halt the trial early if necessary. It might be possible to establish similar codes of practice or even regulations for testing other classes of technology, including robotics.


(In the same communication, Professor Floridi fully agrees with the distinction made here, and affirms the crucial importance of responsible beta testing.)


Links

Lizzie Dearden, Police stop people for covering their faces from facial recognition camera then fine man £90 after he protested (Independent, 31 January 2019)

This blogpost forms part of an ongoing project to articulate Responsibility by Design. See Responsibility by Design - Activity View (May 2019)


Friday, May 10, 2019

The Ethics of Interoperability

In many contexts (such as healthcare) interoperability is considered to be a Good Thing. Johns and Stead argue that "we have an ethical obligation to develop and implement plug-and-play clinical devices and information technology systems", while Olaronke and Olusola point out some of the ethical challenges produced by such interoperability, including "data privacy, confidentiality, control of access to patients’ information, the commercialization of de-identified patients’ information and ownership of patients’ information". All these authors agree that interoperability should be regarded as an ethical issue.

Citing interoperability as an example of a technical standard, Alan Winfield argues that "all standards can ... be thought of as implicit ethical standards". He also includes standards that promote "shared ways of doing things", "expressing the values of cooperation and harmonization".

But should cooperation and harmonization be local or global? American standards differ from European standards in so many ways - voltages and plugs, paper sizes, writing the date the wrong way. Is the local/global question also an ethical one?

One problem with interoperability is that it is often easy to find places where additional interoperability would deliver some benefits to some stakeholders. However, if we keep adding more interoperability, we may end up with a hyperconnected system of systems that is vulnerable to unpredictable global shocks, and where fundamental structural change becomes almost impossible. The global financial systems may be a good example of this.

So the ethics of interoperability is linked with the ethics of other whole-system properties, including complexity and stability. See my posts on Efficiency and Robustness. Each of these whole-system properties may be the subject of an architectural principle. And, following Alan's argument, an (implicitly) ethical standard.

With interoperability, there are questions of degree as well as of kind. We often distinguish between tight coupling and loose coupling, and there are important whole-system properties that may depend on the degree of coupling. (This is a subject that I have covered extensively elsewhere.)
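The distinction between tight and loose coupling can be made concrete with a minimal sketch. Here a consumer depends only on a narrow abstract interface rather than on any concrete implementation, so implementations can be swapped without changing the consumer - the class and method names are purely illustrative, not drawn from any real system.

```python
from typing import Protocol

# Loose coupling: the consumer depends only on this narrow interface,
# not on any concrete implementation (names are illustrative).
class HealthRecordStore(Protocol):
    def fetch(self, patient_id: str) -> dict: ...

class LocalStore:
    def fetch(self, patient_id: str) -> dict:
        return {"id": patient_id, "source": "local"}

class RemoteStore:
    def fetch(self, patient_id: str) -> dict:
        return {"id": patient_id, "source": "remote"}

def summarize(store: HealthRecordStore, patient_id: str) -> str:
    # The caller is unaffected when the implementation behind the
    # interface is swapped - the essence of loose coupling.
    record = store.fetch(patient_id)
    return f"{record['id']} via {record['source']}"

print(summarize(LocalStore(), "p1"))   # p1 via local
print(summarize(RemoteStore(), "p1"))  # p1 via remote
```

The whole-system consequences follow from this degree of coupling: a loosely coupled consumer survives structural change behind the interface, whereas a tightly coupled one propagates change (and failure) across the system boundary.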

What if we apply the ethics of interoperability to complex systems of systems involving multiple robots? Clearly, coordination between robots might be necessary in some situations to avoid harm to humans. So the ethics of interoperability should include the potential communication or interference between heterogeneous robots, and this raises the topic of deconfliction - not just for airborne robots (drones) but for any autonomous vehicles. Clearly deconfliction is another (implicitly) ethical issue. 



Note: for a more detailed account of the relationship between interoperability and deconfliction, see my paper with Philip Boxer on Taking Governance to the Edge (Microsoft Architecture Journal, August 2006). For deconfliction in relation to Self-Driving Cars, see my post Whom does the technology serve? (May 2019).



Mauricio Castillo-Effen and Nikita Visnevski, Analysis of autonomous deconfliction in Unmanned Aircraft Systems for Testing and Evaluation (IEEE Aerospace conference 2009)

Michael M. E. Johns and William Stead, Interoperability is an ethical issue (Becker's Hospital Review, 15 July 2015)

Milecia Matthews, Girish Chowdhary and Emily Kieson, Intent Communication between Autonomous Vehicles and Pedestrians (2017)

Iroju Olaronke and Olaleke Janet Olusola, Ethical Issues in Interoperability of Electronic Healthcare Systems (Communications on Applied Electronics, Vol 1 No 8, May 2015)

Richard Veryard, Component-Based Business (Springer 2001)

Richard Veryard, Business Adaptability and Adaptation in SOA (CBDI Journal, February 2004)

Alan Winfield, Ethical standards in robotics and AI (Nature Electronics Vol 2, February 2019) pp 46-48



Related posts: Deconfliction and Interoperability (April 2005), Loose Coupling (July 2005), Efficiency and Robustness (September 2005), Making the world more open and connected (March 2018), Whom does the technology serve? (May 2019)

Sunday, April 21, 2019

How Many Ethical Principles?

Although ethical principles have been put forward by philosophers through the ages, the first person to articulate ethical principles for information technology was Norbert Wiener. In his book The Human Use of Human Beings, first published in 1950, Wiener based his computer ethics on what he called four great principles of justice.
Freedom. Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”  
Equality. Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.” 
Benevolence. Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”  
Minimum Infringement of Freedom. “What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom.”

Meanwhile, Isaac Asimov's Three Laws of Robotics were developed in a series of short stories in the 1940s, so this was around the same time that Wiener was developing his ideas about cybernetics. Many writers on technology ethics argue that robots (or any other form of technology) should be governed by principles, and this idea is often credited to Asimov. But as far as I can recall, in every Asimov story that mentions the Three Laws of Robotics, some counter-example is produced to demonstrate that the Three Laws don't actually work as intended. I have therefore always regarded Asimov's work as being satirical rather than prescriptive. (Similarly J.K. Rowling's descriptions of the unsuccessful attempts by wizard civil servants to regulate the use of magical technologies.)

So for several decades, the Wiener approach to ethics prevailed, and discussion of computer ethics was focused on a common set of human values: life, health, security, happiness, freedom, knowledge, resources, power and opportunity. (Source: SEP: Computer and Information Ethics)

But these principles were essentially no different to the principles one would find in any other ethical domain. For many years, scholars disagreed as to whether computer technology introduced an entirely new set of ethical issues, and therefore called for a new set of principles. The turning point was the ETHICOMP 1995 conference in March 1995 (just two months before Bill Gates' Internet Tidal Wave memo), with important presentations from Walter Maner (who had been arguing this point for years) and Krystyna Górniak-Kocikowska. From this point onwards, computer ethics would have to address some additional challenges, including the global reach of the technology - beyond the control of any single national regulator - and the vast proliferation of actors and stakeholders. Terrell Bynum calls this the Górniak hypothesis.

Picking up the challenge, Luciano Floridi started to look at the ethical issues raised by autonomous and interactive agents in cyberspace. In a 2001 paper on Artificial Evil with Jeff Sanders, he stated "It is clear that something similar to Asimov's Laws of Robotics will need to be enforced for the digital environment (the infosphere) to be kept safe."

Floridi's work on Information Ethics (IE) represented an attempt to get away from the prevailing anthropocentric worldview. "IE suggests that there is something even more elemental than life, namely being – that is, the existence and flourishing of all entities and their global environment – and something more fundamental than suffering, namely entropy." He therefore articulated a set of principles concerning ontological equality (any instance of information/being enjoys a minimal, initial, overridable, equal right to exist and develop in a way which is appropriate to its nature) and information entropy (which ought not to be caused in the infosphere, ought to be prevented, ought to be removed). (Floridi 2006)

In the past couple of years, there has been a flood of ethical principles to choose from. In his latest blogpost, @Alan_Winfield lists over twenty sets of principles for robotics and AI published between January 2017 and April 2019, while AlgorithmWatch lists over fifty. Of particular interest may be the principles published by some of the technology giants, as well as the absence of such principles from some of the others. Meanwhile, Professor Floridi's more recent work on ethical principles appears to be more conventionally anthropocentric.

The impression one gets from all these overlapping sets of principles is of lots of experts and industry bodies competing to express much the same ideas in slightly different terms, in the hope that their version will be adopted by everyone else.

But what would "adopted" actually mean? One possible answer is that these principles might feed into what I call Upstream Ethics, contributing to a framework for both regulation and action. However some commentators have expressed scepticism as to the value of these principles. For example, @InternetDaniel thinks that these lists of ethical principles are "too vague to be effective", and suggests that this may even be intentional, these efforts being "largely designed to fail". And @EricNewcomer says "we're in a golden age for hollow corporate statements sold as high-minded ethical treatises".

As I wrote in an earlier piece on principles:
In business and engineering, as well as politics, it is customary to appeal to "principles" to justify some business model, some technical solution, or some policy. But these principles are usually so vague that they provide very little concrete guidance. Profitability, productivity, efficiency, which can mean almost anything you want them to mean. And when principles interfere with what we really want to do, we simply come up with a new interpretation of the principle, or another overriding principle, which allows us to do exactly what we want while dressing up the justification in terms of "principles". (January 2011)

The key question is about governance - how will these principles be applied and enforced, and by whom? What many people forget about Asimov's Three Laws of Robotics was that these weren't enforced by roving technology regulators, but were designed into the robots themselves, thanks to the fact that one corporation (U.S. Robots and Mechanical Men, Inc) had control over the relevant patents and therefore exercised a monopoly over the manufacture of robots. No doubt Google, IBM and Microsoft would like us to believe that they can be trusted to produce ethically safe products, but clearly this doesn't address the broader problem.

Following the Górniak hypothesis, if these principles are to mean anything, they need to be taken seriously not only by millions of engineers but also by billions of technology users. And I think this in turn entails something like what Julia Black calls Decentred Regulation, which I shall try to summarize in a future post. Hopefully this won't be just what Professor Floridi calls Soft Ethics.

Update: My post on Decentred Regulation and Responsible Technology is now available.



Algorithm Watch, AI Ethics Guidelines Global Inventory

Terrell Bynum, Computer and Information Ethics (Stanford Encyclopedia of Philosophy)

Luciano Floridi, Information Ethics - Its Nature and Scope (ACM SIGCAS Computers and Society · September 2006)

Luciano Floridi and Tim (Lord) Clement-Jones, The five principles key to any ethical framework for AI (New Statesman, 20 March 2019)

Luciano Floridi and J.W. Sanders, Artificial evil and the foundation of computer ethics (Ethics and Information Technology 3: 55–66, 2001)

Eric Newcomer, What Google's AI Principles Left Out (Bloomberg 8 June 2018)

Daniel Susser, Ethics Alone Can’t Fix Big Tech (Slate, 17 April 2019)

Alan Winfield, An Updated Round Up of Ethical Principles of Robotics and AI (18 April 2019)


Wikipedia: Laws of Robotics, Three Laws of Robotics


Related posts: The Power of Principles (Not) (January 2011), Data and Intelligence Principles from Major Players (June 2018), Ethics Soft and Hard (February 2019), Upstream Ethics (March 2019), Ethics Committee Raises Alarm (April 2019), Decentred Regulation and Responsible Technology (April 2019), Automation Ethics (August 2019)

Link corrected 26 April 2019

Sunday, November 04, 2018

On Repurposing AI

With great power, as they say, comes great responsibility. Michael Krigsman of #CXOTALK tweets that AI is powerful because the results are transferable from one domain to another. Possibly quoting Bülent Kiziltan.

At Microsoft's Future Decoded event in London this week, according to reporter @richard_speed of @TheRegister, Satya Nadella asserted that using an AI trained for one purpose for some other purpose was "an unethical use".

If Microsoft really believes this, it would certainly be a radical move. In April this year Mark Russinovich, Azure CTO, gave a presentation at the RSA Conference on Transfer Learning: Repurposing ML Algorithms from Different Domains to Cloud Defense.

Repurposing data and intelligence - using AI for a purpose other than its original intent - may certainly have ethical consequences. This doesn't necessarily mean it's wrong, simply that the ethics must be reexamined. Responsibility by design (like privacy by design, from which it inherits some critical ideas) considers a design project in relation to a specific purpose and use-context. So if the purpose and context change, it is necessary to reiterate the responsibility-by-design process.
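To see why repurposing deserves a fresh review, it helps to look at what transfer learning mechanically involves: most of the fitted model is reused frozen, and only a small part is re-fitted for the new purpose. Here is a deliberately toy sketch of that pattern - pure Python, all numbers invented for illustration, not any real ML library:

```python
# Toy sketch of transfer learning. The "frozen" feature weights stand in
# for a model fitted for the ORIGINAL purpose; only a tiny "head" (here a
# single threshold) is re-fitted for the NEW purpose.

frozen_features = [0.8, -0.3, 0.5]  # pretend: learned for the original task

def extract(x):
    # Frozen feature extractor: reused unchanged when the model is repurposed.
    return sum(w * xi for w, xi in zip(frozen_features, x))

# Data for the new purpose: (input, label) pairs.
new_task_data = [([1, 0, 1], 1), ([0, 1, 0], 0), ([1, 1, 1], 1), ([0, 0, 1], 1)]

# Re-fit the simplest possible head: pick the threshold with best accuracy
# on the new task, leaving the frozen features untouched.
best_threshold, best_acc = 0.0, -1
for t in sorted(extract(x) for x, _ in new_task_data):
    acc = sum((extract(x) >= t) == bool(y) for x, y in new_task_data)
    if acc > best_acc:
        best_threshold, best_acc = t, acc

def predict(x):
    return int(extract(x) >= best_threshold)
```

The point for responsibility by design: everything baked into the frozen component was shaped by the original purpose and use-context, and re-fitting only the head does nothing to revisit those assumptions.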

A good analogy would be the off-label use of medical drugs. There is considerable discussion on the ethical implications of this very common practice. For example, Furey and Wilkins argue that off-label prescribing imposes additional responsibilities on a medical practitioner, including weighing the available evidence and proper disclosure to the patient.

There are often strong arguments in favour of off-label prescribing (in medicine) or transfer learning (in AI). Where a technology provides some benefit to some group of people, there may be good reasons for extending these benefits. For example, Rachel Silver argues that transfer learning has democratized machine learning and lowered the barriers to entry, thus promoting innovation. Interestingly, there seem to be some good examples of transfer learning in AI for medical purposes.

However, transfer learning in AI raises some ethical concerns - not only the potential consequences for people affected by the repurposed algorithms, but also potential new sources of error. For example, Wang and others identify a potential vulnerability to misclassification attacks.

There are also some questions of knowledge ownership and privacy that were relevant to older modes of knowledge transfer (see for example Baskerville and Dulipovici).



By the way, if you thought the opening quote was a reference to Spiderman, Quote Investigator has traced a version of it to the French Revolution; other versions are attributed to various statesmen, including Churchill and Roosevelt.

Richard Baskerville and Alina Dulipovici, The Ethics of Knowledge Transfers and Conversions: Property or Privacy Rights? (HICSS'06: Proceedings of the 39th Annual Hawaii International Conference on System Sciences, 2006)

Katrina Furey and Kirsten Wilkins, Prescribing “Off-Label”: What Should a Physician Disclose? (AMA Journal of Ethics, June 2016)

Marian McHugh, Microsoft makes things personal at this year's Future Decoded (Channel Web, 2 November 2018)

Rachel Silver, The Secret Behind the New AI Spring: Transfer Learning (TDWI, 24 August 2018)

Richard Speed, 'Privacy is a human right': Big cheese Sat-Nad lays out Microsoft's stall at Future Decoded (The Register, 1 November 2018)

Bolun Wang et al, With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning (Proceedings of the 27th USENIX Security Symposium, August 2018)


See also Off-Label (March 2005)

Thursday, October 18, 2018

Why Responsibility by Design now?

Excellent article by @riptari, providing some context for Gartner's current position on ethics and privacy.

Gartner has been talking about digital ethics for a while now - for example, it got a brief mention on the Gartner website last year. But now digital ethics and privacy has been elevated to the Top Ten Strategic Trends, along with (surprise, surprise) Blockchain.

Progress of a sort, says @riptari, as people are increasingly concerned about privacy.

The key point is really the strategic obfuscation of issues that people do in fact care an awful lot about, via the selective and non-transparent application of various behind-the-scenes technologies up to now — as engineers have gone about collecting and using people’s data without telling them how, why and what they’re actually doing with it. 
Therefore, the key issue is about the abuse of trust that has been an inherent and seemingly foundational principle of the application of far too much cutting edge technology up to now. Especially, of course, in the adtech sphere. 
And which, as Gartner now notes, is coming home to roost for the industry — via people’s “growing concern” about what’s being done to them via their data. (For “individuals, organisations and governments” you can really just substitute ‘society’ in general.) 
Technology development done in a vacuum with little or no consideration for societal impacts is therefore itself the catalyst for the accelerated concern about digital ethics and privacy that Gartner is here identifying rising into strategic view.

Over the past year or two, some of the major players have declared ethics policies for data and intelligence, including IBM (January 2017), Microsoft (January 2018) and Google (June 2018). @EricNewcomer reckons we're in a "golden age for hollow corporate statements sold as high-minded ethical treatises".

According to the Magic Sorting Hat, high-minded vision can get organizations into the Ravenclaw or Slytherin quadrants (depending on the sincerity of the intention behind the vision). But to get into the Hufflepuff or Gryffindor quadrants, organizations need the ability to execute. So it's not enough for Gartner simply to lecture organizations on the importance of building trust.

Here we go round the prickly pear
Prickly pear prickly pear
Here we go round the prickly pear
At five o'clock in the morning.




Natasha Lomas (@riptari), Gartner picks digital ethics and privacy as a strategic trend for 2019 (TechCrunch, 16 October 2018)

Sony Shetty, Getting Digital Ethics Right (Gartner, 6 June 2017)


Related posts (with further links)

Data and Intelligence Principles from Major Players (June 2018)
Practical Ethics (June 2018)
Responsibility by Design (June 2018)
What is Responsibility by Design (October 2018)

What is Responsibility by Design

Responsibility by design (RbD) represents a logical extension of Security by Design and Privacy by Design, as I stated in my previous post. But what does that actually mean?

X by design is essentially a form of governance that addresses a specific concern or set of concerns - security, privacy, responsibility or whatever.

  • What. A set of concerns that we want to pay attention to, supported by principles, guidelines, best practices, patterns and anti-patterns.
  • Why. A set of positive outcomes that we want to attain and/or a set of negative outcomes that we want to avoid.
  • When. What triggers this governance activity? Does it occur at a fixed point in a standard process or only when specific concerns are raised? Is it embedded in a standard operational or delivery model?
  • For Whom. How are the interests of stakeholders and expert opinions properly considered? To whom should this governance process be visible?
  • Who. Does this governance require specialist input or independent review, or can it usually be done by the designers themselves?
  • How. Does this governance include some degree of formal verification, independent audit or external certification, or is an informal review acceptable? How much documentation is needed?
  • How Much. Design typically involves a trade-off between different requirements, so this is about the weight given to X relative to anything else.
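One way to make this checklist operational is to record the answers per concern in a simple structure that a design team fills in. A minimal sketch in Python - the class and field names are my own invention, not any standard:

```python
# Hypothetical structure for an "X by design" governance record: one
# instance per concern (security, privacy, responsibility, ...), with a
# slot for each question in the checklist above.
from dataclasses import dataclass

@dataclass
class GovernanceConcern:
    what: list       # concerns, principles, patterns and anti-patterns
    why: list        # outcomes to attain or avoid
    when: str        # trigger: fixed stage-gate, raised concern, or embedded
    for_whom: list   # stakeholders whose interests are weighed
    who: str         # designers themselves, specialist input, independent review
    how: str         # informal review, independent audit, formal verification
    how_much: float  # relative weight of this concern in design trade-offs

# Example: a privacy concern filled in for one project (invented values).
privacy = GovernanceConcern(
    what=["data minimization", "consent"],
    why=["avoid invasion of privacy"],
    when="embedded in the standard delivery model",
    for_whom=["data subjects", "regulators"],
    who="privacy team review",
    how="documented review plus independent audit",
    how_much=0.3,
)
```

The design choice worth noting is that the weight (How Much) is explicit rather than implied, which forces the trade-off against other concerns out into the open.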


#JustAnEngineer
Check out @katecrawford talking at the Royal Society in London this summer. Just an Engineer.



Related posts

Practical Ethics (June 2018), Responsibility by Design (June 2018)

Tuesday, June 05, 2018

Responsibility by Design

Over the past twelve months or so, we have seen a big shift in the public attitude towards new technology. More people are becoming aware of the potential abuses of data and other cool stuff. Scandals involving Facebook and other companies have been headline news.

Security professionals have been promoting the idea of security by design for ages, and the drive to comply with GDPR has made a lot of people aware of privacy by design. Responsibility by design (RbD) represents a logical extension of these ideas to include a range of ethical issues around new technology.

Here are some examples of the technologies that might be covered by this.

Technologies            Benefits            Dangers                        Principles
Big Data                Personalization     Invasion of Privacy            Consent
Algorithms              Optimization        Algorithmic Bias               Fairness
Automation              Productivity        Fragmentation of Work          Human-Centred Design
Internet of Things      Cool Devices        Weak Security                  Ecosystem Resilience
User Experience         Convenience         Dark Patterns, Manipulation    Accessibility, Transparency


Ethics is not just a question of bad intentions; it also includes bad outcomes arising from misguided action. Here are some of the things we need to look at.
  • Unintended outcomes - including longer-term or social consequences. For example, platforms like Facebook and YouTube are designed to maximize engagement. The effect of this is to push people into progressively more extreme content in order to keep them on the platform for longer.
  • Excluded users - this may be either deliberate (we don't have time to include everyone, so let's get something out that works for most people) or unwitting (well it works for people like me, so what's the problem)
  • Neglected stakeholders - people or communities that may be indirectly disadvantaged - for example, a healthy politics that may be undermined by the extremism promoted by platforms such as Facebook and YouTube.
  • Outdated assumptions - we used to think that data was scarce, so we grabbed as much as we could and kept it forever. We now recognize that data is a liability as well as an asset, and we now prefer data minimization - only collect and store data for a specific and valid purpose. A similar consideration applies to connectivity. We are starting to see the dangers of a proliferation of "always on" devices, especially given the weak security of the IoT world. So perhaps we need to replace a connectivity-maximization assumption with a connectivity-minimization principle. There are doubtless other similar assumptions that need to be surfaced and challenged.
  • Responsibility break - potential for systems being taken over and controlled by less responsible stakeholders, or the chain of accountability being broken. This occurs when the original controls are not robust enough.
  • Irreversible change - systems that cannot be switched off when they are no longer providing the benefits and safeguards originally conceived.


Wikipedia: Algorithmic Bias (2017), Dark Pattern (2017), Privacy by Design (2011), Secure by Design (2005), Weapons of Math Destruction (2017). The date after each page shows when it first appeared on Wikipedia.

Ted Talks: Cathy O'Neil, Zeynep Tufekci, Sherry Turkle

Related Posts: Pax Technica (November 2017), Risk and Security (November 2017), Outdated Assumptions - Connectivity Hunger (June 2018)



Updated 12 June 2018

Thursday, December 14, 2017

Expert Systems

Is there a fundamental flaw in AI implementation, as @jrossCISR suggests in her latest article for Sloan Management Review? She and her colleagues have been studying how companies insert value-adding AI algorithms into their processes. A critical success factor for the effective use of AI algorithms (or what we used to call expert systems) is the ability to partner smart machines with smart people, and this calls for changes in working practices and human skills.

As an example of helping people to use probabilistic output to guide business actions, Ross uses the example of smart recruitment.
But what’s the next step when a recruiter learns from an AI application that a job candidate has a 50% likelihood of being a good fit for a particular opening?

Let's unpack this. The AI application indicates that, at this point in the process, given the information we currently have about the candidate, we have low confidence in predicting this candidate's performance on the job. Unless we just toss a coin and hope for the best, the obvious next step is to try to obtain more information and insight about the candidate.

But which information is most relevant? An AI application (guided by expert recruiters) should be able to identify the most efficient path to reaching the desired level of confidence. What are the main reasons for our uncertainty about this candidate, and what extra information would make the most difference?

Simplistic decision support assumes you only have one shot at making a decision. The expert system makes a prognostication, and then the human accepts or overrules its advice.

But in the real world, decision-making is often a more extended process. So the recruiter should be able to ask the AI application some follow-up questions. What if we bring the candidate in for another interview? What if we run some aptitude tests? How much difference would each of these options make to our confidence level?
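That kind of follow-up question is essentially a value-of-information calculation. A hedged sketch of the simplest version, using Bayes' rule with invented numbers - a 50% prior from the AI application and an assumed accuracy for a hypothetical aptitude test:

```python
# How much would an aptitude test move our confidence that the candidate
# is a good fit? All numbers are illustrative assumptions, not real data.
prior = 0.5          # P(good fit), as reported by the AI application
sensitivity = 0.8    # assumed P(test passed | good fit)
false_pos = 0.3      # assumed P(test passed | not a good fit)

def posterior(prior, p_pass_given_fit, p_pass_given_unfit, passed):
    # Bayes' rule: update P(good fit) on the observed test outcome.
    if passed:
        num = p_pass_given_fit * prior
        den = num + p_pass_given_unfit * (1 - prior)
    else:
        num = (1 - p_pass_given_fit) * prior
        den = num + (1 - p_pass_given_unfit) * (1 - prior)
    return num / den

p_if_pass = posterior(prior, sensitivity, false_pos, passed=True)   # ~0.73
p_if_fail = posterior(prior, sensitivity, false_pos, passed=False)  # ~0.22
```

Comparing the spread between those two posteriors across the candidate options (another interview, an aptitude test, and so on) is what would let the application recommend the most informative next step.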

When recruiting people for a given job, it is not just that the recruiters don't know enough about the candidate, they also may not have much detail about the requirements of the job. Exactly what challenges will the successful candidate face, and how will they interact with the rest of the team? So instead of shortlisting the candidates that score most highly on a given set of measures, it may be more helpful to shortlist candidates with a range of different strengths and weaknesses, as this will allow interviewers to creatively imagine how each will perform. So there are a lot more probabilistic calculations we could get the algorithms to perform, if we can feed enough historical data into the machine learning hopper.

Ross sees the true value of machine learning applications to be augmenting intelligence - helping people accomplish something. This means an effective collaboration between one or more people and one or more algorithms. Or what I call organizational intelligence.


Postscript (18 December 2017)

In his comment on Twitter, @AidanWard3 extends the analysis to multiple stakeholders.
This broader view brings some of the ethical issues into focus, including asymmetric information and algorithmic transparency.


Jeanne Ross, The Fundamental Flaw in AI Implementation (Sloan Management Review, 14 July 2017)