
Monday, April 01, 2024

As Shepherds Watched

We can find a useful metaphor for data ethics in Tolkien's Lord of the Rings. Palantíri are indestructible stones or crystal balls that enable events to be seen from afar. They also allow communication between two stones. The word comes from Tolkien's invented language Quenya - palan means far, tir means to watch over.

The stones are powerful but dangerously unreliable. Even in the hands of a dark power such as Sauron, the stones cannot present a completely false image, but they can conceal enough to mislead, and at one point in the story Sauron himself is deceived.

This links to my oft-repeated point about data and dashboards: along with the illusion that what the data tells you is true, there are two further illusions: that what the data tells you is important, and that what the data doesn't tell you is not important. (See my eBook How To Do Things With Data.)

Joseph Pearce notes the parallel between palantíri and another device whose name also denotes watching from afar - television. "The palantir stones, the seeing stones employed by Sauron, the Dark Lord, to broadcast propaganda and sow the seeds of despair among his enemies, are uncannily similar in their mode of employment to the latest technology in mass communication media" (Pearce, p244).

The big data company Palantir Technologies was named after Tolkien's stones: according to Caroline Haskins, its employees are sometimes called Hobbits. It has pitched itself as providing the power to see the world, without becoming corrupted by that power (Maus). Not everyone is convinced.

Former employee Juan Sebastián Pinto, who has openly criticized the company, believes that it has purposefully cultivated a mysterious public image. As Haskins explains, Palantir’s main audience is sprawling government agencies and Fortune 500 companies. What it’s ultimately selling them is not just software, but the idea of a seamless, almost magical solution to complex problems.

See also The Purpose of Surveillance (August 2024) 


Denis Campbell, NHS England gives key role in handling patient data to US spy tech firm Palantir (Guardian, 20 November 2023)

Caroline Haskins, What Does Palantir Actually Do? (Wired, 11 Aug 2025)

Maus Strategic Consulting, A (Pretty) Complete History of Palantir (27 April 2014)

Joseph Pearce, Catholic Literary Giants: A Field Guide to the Catholic Literary Landscape (Ignatius Press 2014)

Wikipedia: Palantír, Palantir Technologies

 

Sunday, July 14, 2019

Trial by Ordeal

Some people think that ethical principles only apply to implemented systems, and that experimental projects (trials, proofs of concept, and so on) don't need the same level of transparency and accountability.

Last year, Google employees (as well as US senators from both parties) expressed concern about Google's Dragonfly project, which appeared to collude with the Chinese government in censorship and suppression of human rights. A secondary concern was that Dragonfly was conducted in secrecy, without involving Google's privacy team.  

Google's official position (led by CEO Sundar Pichai) was that Dragonfly was "just an experiment". Jack Poulson, who left Google last year over this issue and has now started a nonprofit organization called Tech Inquiry, has also seen this pattern in other technology projects.
"I spoke to coworkers and they said 'don’t worry, by the time the thing launches, we'll have had a thorough privacy review'. When you do R and D, there's this idea that you can cut corners and have the privacy team fix it later." (via Alex Hern)
A few years ago, Microsoft Research ran an experiment on "emotional eating", which involved four female employees wearing smart bras. "Showing an almost shocking lack of sensitivity for gender stereotyping", wrote Sebastian Anthony. While I assume that the four subjects willingly volunteered to participate in this experiment, and I hope the privacy of their emotional data was properly protected, it does seem to reflect the same pattern - that you can get away with things in the R and D stage that would be highly problematic in a live product.

Poulson's position is that the engineers working on these projects bear some responsibility for the outcomes, and that they need to see that the ethical principles are respected. He therefore demands transparency to avoid workers being misled. He also notes that if the ethical considerations are deferred to a late stage of a project, with the bulk of the development costs already incurred and many stakeholders now personally invested in the success of the project, the pressure to proceed quickly to launch may be too strong to resist.




Sebastian Anthony, Microsoft’s new smart bra stops you from emotionally overeating (Extreme Tech, 9 December 2013)

Erin Carroll et al, Food and Mood: Just-in-Time Support for Emotional Eating (Humaine Association Conference on Affective Computing and Intelligent Interaction, 2013)

Ryan Gallagher, Google’s Secret China Project “Effectively Ended” After Internal Confrontation (The Intercept, 17 December 2018)

Alex Hern, Google whistleblower launches project to keep tech ethical (Guardian, 13 July 2019)

Casey Michel, Google’s secret ‘Dragonfly’ project is a major threat to human rights (Think Progress, 11 Dec 2018)

Iain Thomson, Microsoft researchers build 'smart bra' to stop women's stress eating (The Register, 6 Dec 2013) 

 

Related posts: Have you got Big Data in your Underwear? (December 2014), Affective Computing (March 2019)

Saturday, June 15, 2019

The Road Less Travelled

Are algorithms trustworthy? asks @NizanGP.
"Many of us routinely - and even blindly - rely on the advice of algorithms in all aspects of our lives, from choosing the fastest route to the airport to deciding how to invest our retirement savings. But should we trust them as much as we do?"

Dr Packin's main point is about the fallibility of algorithms, and the excessive confidence people place in them. @AnnCavoukian reinforces this point.


But there is another reason to be wary of the advice of the algorithm, summed up by the question: Whom does the algorithm serve?

Because the algorithm is not working for you alone. There are many people trying to get to the airport, and if they all use the same route they may all miss their flights. If the algorithm is any good, it will be advising different people to use different routes. (Most well-planned cities have more than one route to the airport, to avoid a single point of failure.) So how can you trust the algorithm to give you the fastest route? However much you may be paying for the navigation service (either directly, or bundled into the cost of the car/device), someone else may be paying a lot more for the road less travelled.
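To make this concrete, here is a minimal sketch (in Python, entirely hypothetical, not based on any real navigation service) of a route-assignment algorithm that cannot give everyone the fastest route, and that quietly serves a hypothetical "premium" tier first:

def assign_routes(drivers, routes):
    """drivers: list of (driver_id, is_premium).
    routes: list of (route_name, estimated_minutes, capacity)."""
    routes = sorted(routes, key=lambda r: r[1])        # fastest route first
    ordered = sorted(drivers, key=lambda d: not d[1])  # premium drivers served first
    slots = {name: capacity for name, _, capacity in routes}
    assignment = {}
    for driver_id, _is_premium in ordered:
        for name, _minutes, _capacity in routes:
            if slots[name] > 0:                        # fill the fastest route up to capacity
                assignment[driver_id] = name
                slots[name] -= 1
                break
    return assignment

# One motorway slot left: the premium driver gets it, everyone else is diverted.
print(assign_routes([("alice", False), ("bob", True), ("carol", False)],
                    [("motorway", 25, 1), ("back road", 40, 5)]))

Every driver still gets a route, and every route still "works" - but the objective being optimised is not the same for every driver, which is exactly the point.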

The algorithm-makers may also try to monetize the destinations. If a particular road is used for getting to a sports venue as well as the airport, then the two destinations can be invited to bid to get the "best" routes for their customers - or perhaps for themselves. ("Best" may not mean fastest - it could mean the most predictable. And the venue may be ambivalent about this - the more unpredictable the journey, the more people will arrive early to be on the safe side, spreading the load on the services as well as spending more on parking and refreshments.)

In general, the algorithm is juggling the interests of many different stakeholders, and we may assume that this is designed to optimize the commercial returns to the algorithm-makers.

The same is obviously true of investment advice. The best time to buy a stock is just before everyone else buys, and the best time to sell a stock is just after everyone else buys. Which means that there are massive opportunities for unethical behaviour when advising people where / when to invest their retirement savings, and it would be optimistic to assume that the people programming the algorithms are immune from this temptation, or that regulators are able to protect investors properly.

And that's before we start worrying about the algorithms being manipulated by hostile agents ...

So remember the Weasley Doctrine: "Never trust anything that can think for itself if you can't see where it keeps its brain."



Nizan Geslevich Packin, Why Investors Should Be Wary of Automated Advice (Wall Street Journal, 14 June 2019)

Dozens of drivers get stuck in mud after Google Maps reroutes them into empty field (ABC7 New York, 26 June 2019) HT @jonerp

Related posts: Towards Chatbot Ethics (May 2019), Whom does the technology serve? (May 2019), Robust Against Manipulation (July 2019)


Updated 27 July 2019

Friday, May 10, 2019

The Ethics of Interoperability

In many contexts (such as healthcare) interoperability is considered to be a Good Thing. Johns and Stead argue that "we have an ethical obligation to develop and implement plug-and-play clinical devices and information technology systems", while Olaronke and Olusola point out some of the ethical challenges produced by such interoperability, including "data privacy, confidentiality, control of access to patients’ information, the commercialization of de-identified patients’ information and ownership of patients’ information". All these authors agree that interoperability should be regarded as an ethical issue.

Citing interoperability as an example of a technical standard, Alan Winfield argues that "all standards can ... be thought of as implicit ethical standards". He also includes standards that promote "shared ways of doing things", "expressing the values of cooperation and harmonization".

But should cooperation and harmonization be local or global? American standards differ from European standards in so many ways - voltages and plugs, paper sizes, writing the date the wrong way. Is the local/global question also an ethical one?

One problem with interoperability is that it is often easy to find places where additional interoperability would deliver some benefits to some stakeholders. However, if we keep adding more interoperability, we may end up with a hyperconnected system of systems that is vulnerable to unpredictable global shocks, and where fundamental structural change becomes almost impossible. The global financial systems may be a good example of this.

So the ethics of interoperability is linked with the ethics of other whole-system properties, including complexity and stability. See my posts on Efficiency and Robustness. Each of these whole-system properties may be the subject of an architectural principle. And, following Alan's argument, an (implicitly) ethical standard.

With interoperability, there are questions of degree as well as of kind. We often distinguish between tight coupling and loose coupling, and there are important whole-system properties that may depend on the degree of coupling. (This is a subject that I have covered extensively elsewhere.)

What if we apply the ethics of interoperability to complex systems of systems involving multiple robots? Clearly, coordination between robots might be necessary in some situations to avoid harm to humans. So the ethics of interoperability should include the potential communication or interference between heterogeneous robots, and this raises the topic of deconfliction - not just for airborne robots (drones) but for any autonomous vehicles. Deconfliction, too, is an (implicitly) ethical issue.
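As a minimal illustration (hypothetical, in Python), a pairwise deconfliction check might compare two vehicles' planned trajectories against a required separation distance:

import math

def min_separation(path_a, path_b):
    """Each path is a list of (x, y) positions sampled at the same time steps."""
    return min(math.dist(p, q) for p, q in zip(path_a, path_b))

def needs_deconfliction(path_a, path_b, required_separation=5.0):
    return min_separation(path_a, path_b) < required_separation

drone_1 = [(0, 0), (1, 1), (2, 2), (3, 3)]
drone_2 = [(3, 0), (2, 1), (1, 2), (0, 3)]   # crosses drone_1's planned path
print(needs_deconfliction(drone_1, drone_2))  # True - one of them must divert

Real deconfliction also involves negotiation, priorities and uncertainty between heterogeneous systems, which is where the ethical questions bite: whose journey gets diverted, and who decides?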



Note: for a more detailed account of the relationship between interoperability and deconfliction, see my paper with Philip Boxer on Taking Governance to the Edge (Microsoft Architecture Journal, August 2006). For deconfliction in relation to Self-Driving Cars, see my post Whom does the technology serve? (May 2019).



Mauricio Castillo-Effen and Nikita Visnevski, Analysis of autonomous deconfliction in Unmanned Aircraft Systems for Testing and Evaluation (IEEE Aerospace conference 2009)

Michael M. E. Johns and William Stead, Interoperability is an ethical issue (Becker's Hospital Review, 15 July 2015)

Milecia Matthews, Girish Chowdhary and Emily Kieson, Intent Communication between Autonomous Vehicles and Pedestrians (2017)

Iroju Olaronke and Olaleke Janet Olusola, Ethical Issues in Interoperability of Electronic Healthcare Systems (Communications on Applied Electronics, Vol 1 No 8, May 2015)

Richard Veryard, Component-Based Business (Springer 2001)

Richard Veryard, Business Adaptability and Adaptation in SOA (CBDI Journal, February 2004)

Alan Winfield, Ethical standards in robotics and AI (Nature Electronics Vol 2, February 2019) pp 46-48



Related posts: Deconfliction and Interoperability (April 2005), Loose Coupling (July 2005), Efficiency and Robustness (September 2005), Making the world more open and connected (March 2018) Whom does the technology serve? (May 2019)

Sunday, April 21, 2019

How Many Ethical Principles?

Although ethical principles have been put forward by philosophers through the ages, the first person to articulate ethical principles for information technology was Norbert Wiener. In his book The Human Use of Human Beings, first published in 1950, Wiener based his computer ethics on what he called four great principles of justice.
Freedom. Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”  
Equality. Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.” 
Benevolence. Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”  
Minimum Infringement of Freedom. “What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom.”

Meanwhile, Isaac Asimov's Three Laws of Robotics were developed in a series of short stories in the 1940s, so this was around the same time that Wiener was developing his ideas about cybernetics. Many writers on technology ethics argue that robots (or any other form of technology) should be governed by principles, and this idea is often credited to Asimov. But as far as I can recall, in every Asimov story that mentions the Three Laws of Robotics, some counter-example is produced to demonstrate that the Three Laws don't actually work as intended. I have therefore always regarded Asimov's work as being satirical rather than prescriptive. (Similarly J.K. Rowling's descriptions of the unsuccessful attempts by wizard civil servants to regulate the use of magical technologies.)

So for several decades, the Wiener approach to ethics prevailed, and discussion of computer ethics was focused on a common set of human values: life, health, security, happiness, freedom, knowledge, resources, power and opportunity. (Source: SEP: Computer and Information Ethics)

But these principles were essentially no different to the principles one would find in any other ethical domain. For many years, scholars disagreed as to whether computer technology introduced an entirely new set of ethical issues, and therefore called for a new set of principles. The turning point was the ETHICOMP95 conference in March 1995 (just two months before Bill Gates' Internet Tidal Wave memo), with important presentations from Walter Maner (who had been arguing this point for years) and Krystyna Górniak-Kocikowska. From this point onwards, computer ethics would have to address some additional challenges, including the global reach of the technology - beyond the control of any single national regulator - and the vast proliferation of actors and stakeholders. Terrell Bynum calls this the Górniak hypothesis.

Picking up the challenge, Luciano Floridi started to look at the ethical issues raised by autonomous and interactive agents in cyberspace. In a 2001 paper on Artificial Evil with Jeff Sanders, he stated "It is clear that something similar to Asimov's Laws of Robotics will need to be enforced for the digital environment (the infosphere) to be kept safe."

Floridi's work on Information Ethics (IE) represented an attempt to get away from the prevailing anthropocentric worldview. "IE suggests that there is something even more elemental than life, namely being – that is, the existence and flourishing of all entities and their global environment – and something more fundamental than suffering, namely entropy." He therefore articulated a set of principles concerning ontological equality (any instance of information/being enjoys a minimal, initial, overridable, equal right to exist and develop in a way which is appropriate to its nature) and information entropy (which ought not to be caused in the infosphere, ought to be prevented, ought to be removed). (Floridi 2006)

In the past couple of years, there has been a flood of ethical principles to choose from. In his latest blogpost, @Alan_Winfield lists over twenty sets of principles for robotics and AI published between January 2017 and April 2019, while AlgorithmWatch lists over fifty. Of particular interest may be the principles published by some of the technology giants, as well as the absence of such principles from some of the others. Meanwhile, Professor Floridi's more recent work on ethical principles appears to be more conventionally anthropocentric.

The impression one gets from all these overlapping sets of principles is of lots of experts and industry bodies competing to express much the same ideas in slightly different terms, in the hope that their version will be adopted by everyone else.

But what would "adopted" actually mean? One possible answer is that these principles might feed into what I call Upstream Ethics, contributing to a framework for both regulation and action. However some commentators have expressed scepticism as to the value of these principles. For example, @InternetDaniel thinks that these lists of ethical principles are "too vague to be effective", and suggests that this may even be intentional, these efforts being "largely designed to fail". And @EricNewcomer says "we're in a golden age for hollow corporate statements sold as high-minded ethical treatises".

As I wrote in an earlier piece on principles:
In business and engineering, as well as politics, it is customary to appeal to "principles" to justify some business model, some technical solution, or some policy. But these principles are usually so vague that they provide very little concrete guidance. Profitability, productivity, efficiency, which can mean almost anything you want them to mean. And when principles interfere with what we really want to do, we simply come up with a new interpretation of the principle, or another overriding principle, which allows us to do exactly what we want while dressing up the justification in terms of "principles". (January 2011)

The key question is about governance - how will these principles be applied and enforced, and by whom? What many people forget about Asimov's Three Laws of Robotics was that these weren't enforced by roving technology regulators, but were designed into the robots themselves, thanks to the fact that one corporation (U.S. Robots and Mechanical Men, Inc) had control over the relevant patents and therefore exercised a monopoly over the manufacture of robots. No doubt Google, IBM and Microsoft would like us to believe that they can be trusted to produce ethically safe products, but clearly this doesn't address the broader problem.

Following the Górniak hypothesis, if these principles are to mean anything, they need to be taken seriously not only by millions of engineers but also by billions of technology users. And I think this in turn entails something like what Julia Black calls Decentred Regulation, which I shall try to summarize in a future post. Hopefully this won't be just what Professor Floridi calls Soft Ethics.

Update: My post on Decentred Regulation and Responsible Technology is now available.



Algorithm Watch, AI Ethics Guidelines Global Inventory

Terrell Bynum, Computer and Information Ethics (Stanford Encyclopedia of Philosophy)

Luciano Floridi, Information Ethics - Its Nature and Scope (ACM SIGCAS Computers and Society · September 2006)

Luciano Floridi and Tim (Lord) Clement-Jones, The five principles key to any ethical framework for AI (New Statesman, 20 March 2019)

Luciano Floridi and J.W. Sanders, Artificial evil and the foundation of computer ethics (Ethics and Information Technology 3: 55–66, 2001)

Eric Newcomer, What Google's AI Principles Left Out (Bloomberg 8 June 2018)

Daniel Susser, Ethics Alone Can’t Fix Big Tech (Slate, 17 April 2019)

Alan Winfield, An Updated Round Up of Ethical Principles of Robotics and AI (18 April 2019)


Wikipedia: Laws of Robotics, Three Laws of Robotics


Related posts: The Power of Principles (Not) (January 2011), Data and Intelligence Principles from Major Players (June 2018), Ethics Soft and Hard (February 2019), Upstream Ethics (March 2019), Ethics Committee Raises Alarm (April 2019), Decentred Regulation and Responsible Technology (April 2019), Automation Ethics (August 2019)

Link corrected 26 April 2019

Sunday, November 04, 2018

On Repurposing AI

With great power, as they say, comes great responsibility. Michael Krigsman of #CXOTALK tweets that AI is powerful because the results are transferable from one domain to another. Possibly quoting Bülent Kiziltan.

In London this week for Microsoft's Future Decoded event, according to reporter @richard_speed of @TheRegister, Satya Nadella asserted that using an AI trained for one purpose for a different purpose was "an unethical use".

If Microsoft really believes this, it would certainly be a radical move. In April this year Mark Russinovich, Azure CTO, gave a presentation at the RSA Conference on Transfer Learning: Repurposing ML Algorithms from Different Domains to Cloud Defense.

Repurposing data and intelligence - using AI for a different purpose to its original intent - may certainly have ethical consequences. This doesn't necessarily mean it's wrong, simply that the ethics must be reexamined. Responsibility by design (like privacy by design, from which it inherits some critical ideas) considers a design project in relation to a specific purpose and use-context. So if the purpose and context change, it is necessary to reiterate the responsibility-by-design process.

A good analogy would be the off-label use of medical drugs. There is considerable discussion on the ethical implications of this very common practice. For example, Furey and Wilkins argue that off-label prescribing imposes additional responsibilities on a medical practitioner, including weighing the available evidence and proper disclosure to the patient.

There are often strong arguments in favour of off-label prescribing (in medicine) or transfer learning (in AI). Where a technology provides some benefit to some group of people, there may be good reasons for extending these benefits. For example, Rachel Silver argues that transfer learning has democratized machine learning, lowered the barriers to entry, thus promoting innovation. Interestingly, there seem to be some good examples of transfer learning in AI for medical purposes.
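For readers unfamiliar with the mechanics, transfer learning typically means reusing a model trained on one domain as the starting point for another. A minimal sketch, assuming PyTorch and torchvision (the five-class medical-imaging target task is purely hypothetical):

import torch
import torch.nn as nn
import torchvision.models as models

# Reuse a feature extractor pretrained on one domain (ImageNet).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False          # freeze the pretrained weights
# Replace the final layer with a new head for the target task
# (here, a hypothetical five-class medical-imaging problem).
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """Only the new head is updated; the reused knowledge stays fixed."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

The very thing that makes this attractive - most of the behaviour is inherited from the original purpose - is also what makes it worth repeating the responsibility-by-design process for the new purpose.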

However, transfer learning in AI raises some ethical concerns. Not only the potential consequences on people affected by the repurposed algorithms, but also potential sources of error. For example, Wang and others identify a potential vulnerability to misclassification attacks.

There are also some questions of knowledge ownership and privacy that were relevant to older modes of knowledge transfer (see for example Baskerville and Dulipovici).



By the way, if you thought the opening quote was a reference to Spiderman, Quote Investigator has traced a version of it to the French Revolution. Other versions have been attributed to various statesmen, including Churchill and Roosevelt.

Richard Baskerville and Alina Dulipovici, The Ethics of Knowledge Transfers and Conversions: Property or Privacy Rights? (HICSS'06: Proceedings of the 39th Annual Hawaii International Conference on System Sciences, 2006)

Katrina Furey and Kirsten Wilkins, Prescribing “Off-Label”: What Should a Physician Disclose? (AMA Journal of Ethics, June 2016)

Marian McHugh, Microsoft makes things personal at this year's Future Decoded (Channel Web, 2 November 2018)

Rachel Silver, The Secret Behind the New AI Spring: Transfer Learning (TDWI, 24 August 2018)

Richard Speed, 'Privacy is a human right': Big cheese Sat-Nad lays out Microsoft's stall at Future Decoded (The Register, 1 November 2018)

Bolun Wang et al, With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning (Proceedings of the 27th USENIX Security Symposium, August 2018)


See also Off-Label (March 2005)

Thursday, October 18, 2018

Why Responsibility by Design now?

Excellent article by @riptari, providing some context for Gartner's current position on ethics and privacy.

Gartner has been talking about digital ethics for a while now - for example, it got a brief mention on the Gartner website last year. But now digital ethics and privacy has been elevated to the Top Ten Strategic Trends, along with (surprise, surprise) Blockchain.

Progress of a sort, says @riptari, as people are increasingly concerned about privacy.

The key point is really the strategic obfuscation of issues that people do in fact care an awful lot about, via the selective and non-transparent application of various behind-the-scenes technologies up to now — as engineers have gone about collecting and using people’s data without telling them how, why and what they’re actually doing with it. 
Therefore, the key issue is about the abuse of trust that has been an inherent and seemingly foundational principle of the application of far too much cutting edge technology up to now. Especially, of course, in the adtech sphere. 
And which, as Gartner now notes, is coming home to roost for the industry — via people’s “growing concern” about what’s being done to them via their data. (For “individuals, organisations and governments” you can really just substitute ‘society’ in general.) 
Technology development done in a vacuum with little or no consideration for societal impacts is therefore itself the catalyst for the accelerated concern about digital ethics and privacy that Gartner is here identifying rising into strategic view.

Over the past year or two, some of the major players have declared ethics policies for data and intelligence, including IBM (January 2017), Microsoft (January 2018) and Google (June 2018). @EricNewcomer reckons we're in a "golden age for hollow corporate statements sold as high-minded ethical treatises".

According to the Magic Sorting Hat, high-minded vision can get organizations into the Ravenclaw or Slytherin quadrants (depending on the sincerity of the intention behind the vision). But to get into the Hufflepuff or Gryffindor quadrants, organizations need the ability to execute. So it's not enough for Gartner simply to lecture organizations on the importance of building trust.

Here we go round the prickly pear
Prickly pear prickly pear
Here we go round the prickly pear
At five o'clock in the morning.




Natasha Lomas (@riptari), Gartner picks digital ethics and privacy as a strategic trend for 2019 (TechCrunch, 16 October 2018)

Sony Shetty, Getting Digital Ethics Right (Gartner, 6 June 2017)


Related posts (with further links)

Data and Intelligence Principles from Major Players (June 2018)
Practical Ethics (June 2018)
Responsibility by Design (June 2018)
What is Responsibility by Design (October 2018)

Friday, June 08, 2018

Data and Intelligence Principles From Major Players

The purpose of this blogpost is to enumerate the declared ethical positions of major players in the data world. This is a work in progress.




Google

In June 2018, Sundar Pichai (Google CEO) announced a set of AI principles for Google. This includes seven principles, four application areas that Google will avoid (including weapons), references to international law and human rights, and a commitment to a long-term sustainable perspective.

https://www.blog.google/topics/ai/ai-principles/


Also worth noting is the statement on AI ethics and social impact published by DeepMind last year. (DeepMind was acquired by Google in 2014 and is now a subsidiary of Google parent Alphabet.)

https://deepmind.com/applied/deepmind-ethics-society/research/



IBM

In January 2017, Ginni Rometty (IBM CEO) announced a set of Principles for the Cognitive Era.

https://www.ibm.com/blogs/think/2017/01/ibm-cognitive-principles/

This was followed up in October 2017, with a more detailed ethics statement for data and intelligence, entitled Data Responsibility @IBM.

https://www.ibm.com/blogs/policy/dataresponsibility-at-ibm/



Microsoft

In January 2018, Brad Smith (Microsoft President and Chief Legal Officer) announced a book called The Future Computed: Artificial Intelligence and its Role in Society, to which he had contributed a foreword.

https://blogs.microsoft.com/blog/2018/01/17/future-computed-artificial-intelligence-role-society/



Twitter


@jack (Jack Dorsey, Twitter CEO) asked the Twitterverse whether Google's AI principles were something the tech industry as a whole could get behind (via The Register, 9 June 2018).



Selected comments

These comments are mostly directed at the Google principles, because these are the most recent. However, many of them apply equally to the others. Commentators have also remarked on the absence of ethical declarations from Amazon.


Many commentators have welcomed Google's position on military AI, and congratulated those Google employees who lobbied for discontinuing its work with the US Department of Defense analysing drone footage, known as Project Maven. See @kateconger, Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program (Gizmodo 1 June 2018) and Google Backtracks, Says Its AI Will Not Be Used for Weapons or Surveillance (Gizmodo 7 June 2018).

Interesting thread from former Googler @tbreisacher on the new principles (HT @kateconger)

@EricNewcomer talks about What Google's AI Principles Left Out (Bloomberg 8 June 2018). He reckons we're in a "golden age for hollow corporate statements sold as high-minded ethical treatises", complains that the Google principles are "peppered with lawyerly hedging and vague commitments", and asks about governance - "who decides if Google has fulfilled its commitments".

@katecrawford (Twitter 8 June 2018) also asks about governance. "How are they implemented? Who decides? There's no mention of process, or people, or how they'll evaluate if a tool is 'beneficial'. Are they... autonomous ethics?" And @mer__edith (Twitter 8 June 2018) calls for "strong governance, independent external oversight and clarity".

Andrew McStay (Twitter 8 June 2018) asks about Google's business model. "Please tell me if you spot any reference to advertising, or how Google actually makes money. Also, I’d be interested in knowing if Government “work” dents reliance on ads."

Earlier, in relation to DeepMind's ethics and social impact statement, @riptari (Natasha Lomas) suggested that "it really shouldn’t need a roster of learned academics and institutions to point out the gigantic conflict of interest in a commercial AI giant researching the ethics of its own technology’s societal impacts" (TechCrunch October 2017). See also my post on Conflict of Interest (March 2018).

@rachelcoldicutt asserts that "ethical declarations like these need to have subjects. ... If they are to be useful, and can be taken seriously, we need to know both who they will be good for and who they will harm." She complains that the Google principles fail on these counts. (Tech ethics, who are they good for? Medium 8 June 2018)


Related posts

Conflict of Interest (March 2018), Why Responsibility by Design Now? (October 2018), Leadership versus Governance (May 2019)


Updated 11 June 2018. Also links added to later posts.

Tuesday, June 05, 2018

Responsibility by Design

Over the past twelve months or so, we have seen a big shift in the public attitude towards new technology. More people are becoming aware of the potential abuses of data and other cool stuff. Scandals involving Facebook and other companies have been headline news.

Security professionals have been pushing the idea of security by design for ages, and the push to comply with GDPR has made a lot of people aware of privacy by design. Responsibility by design (RbD) represents a logical extension of these ideas to include a range of ethical issues around new technology.

Here are some examples of the technologies that might be covered by this.

Technologies such as | Benefits such as | Dangers such as | Principles such as
Big Data | Personalization | Invasion of Privacy | Consent
Algorithms | Optimization | Algorithmic Bias | Fairness
Automation | Productivity | Fragmentation of Work | Human-Centred Design
Internet of Things | Cool Devices | Weak Security | Ecosystem Resilience
User Experience | Convenience | Dark Patterns, Manipulation | Accessibility, Transparency


Ethics is not just a question of bad intentions, it includes bad outcomes through misguided action. Here are some of the things we need to look at.
  • Unintended outcomes - including longer-term or social consequences. For example, platforms like Facebook and YouTube are designed to maximize engagement. The effect of this is to push people into progressively more extreme content in order to keep them on the platform for longer.
  • Excluded users - this may be either deliberate (we don't have time to include everyone, so let's get something out that works for most people) or unwitting (well it works for people like me, so what's the problem)
  • Neglected stakeholders - people or communities that may be indirectly disadvantaged - for example, a healthy politics that may be undermined by the extremism promoted by platforms such as Facebook and YouTube.
  • Outdated assumptions - we used to think that data was scarce, so we grabbed as much as we could and kept it for ever. We now recognize that data is a liability as well as an asset, and we now prefer data minimization - only collecting and storing data for a specific and valid purpose (see the sketch after this list). A similar consideration applies to connectivity. We are starting to see the dangers of a proliferation of "always on" devices, especially given the weak security of the IoT world. So perhaps we need to replace a connectivity-maximization assumption with a connectivity-minimization principle. There are doubtless other similar assumptions that need to be surfaced and challenged.
  • Responsibility break - potential for systems being taken over and controlled by less responsible stakeholders, or the chain of accountability being broken. This occurs when the original controls are not robust enough.
  • Irreversible change - systems that cannot be switched off when they are no longer providing the benefits and safeguards originally conceived.
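As a small illustration of the data-minimization principle mentioned above (hypothetical purposes and field names), collection can be driven by a purpose-based allowlist rather than grab-everything defaults:

# Hypothetical allowlist: which fields are justified for which declared purpose.
PURPOSE_FIELDS = {
    "delivery": {"name", "address", "postcode"},
    "billing": {"name", "card_token", "billing_address"},
}

def minimise(record, purpose):
    """Keep only the fields permitted for the declared purpose."""
    allowed = PURPOSE_FIELDS.get(purpose, set())
    return {field: value for field, value in record.items() if field in allowed}

raw = {"name": "A N Other", "address": "1 High St", "postcode": "AB1 2CD",
       "browsing_history": ["site1", "site2"], "card_token": "tok_123"}
print(minimise(raw, "delivery"))   # browsing_history and card_token are dropped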


Wikipedia: Algorithmic Bias (2017), Dark Pattern (2017), Privacy by Design (2011), Secure by Design (2005), Weapons of Math Destruction (2017). The date after each page shows when it first appeared on Wikipedia.

Ted Talks: Cathy O'Neil, Zeynep Tufekci, Sherry Turkle

Related Posts: Pax Technica (November 2017), Risk and Security (November 2017), Outdated Assumptions - Connectivity Hunger (June 2018)



Updated 12 June 2018

Saturday, April 18, 2015

Arguing with Drucker

@sheldrake via @cybersal challenges Peter Drucker on the purpose of business.

"Peter Drucker asserted that the purpose of business is to create and keep a customer. He was right at the time in offering previously inward-looking firms a more appropriate beacon. His dictum is, however, wrong for our time."

Philip Sheldrake's challenge is based on two points.

1. A concern with the health and resilience of living systems such as organizations, society and the environment.

2. The need to recognize and understand complexity.


I completely agree with these points, but I do not think they contradict Drucker's original statement of purpose. As the webpage cited by Philip indicates, Drucker always called for a healthy balance - between short-term needs and long-term sustainability - and I think he would argue that a concern for resilience and the need to understand complexity were entailed by a customer-centric purpose.

Philip proposes an alternative purpose: Business exists to establish and drive mutual value creation. My problem with this alternative formulation is that it fails to answer Lenin's fundamental question: Who, Whom? There are businesses and business networks today whose purpose appears to be to mutually enrich a small number of mutually back-scratching executives at the expense of everyone else, including customers and retail shareholders. Drucker would not approve.

A statement of purpose is essentially an ethical statement (what is the value of the business) not an instrumental statement (what is needed to deliver this value). So let me propose an alternative ethic, a compromise between Drucker and Sheldrake, based on the wise saying of Hillel the Elder.


1. If a business is only for itself, what is it?
 (Expresses a concern for customers and society)

2. If a business is not for itself, who is for it?  
(Which may entail a concern for resilience and complexity)

3. If not now, when?  
(Expresses a concern for a balance between the present and the future)



Philip Sheldrake, What, exactly, is the purpose of business? An answer post-Drucker (April 2015)

Peter Drucker's Life and Legacy (Drucker Institute, retrieved 18 April 2015)

Wikipedia: Hillel the Elder, POSIWID, Who, Whom?


Related post: Organizational Intelligence After Drucker (August 2010) 

Friday, December 31, 2010

The Independence of EA

#entarch Following my blogpost on EA and the Big Picture, @carlhaggerty asked does it matter if EA disappears into a core C-Suite competency?

Clearly it matters to some people, especially those who have committed themselves and their careers to the idea of enterprise architecture (EA) as an independent practice. If EA were to disappear, some people would be upset, and some organizations would need to find a new mission or fold.

@carlhaggerty agreed that some people might be upset, but thought that if it is in the best interests of the enterprise, surely it would be the best thing to do.

Well perhaps. I am certainly not a defender of the current position of EA as a de facto knowledge silo, because this offends against the principles of EA itself. (See my post on A Value Proposition for Enterprise Architecture.)

But I have two problems with making a judgement about "the best thing to do" purely on the basis of "the best interests of the enterprise". The first problem is who shall speak for the best interests of the enterprise. C-level executives are often driven by extremely short-term considerations, such as stock market prices, as well as their own personal advantage. Merging enterprise architecture into C-level management might mean that EA would be forced to adopt this perspective to the exclusion of any other perspective. The second problem is that there have always been enterprises that are based on a corrupt business model - flawed or unethical - and there is clear conflict between the best interest of these enterprises and the best interests of society as a whole.

Ultimately, this comes down to the possibility of designing a complex organization to produce behaviour that is effective, intelligent and ethical. @carlhaggerty would rather see EA being done whether or not an EA exists in an enterprise, and he personally doesn't care who ;) I agree with him up to a point - I don't care who does EA, but I do want to see an organizational design giving a strong independent voice to an EA-like perspective.

Friday, September 17, 2010

Chinese Walls

In Joined up Daily Mail, @psbook via @chris_yapp points out a contradiction between the front page (attacking @stephenfry) and an advertisement on page 27 (featuring @stephenfry). "Perhaps the Daily Mail should try a little harder not to offend their advertisers?"

The Daily Mail is not my favourite newspaper (as readers of my posts on the POSIWID blog will know), but with this news my respect for the Daily Mail has gone up a notch - at least they aren't allowing their advertisers to influence the front page.

What's this got to do with architecture? We have here an example of a clash between economics (which apparently favours joined-up thinking) and ethics (which in this case apparently favours the opposite), typically resolved by the erection of an intra-organizational boundary known as a Chinese Wall. This is a structure within an enterprise intended to reduce conflicts of interest, asymmetrical information and moral hazard.

For example, financial institutions are supposed to have Chinese Walls, to prevent various patterns of inappropriate behaviour, including Insider Trading and Insider Recommendations. Among other things, the Chinese Wall is supposed to protect investment analysts from commercial pressure from other parts of the same organization. Recently, there has been much criticism of financial analysts (especially in America) who recommend the purchase of stocks simply because their colleagues have a commercial interest in promoting that stock.

Sometimes of course, the Chinese Wall is just a notional boundary, with little real effect - for symbolic or compliance purposes only. Although the actual information flows may be concealed, a strong correlation between activities on both sides of the wall may be sufficient evidence that some collusion has occurred. (Clearly architects need to appreciate the differences between official structure and de facto structure.)
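For example (a hedged sketch with made-up numbers), a compliance function might look for suspiciously strong correlation between in-house trading positions and the buy recommendations published on the other side of the wall:

import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Monthly in-house position changes vs. published buy recommendations (made up).
positions       = [10, 14, 9, 20, 25, 18]
recommendations = [ 2,  3, 1,  5,  6,  4]
r = pearson(positions, recommendations)
print(f"correlation = {r:.2f}")   # ~0.99 here - not proof, but grounds to investigate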

In journalism, there is always supposed to be a Chinese Wall between editorial and advertising, to protect the objectivity of the journalists from commercial bias. This principle is often blatantly breached, so it is pleasing to see the Daily Mail following the principle on this occasion. Although I disagree with their attack on Mr Fry, I respect the fact that the Mail has chosen to offend its advertisers rather than abandon its strongly held views.