
Sunday, November 04, 2018

On Repurposing AI

With great power, as they say, comes great responsibility. Michael Krigsman of #CXOTALK tweets that AI is powerful because its results are transferable from one domain to another, possibly quoting Bülent Kiziltan.

At Microsoft's Future Decoded event in London this week, according to reporter @richard_speed of @TheRegister, Satya Nadella asserted that using an AI trained for one purpose for another purpose was "an unethical use".

If Microsoft really believes this, it would certainly be a radical move. In April this year Mark Russinovich, Azure CTO, gave a presentation at the RSA Conference on Transfer Learning: Repurposing ML Algorithms from Different Domains to Cloud Defense.

Repurposing data and intelligence - using AI for a purpose other than its original intent - may certainly have ethical consequences. This doesn't necessarily mean it's wrong, simply that the ethics must be reexamined. Responsibility by design (like privacy by design, from which it inherits some critical ideas) considers a design project in relation to a specific purpose and use-context. So if the purpose and context change, it is necessary to repeat the responsibility-by-design process.

A good analogy would be the off-label use of medical drugs. There is considerable discussion on the ethical implications of this very common practice. For example, Furey and Wilkins argue that off-label prescribing imposes additional responsibilities on a medical practitioner, including weighing the available evidence and proper disclosure to the patient.

There are often strong arguments in favour of off-label prescribing (in medicine) or transfer learning (in AI). Where a technology provides some benefit to some group of people, there may be good reasons for extending these benefits. For example, Rachel Silver argues that transfer learning has democratized machine learning, lowering the barriers to entry and thus promoting innovation. Interestingly, there seem to be some good examples of transfer learning in AI for medical purposes.
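Silver's point about lowered barriers is easiest to see in code: the expensive part of the model (the feature extractor) is reused unchanged, and only a small new "head" is fitted for the new task. A minimal sketch in plain NumPy - the weights and data here are synthetic stand-ins, not any real pretrained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights were learned on a large source-domain dataset.
# In real transfer learning this is the costly artefact being repurposed.
W_pretrained = rng.normal(size=(20, 16))

def features(x):
    # Frozen feature extractor: reused as-is, never retrained.
    return np.tanh(x @ W_pretrained)

# A small target-domain dataset for the new task.
X = rng.normal(size=(100, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fit only the new head: a least-squares linear classifier on the
# frozen features. This is cheap compared to training from scratch.
F = features(X)
head, *_ = np.linalg.lstsq(F, y, rcond=None)

preds = (F @ head > 0.5).astype(float)
accuracy = (preds == y).mean()
```

The design choice this illustrates is exactly the one at issue in the post: the frozen extractor carries assumptions from its original training domain into the new task, whether or not they still hold there.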

However, transfer learning in AI raises some ethical concerns of its own: not only the potential consequences for people affected by the repurposed algorithms, but also potential new sources of error. For example, Wang and others identify a potential vulnerability to misclassification attacks.

There are also some questions of knowledge ownership and privacy that already arose with older modes of knowledge transfer (see for example Baskerville and Dulipovici).



By the way, if you thought the opening quote was a reference to Spiderman, Quote Investigator has traced a version of it to the French Revolution, with later versions attributed to various statesmen including Churchill and Roosevelt.

Richard Baskerville and Alina Dulipovici, The Ethics of Knowledge Transfers and Conversions: Property or Privacy Rights? (HICSS'06: Proceedings of the 39th Annual Hawaii International Conference on System Sciences, 2006)

Katrina Furey and Kirsten Wilkins, Prescribing “Off-Label”: What Should a Physician Disclose? (AMA Journal of Ethics, June 2016)

Marian McHugh, Microsoft makes things personal at this year's Future Decoded (Channel Web, 2 November 2018)

Rachel Silver, The Secret Behind the New AI Spring: Transfer Learning (TDWI, 24 August 2018)

Richard Speed, 'Privacy is a human right': Big cheese Sat-Nad lays out Microsoft's stall at Future Decoded (The Register, 1 November 2018)

Bolun Wang et al, With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning (Proceedings of the 27th USENIX Security Symposium, August 2018)


See also Off-Label (March 2005)

Sunday, July 16, 2017

Could we switch the algorithms off?

In his review of Nick Bostrom's book Superintelligence, Tim Adams suggests that Bostrom has been reading too much of the science fiction he professes to dislike. When people nowadays want to discuss the social and ethical implications of machine intelligence and intelligent machines, they naturally frame their questions after the popular ideas of science fiction: Frankenstein (Mary Shelley 1818), Rossum’s Universal Robots (Karel Čapek 1921), Three Laws of Robotics (Isaac Asimov 1942 onwards), Multivac (Asimov 1955 onwards), Hitchhiker's Guide to the Galaxy (Douglas Adams 1978 onwards).
  • What happens if our creations hate us, or get depressed?
  • What happens if the robots rebel? How can they outwit the constraints we place upon them?*
  • Can humans (Susan Calvin, Arthur Dent, Ronald Bakst) outwit the machines?
@DianeCoyle1859 echoes these questions when she asks whether humans could fight back against the superintelligence described by Nick Bostrom:
  • by unplugging them if they turn on us?
  • by removing sensors and RFID tags and so on, to deny them the data they feed upon?
But the analogy that springs to my mind is that disentangling humanity from machine intelligence is likely to be at least as complicated as disentangling the UK economy from the EU. The global economy is dependent on complex cybernetic systems - from algorithmic trading to just-in-time supply chains, automated warehouses, air traffic control, all the troubles of the world. Good luck trying to phase that lot out.


*By the way, it's not that difficult to outwit humans. In a recent study, a raven outsmarted the scientists by inventing her own way of accessing a reward inside a box and was therefore excluded from further tests. And don't get me started on the intelligence of bees.




Tim Adams, Artificial intelligence: ‘We’re like children playing with a bomb’ (Guardian, 12 June 2016)

Marc Ambasna-Jones, Are Asimov's laws enough to stop AI stomping humanity? (The Register, 15 Aug 2017)

Isaac Asimov, The Life and Times of Multivac (1975 via Atari Archives) (this is the story featuring Ronald Bakst)

Diane Coyle, Do AIs drive autonomous vehicles? (15 July 2017)

Ian Johnston, Ravens can be better at planning ahead than four-year-old children, study finds (Independent, 13 July 2017)

Wikipedia: All the Troubles of the World, Multivac, Three Laws of Robotics


Update: added link to article by @mambjo 19 August 2017