Sunday, July 16, 2017

Could we switch the algorithms off?

In his review of Nick Bostrom's book Superintelligence, Tim Adams suggests that Bostrom has been reading too much of the science fiction he professes to dislike. When people nowadays want to discuss the social and ethical implications of machine intelligence and intelligent machines, they naturally frame their questions in terms of the popular ideas of science fiction: Frankenstein (Mary Shelley 1818), Rossum's Universal Robots (Karel Čapek 1921), the Three Laws of Robotics (Isaac Asimov 1942 onwards), Multivac (Asimov 1955 onwards), The Hitchhiker's Guide to the Galaxy (Douglas Adams 1978 onwards).
  • What happens if our creations hate us? Or get depressed?
  • What happens if the robots rebel? How can they outwit the constraints we place upon them?*
  • Can humans (Susan Calvin, Arthur Dent, Ronald Bakst) outwit the machines?
@DianeCoyle1859 echoes these questions when she asks whether humans could fight back against the superintelligent machines described by Nick Bostrom:
  • by unplugging them if they turn on us?
  • by removing sensors and RFID tags and so on, to deny them the data they feed upon?
But the analogy that springs to my mind is that disentangling humanity from machine intelligence is likely to be at least as complicated as disentangling the UK economy from the EU. The global economy now depends on complex cybernetic systems - from algorithmic trading and just-in-time supply chains to automated warehouses and air traffic control - and all the other troubles of the world. Good luck trying to phase that lot out.

*By the way, it's not that difficult to outwit humans. In a recent study, a raven outsmarted the scientists by inventing her own way of getting at a reward inside a box, and was therefore excluded from further tests. And don't get me started on the intelligence of bees.