Behaviours developed in a state of scarcity may cease to be appropriate in a state of abundance. Our stone age ancestors struggled to get enough energy-rich food, so they acquired a taste for food with a strong energy hit. We inherited a greed for sweet and fatty foods, and can now stuff our faces on delicacies our stone age ancestors never knew, such as ice-cream and cheesecake.
***
So let's talk about data. Once upon a time, data processing systems struggled to get enough data, and long-term storage was expensive, so we were told to regard data as an asset. People learned to grab as much data as they could, and to keep it until the storage was full. But the greed for data was always moderated by the costs of collection, storage and retrieval, as well as by the limited choice of data available in the first place.
Take away the assumption of data scarcity and cost, and our greed for data becomes problematic. We now recognize that data (especially personal data) can be a liability as much as an asset, and have become wedded to the principle of data minimization - only collecting the data you need, and only keeping it for as long as you need it.
***
But data scarcity is not the only outdated assumption that still influences our behaviour. Let's also talk about connectivity. Once upon a time, connectivity was intermittent, slow, unreliable. Hungry for greater connectivity, computer scientists dreamed of a world where everything was always on. More recently, Facebook has argued that Connectivity is a Human Right. (But you can only read this document if you have a Facebook account!)
But as with an overabundance of data, we may experience an overabundance of connectivity. Thus we are starting to realise the downside of "always on", not just in the highly insecure world of the Internet of Things (Rainie and Anderson) but also in corporate computing (Ben-Meir, Hill).
Increasingly, products and services are being designed for "always on" operation. Ben-Meir notes Apple’s assertion that constant connectivity is essential for features such as AirDrop and AirPlay, and only today a colleague was grumbling to me about the downgrading of offline functionality in Microsoft Outlook.
Perhaps, therefore, by analogy with the data minimization principle, we need a network minimization principle. The wider the network, the larger the scope of responsibility. Or as Bruce Schneier puts it, "the more we network things together, the more vulnerabilities on one thing will affect other things". So don't just connect because you can. Connect for a reason, disconnect by default, support offline functionality and disruption-tolerance, and prefer secure hubs to insecure peer-to-peer connections.
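To make "connect for a reason, disconnect by default" concrete, here is a minimal sketch. The ConnectionGuard class and its method names are invented for illustration, not taken from any product: a link can only be opened against a stated reason, the reason is logged for audit, and the link is closed automatically as soon as the task completes.

```python
from contextlib import contextmanager

class ConnectionGuard:
    """Disconnect-by-default: a link is opened only for a stated reason,
    and closed again as soon as the work is done."""

    def __init__(self):
        self.connected = False
        self.audit_log = []                 # record why each connection was made

    @contextmanager
    def connect(self, reason):
        if not reason:
            raise ValueError("connect for a reason: none given")
        self.audit_log.append(reason)
        self.connected = True               # connect for a reason...
        try:
            yield self
        finally:
            self.connected = False          # ...and disconnect by default

# usage: the link exists only inside the block
guard = ConnectionGuard()
with guard.connect("sync invoices"):
    pass                                    # do the connected work here
```

The point of the context-manager shape is that disconnection is not something the caller has to remember: it happens even if the connected work raises an exception.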
Bruce Schneier again: "We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized. If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much."
Elad Ben-Meir, How an 'Always-On' Culture Compromises Corporate Security (Info Security, 2 November 2017)
Paul Hill, Always-on Access Brings Always-Threatening Security Risks (System Experts, 25 June 2015)
Lee Rainie and Janna Anderson, The Internet of Things Connectivity Binge: What Are the Implications? (Pew Research Center, 6 June 2017)
Bruce Schneier, Click Here to Kill Everyone (New York Magazine, 27 January 2017)
Maeve Shearlaw, Mark Zuckerberg says connectivity is a basic human right – do you agree? (Guardian 3 Jan 2014)
Related post: Pax Technica - On Risk and Security (November 2017)
Thanks to @futureidentity for useful discussion
Tuesday, March 20, 2018
Making the World More Open and Connected
Last year, Facebook changed its mission statement, from "Making The World More Open And Connected" to "Bringing The World Closer Together".
As I said in September 2005, interoperability is not just a technical question but a sociotechnical question (involving people, processes and organizations). (Some of us were writing about "open and connected" before Facebook existed.) But geeks often start with the technical interface, or what is sometimes called an API.
For many years, Facebook had an API that allowed developers to snoop on friends' data: this was shut down in April 2015. As Constine reported at the time, this was not just because the API was "kind of shady" but also to "deny developers the ability to build apps ... that could compete with Facebook's own products". Sandy Parakilas (himself a former Facebook executive) made a similar point (as reported by Paul Lewis): Facebook executives were nervous about the commercial value of data being passed to other companies, and worried that the large app developers could be building their own social graphs.
In other words, the decision was not motivated by concern for user privacy but by the preservation of Facebook's hegemony.
When Tim Berners-Lee first talked about the Giant Global Graph in 2007, it seemed such a good idea. When Facebook launched the Open Graph in 2010, this was billed as "a taste of the future where everything can be more personalized". Like!
Philip Boxer and Richard Veryard, Taking Governance to the Edge (Microsoft Architecture Journal, August 2006)
Josh Constine, Facebook Is Shutting Down Its API For Giving Your Friends’ Data To Apps (TechCrunch, 28 April 2015)
Josh Constine and Frederic Lardinois, Everything Facebook Launched At f8 And Why (TechCrunch, 2 May 2014)
John Lanchester, You Are the Product (London Review of Books, 17 August 2017)
Paul Lewis, 'Utterly horrifying': ex-Facebook insider says covert data harvesting was routine (Guardian, 20 March 2018)
Caroline McCarthy, Facebook F8: One graph to rule them all (CNet, 21 April 2010)
Sandy Parakilas, We Can’t Trust Facebook to Regulate Itself (New York Times, 19 November 2017)
Wikipedia: Giant Global Graph, Open API
Related Posts SOA Stupidity (September 2005), Social Networking as Reuse (November 2007), Security is Downstream from Strategy (March 2018), Connectivity Hunger (June 2018)
Labels: API, connectivity, Facebook, interoperability, openness, privacy, social networking, trust
Monday, December 22, 2008
SOA in an Offline World
There is a discussion on LinkedIn entitled SOA in an Offline World. The discussion has a technical focus: what kind of technology architecture to use when network communications are unreliable or intermittent. Some design patterns, such as decoupling and asynchronous communication, can support intermittent connection, and these patterns may be appropriate in a range of situations, including military and medical ones.
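As a sketch of the decoupling and asynchronous patterns just mentioned (the class and method names here are my own, not from the LinkedIn discussion), a store-and-forward outbox lets the caller send immediately while delivery waits for the link to come back:

```python
from collections import deque

class StoreAndForward:
    """Decouple callers from an unreliable network: messages queue locally
    and are flushed whenever a connection happens to be available."""

    def __init__(self, transport):
        self.outbox = deque()        # local queue survives while the link is down
        self.transport = transport   # callable that may raise ConnectionError

    def send(self, message):
        self.outbox.append(message)  # caller returns immediately (asynchronous)

    def flush(self):
        """Attempt delivery; anything undeliverable stays queued, in order."""
        delivered = []
        while self.outbox:
            message = self.outbox.popleft()
            try:
                self.transport(message)
                delivered.append(message)
            except ConnectionError:
                self.outbox.appendleft(message)   # keep it for the next attempt
                break
        return delivered
```

The caller is decoupled from the network in both time (send now, deliver later) and outcome (a failed flush loses nothing and preserves message order).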
A broader architectural question is whether we can use layering to hide these technical issues from the business-facing services in the layers above. In an ideal world, we would have a disruption-tolerant service platform, and the core business services and applications can then operate as if we had perfect and permanent connectivity.
In the 1990s, Peter Deutsch and James Gosling identified the Eight Fallacies of Distributed Computing - invalid assumptions that inexperienced designers make when designing distributed systems. See Arnon Rotem-Gal-Oz and Paul Vincent (TIBCO). These include the assumption of perfect and permanent connectivity.
By the way, Tim Bass questions whether anyone nowadays suffers from these fallacies. I think he's got a point, but perhaps the problem lies in the word "fallacy". If you ask a designer whether he believes connectivity is going to be perfect, he will almost certainly say no. But if you inspect his designs, you may well find that he has failed to allow adequately for imperfect connectivity. Not so much a failure of belief as a failure of attention.
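In code, the failure of attention usually looks like a happy path with no deadline and no retry budget. Here is a hedged sketch of the opposite discipline (the function name and its return values are invented for the example): every attempt has an explicit timeout, and exhausting all attempts is reported honestly rather than silently hanging.

```python
import socket

def probe(host, port, timeout=2.0, retries=3):
    """Allow for imperfect connectivity explicitly: each attempt has a
    deadline, and running out of attempts is a reportable outcome."""
    for _ in range(retries):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "reachable"
        except OSError:              # refused, unreachable, or timed out
            continue
    return "unknown"                 # not "down": we simply could not tell
```

Note that the exhausted case returns "unknown" rather than "down" - the absence of an answer is not the same as a negative answer.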
So the important question here is - can we compensate for the imperfections of distributed systems by having a really clever architecture, supported by really clever technology, so that the applications can operate as if we didn't have any of the problems of distribution at all? Some people may well believe that to be possible, either now or in the foreseeable future.
I don't think it's so easy. I have long argued that SOA needs to embrace three-valued logic. Done properly, this would make the whole architecture disruption-tolerant, not just the underlying layers. We also need to understand how disruption-tolerance affects the behaviour of the whole business-facing system. Not just technology, then.
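As an illustration of what three-valued logic means in practice, here is a minimal sketch of Kleene's strong connectives, using Python's None for the unknown value that an unanswered service call leaves behind (the function names are invented for the example):

```python
UNKNOWN = None   # the third truth value: the service did not answer

def tv_and(a, b):
    """Kleene AND: False dominates; otherwise UNKNOWN propagates."""
    if a is False or b is False:
        return False
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    return True

def tv_or(a, b):
    """Kleene OR: True dominates; otherwise UNKNOWN propagates."""
    if a is True or b is True:
        return True
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    return False

# A disrupted check need not block the business decision:
payment_ok = UNKNOWN                          # the payment service timed out
stock_ok = False                              # the stock service answered
order_valid = tv_and(payment_ok, stock_ok)    # decidably False despite the disruption
```

The interesting property is that some decisions remain decidable under disruption: tv_and(UNKNOWN, False) is still False, so the business-facing system can sometimes proceed even when one of its services is unreachable.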