Thursday, November 07, 2019

On Magic Numbers - Privacy and Security

People and organizations often adopt a metrical approach to sensemaking, decision-making and policy. They attach numbers to things, perhaps using a weighted scorecard or some other calculation method, and then make judgements about status, priority or action based on these numbers. This is sometimes called triage.

In the simplest version, a single number is produced. More complex versions may involve producing several numbers (sometimes called a vector). For example, if an item can be represented by a pair of numbers, these can be used to position the item in one of the quadrants of a 2x2 matrix. See my post Into The Matrix.
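
For illustration, here is a minimal sketch in Python of how a pair of scores might place an item on such a quadrant. The axis names and the cut-off value are my own assumptions, not taken from any particular method.

```python
# A minimal sketch of positioning an item on a 2x2 quadrant.
# The axes (impact, likelihood) and the cut-off of 5 are
# illustrative assumptions, not from any particular method.

def quadrant(impact: float, likelihood: float, cutoff: float = 5.0) -> str:
    """Place an item scored on two 0-10 axes into one of four quadrants."""
    vertical = "high-impact" if impact >= cutoff else "low-impact"
    horizontal = "high-likelihood" if likelihood >= cutoff else "low-likelihood"
    return f"{vertical} / {horizontal}"

print(quadrant(impact=8, likelihood=3))  # high-impact / low-likelihood
print(quadrant(impact=2, likelihood=9))  # low-impact / high-likelihood
```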

In this post, I shall look at how this approach works for managing risk, security and privacy.

A typical example of security scoring is the Common Vulnerability Scoring System (CVSS), which assigns numbers to security vulnerabilities. These numbers may determine or influence the allocation of resources within the security field.
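
For triage purposes, the 0-10 CVSS base score is commonly collapsed into qualitative severity bands. A minimal sketch of the CVSS v3.x bands:

```python
# A minimal sketch mapping a CVSS v3.x base score (0.0-10.0) onto
# the published qualitative severity bands often used for triage.

def cvss_rating(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores run from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_rating(3.1))  # Low
print(cvss_rating(9.8))  # Critical
```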

Scoring systems are sometimes used within the privacy field as part of Privacy by Design (PbD) or Data Protection Impact Assessment (DPIA). The resultant numbers are used to decide whether something is acceptable, unacceptable or borderline. And in 2013, two researchers at ENISA published a scoring system for assessing the severity of personal data breaches: scores below 2 indicate low severity, while scores above 4 indicate very high severity.
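
As I read the ENISA paper, the severity score is calculated as SE = DPC x EI + CB, where DPC scores the data processing context, EI the ease of identifying the data subjects, and CB the circumstances of the breach. A rough sketch follows; the input values and the intermediate band boundaries are my own illustrative assumptions.

```python
# A rough sketch of the ENISA breach severity calculation as I read
# the 2013 paper: SE = DPC x EI + CB. The input values below and the
# intermediate band boundaries are illustrative assumptions.

def breach_severity(dpc: float, ei: float, cb: float) -> tuple[float, str]:
    se = dpc * ei + cb
    if se < 2:
        band = "low"
    elif se < 3:
        band = "medium"
    elif se < 4:
        band = "high"
    else:
        band = "very high"
    return se, band

print(breach_severity(dpc=2, ei=0.5, cb=0.5))  # (1.5, 'low')
print(breach_severity(dpc=3, ei=1.0, cb=1.5))  # (4.5, 'very high')
```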

The advantage of these systems is that they are (relatively) quick and repeatable, especially across large, diverse organizations with variable levels of subject matter expertise. The results are typically regarded as objective, and may therefore be taken more seriously by senior management and other stakeholders.

However, these systems are merely indicative, and the scores may not always provide a reliable or accurate view. For example, I doubt whether any Data Protection Officer would be justified in disregarding a potential data breach simply on the basis of a low score from an uncalibrated calculation.

Part of the problem is that these scoring systems rely on a highly simplistic algebra: they assume you can break a complex situation into a number of separate factors (e.g. vulnerabilities) and then add these back together with some appropriate weightings. The weightings can be pretty arbitrary, and may not be valid for your organization. More importantly, as Marc Rogers argues (as reported by Shaun Nichols), the more sophisticated attacks rely on combinations of vulnerabilities, so assessing each vulnerability separately completely misses the point.

Thus although two minor bugs may each have a low CVSS rating, the interaction between them could allow a high-severity attack. "It is complex, but there is nothing in the assessment process to deal with that," Rogers said. "It has lulled us into a false sense of security where we look at the score, and so long as it is low we don't allocate the resources."
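
The following sketch shows the gap in purely numerical terms. The bug names, scores, triage threshold and chaining rule are all invented for illustration; the point is simply that per-item triage never sees the combination.

```python
# Invented scores for two bugs that each look minor on their own.
vulnerabilities = {
    "open-redirect": 3.1,
    "token-in-referrer": 2.9,
}

# The scorecard view: triage each vulnerability separately.
for name, score in vulnerabilities.items():
    priority = "defer" if score < 4.0 else "fix now"
    print(f"{name}: {score} -> {priority}")

# The attacker's view: chain the two low-rated bugs together.
# The "interaction bonus" is a made-up stand-in for the extra
# severity that a combined exploit can achieve.
chained = min(10.0, sum(vulnerabilities.values()) + 3.0)
print(f"chained attack: {chained} -> fix now")
```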

One organization that has moved away from the scorecard approach is the Electronic Frontier Foundation. In 2014, they released a Secure Messaging Scorecard for evaluating messaging apps. However, they later decided that the scorecard format dangerously oversimplified the complex question of how various messengers stack up from a security perspective, so they archived the original scorecard and warned people against relying on it.


Nate Cardozo, Gennie Gebhart and Erica Portnoy, Secure Messaging? More Like A Secure Mess (Electronic Frontier Foundation, 26 March 2018)

Clara Galan Manso and Sławomir Górniak, Recommendations for a methodology of the assessment of severity of personal data breaches (ENISA, 2013)

Shaun Nichols, We're almost into the third decade of the 21st century and we're still grading security bugs out of 10 like kids. Why? (The Register, 7 Nov 2019)

Wikipedia: Common Vulnerability Scoring System (CVSS)

Related posts: Into The Matrix (October 2015), False Sense of Security (June 2019)
