The Open Integrity Index (OII) is meant to serve different audiences in different ways. For some, it will be enough that we provide a reliable source of current information about how various tools stack up against a transparent, community-maintained set of digital security best practices. Others will likely rely on the Index primarily as a repository of logistical metadata about the tools themselves, a way to answer questions like: When was the latest version released? How far behind is the version for my distribution or operating system? When was a particular vulnerability patched? Can I download the tool securely? What is the official checksum of the binary installer?
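The last two questions above suggest a concrete workflow. As a minimal sketch (the filename and expected digest are placeholders, not real OII data), verifying a downloaded installer against a published checksum might look like this:

```python
import hashlib

# Placeholder stand-in for a downloaded installer; in practice this
# would be the file fetched from the project's release page.
installer_bytes = b"example installer bytes"

# In practice, the expected digest would come from the project's signed
# release notes or an OII metadata record. Here we derive it from the
# same bytes so the sketch is self-contained.
expected_digest = hashlib.sha256(installer_bytes).hexdigest()

# Verification step: recompute the digest of what was actually
# downloaded and compare it to the published value.
actual_digest = hashlib.sha256(installer_bytes).hexdigest()

if actual_digest == expected_digest:
    print("checksum OK")
else:
    print("checksum MISMATCH: do not install")
```

The comparison itself is trivial; the hard part, and the part an index can help with, is publishing the expected digest somewhere trustworthy and keeping it current.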
However, to address the most important needs of at-risk users, we will have to go a step further. By committing to a user-centric perspective and precise, accessible language, we can do more than just provide evidence-based answers to those who already know what questions to ask. We can help people develop a sense of what questions they should be asking. And, at the same time, we can greatly improve their chances of understanding the answers they find--regardless of where they find them.
The goal of the Impact Group is to ensure that OII metrics and analysis remain grounded in and relevant to the threats faced by at-risk users. Initially, the key question we should address is fairly abstract: how do software security best practices (or the lack thereof) impact these users? This task will become more concrete as we expand the set of tools covered by the Index and establish the precise metrics used to evaluate these practices. Naturally, this is a two-way street. If the Impact Group determines that certain properties are more broadly relevant than others, it will make sense to prioritise the collection and analysis of metrics that target those properties.
This group will also examine more specific issues that might (initially) be relevant only to particular users, regions or tools. There is an explanatory element to this work. While it may take some time to put the resulting insights into a format that is more useful than an extremely verbose FAQ, questions such as the following are likely in scope:
In other words, we should try to build toward a community consensus on how to explain the things about which end-users hear mutterings when they ask around. To be clear, this is not about agreeing on what to recommend but on how to explain it. Taking this on will help us bridge the gap between inscrutable matrices of security properties and usable knowledge--a tall order in the absence of specific threat models, but essential if we are serious about adopting a user-centric approach.
The most obvious answers to the top-level question posed above (how do software security best practices impact users?) are historical. And, certainly, the Impact Group should find and reference research, news stories, CVEs, legal cases, blog posts and changelogs that provide actual examples. Unfortunately, this kind of evidence is rare and can sometimes be challenging to verify. So, when dealing with the hypothetical, we should try to tell stories rather than specify vulnerabilities.
In the grand tradition of Alice, Bob, Eve and Mallory, it is often better to spell out every step of an attack scenario than to rely on precise technical language for the sake of brevity. This is particularly true when communicating with a skeptical but non-expert audience, and when trying to teach a concept rather than merely describe it.
Clearly, the OII data structure will keep track of whether a given tool implements forward secrecy. But we are more likely to leverage that datum usefully if we think about it (and, where practical, explain it) in terms of a particular adversary who records a particular user's encrypted traffic for a particular reason, then obtains his or her secret key in a particular way, thereby inflicting particular consequences on particular individuals. This is true even where the particulars are theoretical.
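One way to picture this pairing of a bare metric with its narrative context is a record that carries both. The sketch below is purely illustrative: the field names, tool name and scenario text are invented, not part of any actual OII schema.

```python
# Hypothetical OII-style record: a raw boolean property alongside the
# kind of story the text recommends telling about it. All names and
# values here are invented for illustration.
record = {
    "tool": "ExampleMessenger",   # placeholder tool name
    "forward_secrecy": False,     # the bare datum the Index would store
    "scenario": (
        "An adversary records a user's encrypted traffic for months, "
        "later obtains the long-term secret key, and can then decrypt "
        "every recorded conversation."
    ),
}

def explain(rec):
    """Turn the boolean into a user-facing sentence plus its scenario."""
    status = "implements" if rec["forward_secrecy"] else "does not implement"
    return f"{rec['tool']} {status} forward secrecy. {rec['scenario']}"

print(explain(record))
```

The point of the `scenario` field is not that every record needs canned prose, but that the data model should leave room for the explanatory layer rather than reducing each property to an unexplained checkbox.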