Scott Kitterman <deb...@kitterman.com> writes:

> Yes.  I think that's the core of the disagreement.  In my view, when I
> type the passphrase for my key, I'm asserting responsibility for the
> contents of what I'm signing.  It doesn't mean it is correct or
> uncompromised, but I am taking responsibility for it.

Right.  And I come from a culture that emphasized blameless postmortems
and systems design, and that approaches security review from a similar
perspective: assigning responsibility is not in and of itself a useful
thing to do.  Just because someone is responsible
doesn't mean that we're more secure.  It may mean that you have someone
you can punish afterwards, but it's very questionable how much that helps
with security, really.

Assigning responsibility is, in that model, only important to the degree
to which it will change people's actual behavior towards behavior that is
more secure, either before or after the fact.  If one assigns
responsibility to someone for something that isn't realistically under
their control, or in a way that doesn't cause their behavior to change,
the argument is that nothing is truly accomplished from a security
standpoint.  It's an illusion of security without actual security.

One of my goals in doing security design is to try to reduce the degree to
which humans are performing repetitive validation tasks because humans are
not good at maintaining constant vigilance.  We know this from a bunch of
empirical studies on, for example, airport screening.  If a human does a
repetitive task with a very low rate of true positives, their attention
will fade and there will be a lot of false negatives.  Asking humans to do
this is a recipe for failure, and making the humans responsible for doing
this correctly and threatening them with consequences for not doing it
correctly only slightly decreases the risk of failure.

This is exactly why reproducible builds are so important: that involves
finding a way for computers to do the sorts of repetitive validation tasks
that computers are good at and that humans are very bad at.
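To make the point concrete, here is a minimal sketch (not Debian's actual
reproducible-builds tooling, and the file paths are hypothetical) of the
kind of check a computer can repeat tirelessly that a human cannot:
comparing the digests of two independently produced build artifacts.

    #!/usr/bin/env python3
    # Illustrative sketch only: compare the SHA-256 digests of two
    # independently built artifacts to check that they are bit-for-bit
    # identical, the core repetitive task behind reproducible builds.

    import hashlib
    import sys


    def sha256(path):
        """Return the hex SHA-256 digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()


    if __name__ == "__main__":
        first, second = sys.argv[1], sys.argv[2]
        if sha256(first) == sha256(second):
            print("builds match: artifact is reproducible")
        else:
            print("builds differ: investigate before trusting either")
            sys.exit(1)

A computer will run that comparison identically on the millionth package
as on the first; a human asked to eyeball the same thing will not.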

-- 
Russ Allbery (r...@debian.org)              <https://www.eyrie.org/~eagle/>
