On 2/15/23 2:12 PM, Murray S. Kucherawy wrote:
> On Tue, Feb 14, 2023 at 11:44 AM Michael Thomas <m...@mtcc.com> wrote:
>> At maximum, isn't it just the x= value? It seems to me that if you
>> don't specify an x= value, or it's essentially infinite, you're saying
>> you don't care about "replays", which is fine in most cases and you
>> can just ignore it. Something that really throttles down x= should be
>> a tractable problem, right?
> Remember that the threat model is:
> 1) send a message through A to B, acquiring A's signature
> 2) collect the message from B
> 3) re-post the message to C, D, E, ...
> These days, this attack is complete within seconds. If you select an
> "x=" small enough to thwart this, you have to expect that all
> legitimate deliveries will happen even faster. But email delivery can
> be slow for lots of legitimate reasons. So I would argue that "x="
> alone can't really solve this problem without introducing other
> constraints that we don't really want.
I'm not saying that it solves the problem, only that it bounds how much
you'd need to store.
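Concretely, the only state "x=" enforcement itself needs is the clock; a
rough sketch of the verifier-side check (Python, tag parsing elided, names
made up, not from any implementation):

    import time

    def signature_expired(sig_tags: dict) -> bool:
        """True when the signature's x= tag (seconds since the Unix epoch,
        per RFC 6376) is already in the past."""
        x = sig_tags.get("x")
        if x is None:
            return False    # no x= tag: the signer isn't asking for expiry
        return int(x) < time.time()

And anything you do choose to store only has to live until that same x=
passes, which is where the storage bound comes from.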
> There's also the question of whether "x=" is properly enforced. RFC
> 6376 says verifiers "MAY" choose to enforce it. I think I asked about
> this at a conference recently and was told that it's not universally
> supported by implementations.
Others have said that the enforcement is pretty good. But I have no way
to evaluate if that's true.
> Going the route of some kind of duplicate signature detection
> alleviates the risk of that approach, but also sort of inverts it: If
> you assume each signature will only appear once, there's a window
> during which the first signature works, and then a second window
> during which duplicates will be blocked, but then that process
> recycles when the cache expires. That could mean replays work if I
> just out-wait your cache. You also introduce the risk of false
> positives, where a legitimate message tries to arrive in separate
> envelopes with the same signature, and all but the first one get blocked.
I would imagine the cache only needs to hold an entry for the length of a
small x= expiry. That's really a tuning problem for the sending domain.
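For the sake of discussion, a toy sketch of that table, keyed on the b=
value and pruned against each signature's own x= (Python, in-memory,
names made up; a real deployment would obviously want storage shared
across the receiving MTAs):

    import time

    class ReplayCache:
        """Toy duplicate-signature table; keys are the b= tag value."""
        def __init__(self):
            self.seen = {}    # b= value -> that signature's x= expiry time

        def already_seen(self, b_value: str, x_expiry: float) -> bool:
            now = time.time()
            # Drop entries whose own x= has passed; those signatures no
            # longer verify anyway, so a small x= directly bounds the
            # size of the table.
            self.seen = {b: x for b, x in self.seen.items() if x > now}
            if b_value in self.seen:
                return True    # same signature seen inside the window
            self.seen[b_value] = x_expiry
            return False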
But I mentioned in another response that if you detect lots of replays, you
could turn up the dial on your spam filters; that may well thwart a sizable
amount of spam *and* let you act retroactively on spam that has already
made it past the filter.
>> But even at scale it seems like a pretty small database in
>> comparison to the overall volume. It would be easy for a
>> receiver to just prune it after a day or so, say.
> It creates an additional external dependency on the SMTP server. I
> guess you have to evaluate the cost of the database versus the cost of
> the protection it provides, and include reasonable advice about what
> to do when the database is not available.
There are *tons* of external dependencies on the filtering MTA. I really
can't imagine that this would be the straw that breaks the camel's back.
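And the "what to do when the database is not available" advice seems like
it can be as simple as failing open, e.g. (again just a sketch, hypothetical
names):

    def check_replay(cache, b_value, x_expiry):
        """Consult the replay cache, but never block mail because it's down."""
        try:
            return cache.already_seen(b_value, x_expiry)
        except (ConnectionError, TimeoutError):
            return False    # cache unreachable: fail open, treat as not a replay

i.e. you lose the replay signal for that message and fall back to whatever
the filters would have done anyway.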
Mike
_______________________________________________
Ietf-dkim mailing list
Ietf-dkim@ietf.org
https://www.ietf.org/mailman/listinfo/ietf-dkim