Thanks, Matt. That sounds like a good suggestion.
Nigel, since you have the emails, if you could capture the debug output
in a file and post it as you did the messages, perhaps someone wise
could evaluate what is going on.
You can capture the debug output by using:
spamassassin -D -t < message1 2> debug1.txt
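For comparison, the same for the low-scoring sample (assuming it is
saved as message2; the filename is just an example):
spamassassin -D -t < message2 2> debug2.txt
diff debug1.txt debug2.txt
The full debug output is verbose, so the diff will be noisy, but it
should show where the test results start to diverge.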
Andy Figueroa
Matt Kettler wrote:
Andy Figueroa wrote:
Matt (but not just to Matt), I don't understand your reply (though I
am deeply in your debt for the work you do for this community). The
sample emails that Nigel posted are identical in content, including
obfuscation. I've noted the same situation. Yet, the scoring is
really different. On the low scoring ones, DCC and RAZOR2 didn't hit,
and the BAYES score is different. The main differences are in the
headers: different forged From and To addresses. I thought these
samples were worthy of deeper analysis.
Well, there might be other analyses worth doing.
However, Nigel asked why the drugs rules weren't matching. I answered
that question alone.
Not sure why the change in razor/dcc happened.
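If anyone wants to dig into that part, the network tests can be
isolated with the debug-area syntax (this assumes SpamAssassin 3.x;
the area names razor2 and dcc may differ on other versions):
spamassassin -D razor2,dcc -t < message1 2> net1.txt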
BAYES changes are easily explained by the header changes, but a deeper
analysis would involve running the messages through spamassassin -D
bayes and looking at the exact tokens.
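A rough sketch of that, assuming the two samples are saved as message1
and message2 (the format of the bayes debug lines varies by version,
so the grep pattern may need adjusting):
spamassassin -D bayes -t < message1 2> bayes1.txt
spamassassin -D bayes -t < message2 2> bayes2.txt
grep -i token bayes1.txt > tokens1.txt
grep -i token bayes2.txt > tokens2.txt
diff tokens1.txt tokens2.txt
Comparing which tokens each message hit is usually enough to explain
a BAYES score swing.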