On Thu, 25 Jan 2007 10:28:21 -0500, Andy Figueroa
<[EMAIL PROTECTED]> wrote:

>Thanks, Matt.  That sounds like a good suggestion.
>
>Nigel, since you have the emails, if you could capture the debug output 
>in a file and post like you did the messages, perhaps someone wise could 
>evaluate what is going on.
>
>You can capture the debug output by using:
>spamassassin -D -t < message1 2> debug1.txt
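>If it helps, a quick loop along these lines should capture all four in
>one pass (assuming the messages are saved as message1 through message4
>in the current directory):
>
>for i in 1 2 3 4; do spamassassin -D -t < message$i 2> debug$i.txt; done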
>
>Andy Figueroa
>
>Matt Kettler wrote:
>> Andy Figueroa wrote:
>>> Matt (but not just to Matt), I don't understand your reply (though I
>>> am deeply in your debt for the work you do for this community).  The
>>> sample emails that Nigel posted are identical in content, including
>>> obfuscation.  I've noted the same situation.  Yet, the scoring is
>>> really different. On the low scoring ones, DCC and RAZOR2 didn't hit,
>>> and the BAYES score is different.  The main differences are in the
>>> headers' different forged From and To addresses.  I thought these
>>> samples were worthy of deeper analysis.
>> 
>> Well, there might be other analysis worth making.
>> 
>> However, Nigel asked why the drugs rules weren't matching. I answered
>> that question alone.
>> 
>> Not sure why the change in razor/dcc happened.
>> 
>> BAYES changes are easily explained by the header changes, but a deeper
>> analysis would involve running through spamassassin -D bayes and looking
>> at the exact tokens.
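>> For example, something like the following should restrict the debug
>> output to just the bayes area (the output filename is only an example):
>> 
>> spamassassin -D bayes -t < message1 2> bayes-debug1.txt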
>> 

Debug results are available on: 
http://dev.blue-canoe.net/spam/spam01.txt
http://dev.blue-canoe.net/spam/debug1.txt

http://dev.blue-canoe.net/spam/spam02.txt
http://dev.blue-canoe.net/spam/debug2.txt

http://dev.blue-canoe.net/spam/spam03.txt
http://dev.blue-canoe.net/spam/debug3.txt

http://dev.blue-canoe.net/spam/spam04.txt
http://dev.blue-canoe.net/spam/debug4.txt

Make of them what you will; I think I need more beer before that lot
makes much sense :-D

Kind regards

Nigel
