[EMAIL PROTECTED] writes:

> I only used the first message in my spam box, one that scored highly the
> first time around. I'm sure I could pick half a dozen at random and see
> similar results.
So what? Like I said, it's not how individual scores change, it's how
false positives and false negatives change.

>> The only worthwhile measures are false positive and false negative rates
>> over a large sample size. There are various ways to measure those two
>> attributes (and ways to combine the two into a single number), but our
>> focus is on improving both from release to release.

> I really don't see it that way. I really don't think just looking at the
> false positives and negatives is looking at the whole picture. Ignoring
> the hits in the middle would be like ignoring successes (complete or
> partial) and only focusing on failures IMHO.

I don't think you understand. Anything that isn't a FP or FN is, by
definition, a success. So, if you have fewer FPs and fewer FNs, then you
must have more successes. There is nothing else. I don't care if a message
moves a few points one way or the other unless it's really close to my
threshold (in which case, I'm basically changing my acceptance rate for
FPs and FNs, but the ratio stays about the same).

> If 2.42 scored that message only a few points lower, I wouldn't be
> concerned. However, it almost halved the score. I can't see that as a
> good thing. Let's look at a couple of the individual rules that scored
> differently.

So what? How does the precise score of individual rules matter? What
matters is whether SA helps you file a message correctly as spam or
nonspam.

Dan

_______________________________________________
Spamassassin-talk mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/spamassassin-talk
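To make Dan's arithmetic concrete, here is a minimal sketch of the point:
accuracy is determined only by which side of the threshold each message
lands on, not by the score's exact value. The message scores and labels
below are hypothetical (not from the thread); the 5.0 threshold is
SpamAssassin's default required score.

```python
# Sketch: FP/FN accounting at a score threshold (hypothetical data).
THRESHOLD = 5.0  # SpamAssassin's default spam threshold

# (score, is_actually_spam) pairs -- made-up example messages
messages = [
    (12.3, True),   # high-scoring spam: filed correctly
    (6.8,  True),   # even if 12.3 "almost halved" to 6.8, still correct
    (4.1,  False),  # ham below threshold: correct
    (0.2,  False),  # ham well below threshold: correct
    (5.5,  True),   # spam just over threshold: correct
    (3.0,  True),   # spam under threshold: a false negative
]

def outcomes(msgs, threshold):
    fp = sum(1 for score, spam in msgs if score >= threshold and not spam)
    fn = sum(1 for score, spam in msgs if score < threshold and spam)
    # Anything that isn't a FP or FN is, by definition, a success.
    successes = len(msgs) - fp - fn
    return fp, fn, successes

fp, fn, ok = outcomes(messages, THRESHOLD)
print(fp, fn, ok)  # the 12.3 -> 6.8 score drop changes none of these counts
```

Note that lowering the first message's score from 12.3 to 6.8 leaves every
count unchanged: the FP/FN totals only move when a message crosses the
threshold.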