Rolf Loudon wrote:
> hi
>
> I use sa-update with channels and updates.spamassassin.org.
>
> After the latest run today I am getting matches against BAYES_99
> (which adds 3.5) on many messages where they previously triggered
> virtually no rules at all.
>
> This is causing many false positives, to the extent that I've had to
> set the score to zero to avoid them.
>
> Anyone else seeing this? Better, have the rule or rules that are
> causing this been identified (and fixed)?
>
> Else, if the bayes db has been damaged by something, how do I remove
> whatever is persuading it about the high probability this rule indicates? 

Well, sa-update itself wouldn't change the behavior of BAYES_99
unless the maintainers made a grossly stupid or malicious error.
All sa-update can do is change the rule definition, which amounts to:

body BAYES_99               eval:check_bayes('0.99', '1.00')

And it's pretty much been that for a few years now, and the latest
sa-update is no different. An error here would be really obvious.
BAYES_99's real behavior is going to be based on the contents of your
bayes database and possibly changes to the bayes code, neither of which
is touched by sa-update.

*However*, an updated ruleset might change the behavior of your
auto-learning by increasing spam scores via new rule hits. You might
want to go digging through your logs and see if there's a lot more
spam auto-learning going on post-upgrade. That said, I'd expect that
to show up over a period of weeks, not instantly.
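If you want to eyeball that, something along these lines would do. The log excerpt below is invented for illustration (point grep at your real mail log instead; the exact path and line format depend on how you run spamd):

```shell
# Fake log excerpt for illustration only -- replace with your real mail log
cat > /tmp/maillog.sample <<'EOF'
Jan 10 spamd[123]: result: Y 12 - BAYES_99,URIBL_BLACK autolearn=spam
Jan 10 spamd[123]: result: . 2 - BAYES_50 autolearn=no
Jan 11 spamd[123]: result: Y 15 - BAYES_99,RCVD_IN_XBL autolearn=spam
EOF

# Count how many messages were auto-learned as spam
grep -c 'autolearn=spam' /tmp/maillog.sample
```

Run that over a window before and after the update; a big jump in the count is the tell.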

Perhaps your bayes DB is just not well trained, and this is a problem
that's been building but has gone unnoticed so far? What does the
output of "sa-learn --dump magic" look like?
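In particular, look at the nspam/nham lines in that dump; if memory serves, Bayes won't score at all until it has seen roughly 200 spam and 200 ham, and a badly lopsided ratio is a classic cause of BAYES_99 false positives. Something like this pulls those counts out (the excerpt below uses invented counts; feed it your real dump):

```shell
# Hypothetical excerpt of "sa-learn --dump magic" output (counts invented)
cat > /tmp/magic.sample <<'EOF'
0.000          0       1230          0  non-token data: nspam
0.000          0        212          0  non-token data: nham
EOF

# Extract how many spam and ham messages the DB was trained on
awk '/nspam/ {print "spam learned: " $3} /nham/ {print "ham learned: " $3}' /tmp/magic.sample
```

With numbers like those (spam outnumbering ham ~6:1), the classifier would be heavily biased toward spam, which would look exactly like what you're describing.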



