On Thu, 26 Jul 2018 17:36:19 +0200
Matus UHLAR - fantomas wrote:

> >On Wed, 25 Jul 2018 19:49:04 +0200
> >Daniele Duca wrote:  
> >> In my current SA setup I use bayes_auto_learn along with some
> >> custom poison pills (autolearn_force on some rules), and I'm
> >> currently wondering if over-training SA's bayes could lead to the
> >> same "prejudice" problem as CRM114.
> >>
> >> I'm thinking that maybe it would be better to use
> >> "bayes_auto_learn_on_error 1"  
> 
> On 26.07.18 15:48, RW wrote:
> >On a busy server using auto-learning it's probably a good idea to set
> >this just to increase the token retention, and reduce writes into the
> >database.  
> 
> well, I have a somewhat different experience.


I didn't say that auto-training itself is a good idea.


> There are spams hitting
> negative-scoring rules, e.g. MAILING_LIST_MULTI, RCVD_IN_RP_*,
> RCVD_IN_IADB_*, and they are constantly trained as ham.


You should be able to work around that by adding noautolearn to those
rules' tflags.
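
For example, something along these lines in local.cf (a sketch only: I'm
showing just two of the RCVD_IN_RP_*/RCVD_IN_IADB_* rules, and the stock
tflags on these rules change between rule updates, so copy the existing
flags from the distributed rule files and just append noautolearn):

  # keep these negative-scoring rules out of the auto-learn calculation
  tflags MAILING_LIST_MULTI    nice noautolearn
  tflags RCVD_IN_RP_CERTIFIED  nice net noautolearn
  tflags RCVD_IN_IADB_VOUCHED  nice net noautolearn

Run spamassassin --lint afterwards to check that nothing complains.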


> I would like to prevent re-training when bayes disagrees with the score
> coming from other rules.


I don't know what you mean by 'prevent re-training', but auto-learning
is not supposed to happen if Bayes generates 1 point or more in the
opposite direction.
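
For reference, the score thresholds that gate the auto-learn decision are
configurable in local.cf; if I remember the stock defaults right they look
like this, and tightening them is the usual way to make auto-learning more
conservative:

  # auto-learn as ham only below 0.1 points, as spam only above 12.0
  bayes_auto_learn_threshold_nonspam 0.1
  bayes_auto_learn_threshold_spam    12.0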

 
> I quite wonder why the "learn" tflag causes the score to be ignored.
> Only the "noautolearn" flag should be used for this, so at least
> BAYES_99 and BAYES_00 could be taken into account when learning.


It's to prevent mistraining from running away in a vicious circle: if the
BAYES rules counted towards the auto-learn decision, a mistrained Bayes
could keep pushing borderline messages over the threshold and have them
learned in the same, wrong, direction, reinforcing its own mistake.
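
If you want to see which rules get that treatment, grepping the rule
files for the learn tflag shows it (the path below is a guess, use
wherever sa-update puts your rules); on a stock rule set it should come
back with little more than the BAYES_* rules:

  grep -rh '^tflags' /var/lib/spamassassin/*/updates_spamassassin_org/ \
      | grep -w learn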
