On Mon, 22 Aug 2011 15:46:14 +0200, J4K wrote:

# sa-learn  --dump magic
0.000          0          3          0  non-token data: bayes db version
0.000          0        640          0  non-token data: nspam
0.000          0       7001          0  non-token data: nham
0.000          0     366899          0  non-token data: ntokens

It's not really possible to say whether one nham corresponds to a fixed number of ntokens, and the same goes for nspam, so from these counts alone you can't tell whether anything was learned badly. Keep monitoring them, though; they show whether RBL testing in the MTA is driving the Bayes autolearn rate. To verify, test one known spam and one known ham: if Bayes agrees with both verdicts, then it has learned correctly.
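
As a quick sketch (the message filenames here are just placeholders), run each test message through spamassassin in test mode and check which BAYES rule fires:

# spamassassin -t < known-spam.msg | grep BAYES
# spamassassin -t < known-ham.msg  | grep BAYES

A BAYES_99 hit on the spam and BAYES_00 on the ham means Bayes classified both correctly.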

Basically one should learn all messages, not just the ones that were learned badly; this keeps more tokens weighted correctly, which reduces false classifications.
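
For example, if you keep hand-sorted corpora in mbox files (the paths below are just examples), a full training run looks something like:

# sa-learn --spam --mbox /path/to/spam-corpus.mbox
# sa-learn --ham  --mbox /path/to/ham-corpus.mbox

sa-learn skips messages it has already learned as the same type, and relearns ones that were previously learned as the wrong type, so rerunning this over the whole corpus is safe.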

Bayes lives in a very dynamic world :=)
