Hi,

I have written quite a few of my own rules, and would like to test them more 
systematically than I do now.

1. I've seen people commenting on specific rules, saying that a particular rule 
generates x false positives and y false negatives against their corpus of ham and 
spam.  How are they running these tests?
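To make question 1 concrete, here is the kind of test I have in mind — a toy sketch, not real SpamAssassin tooling. The rule here is a stand-in regex, and the directory layout (one message per file under a ham dir and a spam dir) is my own assumption:

```python
# Hypothetical sketch: count false positives/negatives for ONE rule
# against directories of ham and spam messages (one message per file).
# rule_matches() is a stand-in for a real rule, not SpamAssassin code.
from pathlib import Path
import re

def rule_matches(text: str) -> bool:
    # Stand-in rule: flag any mail mentioning "free money".
    return re.search(r"free money", text, re.IGNORECASE) is not None

def count_errors(ham_dir: str, spam_dir: str):
    # False positive: the rule fires on ham.
    false_pos = sum(1 for p in Path(ham_dir).glob("*")
                    if rule_matches(p.read_text(errors="ignore")))
    # False negative: the rule fails to fire on spam.
    false_neg = sum(1 for p in Path(spam_dir).glob("*")
                    if not rule_matches(p.read_text(errors="ignore")))
    return false_pos, false_neg
```

Is this roughly what people are doing, just with the real rule engine instead of a regex stand-in?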

2. I would like to run my entire rulebase against a ham/spam corpus and arrive at 
a statistically "best" weighting of the rules.  How is this done?
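By "best weighting" I mean something like the following toy sketch: treat each message as a vector of rule hits, label it spam or ham, and fit weights so the total score separates the two classes. The perceptron here, the 5.0 threshold, and all names are my own illustration, not any project's actual rescoring tool:

```python
# Hypothetical sketch: derive per-rule weights from rule-hit vectors
# (x[i] = 1 if rule i fired on the message) labeled spam=1 / ham=0,
# using a simple perceptron.  Real rescoring is more sophisticated.

def train_weights(hits, labels, epochs=200, lr=0.1, threshold=5.0):
    n_rules = len(hits[0])
    w = [0.0] * n_rules
    for _ in range(epochs):
        for x, y in zip(hits, labels):
            score = sum(wi * xi for wi, xi in zip(w, x))
            pred = 1 if score >= threshold else 0
            err = y - pred            # +1: missed spam, -1: flagged ham
            if err:
                # Nudge the weights of the rules that fired.
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w
```

A rule that fires only on spam should end up with a large weight, while a rule that fires on ham and spam alike should stay near zero. Is the real procedure something along these lines, and what tools implement it?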

Cordially,

Eric Hart
ehart [nospam] npi.net



