On Sun, Jan 20, 2008 at 09:41:58AM -0800, John D. Hardin wrote:
>On Sat, 19 Jan 2008, Loren Wilton wrote:
>
>> I would not be terribly surprised to find out that on average
>> there was no appreciable difference in running all rules of all
>> types in priority order, over the current method;
>
>Neither am I. Another thing to consider is the fraction of defined
>rules that actually hit and affect the score is rather small. The
>greatest optimization would be to not test REs you know will fail;
>but how do you do *that*?
Thanks for all the followups on my inquiry. I'm glad the topic is/was
considered, and it looks like there is some room for development, but
I now realize it is not as simple as I thought it might have been.

In answer to the question above: maybe the tests need their own
scoring? E.g., fast tests with big spam scores get a higher test score
than slow tests with low spam scores.

Maybe there is also some way to establish a hierarchy at startup which
groups rule processing into nodes: some nodes finish quickly, some
have dependencies, some are negative, etc. Then utilize some sort of
looping check (e.g., every 0.5 seconds) which can kill the remaining
tests and short-circuit, e.g., any time the score in the hierarchy is
already above what the negative tests could offset.

I appreciate the discussion thus far; unfortunately, discussion is all
I'm able to contribute at this time.

Thanks,
// George

--
George Georgalis, information system scientist <IXOYE><
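
P.S. For what it's worth, here is a minimal Python sketch of the two
ideas above: rank rules by "test score" (points contributed per unit
of estimated cost), then short-circuit once the negative rules still
pending could no longer change the verdict. All rule names, costs,
scores, and tests are invented for illustration; SpamAssassin itself
is Perl, and its rules are not actually organized this way today.

import re

# Hypothetical rule table: (name, est_cost, score, test).
RULES = [
    ("FAST_BIG",    0.1,  4.5, lambda m: "viagra" in m.lower()),
    ("FAST_MED",    0.2,  3.0, lambda m: "act now" in m.lower()),
    ("NEG_LISTISH", 0.3, -2.0, lambda m: "unsubscribe" in m.lower()),
    ("SLOW_SMALL",  5.0,  0.5,
     lambda m: re.search(r"(?:\bfree\b.*){3}", m, re.S) is not None),
]

def scan(msg, threshold=5.0):
    # "Test score": points per unit of estimated cost, so cheap,
    # high-scoring rules run first.
    ordered = sorted(RULES, key=lambda r: abs(r[2]) / r[1],
                     reverse=True)
    total = 0.0
    for i, (name, cost, score, test) in enumerate(ordered):
        if test(msg):
            total += score
        # The most the rules not yet run could subtract.
        pending_neg = sum(s for _, _, s, _ in ordered[i + 1:] if s < 0)
        # Short-circuit: if even every remaining negative rule firing
        # could not pull the total back under the threshold, the
        # verdict is already decided and the slow rules never run.
        if total + pending_neg >= threshold:
            return total, True
    return total, total >= threshold

print(scan("VIAGRA! Act now! Free offer, unsubscribe link below"))

In this toy run the two cheap rules alone push the total past the
threshold plus anything the one negative rule could subtract, so the
expensive regexp is never evaluated.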