The PostgreSQL experiment turned out not to be as stellar as I had hoped.
With our volume, the disk write load for Bayes auto-learn is extremely high,
even with fsync disabled, shared memory increased, etc.  I also ran into some
severe concurrency issues - lots of waiting on locks, even with only one
system doing updates and the others reading.  Setting auto-vacuum to run
every 60 seconds (which works out to every 3 minutes for the SpamAssassin
table) appears to help tremendously.  I think we'd need a solid-state disk,
or a SAN with a large write cache, to safely handle it with a larger number
of tokens.  I'm also getting failed expires due to 'deadlock detected'.
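
For reference, the knobs I'm talking about are roughly the ones below - just
a sketch against a stock postgresql.conf, assuming the integrated autovacuum
from 8.1 or later; the numbers are placeholders, not our actual settings:

    fsync = off               # "fsync disabled" - trades crash safety for write throughput
    shared_buffers = 20000    # shared memory increased (8 kB pages on 8.1; newer releases also take '160MB'-style values)
    autovacuum = on
    autovacuum_naptime = 60   # seconds between autovacuum runs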

Regrouping, I was looking at benchmarks for QDBM and saw that it's on the
"we need volunteers" list.  Is this more than just changing the "tie" in
the Bayes DBM store module?
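
To be concrete about what I mean by "the tie": the DBM store appears to just
tie() a Perl hash to an on-disk database (DB_File in our case), so I'd
naively expect the change to look something like the sketch below.  The
'QDBM_File' name there is made up for illustration - I haven't checked what
QDBM's Perl bindings actually provide, which I guess is part of the question.

    use Fcntl qw(O_RDWR O_CREAT);
    use DB_File;

    my $path = '/tmp';    # scratch location for this sketch

    # What the DBM store does today, more or less: tie a Perl hash to an
    # on-disk database so hash reads/writes go to the file.
    my %toks;
    tie %toks, 'DB_File', "$path/bayes_toks", O_RDWR|O_CREAT, 0600
        or die "tie bayes_toks: $!";

    # If QDBM's Perl bindings offered a tie()-compatible wrapper
    # ('QDBM_File' is a made-up name, purely for illustration), the swap
    # might be as small as:
    #
    #   tie %toks, 'QDBM_File', "$path/bayes_toks", O_RDWR|O_CREAT, 0600;

    untie %toks;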

Wes 

