Well, before moving servers... What is its file size? See the bug I
referenced earlier, and the oddities found there.
Yeah, I did have a look at this before. I don't think I have any
unusually large files that would cause a problem; the sizes are:
82M ./auto-whitelist
66K ./bayes_journal
7.6M ./bayes_seen
17M ./bayes_toks
4.0K ./user_prefs
It would also be interesting to know whether you can read that file
with (a) any Berkeley DB tool, (b) Perl, and (c) whether you can
reproduce the issue with that file, either on the current system or on
another.
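For the Perl check, a rough sketch along these lines would do it
(assuming the file was created through DB_File, i.e. Berkeley DB hash
format, which is the usual case for the DBM bayes store; pass the path
of whichever bayes file you want to test):

  # Minimal sketch: walk every record of a bayes DB file via Perl's DB_File.
  use strict;
  use warnings;
  use Fcntl;      # for O_RDONLY
  use DB_File;    # for $DB_HASH

  my $file = shift @ARGV or die "usage: $0 /path/to/bayes_toks\n";

  my %toks;
  tie %toks, 'DB_File', $file, O_RDONLY, 0644, $DB_HASH
      or die "cannot tie $file: $!\n";

  # Iterating every key/value pair forces the whole file to be read,
  # so corruption tends to show up partway through.
  my $count = 0;
  while (my ($k, $v) = each %toks) {
      $count++;
  }
  untie %toks;

  print "read $count records from $file\n";

If that dies or hangs partway through, the file itself is suspect.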
OK, previously I'd tested DB integrity via sa-learn --sync and assumed
that if it worked, things were fine. If I run that against my files I
get exit status 0:
# /usr/local/bin/sa-learn --dbpath /tmp/.spamassassin.old3 --sync
# echo $?
0
However, if I try running db_dump:
# db_dump185-4.2 -f /tmp/test.db ./bayes_toks
Segmentation fault: 11 (core dumped)
# db_dump185-4.2 -f /tmp/test.db ./bayes_seen
Segmentation fault: 11 (core dumped)
Not so good :S About a month ago I completely deleted all the DB files
on this system, as they were failing even the sa-learn --sync command.
So these files were generated from scratch by SA quite recently, not
migrated from an earlier system etc...