I seldom have to deal with REALLY large queues, but I use this script I
hacked up real quick to see if anything is starting to build up:
#!/bin/bash
# Count entries in each Postfix queue directory.
# "! -name ." skips the directory itself; "! -name '?'" skips the
# single-character hash subdirectories so only queue files are counted.
echo ACTIVE
echo
find /var/spool/postfix/active/. ! -name . ! -name '?' -print | wc -l
echo
echo DEFERRED
echo
find /var/spool/postfix/deferred/. ! -name . ! -name '?' -print | wc -l
echo
echo BOUNCE
echo
find /var/spool/postfix/bounce/. ! -name . ! -name '?' -print | wc -l
echo
echo INCOMING
echo
find /var/spool/postfix/incoming/. ! -name . ! -name '?' -print | wc -l
echo
echo HOLD
echo
find /var/spool/postfix/hold/. ! -name . ! -name '?' -print | wc -l
Why not use "-type f" to tell find to show only files, which excludes
directories? That is how I did it when I was using the brute-force
method. The nice advantage is that you can have any amount of hashing
in a directory and still get an accurate count. On queue directories
with upwards of 100,000 files it takes about a third of a second or
less. Surely that is fast enough for such usage. ;)
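Something along these lines works (a sketch, not either poster's exact
script; it assumes the default /var/spool/postfix layout):

#!/bin/bash
# Count queue files per Postfix queue; -type f skips the hash
# subdirectories no matter how many levels of hashing are in use.
for q in active deferred bounce incoming hold; do
    printf '%-9s %s\n' "$q" "$(find /var/spool/postfix/$q -type f | wc -l)"
done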
I use Linux with inotify to track changes and store them in a DB,
which gives me fast access to the data without significant file I/O.
Beyond that, I also index the queue file to determine the (envelope)
sender and recipients, as well as the subject and origination time,
and store that for anything in the deferred or hold queues. That gives
me a pretty good picture of the queues, especially large ones where
mailq effectively falls down.
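As a rough illustration of that approach, here is a minimal sketch
using inotify-tools and SQLite. The DB path, table name, and single
watched queue are assumptions, it only covers the file-tracking part
(not the queue-file indexing), and it only sees changes made after it
starts:

#!/bin/bash
# Hypothetical sketch: mirror the deferred queue into SQLite with
# inotifywait so counts come from the DB instead of walking the spool.
DB=/var/lib/queue-watch/queues.db
sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS queue_files (path TEXT PRIMARY KEY, seen TEXT);'

inotifywait -m -r -e create -e moved_to -e delete -e moved_from \
    --format '%e|%w%f' /var/spool/postfix/deferred |
while IFS='|' read -r ev path; do
    # Queue file names are hex IDs, so no quoting surprises expected here.
    case "$ev" in
        CREATE|MOVED_TO)
            sqlite3 "$DB" "INSERT OR REPLACE INTO queue_files VALUES ('$path', datetime('now'));" ;;
        DELETE|MOVED_FROM)
            sqlite3 "$DB" "DELETE FROM queue_files WHERE path = '$path';" ;;
    esac
done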
Ultimately, if you are dealing with large amounts of queued mail over
time, "live" updates to a DB are likely to be the better route, IMO.