ahmad riza h nst put forth on 11/8/2010 3:37 AM:

> i think it would reach to 12 thousands or less. yes we plan to do it
> in one server but just for mailboxes only (pop3, imap, webmail), we
> have another servers for the mx.

12,000 is a lot of users for one IMAP server.  You'll definitely need
the hardware upgrades I mentioned in my last email.  When you say
webmail, are you planning on running apache/lighttpd and
roundcube/horde/squirrelmail on this same box that hosts Dovecot?  If
so, keep in mind you will probably run out of processor power and memory
well before you reach anywhere close to 1,000 concurrent users.

> the problem is webmin + virtualmin won't do it with mysql db virtual
> user, or maybe i'm wrong ?
> http://www.virtualmin.com/node/7616

Do you plan on manually typing your current 12,000 usernames and
passwords into Virtualmin, one at a time?  If not, I suggest you figure
out a way to get Virtualmin to query those databases, or at least import
the data from them.  Otherwise I'd highly recommend you pick another
web-based management front end.  There are many of them freely available.
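
Whichever panel you end up with, most of them can at least import a
flat file, so a first step would be dumping the existing accounts out
of MySQL.  A minimal sketch, assuming a typical schema (the database,
table and column names here are made up; substitute whatever your MX
hosts actually use):

mysql -N -u mailadmin -p maildb \
  -e "SELECT username, password FROM mailbox" > users.txt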

I've not used Virtualmin myself, but I got involved in assisting another
OP on this list who uses Virtualmin.  I found its capabilities to be
extremely limiting in some ways.  For instance, due to the manner in
which it implements SpamAssassin, it requires you to use procmail for
local delivery.  This is horribly inefficient compared to using Dovecot
LDA for delivery.  If you are doing your spam filtering upstream at the
MX host or a gateway, I'd highly recommend disabling SpamAssassin on
this Virtualmin mailbox host, if Virtualmin allows disabling it.

This is just one of the limitations I've noticed.  If at all possible,
look to another web management front end that gives you the flexibility
to use both your current MySQL user database and the Dovecot LDA.
Don't choose one which forces you to configure one of your services in
a far less than optimal manner WRT performance, as Virtualmin does.
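
For reference, handing local delivery to the Dovecot LDA from Postfix
is normally just a pipe transport plus two main.cf lines.  A minimal
sketch, assuming virtual users owned by a "vmail" account and a common
path for the deliver binary (both are assumptions; adjust for your
install).  In master.cf:

dovecot   unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -f ${sender} -d ${recipient}

And in main.cf:

virtual_transport = dovecot
dovecot_destination_recipient_limit = 1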


> we would do it with hp dl180 g6 (1 xeon quad core, raid1, 4G Ram)

As I said, add the 2nd CPU, upgrade the RAM to at least 8GB, and add 8
drives in a hardware RAID5 device on the Smart Array controller.  Create
a single partition on the device with cfdisk or fdisk.  Format the
partition with the XFS filesystem:

mkfs.xfs -d su=[raid_chunk_size],sw=7 /dev/[device_name]

The "-d sw=7" switch sets the filesystem stripe width to 7.  With an 8
disk RAID5, each stripe contains 7 data blocks and one parity block.
Using RAID6 this would be "-d sw=6" as there are two parity blocks.
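
Once the filesystem is mounted you can confirm the geometry took.  For
example, assuming a mount point of /srv/mail:

xfs_info /srv/mail | grep -E 'sunit|swidth'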

XFS is superior to all other filesystems when many processes are
reading/writing in parallel to the same filesystem, mainly due to its use
of allocation groups.  Also, XFS is the only production Linux filesystem
to offer an online defragmentation tool which can be scheduled weekly to
defragment the filesystem containing the mail store.
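
That tool is xfs_fsr.  A weekly entry in /etc/crontab along these lines
would cover it (the mount point and the two hour cap via -t are just
example values):

0 3 * * 0  root  /usr/sbin/xfs_fsr -t 7200 /srv/mail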

Put the system log directory, Postfix spool directory, and maildir
directory on this XFS filesystem.  For the log and Postfix spool
directories, stop the respective daemons, use "cp -a" to copy the
contents to directories on the new XFS filesystem, and then replace the
old locations with symlinks pointing at the new directories (hard links
can't span filesystems or point at directories).  For Dovecot simply
tell it the location of the maildir directory you create.
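
A minimal sketch of that shuffle for the Postfix spool, assuming the
new filesystem is mounted at /srv/mail (the log directory works the
same way):

/etc/init.d/postfix stop
cp -a /var/spool/postfix /srv/mail/spool-postfix
mv /var/spool/postfix /var/spool/postfix.old
ln -s /srv/mail/spool-postfix /var/spool/postfix
/etc/init.d/postfix start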

The reason I recommend you put this all on one filesystem instead of
creating 3 and mounting each into the usual places, is that this way you
don't have to worry about preallocating a given amount of disk space to
each.  Say you allocate a 20GB partition to /var/spool/postfix and one
super busy day you get a bunch of mail backed up because a popular
destination is having problems digesting inbound mail.
What happens when your outbound spool fills up?  Using one filesystem
gives your postfix spool access to hundreds of gigs if need be.  There
is a downside:  if your postfix spool goes nuts for some reason, you can
fill up a large portion of your massive filesystem.  Others may have
different advice, but I think this method offers the best trade-off WRT
user satisfaction.
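
One way to soften that downside is Postfix's queue_minfree parameter,
which makes the SMTP server stop accepting new mail once free space on
the queue filesystem drops below the given number of bytes.  For
example, in main.cf:

# refuse new mail when less than ~5GB is free on the queue filesystem
queue_minfree = 5368709120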

-- 
Stan
