Hello all,
Is this normal? I swear I've seen people suggest trashing the cyrus.*
files and running reconstruct on the user, but I'm having no luck. In
this case that wasn't what was attempted, but the end result is the
same: the cyrus.* files are gone. See for yourself...
-rw--- 1
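For what it's worth, the usual incantation behind "trash the cyrus.* files
and reconstruct" looks roughly like the sketch below. The binary path and
the mailbox name user.jdoe are placeholders (packages install reconstruct
in different places), and exactly which cyrus.* files reconstruct can
rebuild differs by version, so treat this as an outline rather than a
recipe:

  # stop deliveries to the user if you can, then run as the cyrus user
  su cyrus -c "/usr/cyrus/bin/reconstruct -r user.jdoe"

  # if folders exist on disk but are missing from mailboxes.db,
  # adding -f also scans the spool directories under the mailbox
  su cyrus -c "/usr/cyrus/bin/reconstruct -r -f user.jdoe"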
On Thu, Oct 04, 2007 at 03:33:58PM -0700, Vincent Fox wrote:
>
>
> Xue, Jack C wrote:
> > At Marshall University, we have 30K users (200M quota) on Cyrus. We use
> > a Murder Aggregation Setup which consists of 2 frontend nodes, 2 backend
> > nodes
> Interesting, but this is approximately 15K users per backend.
> Anyhow, just wondering if we're the lone rangers on this particular
> edge of the envelope. We alleviated the problem short-term by
> recycling some V240 class systems with arrays into Cyrus boxes
> with about 3,500 users each, and brought our 2 big Cyrus units
> down to 13K-14K users each which s
Xue, Jack C wrote:
> At Marshall University, we have 30K users (200M quota) on Cyrus. We use
> a Murder Aggregation Setup which consists of 2 frontend nodes, 2 backend
> nodes
Interesting, but this is approximately 15K users per backend, which is
where we are now after 30K users per backend were
At Marshall University, we have 30K users (200M quota) on Cyrus. We use
a Murder Aggregation Setup which consists of 2 frontend nodes, 2 backend
nodes and a master node (all are Dell 1855 Blades). We then further
divide the users into 2 storage partitions on each backend (4 Cyrus
partitions total,
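In case it helps anyone picture that, the per-backend half of such a layout
is mostly a few imapd.conf lines. Everything below (hostnames, paths,
partition names) is made up for illustration, not Marshall's actual config:

  # backend imapd.conf (illustrative values only)
  configdirectory: /var/imap
  partition-default: /var/spool/imap/part1
  partition-part2: /var/spool/imap/part2
  # murder: publish mailbox locations to the mupdate master
  mupdate_server: mupdate-master.example.edu

New mailboxes then land on a given partition by naming it at create time
in cyradm (createmailbox --partition part2 user.jdoe).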
>
> One of the things Rob Banz recently did here was to move the data/
> config/proc directory from a "real" fs to tmpfs. This reduces the
> disk IO from Cyrus process creation/management.
>
> So the way we do stuff here is that each Cyrus backend has its own
> ZFS pool. That zpool is divided up in
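Since the tmpfs trick mentioned above comes up fairly often, here is the
minimal Solaris version of it; /var/imap stands in for whatever your
configdirectory is, and the size is a guess:

  # mount a small tmpfs over the proc directory (it only holds
  # per-connection state files, so losing it on reboot is harmless)
  mount -F tmpfs -o size=128m swap /var/imap/proc

  # add a matching line to /etc/vfstab to make it permanent
  # (on Linux, the equivalent is a tmpfs entry in /etc/fstab)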
On Oct 4, 2007, at 2:41 PM, Vincent Fox wrote:
> We spent some time talking to Ken & Co. at CMU on the phone
> about what happens under very high loads but haven't come to a
> "fix" for what happened to us. There may not be one. I can and
> will describe all the nitty-gritty of that post-mortem in
Hi,
We have around 35k users spread out on 5 different systems, the
largest of which has 12K active users and 200K messages per day. We do
our anti-spam/anti-virus on other systems before delivering to the 5
mailbox systems. I'm guessing you don't have that type of setup?
Jim
Vincent Fo
I suppose I should have given a better description:
University mail setup with 60K-ish faculty, staff, and students,
all in one big pool, with no separation into this server for faculty
and that one for students, etc.
Load-balanced pools of smallish v240 class servers for:
SMTP
MX
AV/spam scanning
LDAP
We run a single Dell 2850 (2 dual-core CPUs @ 2.8GHz, 8GB RAM and 900GB
internal) and have about 29k users... but our message transfer load is much
smaller than what you describe, maybe on the order of 10k; the system is at
80%+ idle most of the time.
We have had this setup for about 1 yr now... no
ma
> We have talked to UCSB, which is running 30K users on a single
> Sun V490 system. However, they seem to have fairly low activity
> levels, with emails in the hundred-thousands range, not millions.
We've got around 250k users on a single system, but we're in that same
boat: only about 300k emails/d
I have a system with about 20k users and I need to subscribe all of those
users to all of the folders that they have ever created. I was thinking
about scripting this by generating a mailbox list and echoing the folders
per user to the .sub file. This seems like it could provide some unexpected
res
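A rough sketch of that approach, with the bits I'm least sure of flagged in
the comments (in particular, the .sub line format varies, so compare
against an existing .sub file first, and do all of this with Cyrus
stopped):

  # dump the mailbox list, keep only user mailboxes (assumes no virtual domains)
  su cyrus -c "ctl_mboxlist -d" | awk '$1 ~ /^user\./ {print $1}' > /tmp/mboxes.txt

  # append each folder to its owner's .sub file; the path assumes the
  # default one-letter hash under the config directory, e.g.
  # /var/imap/user/j/jdoe.sub, and a "name<TAB>" line format -- verify both
  while read mbox; do
      user=$(echo "$mbox" | cut -d. -f2)
      hash=$(echo "$user" | cut -c1)
      printf '%s\t\n' "$mbox" >> /var/imap/user/$hash/$user.sub
  done < /tmp/mboxes.txt

  # afterwards, chown the .sub files back to the cyrus user so the
  # server can rewrite them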
Wondering if anyone out there is running a LARGE Cyrus
user-base on a single or a couple of systems?
Let me define large:
25K-30K (or more) users per system
High email activity, say 2+ million emails a day
We have talked to UCSB, which is running 30K users on a single
Sun V490 system. However
Hi all,
I am planning to move my mail server from an i586 to an x86_64 box.
I removed the db* directories from the imap config directory and ran a
reconstruct on all mailboxes. That stopped lmtpd from crashing all the
time. I think this is because of the different (32-bit vs. 64-bit) db4
versions.
I h
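If anyone else is staring at the same 32-bit to 64-bit move, the gentler
route is usually to convert the Berkeley DB files to skiplist (or flat) on
the old box before copying anything over, since those formats travel
between machines much better. A sketch, assuming the stock /var/imap paths
and that mailboxes.db and deliver.db are the Berkeley ones in your build:

  # on the old box, with cyrus stopped
  su cyrus -c "cvt_cyrusdb /var/imap/mailboxes.db berkeley \
      /var/imap/mailboxes.db.skiplist skiplist"
  su cyrus -c "cvt_cyrusdb /var/imap/deliver.db berkeley \
      /var/imap/deliver.db.skiplist skiplist"

  # copy the .skiplist files to the new box, rename them into place,
  # switch the matching *_db options (mboxlist_db, duplicate_db) to
  # skiplist in imapd.conf, and let the usual ctl_cyrusdb -r in
  # cyrus.conf recover the environment on first start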
Read some excerpts of Managing IMAP online
(http://www.oreilly.com/catalog/mimap/chapter/ch09.html), and deleting the
quota root file, creating a new quota root with cyradm's setquota, and
then fixing things up with quota -f seemed to force Cyrus to recalculate
the disk usage from the filesystem.
Not sure if
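Spelling that out for the archives, the sequence looks roughly like this;
user.jdoe, the 200 MB limit, and the /var/imap and /usr/cyrus/bin paths are
placeholders for whatever your site uses:

  # remove the stale quota file for the quota root (it lives under the
  # hashed quota directory, so let find locate it)
  rm "$(find /var/imap/quota -name user.jdoe)"

  # recreate the quota root; setquota takes the STORAGE limit in KB
  cyradm --user cyrus localhost
  localhost> sq user.jdoe STORAGE 204800

  # then have Cyrus recompute usage from what is actually on disk
  su cyrus -c "/usr/cyrus/bin/quota -f user.jdoe"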