Compression is a big winner!
We had fast compression on the databases etc., gzip on the inboxes.
Other than that consider noatime.
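A minimal sketch of what "consider noatime" looks like in practice (dataset,
device and mount point below are illustrative, not from this setup):
zfs set atime=off cyrus/mail          # ZFS: stop recording access times
# or, for ext3/ext4 on Linux, add noatime in /etc/fstab:
/dev/sdb1  /var/spool/imap  ext3  defaults,noatime  0  2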
Dedup wasn't available back when, and now our Cyrus is gone in favor of
office365.
Good luck
Sent from my iPhone
On Aug 13, 2017, at 00:20, Mikhail T. wrote:
On 04/04/2016 09:43 AM, Binarus via Info-cyrus wrote:
But the spammer then first has to get a domain and then has to set up the DNS
entries, which obviously is too complicated for most spammers. Furthermore, I
am constantly seeing messages trying to get into the server which originate
from d
On 05/28/2015 10:05 AM, Recursive wrote:
> This means that these messages are well-formed enough for GMX to accept them, but
> are so malformed that cyrus lmtp / imapd rejects them.
I can give an example of a related problem.
We use sendmail as our MTA, delivering via LMTP to Cyrus.
The LMTP delive
Hi,
We are running a quite old Cyrus 2.3.8 (near retirement) and the last couple
of nights it started kicking up this error during the nightly expire run.
cyr_expire[3409]: [ID 386572 local6.error] db
/var/cyrus/imap/deliver.db, inconsistent pre-checkpoint, bailing out
Any guidance on best course of ac
On 9/18/2014 11:58 AM, Fabio S. Schmidt wrote:
> Hi,
>
> - Sorry if it seems to be a little off-topic -
>
> We have deployed Cyrus Aggregator and currently we provide load
> balancing and high availability for the Cyrus Front Ends through DNS.
> With this scenario, if a Frontend is unavailable it w
Just to follow up on this, I realized as Joseph pointed out, the larger
problem is my MTA shouldn't be delivering RFC-breaking messages
to the Cyrus backends.
However, sendmail seems to set L=0 on the Cyrusv2 mailer, and doesn't
provide an easy way to redefine it in my macro file.
I hand-edited
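For anyone stuck at the same point, a hypothetical sketch of that sort of
hand-edit, assuming the generated Mcyrusv2 mailer line really carries an
explicit L=0 field (plain POSIX sed; editing the file by hand is equivalent,
and regenerating sendmail.cf from the .mc will undo it):
cp /etc/mail/sendmail.cf /etc/mail/sendmail.cf.bak
sed 's/^\(Mcyrusv2,.*\)L=0/\1L=990/' /etc/mail/sendmail.cf.bak > /etc/mail/sendmail.cf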
On 6/14/2014 6:16 AM, Joseph Brennan wrote:
>
> Vincent Fox wrote:
>
>> I have recently noticed that certain malformed messages hang POP3 clients.
>> The pattern is that the messages all have obscenely long last lines. I
>> can see it
>> when I set up debug
Hi,
I have recently noticed that certain malformed messages hang POP3 clients.
The pattern is that the messages all have obscenely long last lines. I
can see it
when I set up debug log dir for the account, that the download of the
message
proceeds fine up until 2048th character and then stops.
A
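For anyone wanting the same kind of per-user protocol trace, a sketch of the
usual telemetry-log setup (paths and the user name are illustrative; the
directory lives under the configdirectory from imapd.conf):
mkdir /var/cyrus/imap/log/jdoe
chown cyrus /var/cyrus/imap/log/jdoe
# later POP3/IMAP sessions for "jdoe" leave per-connection telemetry files here
ls -l /var/cyrus/imap/log/jdoe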
On 12/12/2013 10:11 AM, sofkam wrote:
> I am in the process of replacing our Cyrus Front-End servers with a
> proxy server that can front both Exchange and Cyrus IMAP, as well as
> future IMAP hosts as we move and migrate users. The two that stand out
> are: Perdition, and NGinX. I'm looking for
On 01/23/2013 08:16 AM, francis picabia wrote:
>
> I've also backed out the change (yesterday) to
> /sys/block/sda/queue/nr_requests
> I think it was pushing the load higher and there is no advantage
> in my hardware (SAS with Perc 5/i Raid 5 over 4 disk)
> to run with a low value for nr_requests.
On 10/03/2011 12:58 PM, Josef Karliak wrote:
> Hi there,
> what filesystem type do you use for Cyrus imapd ? I use SLES11x64
> (or opensuse 11.4).
> I use Reiserfs 3.6, so far so good. But couldn't it be better? :)
ZFS, which unfortunately is not much of an option for
you Linux folks I think.
On 6/16/2011 7:24 AM, Pascal Gienger wrote:
>
> 20K users with 20G each = 400 TB. With zfs compression perhaps 300 TB.
I dunno, I find that people use FAR FAR less than their quota on average.
Sure you've got the 1 doctor who sends a bunch of X-rays or something
but most people wouldn't fill up a 1
On 4/20/2011 12:50 AM, Rudy Gevaert wrote:
>
> Hi Vincent,
>
> How do you make the snapshot consistent (netbackup+vmware snapshot)?
>
> Do you stop cyrus?
I'll have to ask our backup admin if you want technical specifics
and guarantees, but my understanding was that the process included
an atomic s
On 4/18/2011 2:58 AM, Paolo Cravero wrote:
> The competition is: who will die first? The web-based client or our
> SAN+backup system? :-)
What web-based client? Why would you expect it to die?
As to SAN, this maybe related.
We have found that netbackup on large Cyrus mail-store volumes is a
prob
I put these in TMPFS:
proc
socket
log
Activity in proc is pretty low, under 3 megs.
Still running Cyrus 2.3 here, not sure what the
lock directory you mention is?
On 1/2/2011 4:50 PM, Lucas Zinato Carraro wrote:
> Is it safe to put /proc and /lock in tmpfs?
>
> What happens if the space runs out?
>
>
On 09/20/2010 04:23 PM, Andrew Morgan wrote:
> I end up granting myself rights to various users' mailboxes to
> investigate when we see one of our users sending out spam. It usually
> turns out that they have been phished recently. Once I grant myself
> rights to their mailbox, I see the mai
On 09/20/2010 06:59 AM, Marc Patermann wrote:
> But where does Cyrus IMAPd stand today?
> It may be Murder/Aggregator - but how do you win people over when, on first
> contact, needing just a simple IMAP server, they are pointed to
> another product, which they then stay with?
Umm, what? We run
So to my mind, the downside of ZFS flush disable is:
Data on disk may not be as current in the unlikely event
of a power outage. In point of fact MOST filesystems do not operate
in journalled data mode anyhow and most people just don't
realize this fact. The default for Linux EXT filesystems
On Mon, 2010-08-09 at 17:22 +0200, Eric Luyten wrote:
> Folks,
>
> did you consider, measure and/or carry
> out a change of the default 128 KB blocksize ?
To answer your question more directly than in my last post...
We did some testing with Bonnie++ prior to deployment
and changing recordsize didn't
For what Cyrus is doing on Solaris with ZFS, the
recordsize setting seems nearly negligible. What with all the
caching in the way, and how ZFS orders transactions, it's
about the last tunable I'd worry about.
Here's what works well for us, add this to /etc/system:
* Turn off ZFS cache flushing
set zfs:zfs_nocacheflush=1
On 5/26/2010 8:06 AM, Blake Hudson wrote:
> I wish it were that straightforward. After performing several
> switchovers where DNS A records were repointed, many clients (days
> later) continue trying to access the old servers. TTL on the DNS records
> are set appropriately short, this is simply a c
On 5/11/2010 5:35 AM, Andre Nathan wrote:
> Hello
>
> I'm setting up a two-machine cyrus cluster using OCFS2 over DRBD. The
> setup is working fine, and I'm now considering the load balancing
> options I have.
>
> I believe the simplest option would be to simply rely on DNS load
> balancing. Howeve
On 5/11/2010 6:40 AM, Andre Nathan wrote:
> What I still haven't figured out is how to keep the proc directory and
> the locks in the socket directory local to the cluster nodes. For the
> socket names there are configuration options, so I could just choose
> different names for each cluster node.
Eric Luyten wrote:
> Our Z pool was 83% full ...
> Deleted the December snapshots, which brought that figure down to 74%
>
> Performance came right back :-)
>
Running close to full, you will eventually run into fragmentation
issues. Fortunately
you can grow the size of a pool while it's hot, g
Clement Hermann (nodens) wrote:
> The snapshot approach (we use ext3 and lvm, soon ext4) is promising, as
> a simple tar is faster than using the full backup suite on a filesystem
> with a lot of small files (atempo here). But you need the spare space
> locally, or you need to do it over the net
Andrew Morgan wrote:
> Is there really a significant downside to performing backups on a hot
> cyrus mailstore? Should I care if Suzie's INBOX was backed up at 3am
> and Sally's INBOX was backed up at 4am?
>
> Vincent, on a slightly related note, what is your server and SAN
> hardware?
>
I dunn
Michael Bacon wrote:
> For those of you doing ZFS, what do you use to back up the data after a zfs
> snapshot? We're currently on UFS, and would love to go to ZFS, but haven't
> figured out how to replace ufsdump in our backup strategy.
>
There are other commercial backup solutions however we
John Madden wrote:
> Out of curiosity, how good is zfs with full fs scans when running in
> the 100-million file count range? What do you see in terms of
> aggregate MB/s throughput?
>
I'm not sure what you mean by "full fs scan" precisely, and
haven't tested anything very large. Since t
John Madden wrote:
>
> We did quite a bit with snapshots (LVM) too when we were experimenting
> with block-level backups but there's a performance problem there -- we
> were saturating GbE. Snapshot doesn't really buy you anything in
> terms of getting the data to tape.
>
>
We run the tape backu
Forgot to mention we are running inline compression on
our ZFS pools. With "fast" LZJB compression on the
filesystems for metadata etc., we still see savings of about 2.0x.
The inboxes are all in /var/cyrus/mail, which is set for
gzip-6 compression, with savings of about 1.7x. Backups run
faster so it's win-win.
Combin
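A rough sketch of how that split can be set up (pool and dataset names are
illustrative):
zfs set compression=lzjb cyrus            # fast LZJB for databases, metadata etc.
zfs set compression=gzip-6 cyrus/mail     # heavier gzip-6 on the inbox spool
zfs get compressratio cyrus cyrus/mail    # check the savings afterwards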
John Madden wrote:
> That still leaves full backups as a big issue (they take days to run)
> and NetBackup has a solution for that: You run one full backup and store
> it on disk somewhere and from then on, fulls are called "synthetic
> fulls," where the incrementals are applied periodically in
I suppose replication and snapshots are out of the question for you?
We run ZFS so snapshots are atomic and nearly instant.
Thus we keep 14 days of daily snaps in our production pool
for recovery purposes. In our setup the total of all the snaps
is about a 50% overhead on the production data whic
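A minimal sketch of that daily rotation (dataset name, date and retention are
illustrative):
zfs snapshot cyrus/mail@daily-$(date +%Y%m%d)   # from cron each night
zfs list -t snapshot -r cyrus/mail              # review what exists
zfs destroy cyrus/mail@daily-20110405           # drop the oldest once you hold 14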
Bob Dye wrote:
>
> But it does seem odd that it supports STARTTLS on 143 but not 993.
This is not odd, this is working as specified.
TLS is enabling encryption on a connection that
has started without it.
There's a cogent argument that 993 should be deprecated
as the vestige of "stunnel days" tha
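An easy way to see the difference from the command line (hostname is
illustrative):
openssl s_client -connect imap.example.edu:143 -starttls imap   # plain port, upgraded via STARTTLS
openssl s_client -connect imap.example.edu:993                  # TLS from the first byte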
Simon Matter wrote:
What I'm really wondering, what filesystem disasters have others seen? How
many times was it fsck only, how many times was it really broken. I'm not
talking about laptop and desktop users but about production systems in a
production environment with production class hardware a
Bron Gondwana wrote:
It's an interesting one. For real reliability, I want to
have multiple replication targets supported cleanly.
So the issues for me with Cyrus replication:
1) Is it working? Is the replica actually up to date
and more importantly what if I switch to it and there
is some
Bron Gondwana wrote:
I assume you mean 500 gigs! We're switching from 300 to 500 on new
filesystems because we have one business customer that's over 150Gb
now and we want to keep all their users on the one partition for
folder sharing. We don't do any murder though.
Oops yes. I meant 5
Lucas Zinato Carraro wrote:
- Is there a recommended size for a Backend server (e.g., 1 TB)?
Hardware-wise your setup is probably overkill.
Nothing wrong with that.
Sizing of filesystems IMO should be based on your
tolerance for long fsck during a disaster. I run ZFS which
has none of that and d
The UID is defined in the IMAP RFC as needing to be a unique number for
that mailbox.
That is all it is. You can dump message files into an account and, as
long as they are named
"<number>." and you index them, they will be visible to IMAP clients.
The UIDL is a related/derived number returned by POP.
We actua
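A hedged sketch of that dump-and-index step (spool path, mailbox name, UIDs
and the reconstruct path are illustrative; directory hashing may put the
folder elsewhere):
cp 1234. 1235. /var/spool/imap/user/bob/            # raw RFC822 files named "<uid>."
su cyrus -c "/usr/cyrus/bin/reconstruct user.bob"   # rebuild the index so clients see them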
Michael Sims wrote:
Quick question on this. If I setup an active/passive cluster and put the
mail spool AND all of the application data on a SAN that both nodes have
access to (not simultaneously, of course), doesn't that bypass the need for
using "mupdate_config: replicated"? Thanks...
Th
Jose Perez wrote:
> Some people could just say "don't use POP3 anymore, use IMAP" right?
>
YES!
> Ok, I'd say the same as a sysadmin but you know exactly that this
> isn't always possible is some organizations for others reasons not
> technical.
>
> What I would like to know is:
>
> Are there
David Lang wrote:
>
> the flip side of the compliance issue is that it's a LOT easier to control
> retention policies (including backups) on a central server than on
> everybody's
> individual desktops/laptops.
>
> as for the concerns about laxer data security in other jurisdictions, that's
> s
Bron Gondwana wrote:
> BUT - if someone is asking "what's the best filesystem to use
> on Linux" and gets told ZFS, and by the way you should switch
> operating systems and ditch all the rest of your custom setup/
> experience then you're as bad as a Linux weenie saying "just
> use Cyrus on Linux"
(Summary of filesystem discussion)
You left out ZFS.
Sometimes Linux admins remind me of Windows admins.
I have adminned a half-dozen UNIX variants professionally but
keep running into admins who only do ONE and for whom every
problem is solved with "how can I do this with one OS only?"
I admin
We run Solaris 10 on our Cyrus mail-store backends.
The mail is stored in a ZFS pool. The ZFS pool is
composed of 4 SAN volumes in RAID-10. The active
and failover server of each backend pair have "fiber multipath"
enabled so their dual connections to the SAN switch ensure
that if an HBA or SAN
David Lang wrote:
> and gaining some new worries along the way. while some are convinced that ZFS
> is
> the best thing ever others see it as trading a set of known problems for a
> set
> of unknown problems (plus it severely limits what OS you can run, which can
> bring
> its own set of prob
Wesley Craig wrote:
>> Maildir and cyrus both suffer from the same
>> disadvantages (huge needs in terms of inodes etc.),
With ZFS, inodes are among the many stone-age worries you leave behind.
;-)
Jeff Fookson wrote:
> We are planning to run the mirrors
> off a 4-port 3ware RAID card even though we're not overly fond of
> 3ware (we have a fair amount of experience
> with RAID5 arrays on 3ware cards on our research machines where they
> perform adequately but
> not more). We are hoping the
David Lang wrote:
>
> raid 6 allows you to lose any two disks and keep going.
>
>
This is turning into a RAID discussion.
The original poster was doing a RAID-5 across 3 disks, and has stopped
commenting but it's probably because that's all the hardware he could
scrounge.
I am a staunch memb
Gah, my first thought was: a 3-disk RAID5?
Is this 1998 or 2008? Disk is cheap. RAID-1 or RAID-10.
Jeff Fookson wrote:
> is unusably slow. Here are the specifics:
>
You are mighty short on the SPECIFICS of your setup.
Expect a slew of questions to elicit this information.
Pascal Gienger wrote:
> For x86 the patch will have id 127729-07. For SPARC it will have
> the major number 127728.
>
My esteemed colleague Nick Dugan ran some tests against this
patch, and the results show it has the desired effect.
I expect during our next monthly maintenance cy
Several people have asked for the IDR number from Sun that gave
us the performance optimizations we needed.
Management says the agreement for the IDR we got prohibits this.
However this forum thread from BEFORE the agreement should point you
in the right direction:
http://opensolaris.org/jive/th
So, for those of you who recall back that far:
UC Davis switched to Cyrus and as soon as fall quarter started
and students started hitting our servers hard, they collapsed.
Load would go up to what SEEMED to be (for a 32-core T2000)
a moderate value of 5+ and then performance would fall off a c
I have a candidate for our running into performance issues with Cyrus
under load. I think it could be in the ZFS filesystem layer. Our man
Omen has come back from DTrace class and started running it, and
while it's early days yet, there seem to be some fdsync calls with long times
associated with them
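For the curious, a sketch of the kind of DTrace one-liner that surfaces slow
fdsync calls (nothing Cyrus-specific, just syscall latency by process):
dtrace -n '
  syscall::fdsync:entry { self->ts = timestamp; }
  syscall::fdsync:return /self->ts/ {
    @lat[execname] = quantize(timestamp - self->ts);
    self->ts = 0;
  }'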
Bron Gondwana wrote:
(Linux comments)
'Twas not my intent to start a this-OS versus that-OS comparison.
Valid though that is, it's a different thread.
Like most sites, we have various OSes in operation here; it just
happens that the Cyrus backends are Solaris. The test project
here started with s
Thanks folks, I have collated some responses below to
make it easier to read:
Cyrus on Solaris at Universities:
Carnegie-Mellon University
University of California Davis
50K accounts (active), multiple T2000 failover clusters
ZFS storage to arrays attached through SAN
Univ California San
Just wondering what other universities are running Cyrus on Solaris?
We know of:
CMU
UCSB
How "unsafe" is setting in imapd.conf
skiplist_unsafe: 1
Our /var/cyrus/imap filesystem is on a ZFS mirror set on
arrays with dual controllers, so the chance of OS and/or hardware
corruption is remote.
The application can scramble it but that can happen
whether we have sync or not eh?
Anything I am missing?
We had sym-linked imap/proc directory to a size-limited
TMPFS a while back.
Now I'm thinking to do the same for imap/socket and imap/log
Any reasons against? Other candidates?
This of course sounds familiar to some experiences we had
recently with Cyrus 2.3.8 on Solaris 10 backends pretty
heavily loaded with a large number of users. If you search the
mailing list archives you'll find several threads about our
problems here at UC Davis.
However, the others in the thread have m
Bron Gondwana wrote:
> Lucky we run reiserfs then, I guess...
>
>
I suppose this is inappropriate topic-drift, but I wouldn't be
too sanguine about Reiser. Considering the driving force behind
it is in a murder trial last I heard, I sure hope the good bits of that
filesystem get turned over to
I've been running this in production:
mkdir /var/imap-proc
chown cyrusd /var/imap-proc
ln -s /var/imap-proc /var/cyrus/imap/proc
Set up a vfstab entry for /var/imap-proc as TMPFS, and
that's about all there is to it. But yeah it would be an
improvement to see it configurable.
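For reference, the vfstab line looks something like this (the 64 MB cap is an
illustrative figure, not what we run):
swap  -  /var/imap-proc  tmpfs  -  yes  size=64m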
Ian G Batten wrote:
>
> /etc/system:
>
> set zfs:zfs_nocacheflush=1
Yep already doing that, under Solaris 10u4. Have dual array controllers in
active-active mode. Write-back cache is enabled. Just poking in the 3510FC
menu shows cache is ~50% utilized so it does appear to be doing some work.
This thought has occurred to me:
ZFS prefers reads over writes in its scheduling.
I think you can see where I'm going with this. My WAG is something
related to Pascal's, namely latency. What if my write requests to
mailboxes.db
or deliver.db start getting stacked up, due to the favoritism sho
Michael Bacon wrote:
>
> Solid state disk for the partition with the mailboxes database.
>
> This thing is amazing. We've got one of the gizmos with a battery
> backup and a RAID array of Winchester disks that it writes off to if
> it loses power, but the latency levels on this thing are
> non-
Can you expand on this, like a LOT?
I recall a while ago you brought up some performance issues and
said you had found hacks for them. Were those issues actually unresolved
or are you talking about something else? I don't see any recent posts by
you about problems with your Cyrus install.
I'm
Jure Pečar wrote:
>
> I'm still on linux and was thinking a lot about trying out solaris 10, but
> stories like yours will make me think again about that ...
>
>
We are, I think, an edge case; plenty of people run Solaris Cyrus with no
problems.
To me ZFS alone is enough reason to go with Solaris
Jure Pečar wrote:
> In my experience the "brick wall" you describe is what happens when disks
> reach a certain point of random IO that they cannot keep up with.
>
The problem with a technical audience is that everyone thinks they have
a workaround
or probable fix you haven't already thought
Eric Luyten wrote:
> Another thought : if your original problem is related to a locking issue
> of shared resources, visible upon imapd process termination, the rate of
> writing new messages to the spool does not need to be a directly
> contributing factor.
> Were you experiencing the load probl
Every time I send email to the list now I get a bounce from "Gateway".
Could some kindly admin please remove the account causing this bounce?
Header snippet:
>Received: from mail.goldengatelanguage.com ([4.79.216.102])
> by mx12.ucdavis.edu (8.13.7/8.13.1/it-defang-5.4.0) with ESMTP id
lA92U7
Bron Gondwana wrote:
> Also virtual interfaces means you can move an instance without having
> to tell anyone else about it (but it sounds like you're going with an
> "all eggs in one basket" approach anyway)
>
>
No, not "all eggs in one basket", but better usage of resources.
It seems silly
Gary Mills wrote:
> How many
> and what sort of people does it take to maintain a system such as
> this? I need a good argument for hiring a replacement for me.
>
>
At a minimum you want 1 qualified person and someone cross-trained
as a backup, so that person can reasonably enough have vacatio
To close the loop since I started this thread:
We still haven't finished up the contract to get Sun out here to
get to the REAL bottom of the problem.
However, observationally we find that under high email usage,
above 10K users on a Cyrus instance things get really bad. Like last
week we ha
I have seen squatter run more than 24 hours.
This is on a large mail filesystem. I've seen it start up a
second one while the first is still running. Should I:
1) Forget about squatter
2) Remove from cyrus.conf, run from cron every other day (see the sketch after this list)
3) Find some option to cyrus.conf for same effect as
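If option 2 wins, a rough sketch of the cron entry (binary and log paths are
illustrative; run it as the cyrus user):
# roughly every other day (Mon/Wed/Fri) at 02:00, reindex everything under user/
0 2 * * 1,3,5  su cyrus -c "/usr/cyrus/bin/squatter -r -v user" >> /var/log/squatter.log 2>&1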
So here's the story of the UC Davis (no, not Berkeley) Cyrus conversion.
We have about 60K active accounts, and another 10K that are forwards, etc.
10 UWash servers that were struggling to keep up with a load that was, by 2006,
running around 2 million incoming emails a day, before spam dumpage, et
Ken Murchison wrote:
>
> You can set service-specific options, such as "lmtp_allowplaintext:
> yes". The service-specific prefix must match a service name in
> cyrus.conf.
>
That seems a more than sufficient solution, thanks!
We set
allowplaintext: no
lmtp_allowplaintext: yes
It works like a cha
So I want to do LMTP between an MX pool and Cyrus backends.
The common way I read about doing this is with a shared LMTP
account from MX pool to backends. So it becomes a postman sort
of account with the password in plaintext in various places and of
course transiting the network that way.
Is t
Rob Mueller wrote:
>> We are in the process of moving from reiserfs to ext3 (with dir_index).
>>
>>
ZFS with mirrors across 2 separate storage devices, means never having
to say you're sorry.
I sleep very well at night.
The iostat and sar data disagrees with it being an I/O issue.
16 gigs of RAM with about 4-6 of it being used for Cyrus
leaves plenty for ZFS caching. Our hardware seemed more than
adequate to anyone we described it to.
Yes, beyond that it's anyone's guess.
Xue, Jack C wrote:
> At Marshall University, We have 30K users (200M quota) on Cyrus. We use
> a Murder Aggregation Setup which consists of 2 frontend node, 2 backend
> nodes
Interesting, but this is approximately 15K users per backend, which is
where we are now after 30K users per backend were
do our anti-spam/anti-virus on other systems before delivering to the
> 5 mailbox systems. I'm guessing you don't have that type of setup?
> Jim
>
>
> Vincent Fox wrote:
>> Wondering if anyone out there is running a LARGE Cyrus
>> user-base on a single or a couple
Wondering if anyone out there is running a LARGE Cyrus
user-base on a single or a couple of systems?
Let me define large:
25K-30K (or more) users per system
High email activity, say 2+ million emails a day
We have talked to UCSB, which is running 30K users on a single
Sun V490 system. However
Pascal Gienger wrote:
>
> [2] in /etc/system: set zfs:zfs_nocacheflush=1
>on a live system using mdb -kw: zfs_nocacheflush/W0t1
By the way I tried this on a fully patched Solaris 10u3 system
and got this notice during boot:
sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' modu
Has anyone worked with Solaris Zones to run Cyrus?
After hours of struggle, I am abandoning this and switching to
simply having my script unpack the tarball into /tmp and
compile it there. Every time I found some fix for one
piece there was another one that needed fixing. Like in
the sieve dir there is the xversion.sh that creates an xversion.h.
It's undoubtedly not the right way, but I patched lib/Makefile.in.
The basic problem in lib I think is twofold:
1) chartable.c being dynamically built during make, so it's
not there when the makedepend is run
2) cyrusdb_quotalegacy.c including glob.h, which has a name
collision with a similar Cyr
Ah, found my problem. The 2nd problem (glob_t not being found) was
caused by my fix
for the first problem. To rewind a bit, my first trial failed here:
### Done building chartables.
gcc -c -I.. -I/ucd/include
-I/ucd/src/cyrus-imapd/cyrus-imapd-2.3.9/cyrus-imapd-2.3.9/et
-I/ucd/include -I/
Bron Gondwana wrote:
> Do you have the 'makedepend' utility on your system? I found
> that if it wasn't there, "make depend" would not get all the
> dependencies in place.
>
In Solaris, there is a makedepend in /usr/openwin/bin. Hence my including
that directory in PATH at top of script. I su
So our protocol is we have a central repository in AFS for
source code. Then we write a script called BUILD.ksh that
does all the right stuff to compile into a /tmp/(package)-obj
directory on each build platform. This is not working for me
yet on Cyrus 2.3.9 as the configure utility as-is doesn't
Pascal,
How many accounts did you have per mail-store?
Thanks!
We had several other universities call and offer assistance.
Thanks!
We also had several people call with offers to help with consulting
on it, but mostly not at the same level of Cyrus hardware/software/config
we are running. We appreciate the effort though, we really do. Lots of
folks see
HELP!
We need Cyrus consulting technical assistance on our Cyrus 2.3.8 system.
The University of California Davis has 2 servers with about 29K users
on one system and 23K on the other. The past few days we have seen
our load go through the roof, timeouts to users, lots of problems.
We have poked
>> Can somebody recommend a reliable/safe filesystem that supports resizing?
>> I'm afraid to use anything except ext3 in a production environment...
>>
Forgot to mention for ZFS, you can grow the pool by adding disks to it.
We run a pool full of mirror pairs so we'd add more pairs.
Quite easy an
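A sketch of that growth path (device names are illustrative):
zpool create cyrus mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0   # pool of mirror pairs
zpool add cyrus mirror c2t2d0 c3t2d0                           # grow it later, while hot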
>
>> Can somebody recommend a reliable/safe filesystem that supports resizing?
>> I'm afraid to use anything except ext3 in a production environment...
>>
We use Solaris and I have to say ZFS is quite amazing. Never have to
run fsck ever again!
The checksum feature ensures that when you read the
Hello Rick,
For IMAP users (and clients that support it) there is a message-of-the-day
function.
Example: create a one-line bulletin in /var/cyrus/imap/msg/motd
Next time a Thunderbird user connects, they should all see the message.
No way to do this for POP users that I know of.
I asked a question rece
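A concrete example of the MOTD trick (the wording and the configdirectory
path are illustrative):
echo "Mail maintenance tonight 22:00-24:00; brief outages possible" > /var/cyrus/imap/msg/motd
rm /var/cyrus/imap/msg/motd    # remove it once the notice is stale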
Suggestions:
1) More spindles is best, so more small disks is better than fewer large
ones
2) RAID-10 is best performance
3) Run bonnie++ on your proposed setups and benchmark performance
using a file size typical of a mail message (~16K); see the sketch after this list
4) Read this website before trusting user data to RAI
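A hedged example of such a bonnie++ run (path, sizes and counts are
illustrative):
# 64*1024 small files of 1-16 KB spread over 16 directories, plus a 2 GB
# block-IO pass, run as the cyrus user
bonnie++ -d /var/spool/imap/benchtest -s 2048 -n 64:16384:1024:16 -u cyrus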
here with most
students being home, not so much daily class-related chitchat.
If this 3511 setup is yielding very similar numbers we may switch
array products during our next buildout.
Dale Ghent wrote:
> On Jul 4, 2007, at 12:59 PM, Vincent Fox wrote:
>
>> Sun recommends against the
Dale Ghent wrote:
> each with a zpool comprising of a mirror between two se3511s on our
> SAN...
Sun recommends against the 3511 in most literature I read, saying that
the SATA drives
are slower and not going to handle as much IOPS loading. But they are
working out okay
for you? Perhaps it's
Dale Ghent wrote:
> Sorry for the double reply, but by the way, what sort of compression
> ratio are you seeing on your ZFS filesystems?
{cyrus1:vf5:136} zfs get compressratio cyrus/mail
NAME        PROPERTY       VALUE  SOURCE
cyrus/mail  compressratio  1.26x
I just thought to report back to the list that ZFS is working out well.
I asked a while ago about other's opinions, and got all thumbs up.
We deployed a setup with Sun T2000 running Solaris 10u3 (11/06)
and a pair of Sun 3510FC arrays mirroring in a hybrid HA RAID
5+1+0 setup that I can describe