> This is a 16GB Ram server running Linux Centos 5.5 64 bit.
???
Last I checked, there were no Linux ZFS ports that were actually usable.
--
John Madden / Systems Engineer III
Office of Technology / Ivy Tech Community College of Indiana
Free Software is a matter of liberty, not price.
't remember the
specifics, just that things improved when I turned them off. It might
be a mis-configuration on the number of clients versus the number of
allowed servers.
John
--
LMTP
under load and somewhere along the line these options fixed them. YMMV.
The usual RTFineM about these applies too:
http://www.postfix.org/postconf.5.html
John
--
Regardless,
those who need the most recent versions of these packages often have to
look outside the distribution's packages.
Faux pas or not, it's the truth. :)
John
--
BerkeleyDB with
the right version of OpenLDAP. No thanks.
It's well worth your time to maintain your own compiles and even
packages of Cyrus because the package maintainers can't keep up.
John
--
>> It is possible that I provide a patch.
>
> Yes please. This looks like a good solution.
Agreed. :)
--
Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
else break?
John
--
plain DRBD+ext4 if you don't have real shared storage.
John
--
a PHP session store; it'll certainly fall over with IMAP loads.
John
--
be alleviated with such a solution.
John
--
On 07/12/2010 02:01 PM, Wesley Craig wrote:
> On 02 Jul 2010, at 09:29, John Madden wrote:
>> I'm concerned about the listener_lock timeouts.
>
> The listener_lock timeout means that the thread waited around for 60
> seconds to see if a connection was going to arrive. Sinc
Interesting.
Can I do anything with the prefork parameter for mupdate to spread
things out on more cpu's or increase concurrency?
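For reference, preforking is configured per service in cyrus.conf; a sketch of what raising it for mupdate might look like on the master (the port is the conventional one, the prefork value is purely illustrative, not a tested recommendation):

```
# cyrus.conf on the mupdate master -- SERVICES section (illustrative)
SERVICES {
  # prefork=4 keeps four mupdate processes waiting for connections
  # instead of one, spreading accept load across CPUs
  mupdate  cmd="mupdate -m" listen=3905 prefork=4
}
```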
>> Also during this time, mailbox changes (CREATE/etc)
>> are delayed or time out.
>
> That's normal, as the mupdate master blocks changes whil
ot of matching thread timeout errors on the master,
also consuming cpu. Also during this time, mailbox changes (CREATE/etc)
are delayed or time out.
This is a 2.3.15+murder system with about 2.8mil mailboxes, two
frontends, 6 backends, and a single master. Any suggestions?
John
--
d SATA storage partitions plus a single
frontend absolutely rocks for our over 450,000 users (2.6m mailboxes).
We don't do HA but Murder makes it easy to do if needed.
John
--
Out of curiosity, how good is zfs with full fs scans when running in
the 100-million file count range? What do you see in terms of
aggregate MB/s throughput?
--
On Feb 15, 2010, at 15:43
NBU admin
> so I dunno why this is more efficient but it worked great.
Yeah, we do a stream per disk. Since the disk is pegged with reads, there's
no point in doing more than one stream per filesystem that I can see.
John
--
We did quite a bit with snapshots (LVM) when we were experimenting
with block-level backups but there's a performance problem there -- we
were saturating GbE. Snapshot doesn't really buy you anything in terms
of getting the data to tape.
--
John Madden wrote:
>> Isn't this what "foolstupidclients" does? I think Blackberry might
>> meet the criteria...
>
> Not really, and already enabled in my case. According to the docs, it just
> converts a "LIST *" into "LIST INBOX*" and t
backup." After that one full
backup, the only thing you ever run is incremental. This takes 2x your
disk, but it's manageable.
John
--
> Isn't this what "foolstupidclients" does? I think Blackberry might
> meet the criteria...
Not really, and already enabled in my case. According to the docs, it just
converts a "LIST *" into "LIST INBOX*" and that isn't sufficient.
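For readers following along, the docs describe that rewrite roughly as below; this is an illustrative exchange, not a captured session:

```
C: a1 LIST "" "*"
   (with foolstupidclients, the server treats this as LIST "" "INBOX*")
S: * LIST (\HasNoChildren) "." "INBOX"
S: * LIST (\HasNoChildren) "." "INBOX.Sent"
S: a1 OK Completed
```

The wildcard is narrowed to the personal namespace, but the server still enumerates every personal folder, which is why the option doesn't help here.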
John
I'm "this close" to writing an imap proxy that supports client-side SSL
and would let me do this tweaking on the fly.
Thanks,
John
--
> 0.23 seconds on a 35MB mailboxes file. I thought I saw in one of your
> other e-mails that yours was taking about one second?
Yeah, .95 seconds in my case. Even with a 4-cpu box, our user load
makes that intolerable; the latency causes things to back up.
John
--
    'driver' => 'imap',
),
'imap_config' => array(
    'children' => false,
    'namespace' => array(
        'INBOX.' => array(
            'name' => 'INBOX.',
> We set the following in imapd.conf:
>
> sharedprefix: ~ Public Folders
(We don't use altnamespace.)
--
high
> load on the Cyrus frontends.
Well, none, in general. But it's a 200MB mailboxes.db so I assume
scanning that (to return nothing) is what takes up the cpu time.
John
--
g track of the original thread... You said there was high
> cpu usage and slowdown during login. Did you track it back to these LIST
> operations?
Yes, sorry, Wesley Craig's response pointed me in that direction and
that definitely seems to be the problem.
John
--
> Do your users have access to each other's mailboxes? Is there are a large
> number of results?
a02 namespace
* NAMESPACE (("INBOX." ".")) (("user." ".")) (("" "."))
--
no foolstupidclients setting.
My fix here is to spoof the NAMESPACE with Horde's "imap_config"
parameters. You can manually specify which namespaces to recognize into
a config file and voila, it'll only look at INBOX.*. I'm not sure we'll
reall
k into whether or not it can be eliminated. I don't suppose there's
another foolstupidclients-style option that improves responsiveness on
this call, is there?
John
--
ount is
currently 2.2 million (400k top-level) but we only have about 30% user
load at this point.
I had poked around with strace but didn't find anything obvious.
John
--
duplicate_db: berkeley-nosync
quota_db: skiplist
subscription_db: skiplist
mboxlist_db: skiplist
Thanks,
John
--
to adjust quotas or create
sub-folders (that theoretically should already be there and return
"mailbox exists"). The server isn't yet in use so it isn't a matter of
anyone logging into it and, say, locking it over pop3.
Any ideas?
John
--
to. To me, the benefits of running virtualized outweigh the pitfalls --
dealing with real OS installs on real hardware, dealing with
multipathing and SAN (virtual disks are easy), etc.
John
--
ng the abort is the one responsible, that's fine,
but how do we prevent this situation in the first place?
John
--
course be
better, but I find the overhead of Xen to be worthwhile.
John
--
that's what
> you might have been seeing. Of course you also mounted "noatime,nodiratime"
> on both?
Yes, we were using notail,noatime,nodiratime.
John
--
at way. We did, however,
also move from a single partition to 8 of them, so that obviously has some
effect as well.
John
--
it isn't due to the number of files on those filesystems? File-level
backups will slow down linearly as the filesystems grow, of course.
I "solve" this by adding more spools (up to 8 at the moment with about 350k
mailboxes) so they can be backed up in parallel. All on ext3.
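The one-stream-per-spool scheme can be sketched in Python; the paths and the tarfile-based archiving here stand in for whatever backup client actually runs against the spools (a sketch under those assumptions, not a production script):

```python
import tarfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def backup_spool(spool: Path, dest: Path) -> Path:
    """Archive one spool filesystem as a single sequential stream."""
    out = dest / f"{spool.name}.tar.gz"
    with tarfile.open(out, "w:gz") as tar:
        tar.add(spool, arcname=spool.name)
    return out

def backup_all(spools: list, dest: Path) -> list:
    """One stream per filesystem, all filesystems in parallel:
    each spool's disks stay busy with sequential reads while the
    spools themselves are walked concurrently."""
    dest.mkdir(parents=True, exist_ok=True)
    with ThreadPoolExecutor(max_workers=len(spools)) as pool:
        return list(pool.map(lambda s: backup_spool(s, dest), spools))
```

Called as, say, `backup_all([Path("/var/spool/imap1"), Path("/var/spool/imap2")], Path("/backup"))` (hypothetical paths); wall-clock time is then bounded by the slowest spool rather than the sum of all of them.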
hould move it there.
You can then use Cyrus' built-in search mechanisms (squat) and have to change
very little.
John
--
> Can anyone advise what could help?
Increase your connection count to something more than 400?
--
lots of (relatively small)
storage pools to build performance.
John
--
list.
John
--
st_unsafe
I see most of our writes going to the spool filesystems, not so much the
meta filesystem, so I'd prefer to see something where we can keep the
main databases fsync()ing properly but allow the individual mailboxes to
just rely on filesystem journaling. Is there a
cacheandindexfil
al, my money's still on Linux/LVM/Reiser/ext3.
250,000 mailboxes, 1,000 concurrent users, 60 million emails, 500k
deliveries/day. For us, backups are the worst thing, followed by
reiserfs's use of the BKL, followed by the need to use a ton of disks to
keep up with the i/o.
John
--
to
ext3 will make matters worse, but I have nothing else to go on.
John
--
y about 300k emails/day.
John
--
ut has anyone had to do this
before/is there a tool out there already that does it?
Thanks,
John
--
s is now pretty miserable, struggling to
pull 1MB/s off of fibre channel.) How does your experience compare?
John
--
ry long fsck's, but I've seen the
same out of ext3. But for a filesystem of 35 million mail files, I
figure it's got to beat ext3 on performance, at least. ...But there
don't seem to be any stats at this scale to support that.
John
--
> My guess: ext3. ReiserFS has some very annoying weaknesses that may
> affect you.
Please specify those weaknesses. 250,000 mailboxes on reiserfs right
now, always open to options.
John
--
ol/imap, etc. Either way, you want to separate not just on
LVM, but on the physical spindles doing the work.
John
--
rrency in your MTA to the same value (or n-1). There's no way your
disk system (or any other) is going to be able to handle 200 lmtpd's
writing simultaneously.
Even with our SAN, I only allow *3* lmtpd's to write concurrently.
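In Postfix terms, capping the concurrent lmtpd writers looks something like the following sketch (the parameter name follows Postfix's per-transport `<transport>_destination_concurrency_limit` pattern; the value 3 mirrors the setting described above, adjust to taste):

```
# main.cf -- at most 3 parallel deliveries per destination
# via the "lmtp" transport
lmtp_destination_concurrency_limit = 3

# master.cf -- also cap the total number of lmtp delivery agents
# (maxproc is the 7th column)
lmtp      unix  -       -       n       -       3       lmtp
```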
John
--
ith
> lock contention in heavily loaded servers? Is there anything that can
> be
> done to tune this, other than disabling the duplicate test?
FWIW, I turned off duplicatesuppression to no avail -- lmtpd still locks
and writes to /var/imap/deliver.db. ...So are you sure it's really
tu
> - iSCSI storage ? (cheap GigaEthernet SAN)
>
> according to Wikipedia http://en.wikipedia.org/wiki/ISCSI ,
> only 1 iSCSI-client can be connected to 1 iSCSI-server (disk)
> at a time...so this does not allow for a shared FS ?
Yes, much like only one host can connect to an FC LUN at once. :) iS
ry (MTA?) services, move the data, adjust
cyrus' configuration to point to the new mail store, and start
everything back up.
John
--
ot clustering. GFS could certainly be used in this case, but
would be overkill.
John
--
rent) SAN performs for such an
application.
(And yes, of course, filesystem issues affect performance as well.)
John
--
True, but I do expect to reach this number on this machine in the next
couple of years. ...And reiserfs has been just fine so far. Then
again, I didn't even consider using ext3 at the time.
John
--
't be a problem any more.
How big? ext3 STILL only supports 32000 directories within a directory.
That gets to be quite a problem on large installs.
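Hashed spool layouts (what Cyrus's fulldirhash/hashimapspool options provide) are the usual way around that limit. A toy illustration of the idea -- this is not Cyrus's actual hashing scheme, just a sketch of why fanning out helps:

```python
import hashlib
from pathlib import Path

def hashed_path(root: str, mailbox: str, fanout: int = 256) -> Path:
    """Spread mailboxes over `fanout` hash buckets so no single
    directory ever approaches ext3's ~32,000-subdirectory limit."""
    digest = hashlib.md5(mailbox.encode()).hexdigest()
    bucket = int(digest, 16) % fanout
    # e.g. /var/spool/imap/<two-hex-digit bucket>/<mailbox>
    return Path(root) / f"{bucket:02x}" / mailbox
```

With fanout 256, a million mailboxes land at roughly 4,000 entries per bucket directory instead of a million entries in one flat directory.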
John
--
> I'm in the process of porting a couple of patches from UMich into 2.3
> and then I'm going to make a release which includes a fix for an easily
> exploitable buffer overflow in pop3d.
Ehm? Does this affect the 2.2.x branch as well?
John
--
Postfix+deliver doesn't do this
(although it causes other problems).
John
--
with LVM snapshots alone?
John
--
s bounces? (Postfix 2.1.5, Cyrus 2.2.12, Debian 3.1)
Thanks,
John
--
(We're talking postfix-2.1.5 with "lmtp_cache_connection = no", cyrus 2.2.12)
How
can this be? Is there any way to find out what lmtpd's doing with the
transaction?
Thanks,
John
--
good thing :)
As a user of lvm snapshots for an install of 110k mailboxes, I can tell you that
they work, on reiserfs and all. =)
John
--
s.
It's skiplist.
John
--
the moment, I'm actually investigating what another subscriber mentioned --
loss of sync on the socket as Postfix sends the message. To resolve this, I've
turned off its lmtp connection cache. I won't know if this is actually a
sufficient fix for probably another week or so since th
0
lmtp_over_quota_perm_failure: 1
autocreatequota: 15360
autocreateinboxfolders: Sent|Drafts|Trash
autosubscribeinboxfolders: Sent|Drafts|Trash
autosubscribesharedfolders: user.PublicFolders | user.PublicFolders.SPAM Can |
user.PublicFolders.Ham Bone
fulldirhash: 1
hashimapspool: 1
mailnotifier: mailto
n
ED]" is. Could this be why some
mail is randomly [?] delivered incorrectly?
(I've got the requisite info in saslauthd.conf so that authentication works
properly. No problems there.)
Thanks,
John
--
ource_search_base = ou=People,dc=ivytech,dc=edu
ldapsource_query_filter = (&(o=Mail)(|(uid=%u)(mailLocalAddress=%u)))
ldapsource_debuglevel = 0
ldapsource_result_attribute = uid
ldapsource_bind = no
--
ap/sieve
And because we're talking LDAP here, saslauthd.conf:
ldap_servers: ldap://ldap.ivytech.edu
ldap_search_base: ou=People,dc=ivytech,dc=edu
ldap_auth_method: bind
ldap_port: 389
ldap_version: 3
ldap_verbose: on
ldap_debug: 10
ldap_filter: (&(uid=%u)(o=Mail))
Thanks,
John
--
> The autocreatequota option is a possibility, but there would be fewer
> support calls if we could create the inboxes for them.
Look for the autocreate patches for mailboxes. We use it for creation of
mailboxes and auto-subscription to public folders, works great.
John
--
's what you're
hoping to avoid.
John
--
k-based and that I'm just going to have to deal with it from that
angle.
John
--
being spent in fdatasync
>> and fsync.
Actually, the thread just got off topic quickly -- I'm running this on reiserfs,
not ext3. ...And I've got it mounted with data=writeback, too. But thanks for
the info, Andrew.
John
--
> Hm. I'd definitely take a second look at your ds6800 configuration ... How is
> your
> write cache configured there?
Let's just say they're not terribly clear on that. :)
--
errors column for the open() call on this strace:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
  1.07    0.019902          17       622       130 open
Why 130 errors? I assume if there's an error that the call is re-tried
BOX, pulls out the first
message and logs out. I'm able to do about 230 of those per second, so at least
the read performance is more than acceptable. (And the client box here, a 4-CPU
Opteron 850, is definitely the bottleneck anyway.)
John
--
't ever pegging them out --
nothing ever goes into iowait. The bottleneck is elsewhere...
John
--
), so I'm at a loss. Is there a general checklist of things to have a look
at? Are there tools to look at the metrics of the skiplist db's (such as
Berkeley's db_stat)? Am I doomed to suffer sub-par performance as long as IMAP
writes are happening?
Migration's coming on the 24th.
d pray a bunch that split-brain doesn't happen. :)
John
--
nd that's the big question: how inconsistent can things be without
shutting down cyrus?
John
--
>> FWIW, I've experimented with 750k mailboxes on a single system with
>> 8GB RAM and we plan to put that number in production in a couple of
>> months here.
>
> Ouch, 750k? How many concurrent accesses?
Most likely less than a thousand.
John
--
ed between all
imapd processes.
John
--
ated.
FWIW, I've experimented with 750k mailboxes on a single system with 8GB
RAM and we plan to put that number in production in a couple of months
here.
John
--
l
be sending them new passwords that as of such-and-such a date will be what they
use to log in. (Generate this list beforehand, then drop the new passwords into
place on the migration date.)
John
--
le-instance housing 750k accounts and (at minimum) 3.7 million
mailboxes on a single SAN partition. Not flat files, though. :)
John
--
> Ah yeah this is good, too. I didn't think of that in my response. This way
> you don't need to distribute new passwords.
Yeah, sorry about that. In our situation, we're changing the password hash, so
we've got to change them.
John
--
. So what *is* the solution?
--
I'm sure we've all had problems with this.
Is it wise to modify these clients to instead FETCH, delete/expunge, then STORE?
John
--
> I'm using LVM snapshot on linux box and it work perfectly
But a filesystem-level snapshot isn't a consistent copy of whatever is
still uncommitted in the DB's.
I still haven't heard how bad a situation it is if the db's in the 'db'
directory are corrupted -- what
al bdb methods, so what about the
skiplist db's? Isn't everything else just text files and the per-mailbox
cache/etc db's that are rebuilt all the time anyway?
John
--
though I've got no comparable 32-bit-only boxes to do any performance
comparisons on.
John
--
ny other application: two servers + shared storage + HA.
John
--
> I need to create some big nice mail architecture, which should be able to
> grow nicely.
> So, I have a big nfs nas, and, I have these "small" xeon 2.8 with 1G of ram.
See the docs -- Cyrus + NFS = No.
--
I assume reading berkeley's docs on DB_CONFIG would be a good place to start.
John
--
y box would be ok (and management
more easily swallowed "4 CPU box with 8 GB RAM" than they did "lots of little
boxes.")
John
--
> I think they use capacitors that will hold enough charge to allow
> flushing the buffers to disk when there's a power loss.
And another set of caps to keep the spindles spinning so that data can be
written? I'm not yet willing to buy the bridge you're selling. :)
John
want
to discuss this on postfix-users, as you may have something bad in your
config.
John
--
> Is this strictly referencing UFS on Solaris? Or is this also true with
> UFS on *BSD where UFS_DIRHASH is present?
I was, yes, but I have no experience with it on BSD. "DIRHASH" sure
sounds nice. :)
John
--