At Mon, 7 Nov 2005 09:00:08 -0500 (EST),
John Madden wrote:
>
> > Have you tried running something like postmark
> >
> >http://packages.debian.org/stable/utils/postmark
> >
> > to benchmark your filesystem?
>
> The disks are quite fast. bonnie++, for example, shows writes at over
> 300MB/s.
On Wed, 09 Nov 2005, Joshua Schmidlkofer wrote:
> Does this mean that those of us using XFS should run some testing as well?
Yes, XFS doesn't journal data in any way, AFAIK. I don't know how one could
go about speeding up fsync()s with it.
What I *do* know is that I don't trust spools to XFS ...
Yes, on ext3, an fsync() syncs the entire filesystem. It has to, because
all the metadata for each file is shared - it's just a string of
journalled blocks. Similar story with the data, in ordered mode.
So effectively, fsync()ing five files once each does the work of roughly 25
single-file syncs, since each call flushes the pending data of all five
(see the sketch below).
One fix ...
>>> This guy is having a problem with cyrus-imap and ext3 - when multiple
>>> processes are attempting to write to the one filesystem (but not the one
>>> file), performance drops to next to nothing when only five processes are
>>> writing. An strace shows most of the time is being spent in fdatasync.
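A minimal illustration of the point above (a sketch; the file names, sizes and
count are arbitrary): each write()+fsync() pair below pays for a full journal
commit on ext3, so several copies of this running against the same filesystem
multiply the per-call latency even though the disk itself is mostly idle.

/* fsync-cost sketch: create a few files, write a block to each, fsync each. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    enum { NFILES = 5 };
    int fd[NFILES];
    int i;
    char name[32], buf[4096];
    struct timeval t0, t1;

    memset(buf, 'x', sizeof buf);
    for (i = 0; i < NFILES; i++) {
        snprintf(name, sizeof name, "spool.%d", i);
        fd[i] = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd[i] < 0) { perror("open"); return 1; }
    }

    gettimeofday(&t0, NULL);
    for (i = 0; i < NFILES; i++) {
        if (write(fd[i], buf, sizeof buf) < 0) { perror("write"); return 1; }
        if (fsync(fd[i]) < 0) { perror("fsync"); return 1; }  /* each call commits the journal */
    }
    gettimeofday(&t1, NULL);

    printf("%d write+fsync pairs: %ld us\n", NFILES,
           (long)((t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec)));
    return 0;
}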
John Madden wrote:
This guy is having a problem with cyrus-imap and ext3 - when multiple
processes are attempting to write to the one filesystem (but not the one
file), performance drops to next to nothing when only five processes are
writing. An strace shows most of the time is being spent in fdatasync.
> This might seem dumb, but are there any issues with name resolution?
> Could DNS queries be slowing things down?
Nah, it's a good thought, but this is with an already-established session
running from localhost. Based on the strace, I can guess that this is
definitely something disk-based ...
Quoting John Madden <[EMAIL PROTECTED]>:
The disks are quite fast. bonnie++, for example, shows writes at
over 300MB/s.
What I'm finding though is that the processes aren't ever pegging them out --
nothing ever goes into iowait. The bottleneck is elsewhere...
John
This might seem dumb, but are there any issues with name resolution? ...
>> This guy is having a problem with cyrus-imap and ext3 - when multiple
>> processes are attempting to write to the one filesystem (but not the one
>> file), performance drops to next to nothing when only five processes are
>> writing. An strace shows most of the time is being spent in fdatasync
>
To: Andrew McNamara <[EMAIL PROTECTED]>
cc: "John Madden" <[EMAIL PROTECTED]>,
info-cyrus@lists.andrew.cmu.edu
Subject: Re: improving concurrency/performance (fwd)
Andrew McNamara <[EMAIL PROTECTED]> wrote:
>
> This guy is having a problem with cyrus-imap and ext3 ...
On Tue, 2005-11-08 at 09:25 -0500, John Madden wrote:
> Makes me wonder why the fsync's are taking so long since the disk is
> performing so
> well. Anyone know if that's actually typical?
Some time ago I wrote a little LD_PRELOAD library that neutered fsync()
and related calls, intended for use ...
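The library itself wasn't posted; a minimal sketch of the idea, assuming only
fsync() and fdatasync() need intercepting (only sane for throwaway data such
as a bulk migration, since a crash can lose anything not yet written back):

/* nosync.c - hypothetical fsync-neutering preload shim (a sketch, not the
 * poster's actual library).
 * Build: gcc -shared -fPIC -o nosync.so nosync.c
 * Use:   LD_PRELOAD=./nosync.so imapcopy ...
 */
#include <unistd.h>

/* Report success without waiting for the data to reach disk; the kernel
 * still writes it back eventually, but the caller no longer blocks on it. */
int fsync(int fd)     { (void)fd; return 0; }
int fdatasync(int fd) { (void)fd; return 0; }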
> Hm. I'd definitely take a second look at your DS6800 configuration ... How
> is your write cache configured there?
Let's just say they're not terribly clear on that. :)
--
John Madden
UNIX Systems Engineer
Ivy Tech Community College of Indiana
[EMAIL PROTECTED]
On Tue, 8 Nov 2005 09:25:54 -0500 (EST)
"John Madden" <[EMAIL PROTECTED]> wrote:
> The delays I was seeing occurred when multiple imapd's were writing to the
> spool at the same time. I do see a lot of this though:
>
> fcntl(6, F_SETLKW, {type=F_UNLCK, whence=SEEK_SET, start=0, len=0}) = 0
>
> As expected, these are from locking operations. 0x8 is the file descriptor,
> which, if I read lsof output correctly, points to config/socket/imap-0.lock
> (what would that be?) and 0x7 is F_SETLKW, which reads as "set lock or wait
> for it to be released" in the manual page.
Yup, that's exactly the ...
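For reference, those strace lines are POSIX record locking via fcntl(); the
pattern is roughly the following (the lock file path is a guess based on the
lsof output mentioned above):

/* fcntl record-locking sketch: take an exclusive lock with F_SETLKW
 * (block until granted), then release it - the F_UNLCK call is the one
 * visible in the strace output above. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    struct flock fl;
    int fd = open("/var/imap/socket/imap-0.lock", O_RDWR);  /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }

    memset(&fl, 0, sizeof fl);
    fl.l_type = F_WRLCK;      /* exclusive lock */
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;             /* 0 = lock the whole file */

    if (fcntl(fd, F_SETLKW, &fl) < 0) {   /* waits here if another process holds it */
        perror("fcntl(F_SETLKW)");
        return 1;
    }

    /* ... critical section: this is where contending imapd's pile up ... */

    fl.l_type = F_UNLCK;
    fcntl(fd, F_SETLKW, &fl);             /* release; matches the traced call */
    close(fd);
    return 0;
}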
On Mon, 7 Nov 2005 22:31:42 +0100
Jure Pečar <[EMAIL PROTECTED]> wrote:
> For example, on my production system I see some suspicious long pauses at
> fcntl64(0x8, 0x7, 0xsomeaddr, 0xsomeotheraddr) calls ... let's dig into what
> this is.
As expected, these are from locking operations. 0x8 is the file descriptor ...
On Mon, 7 Nov 2005 12:41:03 -0500 (EST)
"John Madden" <[EMAIL PROTECTED]> wrote:
> Perhaps it's worth repeating: With a single imapcopy process, the whole thing
> goes along pretty quickly, but drops off significantly with a second process
> and comes to basically a crawl with just 5 processes.
> It's situations like this DTrace was made for. But on Linux we still have
> to use some 'gut feeling' to figure it out ...
True. It's that sort of tool that I'm looking for, specifically to look into
concurrency on the skiplist db's, as the system load is so low that it seems
there's got to be ...
On Mon, Nov 07, 2005 at 11:59:39AM +0100, Paul Dekkers wrote:
> >Make sure that you format ext3 partitions with dir_index which improves
> >large directory performance.
>
> ... but decreases read performance in general... at least that is what I
> found under RH / Fedora!
Yes, processing directories ...
On Mon, 7 Nov 2005 09:00:08 -0500 (EST)
"John Madden" <[EMAIL PROTECTED]> wrote:
> The disks are quite fast. bonnie++, for example, shows writes at over
> 300MB/s.
> What I'm finding though is that the processes aren't ever pegging them out --
> nothing ever goes into iowait. The bottleneck is elsewhere...
> Have you tried running something like postmark
>
>http://packages.debian.org/stable/utils/postmark
>
> to benchmark your filesystem?
The disks are quite fast. bonnie++, for example, shows writes at over 300MB/s.
What I'm finding though is that the processes aren't ever pegging them out --
nothing ever goes into iowait. The bottleneck is elsewhere...
Hi,
Andrew Morgan wrote:
On Sun, 6 Nov 2005, Michael Loftis wrote:
I'd also be VERY interested since our experience was quite the opposite.
ReiserFS was faster than all three, XFS trailing a dismal third (also had
corruption issues) and ext3 second or even more dismal third, depending on
if you ignored its wretched ...
Hi, I would just request that the tests and comments in this thread
should be added to the Cyrus wiki.
Kind regards,
Tarjei
On Mon, 2005-11-07 at 02:46 -0200, Sergio Devojno Bruder wrote:
> David Lang wrote:
> >(..)
> > I was recently doing some testing of lots of small files on the various
> > filesystems ...
David Lang wrote:
>(..)
I was recently doing some testing of lots of small files on the various
filesystems, and I ran into a huge difference (8x) depending on what
allocator was used for ext*. The default allocator changed between ext2
and ext3 (you can override it as a mount option) and when ...
On Mon, 7 Nov 2005, Sergio Devojno Bruder wrote:
David Lang wrote:
(..)
I was recently doing some testing of lots of small files on the various
filesystems, and I ran into a huge difference (8x) depending on what
allocator was used for ext*. The default allocator changed between ext2 and
ext3 ...
Michael Loftis wrote:
Interesting ... can you provide some numbers, even from memory?
I'd also be VERY interested since our experience was quite the opposite.
ReiserFS was faster than all three, XFS trailing a dismal third (also
had corruption issues) and ext3 second or even more dismal third ...
> >> In our experience FS-wise, ReiserFS is the worst performer among ext3,
> >> XFS and ReiserFS (with tail packing turned on or off) for a Cyrus Backend (>1M
> >> mailboxes in 3 partitions per backend, 0.5TB each partition).
> >
> > Interesting ... can you provide some numbers, even from memory?
>
>
On Mon, 7 Nov 2005, Jure Pečar wrote:
On Sun, 6 Nov 2005 14:20:03 -0800 (PST)
Andrew Morgan <[EMAIL PROTECTED]> wrote:
mkfs -t ext3 -j -m 1 -O dir_index /dev/sdb1
tune2fs -c 0 -i 0 /dev/sdb1
What about 1k blocks? I think they'd be more useful than 4k on mail
spools ...
I was recently doing some testing of lots of small files on the various filesystems ...
On Mon, 7 Nov 2005, Jure Pečar wrote:
On Sun, 6 Nov 2005 14:20:03 -0800 (PST)
Andrew Morgan <[EMAIL PROTECTED]> wrote:
mkfs -t ext3 -j -m 1 -O dir_index /dev/sdb1
tune2fs -c 0 -i 0 /dev/sdb1
What about 1k blocks? I think they'd be more useful than 4k on mail
spools ...
Maybe,
On Sun, 6 Nov 2005 14:20:03 -0800 (PST)
Andrew Morgan <[EMAIL PROTECTED]> wrote:
> mkfs -t ext3 -j -m 1 -O dir_index /dev/sdb1
> tune2fs -c 0 -i 0 /dev/sdb1
What about 1k blocks? I think they'd be more useful than 4k on mail
spools ...
--
Jure Pečar
http://jure.pecar.org/
On Sun, 6 Nov 2005, Michael Loftis wrote:
I'd also be VERY interested since our experience was quite the opposite.
ReiserFS was faster than all three, XFS trailing a dismal third (also had
corruption issues) and ext3 second or even more dismal third, depending on if
you ignored its wretched ...
--On November 6, 2005 12:51:33 PM +0100 Jure Pečar <[EMAIL PROTECTED]>
wrote:
On Sun, 06 Nov 2005 03:58:15 -0200
Sergio Devojno Bruder <[EMAIL PROTECTED]> wrote:
In our experience FS-wise, ReiserFS is the worst performer among ext3,
XFS and ReiserFS (with tail packing turned on or off) for a Cyrus ...
> On Sun, 06 Nov 2005 03:58:15 -0200
> Sergio Devojno Bruder <[EMAIL PROTECTED]> wrote:
>
>> In our experience FS-wise, ReiserFS is the worst performer among ext3,
>> XFS and ReiserFS (with tail packing turned on or off) for a Cyrus Backend (>1M
>> mailboxes in 3 partitions per backend, 0.5TB each partition).
Jure Pečar wrote:
On Sun, 06 Nov 2005 03:58:15 -0200
Sergio Devojno Bruder <[EMAIL PROTECTED]> wrote:
In our experience FS-wise, ReiserFS is the worst performer among ext3,
XFS and ReiserFS (with tail packing turned on or off) for a Cyrus Backend (>1M
mailboxes in 3 partitions per backend, 0.5TB each partition).
On Sun, 06 Nov 2005 03:58:15 -0200
Sergio Devojno Bruder <[EMAIL PROTECTED]> wrote:
> In our experience FS-wise, ReiserFS is the worst performer among ext3,
> XFS and ReiserFS (with tail packing turned on or off) for a Cyrus Backend (>1M
> mailboxes in 3 partitions per backend, 0.5TB each partition).
John Madden wrote:
I've had great experience with the performance of Cyrus thus far, but I'm
testing
a migration at the moment (via imapcopy) and I'm having some pretty stinky
results. There's no iowait (4 stripes on a 2Gbps SAN), no cpu usage, nothing
waiting on the network, and still I'm seeing terrible performance ...
On Fri, 2005-11-04 at 22:14 -0500, Patrick H Radtke wrote:
> How bad is your performance with imapcopy?
>
> I've never had 'fast' performance with IMAP.
Have you tried other tools like imapsync?
Also, maybe it's better to run many copies of the tool at once instead
of just one process.
Tarjei
>
How bad is your performance with imapcopy?
I've never had 'fast' performance with IMAP.
-Patrick
On Fri, 4 Nov 2005, John Madden wrote:
I've had great experience with the performance of Cyrus thus far, but I'm
testing
a migration at the moment (via imapcopy) and I'm having some pretty stinky ...
On 11/4/05, John Madden <[EMAIL PROTECTED]> wrote:
> I've had great experience with the performance of Cyrus thus far, but I'm
> testing
> a migration at the moment (via imapcopy) and I'm having some pretty stinky
> results. There's no iowait (4 stripes on a 2Gbps SAN), no cpu usage, nothing
> wa
I've had great experience with the performance of Cyrus thus far, but I'm
testing
a migration at the moment (via imapcopy) and I'm having some pretty stinky
results. There's no iowait (4 stripes on a 2Gbps SAN), no cpu usage, nothing
waiting on the network, and still I'm seeing terrible performance ...