On 10/10/2011 8:51 AM, Wietse Venema wrote:
> Stan Hoeppner:
>> I'll work on getting access to suitable hardware so I can publish some
>> thorough first hand head-to-head numbers, hopefully with a test harness
>> that will use Postfix/SMTP and Dovecot/IMAP instead of a purely
>> synthetic benchmark.
Stan Hoeppner:
> I'll work on getting access to suitable hardware so I can publish some
> thorough first hand head-to-head numbers, hopefully with a test harness
> that will use Postfix/SMTP and Dovecot/IMAP instead of a purely
> synthetic benchmark.
Until you have first-hand experience, perhaps y
Bron Gondwana:
> On Sun, Oct 09, 2011 at 04:42:25PM -0400, vg_ us wrote:
> > will postmark transaction test do? here -
> > http://www.phoronix.com/scan.php?page=article&item=linux_2639_fs&num=1
>
> Oh:
>
> http://blog.goolamabbas.org/2007/06/17/postmark-is-not-a-mail-server-benchmark/
>
> "Th
On 10/9/2011 3:42 PM, vg_ us wrote:
> will postmark transaction test do? here -
> http://www.phoronix.com/scan.php?page=article&item=linux_2639_fs&num=1
> stop arguing - I think postmark transaction was the only relevant test
> XFS was losing badly - not anymore...
> search www.phoronix.com for o
On 10/9/2011 6:27 PM, Bron Gondwana wrote:
> On Sun, Oct 09, 2011 at 06:03:36PM -0500, Stan Hoeppner wrote:
>> XFS has been seeing substantial development for a few years now due to
>> interest from RedHat, who plan to make it the default RHEL filesystem in
>> the future. They've dedicated seriou
On 10/9/2011 4:49 PM, karave...@mail.bg wrote:
> I run a couple of busy postfix MX servers with queues now on XFS:
> average: 400 deliveries per minute
> peak: 1200 deliveries per minute.
>
> 4 months ago they were hosted on 8 core Xeon, 6xSAS10k RAID 10
> machines. The spools were on ext4.
>
On Sun, Oct 09, 2011 at 06:03:36PM -0500, Stan Hoeppner wrote:
> On 10/9/2011 3:29 PM, Bron Gondwana wrote:
>
> > I'm honestly more interested in maildir type workload too, spool doesn't
> > get enough traffic usually to care about IO.
> >
> > (sorry, getting a bit off topic for the postfix list)
- Quote from Bron Gondwana (br...@fastmail.fm), on 10.10.2011 at 01:50 -
> On Mon, Oct 10, 2011 at 01:33:31AM +0300, karave...@mail.bg wrote:
>> Nice setup. And thanks for your work on Cyrus. We are
>> also looking to move the metadata to SSDs but we have not
>> yet found cost-effective devices - we need at least a pair of
>> 250G disks for a 20-30T spool on a server.
--
From: "Bron Gondwana"
Sent: Sunday, October 09, 2011 6:28 PM
To: "vg_ us"
Cc: "Bron Gondwana" ; "Stan Hoeppner"
;
Subject: Re: Premature "No Space left on device" on XFS
On Sun, Oct 09, 20
On 10/9/2011 3:29 PM, Bron Gondwana wrote:
> I'm honestly more interested in maildir type workload too, spool doesn't
> get enough traffic usually to care about IO.
>
> (sorry, getting a bit off topic for the postfix list)
Maybe not off topic. You're delivering into the maildir mailboxes with
l
On Mon, Oct 10, 2011 at 01:49:44AM +0300, karave...@mail.bg wrote:
> I do not trust Postmark - it models mbox appending and skips
> fsyncs. So it is too different from our setup. The best benchmark
> tool I have found is imaptest (of Dovecot fame) - it is actually
> end-to-end benchmarking, in
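For readers unfamiliar with it, an imaptest run is a single command pointed at a test account; a minimal sketch (host, credentials, client count, and duration are illustrative placeholders, not values from this thread):

    # simulate 50 concurrent IMAP clients for 5 minutes,
    # appending messages from a sample mbox file
    imaptest host=192.0.2.10 user=bench%d pass=secret \
        mbox=dovecot-crlf clients=50 secs=300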
On Mon, Oct 10, 2011 at 01:33:31AM +0300, karave...@mail.bg wrote:
> Nice setup. And thanks for your work on Cyrus. We are
> also looking to move the metadata to SSDs but we have not
> yet found cost-effective devices - we need at least a pair of
> 250G disks for a 20-30T spool on a server.
You ca
- Quote from Bron Gondwana (br...@fastmail.fm), on 10.10.2011 at 01:28 -
> On Sun, Oct 09, 2011 at 04:42:25PM -0400, vg_ us wrote:
>> From: "Bron Gondwana"
>> >I'm honestly more interested in maildir type workload too, spool doesn't
>> >get enough traffic usually to care about IO.
>>
>> will postmark transaction test do? here -
>> http://www.phoronix.com/scan.php?page=article&item=linux_2639_fs&num=1
On Sun, Oct 09, 2011 at 04:42:25PM -0400, vg_ us wrote:
> will postmark transaction test do? here -
> http://www.phoronix.com/scan.php?page=article&item=linux_2639_fs&num=1
Oh:
http://blog.goolamabbas.org/2007/06/17/postmark-is-not-a-mail-server-benchmark/
"Thus it pains me a lot that they ar
- Quote from Bron Gondwana (br...@fastmail.fm), on 10.10.2011 at 01:12 -
>
> Here's what our current IMAP servers look like:
>
> 2 x 92GB SSD
> 12 x 2TB SATA
>
> two of the SATA drives are hotspares - though I'm
> wondering if that's actually necessary now, we
> haven't lost any yet, and we
On Sun, Oct 09, 2011 at 04:42:25PM -0400, vg_ us wrote:
> From: "Bron Gondwana"
> >I'm honestly more interested in maildir type workload too, spool doesn't
> >get enough traffic usually to care about IO.
>
> will postmark transaction test do? here -
> http://www.phoronix.com/scan.php?page=article&item=linux_2639_fs&num=1
On Sun, Oct 09, 2011 at 03:24:44PM -0500, Stan Hoeppner wrote:
> That said, there are plenty of mailbox
> servers in the wild that would benefit from the XFS + linear concat
> setup. It doesn't require an insane drive count, such as the 136 in the
> test system above, to demonstrate the gains, esp
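A rough sketch of the XFS + linear concat layout being referred to, with hypothetical device names and counts (not Stan's actual test rig): concatenate whole drives or mirror pairs with md, then create the filesystem with one allocation group per underlying device, so concurrent maildir directories land on different spindles:

    # concatenate four hypothetical mirror pairs into one linear device
    mdadm --create /dev/md10 --level=linear --raid-devices=4 \
        /dev/md1 /dev/md2 /dev/md3 /dev/md4
    # one allocation group per underlying device
    mkfs.xfs -d agcount=4 /dev/md10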
- Quote from Bron Gondwana (br...@fastmail.fm), on 09.10.2011 at 23:29 -
>
> My goodness. That's REALLY recent in filesystem times. Something
> that recent plus "all my eggs in one basket" of changing to a
> large multi-spindle filesystem that would really get the benefits
> of XFS would be
--
From: "Bron Gondwana"
Sent: Sunday, October 09, 2011 4:29 PM
To: "Stan Hoeppner"
Cc:
Subject: Re: Premature "No Space left on device" on XFS
On Sun, Oct 09, 2011 at 02:31:19PM -0500, Stan Hoeppner wrote:
On Sun, Oct 09, 2011 at 02:31:19PM -0500, Stan Hoeppner wrote:
> On 10/9/2011 8:36 AM, Bron Gondwana wrote:
> > How many people are running their mail servers on 24-32 SAS spindles
> > verses those running them on two spindles in RAID1?
>
> These results are for a maildir type workload, i.e. POP/IMAP
On 10/9/2011 9:32 AM, Wietse Venema wrote:
> Stan Hoeppner:
>> On 10/8/2011 3:33 PM, Wietse Venema wrote:
>>> That's a lot of text. How about some hard numbers?
>>
>> Maybe not the perfect example, but here's one such high concurrency
>> synthetic mail server workload comparison showing XFS with a
>> substantial lead over everything but JFS
On 10/9/2011 8:36 AM, Bron Gondwana wrote:
>> http://btrfs.boxacle.net/repository/raid/history/History_Mail_server_simulation._num_threads=128.html
>
> Sorry - I don't see unlinks there. Maybe I'm not reading very
> carefully...
Unfortunately the web isn't littered with a gazillion head-to-head
Stan Hoeppner:
> On 10/8/2011 3:33 PM, Wietse Venema wrote:
> > That's a lot of text. How about some hard numbers?
>
> Maybe not the perfect example, but here's one such high concurrency
> synthetic mail server workload comparison showing XFS with a substantial
> lead over everything but JFS, in w
On Sun, Oct 09, 2011 at 03:56:39AM -0500, Stan Hoeppner wrote:
> On 10/8/2011 3:33 PM, Wietse Venema wrote:
> > That's a lot of text. How about some hard numbers?
>
> Maybe not the perfect example, but here's one such high concurrency
> synthetic mail server workload comparison showing XFS with a substantial
> lead over everything but JFS
On 10/8/2011 3:33 PM, Wietse Venema wrote:
> Stan Hoeppner:
>> On 10/8/2011 5:17 AM, Wietse Venema wrote:
>>> Stan Hoeppner:
nicely. On the other hand, you won't see an EXTx filesystem capable of
anywhere close to 10GB/s or greater file IO. Here XFS doesn't break a
sweat.
On 2011-10-07 15:51, Bernhard Schmidt wrote:
On 07.10.2011 12:12, Reindl Harald wrote:
On 07.10.2011 10:41, Bernhard Schmidt wrote:
Basically the only problem with postfix here is that I cannot have
queue_minfree > 2GB to be on the safe side, so I don't know how to avoid
this problem
have you considered using ext4 instead of XFS?
Stan Hoeppner:
> On 10/8/2011 5:17 AM, Wietse Venema wrote:
> > Stan Hoeppner:
> >> nicely. On the other hand, you won't see an EXTx filesystem capable of
> >> anywhere close to 10GB/s or greater file IO. Here XFS doesn't break a
> >> sweat.
> >
On 10/8/2011 5:17 AM, Wietse Venema wrote:
> Stan Hoeppner:
>> nicely. On the other hand, you won't see an EXTx filesystem capable of
>> anywhere close to 10GB/s or greater file IO. Here XFS doesn't break a
>> sweat.
>
> I recall that XFS was optimized for fast read/write with large
> files, whi
On Sat, Oct 08, 2011 at 06:17:19AM -0400, Wietse Venema wrote:
> Stan Hoeppner:
> > nicely. On the other hand, you won't see an EXTx filesystem capable of
> > anywhere close to 10GB/s or greater file IO. Here XFS doesn't break a
> > sweat.
>
> I recall that XFS was optimized for fast read/write
Stan Hoeppner:
> nicely. On the other hand, you won't see an EXTx filesystem capable of
> anywhere close to 10GB/s or greater file IO. Here XFS doesn't break a
> sweat.
I recall that XFS was optimized for fast read/write with large
files, while email files are small, and have a comparatively hig
On 10/7/2011 2:50 PM, Bernhard Schmidt wrote:
> On 07.10.2011 21:20, Stan Hoeppner wrote:
>
>> If I may make a purely subjective comment: 2.5m spooled emails on a
>> single host is insane.
>
> I'm not arguing that. In the end the system is supposed to cope with
> 300k mails in 24h, balanced on t
On 07.10.2011 21:20, Stan Hoeppner wrote:
If I may make a purely subjective comment: 2.5m spooled emails on a
single host is insane.
I'm not arguing that. In the end the system is supposed to cope with
300k mails in 24h, balanced on two servers, which I think can be
achieved without a lot o
On Fri, Oct 07, 2011 at 02:20:06PM -0500, Stan Hoeppner wrote:
> If I may make a purely subjective comment: 2.5m spooled emails on a
> single host is insane.
I tested this scale some years back; it was actually the motivation
for adding SMTP connection caching to Postfix ~2.1. If one's bulk
engi
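The connection caching mentioned here is controlled from main.cf; a minimal sketch with illustrative values (the relay name is hypothetical):

    # reuse SMTP connections when mail for the same destination piles up
    smtp_connection_cache_on_demand = yes
    # always cache connections to this relay
    smtp_connection_cache_destinations = bulk-relay.example.com
    # idle and reuse limits for cached connections
    smtp_connection_cache_time_limit = 2s
    smtp_connection_reuse_time_limit = 300s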
On 10/7/2011 3:41 AM, Bernhard Schmidt wrote:
> Basically the only problem with postfix here is that I cannot have
> queue_minfree > 2GB to be on the safe side, so I don't know how to avoid
> this problem.
There is a simple solution here, Comp Sci 101 type stuff, which Wietse
has mentioned many t
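For context, queue_minfree takes a byte count in main.cf; a minimal sketch with an illustrative value, kept below the 2GB ceiling Bernhard mentions:

    # stop accepting mail when the queue filesystem has
    # less than ~1.5 GB free
    queue_minfree = 1610612736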
Quote from Bernhard Schmidt:
On 07.10.2011 16:01, lst_ho...@kwsoft.de wrote:
Someone on the XFS mailing list believed it could be filesystem
fragmentation after all. XFS needs an aligned contiguous 16k block to
allocate a new inode chunk, otherwise it will fail. I'm going to test
that later.
On 07.10.2011 16:01, lst_ho...@kwsoft.de wrote:
>> Someone on the XFS mailing list believed it could be filesystem
>> fragmentation after all. XFS needs an aligned contiguous 16k block to
>> allocate a new inode chunk, otherwise it will fail. I'm going to test
>> that later.
>
> This could be checked
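One way to check it is XFS's free-space histogram, which shows whether any contiguous extents of 16k (four 4k blocks) or larger remain; a sketch, assuming the same /dev/sdb as in the df output shown elsewhere in this thread:

    # read-only free space summary, bucketed by extent size
    xfs_db -r -c 'freesp -s' /dev/sdb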
Quote from Bernhard Schmidt:
Hi,
It's not the number of inodes, as is common on ext2/ext3, but the
percentage of space occupied by inodes, which is dependent on the inode
size, the number of inodes, and the size of the volume. Check with
xfs_info; on the filesystems where we are using XFS the percentage is
25%, but it may be different.
On 07.10.2011 12:12, Reindl Harald wrote:
> On 07.10.2011 10:41, Bernhard Schmidt wrote:
>> Basically the only problem with postfix here is that I cannot have
>> queue_minfree > 2GB to be on the safe side, so I don't know how to avoid
>> this problem
> have you considered using ext4 instead of XFS?
On 07.10.2011 10:41, Bernhard Schmidt wrote:
> Basically the only problem with postfix here is that I cannot have
> queue_minfree > 2GB to be on the safe side, so I don't know how to avoid
> this problem
have you considered using ext4 instead of XFS?
Hi,
> It's not the number of inodes, as is common on ext2/ext3, but the
> percentage of space occupied by inodes, which is dependent on the inode
> size, the number of inodes, and the size of the volume. Check with
> xfs_info; on the filesystems where we are using XFS the percentage is
> 25%, but it may be different.
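To inspect and, if needed, raise that percentage: xfs_info reports it as imaxpct, and xfs_growfs can change the cap on a mounted filesystem; a sketch with an illustrative mount point:

    # imaxpct in the meta-data line is the inode space cap
    xfs_info /var/spool/postfix-bulk
    # allow inodes to use up to 50% of the filesystem
    xfs_growfs -m 50 /var/spool/postfix-bulk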
Quote from Bernhard Schmidt:
On 06.10.2011 22:49, lst_ho...@kwsoft.de wrote:
Hi,
lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # touch a
touch: cannot touch `a': No space left on device
lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df .
Filesystem 1K-blocks Used Available Use% Moun
On 06.10.2011 22:49, lst_ho...@kwsoft.de wrote:
Hi,
lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # touch a
touch: cannot touch `a': No space left on device
lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb 10475520 7471160
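When df reports free blocks but touch still fails with ENOSPC, comparing block and inode availability is the quickest first check; note that in the fragmentation case discussed above, XFS may still report free inodes, which is exactly what makes the failure look premature (path is illustrative):

    # compare block and inode exhaustion on the same filesystem
    df /var/spool/postfix-bulk
    df -i /var/spool/postfix-bulk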
Quote from Bernhard Schmidt:
Hey,
a small, not-quite but somewhat postfix-related issue.
We (or better said: an over-eager third party) have been running
some performance tests against our future outbound bulkmail platform
(no, not UCE, university stuff), which consists of multiple SLES11.1
Hey,
a small, not-quite but somewhat postfix-related issue.
We (or better said: an over-eager third party) have been running some
performance tests against our future outbound bulkmail platform (no, not
UCE, university stuff), which consists of multiple SLES11.1 VMs with 1GB
of RAM and 4 vCPU each