On Sun, Oct 09, 2011 at 06:03:36PM -0500, Stan Hoeppner wrote:
> On 10/9/2011 3:29 PM, Bron Gondwana wrote:
>
> > I'm honestly more interested in maildir type workload too, spool doesn't
> > get enough traffic usually to care about IO.
> >
> > (sorry, getting a bit off topic for the postfix list)
>
> Maybe not off topic.  You're delivering into the maildir mailboxes with
> local(8) right?
Cyrus via LMTP (through an intermediate proxy, what's more) actually.

> > We went with lots of small filesystems to reduce single points of
> > failure rather than one giant filesystem across all our spools.
>
> Not a bad architecture.  Has a few downsides but one big upside.  Did
> you really mean Postfix spools here, or did you mean to say maildir
> directories?

Destination Cyrus directories, yes - sorry, not Postfix spools.

> > My goodness.  That's REALLY recent in filesystem times.  Something
>
> XFS has been seeing substantial development for a few years now due to
> interest from RedHat, who plan to make it the default RHEL filesystem in
> the future.  They've dedicated serious resources to the effort,
> including hiring Dave Chinner from SGI.  Dave's major contribution while
> at RedHat has been the code that yields the 10X+ increase in unlink
> performance.  It is enabled by default in 2.6.39 and later kernels.

Fair enough.  It's good to see the extra work going in.

> > that recent plus "all my eggs in one basket" of changing to a
> > large multi-spindle filesystem that would really get the benefits
> > of XFS would be more dangerous than I'm willing to consider.  That's
>
> That's one opinion, probably not shared by most XFS users.  I assume
> your current architecture is designed to mitigate hardware
> failure--focused on the very rare occasion of filesystem corruption in
> absence of some hardware failure event.  I'd make an educated guess that
> the median size XFS filesystem in the wild today is at least 50TB and
> spans dozens of spindles housed in multiple FC SAN array chassis.

Corruption happens for real.  We see maybe 1-2 incidents per month on
average.  We wouldn't even notice them if we didn't keep the SHA-1 of
every single email file in the metadata files, and THAT protected with a
CRC-32 per entry as well.  So we can actually detect them.

> > barely a year old.  At least we're not still running Debian's 2.6.32
> > any more, but still.
> We've been discussing a performance patch to a filesystem driver, not a
> Gnome release. :)  Age is irrelevant.  It's the mainline default.  If
> you have an "age" hangup WRT kernel patches, well that's just silly.

Seriously?  I do actually build my own kernels still, but upgrading is
always an interesting balancing act - random bits of hardware work
differently, and stability is always a question.  Upgrading to a new
Gnome release is much less risky.

> > I'll run up some tests again some time, but I'm not thinking of
> > switching soon.
>
> Don't migrate just to migrate.  If you currently have deficient
> performance with high mailbox concurrency on many spindles, it may make
> sense.  If your performance is fine, and you have plenty of headroom,
> stick with what you have.
>
> I evangelize XFS to the masses because it's great for many things, and
> many people haven't heard of it, or know nothing about it.  They simply
> use EXTx because it's the default.  I'm getting the word out WRT
> possibilities and capabilities.  I'm not trying to _convert_ everyone to
> XFS.
>
> Apologies to *BSD, AIX, Solaris, HP-UX mail server admins if it appears
> I assume the world is all Linux.  I don't assume that--all the numbers
> out here say it has ~99% of all "UNIX like" server installs.

Well, yeah.  I've heard interestingly mixed things from people running
ZFS too, but mostly positive.  We keep our backups on ZFS on real
Solaris - at least one lot.  The others are on XFS on one of those huge
SAN thingies.  But I don't care so much about performance there, because
I'm reading and writing huge .tar.gz files, and XFS is good at that.

Bron.
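
P.S. For anyone curious about the corruption-detection scheme I
mentioned above (a SHA-1 per message file, with each metadata entry
itself covered by a CRC-32), here's a rough sketch of the idea in
Python.  The record layout and function names are made up for
illustration - this is not Cyrus's actual metadata format:

```python
# Sketch of per-message integrity checking: store the SHA-1 of each
# message in a metadata record, and guard the record itself with a
# CRC-32, so corruption of either the message or the metadata shows up.
# Record layout here (hex SHA-1, colon, 8-hex-digit CRC) is invented.
import hashlib
import zlib

def make_record(message_bytes: bytes) -> bytes:
    """Build a metadata entry: hex SHA-1 of the message, plus a
    CRC-32 over that entry appended as 8 hex digits."""
    body = hashlib.sha1(message_bytes).hexdigest().encode("ascii")
    crc = zlib.crc32(body) & 0xFFFFFFFF
    return body + b":" + format(crc, "08x").encode("ascii")

def verify(message_bytes: bytes, record: bytes) -> bool:
    """True only if the record's CRC and the message's SHA-1 both
    check out, so we can tell metadata rot from message rot."""
    body, _, crc_hex = record.rpartition(b":")
    if zlib.crc32(body) & 0xFFFFFFFF != int(crc_hex, 16):
        return False  # the metadata entry itself is corrupt
    return hashlib.sha1(message_bytes).hexdigest().encode("ascii") == body

msg = b"From: test@example.com\r\n\r\nhello\r\n"
rec = make_record(msg)
assert verify(msg, rec)                 # intact message passes
assert not verify(msg + b"bitrot", rec) # a flipped message is caught
```

The point of the two layers is that a bad disk can scribble on the
metadata file just as easily as on the message, and without the CRC
you can't tell which side is lying.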