Hi,

On Fri, Apr 04, 2003 at 09:14:40AM +1000, Russell Coker wrote:

> On Fri, 4 Apr 2003 06:52, Emile van Bergen wrote:
> > Something just occurred to me. A lot of systems will have one (logical)
> > disk, either physical or as a RAID-5 or RAID-1 set.
> >
> > Wouldn't it be nice if you could interleave multiple filesystems on the
> > same block device? I.e. instead of giving one filesystem blocks 0
> 
> That will guarantee that every file larger than the chunk size is effectively 
> fragmented.  Not good for performance when you start copying those gigabyte 
> files around.

Good point. But I'd guess that if the stripe size is an exact multiple
of the maximum number of blocks that can be transferred in a single
transaction, the problem becomes much less severe: individual transfers
then never get split across an interleave boundary, so you only pay one
seek per chunk instead of splitting requests on top of that.
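To put a rough number on that, here's a back-of-the-envelope sketch
(the chunk and transfer sizes below are made up, purely for
illustration). It counts how many maximum-sized sequential requests
would straddle an interleave boundary, assuming the file starts
chunk-aligned. With the chunk size an exact multiple of the transfer
size, none do; otherwise a fair fraction of them get split in two.

# Back-of-the-envelope: how many max-sized sequential requests straddle
# an interleave (chunk) boundary?  All sizes in KiB, purely illustrative.

def straddling_requests(file_kib, chunk_kib, xfer_kib):
    # Count requests of xfer_kib KiB that cross a chunk boundary when a
    # chunk-aligned file is read sequentially in max-sized requests.
    crossings = 0
    for offset in range(0, file_kib, xfer_kib):
        if offset // chunk_kib != (offset + xfer_kib - 1) // chunk_kib:
            crossings += 1
    return crossings

file_kib = 1024 * 1024              # a 1 GiB file
xfer_kib = 128                      # assumed max transfer per request
for chunk_kib in (512, 320):        # 512 = 4 x 128; 320 is not a multiple
    n = straddling_requests(file_kib, chunk_kib, xfer_kib)
    total = file_kib // xfer_kib
    print("chunk %d KiB: %d of %d requests cross a boundary"
          % (chunk_kib, n, total))

So you'd still seek once per chunk when copying a big file, but at least
you'd no longer be splitting individual transfers as well.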

Of course, you only want to interleave fs'es whose loads are expected
to be in the same ballpark.

> > That way, you can have multiple filesystems that are immune to each
> > other's corruptions, each with their own type or other properties, that
> 
> What do you mean by "immune to each other's corruptions"?

The point brought up by someone else: if one fs gets corrupted because
of an unclean shutdown, the other, possibly R/O, fs'es may still survive.

> > can share a single logical disk, without the seek penalty. Also, each fs
> > can benefit equally from the higher throughput at the beginning of an
> > oversized disk.
> 
> Best to just discover which file system needs the benefit most and put it at 
> the start.

That still applies. The issue arises when you've already put the
smallest and most performance-critical one at the start (say, the gig
you want for swapping), and you're left with two fs'es that will each
get a lot of use, say /var/spool/mail and /home, which you want
separated without paying the seek penalty.

> > Does anybody know if this has been proposed before? It shouldn't be too
> > hard to achieve on the md layer; instead of allowing just one md per
> > group of disks, allow multiple striped md's per group of disks.
> 
> It should be easy enough to implement with LVM or EVMS.  Why not try it out 
> and see what happens?

I might do just that, if you'll help me devise some nice bonnie++ tests
for the benchmark :)
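In case it helps get that rolling, here's a rough sketch of the kind of
harness I had in mind; the mount points, size and labels are
placeholders, and it just shells out to bonnie++ with its usual
-d/-s/-m options, first on each fs alone and then on both at once,
since the concurrent pass is where the interleaving should show whether
the seek penalty really goes away.

#!/usr/bin/env python
# Sketch of a benchmark harness: bonnie++ on each interleaved fs alone,
# then on both concurrently.  Paths and sizes are placeholders.
import subprocess

MOUNTS = ["/mnt/mail", "/mnt/home"]   # hypothetical mount points
SIZE_MB = 2048                        # should be well above RAM size

def run_bonnie(directory, label):
    # -d test directory, -s file size in MB, -m label for the report;
    # add "-u nobody" (or similar) if running as root.
    return subprocess.Popen(
        ["bonnie++", "-d", directory, "-s", str(SIZE_MB), "-m", label])

# Pass 1: each filesystem on its own, as a baseline.
for mnt in MOUNTS:
    run_bonnie(mnt, "solo-" + mnt.split("/")[-1]).wait()

# Pass 2: both filesystems hammered at the same time -- this is where an
# interleaved layout should beat two plain consecutive partitions.
procs = [run_bonnie(mnt, "conc-" + mnt.split("/")[-1]) for mnt in MOUNTS]
for p in procs:
    p.wait()

The interesting comparison would then be the concurrent pass on the
interleaved layout against the same pass on the two fs'es laid out one
after the other on the same disk.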

Cheers,


Emile.

-- 
E-Advies - Emile van Bergen           [EMAIL PROTECTED]      
tel. +31 (0)70 3906153           http://www.e-advies.nl    

