On Wed, 22 Aug 2012 13:47:07 -0700 Dan Williams <d...@fb.com> wrote:

> On Tue, Aug 21, 2012 at 11:00 PM, NeilBrown <ne...@suse.de> wrote:
> > On Wed, 22 Aug 2012 11:57:02 +0800 Yuanhan Liu <yuanhan....@linux.intel.com>
> > wrote:
> >
> >>
> >> -#define NR_STRIPES           256
> >> +#define NR_STRIPES           1024
> >
> > Changing one magic number into another magic number might help your case,
> > but it's not really a general solution.
> >
> > Possibly making sure that max_nr_stripes is at least some multiple of the
> > chunk size might make sense, but I wouldn't want to see a very large 
> > multiple.
> >
> > I think the problems with RAID5 are deeper than that.  Hopefully I'll figure
> > out exactly what the best fix is soon - I'm trying to look into it.
> >
> > I don't think the size of the cache is a big part of the solution.  I think
> > correct scheduling of IO is the real answer.
> 
> Not sure if this is what we are seeing here, but we still have the
> unresolved fast parity effect whereby slower parity calculation gives
> a larger time to coalesce writes.  I saw this effect when playing with
> xor offload.

I did find a case where inserting a printk made it go faster again.
Replacing that with msleep(2) worked as well. :-)

I'm looking for a more robust solution though.
Thanks for the reminder.

NeilBrown
