Re: raid=noautodetect is apparently ignored?

2007-06-25 Thread Neil Brown
On Tuesday June 26, [EMAIL PROTECTED] wrote: > When I try to disable auto-detection with kernel boot parameters, it > goes ahead and auto-assembles and runs anyway. The md= parameters seem > to be noticed, but don't seem to have any other effect (beyond resulting > in a dmesg). Odd. Maybe yo
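For context, raid=noautodetect only turns off the kernel's own scan of type-0xfd partitions at boot; an initramfs that runs mdadm can still assemble the arrays regardless. A minimal sketch of the checks involved (device names are illustrative, not taken from this thread):

    $ cat /proc/cmdline                      # confirm the parameters actually reached the kernel
    root=/dev/md0 raid=noautodetect md=0,/dev/sda1,/dev/sdb1
    $ fdisk -l /dev/sda                      # in-kernel autodetect only considers type fd partitions
    /dev/sda1   ...   fd  Linux raid autodetect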

raid=noautodetect is apparently ignored?

2007-06-25 Thread Ian Dall
When I try to disable auto-detection with kernel boot parameters, it goes ahead and auto-assembles and runs anyway. The md= parameters seem to be noticed, but don't seem to have any other effect (beyond resulting in a dmesg). Here is the result: $ dmesg | egrep 'raid|md:' Kernel
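A short sketch of how one might tell which path did the assembly, and stop the array afterwards (md0 is illustrative; requires root):

    $ dmesg | egrep 'raid|md:'        # "md: Autodetecting RAID arrays" means the in-kernel path ran
    $ cat /proc/mdstat                # shows what actually got assembled and started
    $ mdadm --stop /dev/md0           # stop an array that was assembled against your wishes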

Re: stripe_cache_size and performance

2007-06-25 Thread Jon Nelson
On Mon, 25 Jun 2007, Dan Williams wrote: > > 7. And now, the question: the best absolute 'write' performance comes > > with a stripe_cache_size value of 4096 (for my setup). However, any > > value of stripe_cache_size above 384 really, really hurts 'check' (and > > rebuild, one can assume) perform
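The knob and the measurement being compared both live under sysfs; a rough sketch of the cycle described above (md0 and the values are illustrative, taken only from the numbers quoted in the thread):

    $ echo 4096 > /sys/block/md0/md/stripe_cache_size   # value reported to give the best write throughput
    $ echo check > /sys/block/md0/md/sync_action        # kick off a check
    $ watch -n5 cat /proc/mdstat                        # the check speed is reported here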

Re: stripe_cache_size and performance

2007-06-25 Thread Justin Piszcz
On Mon, 25 Jun 2007, Jon Nelson wrote: On Thu, 21 Jun 2007, Jon Nelson wrote: On Thu, 21 Jun 2007, Raz wrote: What is your raid configuration? Please note that the stripe_cache_size is acting as a bottleneck in some cases. Well, that's kind of the point of my email. I'll try to restate

Re: stripe_cache_size and performance

2007-06-25 Thread Dan Williams
7. And now, the question: the best absolute 'write' performance comes with a stripe_cache_size value of 4096 (for my setup). However, any value of stripe_cache_size above 384 really, really hurts 'check' (and rebuild, one can assume) performance. Why? Question: After performance goes "bad" does
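A sketch of the observations that answer this kind of question, i.e. whether the check rate recovers after the cache value is changed back (these are the standard md sysfs/proc entries; md0 is illustrative):

    $ cat /sys/block/md0/md/sync_speed            # current check/rebuild rate in KB/s
    $ cat /proc/sys/dev/raid/speed_limit_min      # make sure the rate is not simply being throttled
    $ cat /proc/sys/dev/raid/speed_limit_max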

Re: stripe_cache_size and performance

2007-06-25 Thread Jon Nelson
On Thu, 21 Jun 2007, Jon Nelson wrote: > On Thu, 21 Jun 2007, Raz wrote: > > > What is your raid configuration? > > Please note that the stripe_cache_size is acting as a bottleneck in some > > cases. Well, that's kind of the point of my email. I'll try to restate things, as my question appear
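A rough sketch of the kind of sweep that produces numbers like the ones being restated here (the mount point, file size and value list are assumptions, not from the thread):

    for n in 256 384 512 1024 2048 4096; do
        echo $n > /sys/block/md0/md/stripe_cache_size
        dd if=/dev/zero of=/mnt/md0/test.$n bs=1M count=4096 oflag=direct
    done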

Re: stripe_cache_size and performance [BUG with =64kb]

2007-06-25 Thread Justin Piszcz
On Mon, 25 Jun 2007, Justin Piszcz wrote: On Mon, 25 Jun 2007, Justin Piszcz wrote: On Mon, 25 Jun 2007, Justin Piszcz wrote: On Mon, 25 Jun 2007, Justin Piszcz wrote: On Mon, 25 Jun 2007, Bill Davidsen wrote: Justin Piszcz wrote: I have found a 16MB stripe_cache_size results i

Re: stripe_cache_size and performance [BUG with =64kb]

2007-06-25 Thread Justin Piszcz
On Mon, 25 Jun 2007, Justin Piszcz wrote: On Mon, 25 Jun 2007, Justin Piszcz wrote: On Mon, 25 Jun 2007, Justin Piszcz wrote: On Mon, 25 Jun 2007, Bill Davidsen wrote: Justin Piszcz wrote: I have found a 16MB stripe_cache_size results in optimal performance after testing many many

Re: stripe_cache_size and performance [BUG with =64kb]

2007-06-25 Thread Justin Piszcz
On Mon, 25 Jun 2007, Justin Piszcz wrote: On Mon, 25 Jun 2007, Justin Piszcz wrote: On Mon, 25 Jun 2007, Bill Davidsen wrote: Justin Piszcz wrote: I have found a 16MB stripe_cache_size results in optimal performance after testing many many values :) We have discussed this before, my

Re: stripe_cache_size and performance [BUG with =64kb]

2007-06-25 Thread Justin Piszcz
On Mon, 25 Jun 2007, Justin Piszcz wrote: On Mon, 25 Jun 2007, Bill Davidsen wrote: Justin Piszcz wrote: I have found a 16MB stripe_cache_size results in optimal performance after testing many many values :) We have discussed this before, my experience has been that after 8 x stripe si
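One thing worth keeping in mind when pushing the value up: stripe_cache_size counts cache entries, and each entry holds one page per member device, so the memory cost is roughly entries x 4 KiB x number of disks. A worked example (the disk count is illustrative):

    8192 entries x 4096 bytes/page x 10 member disks = ~320 MiB of stripe cache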

Re: SLES 9 SP3 and mdadm 2.6.1 (via rpm)

2007-06-25 Thread Justin Piszcz
On Mon, 25 Jun 2007, Thorsten Wolf wrote: Hello again. I've upgraded my SLES 9 SP3 system to SLES 10 (no SP1). The RAID I had running on my devices: /dev/hdg1 /dev/hdi1 /dev/hdk1 /dev/hde1 doesn't work because SLES 10 detects them as: /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

Re: SLES 9 SP3 and mdadm 2.6.1 (via rpm)

2007-06-25 Thread Thorsten Wolf
Hello again. I've upgraded my SLES 9 SP3 system to SLES 10 (no SP1). The RAID I had running on my devices: /dev/hdg1 /dev/hdi1 /dev/hdk1 /dev/hde1 doesn't work because SLES 10 detects them as: /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 I guess it's going to be simple, but can anyon
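Since md identifies members by the superblock UUID rather than by device name, the hdX-to-sdX rename itself should not matter. A minimal sketch of reassembling and recording the array (the UUID shown is a placeholder):

    $ mdadm --examine --scan
    ARRAY /dev/md0 level=raid5 num-devices=4 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
    $ mdadm --assemble /dev/md0 --uuid=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx /dev/sd[a-d]1
    $ mdadm --examine --scan >> /etc/mdadm.conf     # so the next boot also finds it by UUID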

Re: stripe_cache_size and performance

2007-06-25 Thread Justin Piszcz
On Mon, 25 Jun 2007, Bill Davidsen wrote: Justin Piszcz wrote: I have found a 16MB stripe_cache_size results in optimal performance after testing many many values :) We have discussed this before, my experience has been that after 8 x stripe size the performance gains hit diminishing retur

Re: stripe_cache_size and performance

2007-06-25 Thread Justin Piszcz
It was going just REALLY slow with 32k; will use 128k+. 1073737728 2007-06-25 13:07 Bonnie.5178.000 On Mon, 25 Jun 2007, Justin Piszcz wrote: On Mon, 25 Jun 2007, Justin Piszcz wrote: On Mon, 25 Jun 2007, Bill Davidsen wrote: Justin Piszcz wrote: I have found a 16MB stripe_cache_size

Re: stripe_cache_size and performance

2007-06-25 Thread Justin Piszcz
On Mon, 25 Jun 2007, Justin Piszcz wrote: On Mon, 25 Jun 2007, Bill Davidsen wrote: Justin Piszcz wrote: I have found a 16MB stripe_cache_size results in optimal performance after testing many many values :) We have discussed this before, my experience has been that after 8 x stripe si

Re: stripe_cache_size and performance

2007-06-25 Thread Justin Piszcz
On Mon, 25 Jun 2007, Bill Davidsen wrote: Justin Piszcz wrote: I have found a 16MB stripe_cache_size results in optimal performance after testing many many values :) We have discussed this before, my experience has been that after 8 x stripe size the performance gains hit diminishing retur

Re: stripe_cache_size and performance

2007-06-25 Thread Bill Davidsen
Justin Piszcz wrote: I have found a 16MB stripe_cache_size results in optimal performance after testing many many values :) We have discussed this before, my experience has been that after 8 x stripe size the performance gains hit diminishing returns, particularly for typical write instead of
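To make the "8 x stripe size" rule of thumb concrete: each stripe_cache_size entry caches one 4 KiB page per member disk, so the setting can be converted into chunks of coverage per device. A worked example (the chunk size is illustrative):

    384 entries x 4 KiB = 1536 KiB cached per member disk
    with a 64 KiB chunk, that is 24 chunks of each disk, i.e. 24 full-width stripes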

Re: RAID 5 Grow

2007-06-25 Thread Bill Davidsen
Richard Scobie wrote: I will soon be adding another same sized drive to an existing 3 drive RAID 5 array. The machine is running Fedora Core 6 with kernel 2.6.20-1.2952.fc6 and mdadm 2.5.4, both of which are the latest available Fedora packages. Is anyone aware of any obvious bugs in either
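For reference, the usual sequence for this kind of grow with mdadm of that vintage looks roughly like the sketch below (device names are illustrative; a reshape is slow and not trivially reversible, so a backup first is sensible):

    $ mdadm /dev/md0 --add /dev/sdd1              # add the new disk as a spare
    $ mdadm --grow /dev/md0 --raid-devices=4      # start the reshape from 3 to 4 devices
    $ cat /proc/mdstat                            # wait for the reshape to complete
    $ resize2fs /dev/md0                          # then grow the filesystem (ext3 assumed here)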