This loop that sets up the hash_table has problems.
Careful examination will show that the last time through, everything
but the first line is pointless. This is because all it does is
change 'cur' and 'size', and neither of these is used after
the loop. This should ring warning bells...
That l...
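As a rough illustration (hypothetical structures and sizes, not the
actual md/raid0 code), a loop of the shape being described, plus the
guard that makes the final pass safe, could look like this:

	#include <stdio.h>

	#define NSLOTS 4   /* hash table entries (hypothetical) */
	#define NZONES 2   /* strip zones (hypothetical) */

	int main(void)
	{
		int zone_size[NZONES] = { 2, 2 };  /* zone sizes, in hash slots */
		int hash_table[NSLOTS];
		int cur = 0, size = zone_size[0];

		for (int i = 0; i < NSLOTS; i++) {
			hash_table[i] = cur;   /* the only useful work on the last pass */
			if (i == NSLOTS - 1)
				break;         /* 'cur' and 'size' are dead after the
				                * loop; without this guard the update
				                * below would read zone_size[NZONES],
				                * one element off the end */
			if (--size == 0) {
				cur++;
				size = zone_size[cur];
			}
		}

		for (int i = 0; i < NSLOTS; i++)
			printf("hash_table[%d] -> zone %d\n", i, hash_table[i]);
		return 0;
	}

Without the early break, the final pass would read zone_size[NZONES],
exactly the kind of stray off-the-end access the patches below describe.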
Else a subsequent bio_clone might make a mess.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Cc: "Don Dupuis" <[EMAIL PROTECTED]>
Cc: Jens Axboe <[EMAIL PROTECTED]>
### Diffstat output
 ./fs/bio.c |    3 +++
 1 file changed, 3 insertions(+)
diff ./fs/bio.c~current~ ./fs/bio.c
--- ./fs/bio.c~current~
Two patches to fix two possible Oopses in md/raid.
Both are suitable for 2.6.17.
Both involve indexing off the end of an array, and only cause a
problem if the access ends up hitting a non-mapped page.
The first only affects raid0, in fairly rare but possible circumstances.
The second can affect all use...
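For context on why such bugs are intermittent: an off-the-end read
usually lands in memory that is still mapped and silently returns
garbage; it only Oopses when the array happens to end right at a page
boundary. A deliberately-buggy user-space sketch of the same effect:

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		int *a = malloc(1024 * sizeof(int));

		if (!a)
			return 1;
		/* a[1024] is one element past the end.  Most of the time the
		 * memory just after the allocation is still mapped, so this
		 * "works" and prints garbage; only when the stray read falls
		 * on an unmapped page does it fault -- which is why bugs like
		 * the two described above can lurk for a long time. */
		printf("one past the end: %d\n", a[1024]);
		free(a);
		return 0;
	}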
(Please don't reply off-list. If the conversation starts on the list,
please leave it there unless there is a VERY GOOD reason).
On Monday May 22, [EMAIL PROTECTED] wrote:
> On 5/19/06, Neil Brown <[EMAIL PROTECTED]> wrote:
> >
> > On Friday May 19, [EMAIL PROTECTED] wrote:
> > > As I can see th...
On Monday May 22, [EMAIL PROTECTED] wrote:
> How will the raid5 resize in 2.6.17 be different from raidreconf?
It is done (mostly) in the kernel while the array is active, rather
than completely in user-space while the array is off-line.
> Will it be less risky to grow an array that way?
It should...
On Sunday May 21, [EMAIL PROTECTED] wrote:
> > Please read
> >
> > http://www.spinics.net/lists/raid/msg11838.html
> >
> > and ask if you have further questions.
> >
> Does this implementation also need to do delayed updates to the stripe
> cache? I.e. we bypass the cache and get the requester the data it
> needs but then schedule that data to also be copied into...
How will the raid5 resize in 2.6.17 be different from raidreconf?
Will it be less risky to grow an array that way?
Will it be possible to migrate raid5 to raid6?
(And while talking of that: can I add for example two disks and grow *and*
migrate to raid6 in one sweep, or will I have to go raid6 an...
Please read
http://www.spinics.net/lists/raid/msg11838.html
and ask if you have further questions.
Does this implementation also need to do delayed updates to the stripe
cache? I.e. we bypass the cache and get the requester the data it
needs but then schedule that data to also be copied into...
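A minimal user-space sketch of the scheme being asked about (the names
read_bypass/flush_pending and the data layout are mine, not md's):
serve the read straight from the backing device, give the requester its
data immediately, and queue the copy into the stripe cache for later:

	#include <stdio.h>
	#include <string.h>

	#define STRIPE_SIZE 8
	#define NSTRIPES    4
	#define MAXPENDING  16

	/* Toy stand-ins; the real stripe cache is far more involved. */
	static char disk[NSTRIPES][STRIPE_SIZE];         /* "backing device" */
	static char stripe_cache[NSTRIPES][STRIPE_SIZE]; /* the cache        */
	static int  cached[NSTRIPES];                    /* validity flags   */

	static int pending[MAXPENDING];          /* deferred cache fills */
	static int npending;

	/* Read path: bypass the cache, satisfy the requester at once, and
	 * only *schedule* the copy into the stripe cache for later. */
	static void read_bypass(int stripe, char *out)
	{
		memcpy(out, disk[stripe], STRIPE_SIZE); /* requester gets data now */
		if (!cached[stripe] && npending < MAXPENDING)
			pending[npending++] = stripe;   /* delayed cache update    */
	}

	/* Run later (in the kernel this would be daemon/workqueue context):
	 * apply the deferred updates so later writes see a warm cache. */
	static void flush_pending(void)
	{
		for (int i = 0; i < npending; i++) {
			int s = pending[i];
			memcpy(stripe_cache[s], disk[s], STRIPE_SIZE);
			cached[s] = 1;
		}
		npending = 0;
	}

	int main(void)
	{
		char buf[STRIPE_SIZE];

		strcpy(disk[2], "stripe2");
		read_bypass(2, buf);          /* fast path: no cache walk */
		printf("requester got: %s\n", buf);
		flush_pending();              /* cache catches up later   */
		printf("cache now has: %s (valid=%d)\n", stripe_cache[2], cached[2]);
		return 0;
	}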
On Sunday May 21, [EMAIL PROTECTED] wrote:
>
> Question :
> What is the cost of not walking through the raid5 code in the
> case of READ?
> If I add error-handling code, will it suffice?
>
Please read
http://www.spinics.net/lists/raid/msg11838.html
and ask if you have further questions.
Hello Neil,
I am measuring read performance of two raid5 arrays, each with 7 SATA
disks and a 1MB chunk size.
When I set stripe_cache_size to 4096 I get 240 MB/s; IO'ing from
the two raids together ended up at 270 MB/s.
I have added code in make_request which bypasses the raid5 logic in
the case of reads.
It looks like t...
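A sketch of the remapping such a read bypass has to perform. This
mirrors the arithmetic of the kernel's raid5_compute_sector() for the
default left-symmetric layout, but compute_read_target and the
parameters are mine, and a real bypass would also have to check that
the array is not degraded -- part of the error-handling cost asked
about above:

	#include <stdio.h>

	/* Hypothetical parameters matching the setup described above:
	 * 7 disks, 1MB chunks (2048 sectors of 512 bytes). */
	#define RAID_DISKS 7
	#define CHUNK_SECT 2048

	/* Translate an array sector to (data disk, disk sector) for the
	 * left-symmetric raid5 layout, so a read can be sent straight to
	 * the member disk instead of going through the stripe cache. */
	static void compute_read_target(long long r_sector, int *dd,
	                                long long *new_sector)
	{
		long long chunk  = r_sector / CHUNK_SECT;    /* data chunk index */
		long long offset = r_sector % CHUNK_SECT;    /* offset within it */
		long long stripe = chunk / (RAID_DISKS - 1); /* stripe number    */
		int dd_in_stripe = chunk % (RAID_DISKS - 1); /* which data chunk */
		int pd = RAID_DISKS - 1 - (int)(stripe % RAID_DISKS); /* parity */

		*dd = (pd + 1 + dd_in_stripe) % RAID_DISKS;  /* left-symmetric   */
		*new_sector = stripe * CHUNK_SECT + offset;  /* sector on disk   */
	}

	int main(void)
	{
		long long sectors[] = { 0, 2048, 100000 };

		for (int i = 0; i < 3; i++) {
			int dd;
			long long s;

			compute_read_target(sectors[i], &dd, &s);
			printf("array sector %lld -> disk %d, sector %lld\n",
			       sectors[i], dd, s);
		}
		return 0;
	}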