I know that some RAID management information is saved on the disks.
But I am using 250 GB disks at minimum, and dd'ing that whole amount
would take too long. Does anyone know the exact position where to
dd?
On Tue, 2005-08-02 at 13:27 -0700, Jason Leach wrote:
> Raz:
>
> The 3ware (at least my 9500S-8) keeps the
Hi,
According to RAID theory, READ performance with RAID0, 1 and 5
should be faster than with a single non-RAID disk. I tested it on
Red Hat Linux (ES) on a Pentium PC, but the results are almost the
same. I am using a RocketRAID 404 (HPT374) PCI card to connect 4
master IDE drives.
Ideally it should be 2 or more times faster.
Raz:
The 3ware (at least my 9500S-8) keeps the info about the disk and how
it fits into the RAID array on the disk; I think the DCB is used
for this. You can (and I have) unplug the disks, then connect them to
different ports and the array will still work fine.
When you are adding a disk
On Tue, 2005-08-02 at 13:52 +0300, Raz Ben-Jehuda(caro) wrote:
> I have encountered a weird feature of 3ware RAID.
> When I try to put a disk which belonged to a
> different 3ware RAID into an existing RAID, it fails.
> Any idea, anyone?
Two thoughts:
1) Maybe test the disk in another machine
Hi!
Software RAID and the hell with IDE drives...
I built a software RAID 5 with 9 disks; over that md0 I built a
dm-crypt partition, and it went well for a year.
Now, after moving the PC some miles away, the fun started. At first
everything was OK, RAID s
I have encountered a weird feature of 3ware RAID.
When I try to put a disk which belonged to a
different 3ware RAID into an existing RAID, it fails.
Any idea, anyone?
--
Raz
Long Live the Penguin
The recent change to never ignore the bitmap revealed that the bitmap
isn't being flushed properly when an array is stopped.
We call bitmap_daemon_work three times as there is a three-stage pipeline
for flushing updates to the bitmap file.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diff
Following are 7 patches for md in 2.6.13-rc4
They are all fairly well tested, with the possible exception of '4':
I haven't actually tried throwing BIO_RW_BARRIER requests at any md
devices. However, the code is very straightforward.
I'm happy (even keen) for these to go into 2.6.13.
If it's g
'this_sector' is a virtual (array) address while 'head_position' is
a physical (device) address, so subtraction doesn't make any sense.
devs[slot].addr should be used instead of this_sector.
However, this patch doesn't make much practical difference to the read
balancing due to the effects of lat
Firstly, R1BIO_Degraded was being set in a number of places in the
resync code, but is never used there, so get rid of those settings.
Then: When doing a resync, we want to clear the bit in the bitmap iff
the array will be non-degraded when the sync has completed. However
the current code would
Until the bitmap code was added,
modprobe md
would load the md module. But now the md module is called 'md-mod',
so we really need an alias for backwards compatibility.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/md.c |1 +
1 files changed, 1 insertio
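Presumably the one-line addition to drivers/md/md.c looks like this (a sketch of what such a patch does, not the verbatim diff):

```c
/* let "modprobe md" resolve to the md-mod module */
MODULE_ALIAS("md");
```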
md does not yet support BIO_RW_BARRIER, so be honest about it
and fail (-EOPNOTSUPP) any such requests.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/linear.c|5 +
./drivers/md/multipath.c |5 +
./drivers/md/raid0.c |5 +
./drive
The code currently will ignore the bitmap if the array seems to be
in-sync. This is wrong if the array is degraded, and probably wrong anyway.
If the bitmap says some chunks are not in sync, and the superblock says
everything IS in sync, then something is clearly wrong, and it is safer
to trust
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/md.c |1 -
1 files changed, 1 deletion(-)
diff ./drivers/md/md.c~current~ ./drivers/md/md.c
--- ./drivers/md/md.c~current~ 2005-08-02 15:22:11.0 +1000
+++ ./drivers/md/md.c 2005-08-02 15:22:11.000