Hi folks:
I built a RAID0 stripe across two large hardware RAID cards
(identical cards/driver). What I am finding is that direct I/O
performs about as I expect (2x a single RAID card), while buffered
I/O performs about the same as a single RAID card. This is true
across chunk sizes.
Hi Andrew:
I have a quick question. Is a software RAID mirror across two SATA
drives more reliable than just using a single SATA drive, if
properly supported in the rc scripts?
My impression is that it's definitely better. The push back I'm
getting
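As a rough illustration of why the mirror should come out ahead (a toy
model with made-up numbers, assuming independent failures and no
replacement during the period):

# Toy model only: assumes independent failures and no replacement during
# the period; correlated failures and rebuild windows shrink the real gap.
p_drive = 0.05             # assumed chance one SATA drive fails this year
p_single = p_drive         # single drive: any failure loses the data
p_mirror = p_drive ** 2    # mirror: both legs must fail to lose the data
print(p_single, p_mirror)  # 0.05 vs 0.0025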
Hi!
I have been reading the "Clarifications about check/repair, i.e.
RAID SCRUBBING" thread, and there were some answers which were still
slightly unclear to me, and I'd like to get a bit more clarification.
This is all in reference to the /sys/block/md0/md/sync_action setting.
Neil Brown
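For concreteness, here is a small sketch of the sysfs interface the
questions are about (md0 is just an example array, and writing to it
needs root):

import pathlib

md = pathlib.Path("/sys/block/md0/md")   # example array; adjust as needed

def start_scrub(action="check"):
    # "check" only reads and counts mismatches; "repair" also rewrites them.
    (md / "sync_action").write_text(action)

def scrub_status():
    # mismatch_cnt reports the mismatches found by the last check/repair pass.
    return ((md / "sync_action").read_text().strip(),
            int((md / "mismatch_cnt").read_text()))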
Good morning, hope the end of the week is going well for everyone.
Apologies for the rather wide coverage on this note, but I wanted
to make sure all involved parties were in the loop.
We've been chasing a series of anomalies in a large production SAN
environment involving MD/RAID1 and the sysfs/ko
Peter Grandi wrote:
Those are, as such, not very meaningful. What matters most is
whether the starting physical address of each logical volume
extent is stripe aligned (and whether the filesystem makes use
of that), and then the stripe size of the parity RAID set, not
the chunk sizes in themselves.
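As a concrete illustration (the numbers are only examples), stripe
alignment just means the extent start is a multiple of the chunk size
times the number of data disks:

def stripe_aligned(start_sector, chunk_sectors, data_disks):
    # Full-stripe width in sectors = chunk size * number of *data* disks
    # (parity excluded); e.g. a 2+1 RAID5 with 64 KiB chunks has a
    # 128 KiB stripe width = 256 sectors.
    return start_sector % (chunk_sectors * data_disks) == 0

# A logical volume extent starting at sector 2048 on that 2+1 array:
print(stripe_aligned(2048, 128, 2))   # True: 2048 is a multiple of 256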
[ ... ]
>> * Suppose you have a 2+1 array which is full. Now you add a
>> disk and that means that almost all free space is on a single
>> disk. The MD subsystem has two options as to where to add
>> that lump of space; consider why neither is very pleasant.
> No, only one: at the end of the md device