While we're at it, here's a little issue I had with RAID5; not really
the fault of md, but you might want to know...
I have a 5x250GB RAID5 array for home storage (digital photos, my losslessly
ripped CDs, etc.): 1 IDE drive and 4 SATA drives.
Now, turns out one of the SATA drives is a Maxt
On Tuesday January 17, [EMAIL PROTECTED] wrote:
> Hello, Neil,
>
> Some days before, i read the entire mdadm man page.
Excellent...
>
> I have some ideas, and questions:
>
> Ideas:
> 1. I think it is necessary to add another mode to mdadm, like "nop"
> or similar, just for bitmaps, and
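
For reference, recent mdadm versions can already toggle a write-intent bitmap
on a running array through grow mode, without a separate mode; a minimal
sketch, assuming an array /dev/md0 (hypothetical name):

  # add an internal write-intent bitmap to a running array
  mdadm --grow /dev/md0 --bitmap=internal

  # remove it again
  mdadm --grow /dev/md0 --bitmap=none
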
On Tuesday January 17, [EMAIL PROTECTED] wrote:
> Regarding box crash and process interruption, what is the remaining
> work to be done to save the process status efficiently, in order to
> resume the resize process?
design and implement ...
It's not particularly hard, but it is a separate task and I
On Tuesday January 17, [EMAIL PROTECTED] wrote:
> NeilBrown wrote (ao):
> > +config MD_RAID5_RESHAPE
>
> Would this also be possible for raid6?
Yes. That will follow once raid5 is reasonably reliable. It is
essentially the same change to a different file.
(One day we will merge raid5 and raid6
On Tuesday January 17, [EMAIL PROTECTED] wrote:
> On Jan 17, 2006, at 06:26, Michael Tokarev wrote:
> > This is about code complexity/bloat. It's already complex enough.
> > I rely on the stability of the linux softraid subsystem, and want
> > it to be reliable. Adding more features, especiall
On Tuesday January 17, [EMAIL PROTECTED] wrote:
> Hello Neil ,
>
> On Tue, 17 Jan 2006, NeilBrown wrote:
> > Greetings.
> >
> > In line with the principle of "release early", following are 5 patches
> > against md in 2.6.latest which implement reshaping of a raid5 array.
> > By this I mean a
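
For context, a sketch of how such a reshape might be driven from userspace,
assuming mdadm's grow mode accepts --raid-devices for raid5 and using the
hypothetical names /dev/md0 and /dev/sde1:

  # add a spare that will become the extra data disk
  mdadm /dev/md0 --add /dev/sde1

  # ask md to restripe the array across one more device
  mdadm --grow /dev/md0 --raid-devices=5
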
On Tuesday January 17, [EMAIL PROTECTED] wrote:
> > "NeilBrown" == NeilBrown <[EMAIL PROTECTED]> writes:
>
> NeilBrown> Previously the array of disk information was included in
> NeilBrown> the raid5 'conf' structure which was allocated to an
> NeilBrown> appropriate size. This makes it awkw
On Tuesday January 17, [EMAIL PROTECTED] wrote:
>
> As a sort of conclusion.
>
> There are several features that can be implemented in linux softraid
> code to make it a real RAID, with data safety as the goal. One example is to
> be able to replace a "to-be-failed" drive (think SMART failure
> predictio
On Wednesday January 18, [EMAIL PROTECTED] wrote:
> hi,
> I have a silly question. Why do md request buffers not cross
> devices? That is, why is a bh located on only a single
> storage device? I guess maybe the file system has aligned the bh? Who
> can tell me the exact reasons? Thanks a l
On Wednesday January 18, [EMAIL PROTECTED] wrote:
>
> I agree with the original poster though, I'd really love to see Linux
> Raid take special action on sector read failures. It happens about 5-6
> times a year here that a disk gets kicked out of the array for a simple
> read failure. A rebu
On Wednesday January 18, [EMAIL PROTECTED] wrote:
> On Wed, 18 Jan 2006, John Hendrikx wrote:
>
> > I agree with the original poster though, I'd really love to see Linux
> > Raid take special action on sector read failures. It happens about 5-6
> > times a year here that a disk gets kicked out of
On Wednesday January 18, [EMAIL PROTECTED] wrote:
> 2006/1/18, Mario 'BitKoenig' Holbe <[EMAIL PROTECTED]>:
> > Mario 'BitKoenig' Holbe <[EMAIL PROTECTED]> wrote:
> > > scheduled read-requests. Would it perhaps make sense to split one
> > > single read over all mirrors that are currently idle?
> >
On Wednesday January 18, [EMAIL PROTECTED] wrote:
>
> Hi,
>
> Are there any known issues with changing the number of active devices in
> a RAID1 array?
There is now, thanks.
>
> I'm trying to add a third mirror to an existing RAID1 array of two disks.
>
> I have /dev/md5 as a mirrored pair o
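
For what it's worth, the sequence normally expected to work is to add the new
disk as a spare and then raise the active device count; a minimal sketch, with
/dev/hdc1 standing in for the new (hypothetical) partition:

  # add the third disk as a spare first
  mdadm /dev/md5 --add /dev/hdc1

  # then grow to three active devices; md resyncs onto the new mirror
  mdadm --grow /dev/md5 --raid-devices=3
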
On Wednesday January 18, [EMAIL PROTECTED] wrote:
>
> >personally, I think this is useful functionality, but my personal
> >preference is that this would be in DM/LVM2 rather than MD. but given
> >Neil is the MD author/maintainer, I can see why he'd prefer to do it in
> >MD. :)
>
> Why don't M
2006/1/18, Mario 'BitKoenig' Holbe <[EMAIL PROTECTED]>:
> Mario 'BitKoenig' Holbe <[EMAIL PROTECTED]> wrote:
> > scheduled read-requests. Would it perhaps make sense to split one
> > single read over all mirrors that are currently idle?
>
> Ah, I got it from the other thread - seek times :)
> Perhap
>personally, I think this is useful functionality, but my personal
>preference is that this would be in DM/LVM2 rather than MD. but given
>Neil is the MD author/maintainer, I can see why he'd prefer to do it in
>MD. :)
Why don't MD and DM merge some bits?
Jan Engelhardt
Max Waterman wrote:
Mark Hahn wrote:
They seem to suggest RAID 0 is faster for reading than RAID 1, and I
can't figure out why.
with R0, streaming from two disks involves no seeks;
with R1, a single stream will have to read, say 0-64K from the first
disk,
and 64-128K from the second. these c
On Wed, 18 Jan 2006, John Hendrikx wrote:
> I agree with the original poster though, I'd really love to see Linux
> Raid take special action on sector read failures. It happens about 5-6
> times a year here that a disk gets kicked out of the array for a simple
> read failure. A rebuild of the ar
Sander wrote:
Michael Tokarev wrote (ao):
The most problematic case so far, which I have described numerous times (like,
"why linux raid isn't Raid really, why it can be worse than plain
disk") is when, after a single sector read failure, md kicks the whole
disk off the array, and when you start resync
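
One partial mitigation available from userspace is to scrub the array regularly
so latent bad sectors are hit while redundancy is still intact, and to re-add a
kicked member instead of doing a full rebuild; a hedged sketch, assuming a
kernel that exposes the md sync_action sysfs file and the hypothetical names
md0 and /dev/sdb1:

  # read every sector in the background so latent bad blocks surface early
  echo check > /sys/block/md0/md/sync_action

  # afterwards, see how many inconsistencies were found
  cat /sys/block/md0/md/mismatch_cnt

  # put a kicked member back; with a write-intent bitmap only the
  # out-of-date parts need to be resynced
  mdadm /dev/md0 --re-add /dev/sdb1
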
On Wednesday January 18, [EMAIL PROTECTED] wrote:
> Mark Hahn wrote:
> >> They seem to suggest RAID 0 is faster for reading than RAID 1, and I
> >> can't figure out why.
> >
> > with R0, streaming from two disks involves no seeks;
> > with R1, a single stream will have to read, say 0-64K from the
Max Waterman <[EMAIL PROTECTED]> wrote:
> Still, it seems like it should be a solvable problem...if you order the
> data differently on each disk; for example, in the two disk case,
> putting odd and even numbered 'stripes' on different platters [or sides
Well, unfortunately for today's hard dis
On Tue, Jan 17, 2006 at 12:09:27PM +, Andy Smith wrote:
> I'm wondering: how well does md currently make use of the fact there
> are multiple devices in the different (non-parity) RAID levels for
> optimising reading and writing?
Thanks all for your answers.
On Wed, 2006-01-18 at 09:14 +0100, Sander wrote:
> If the (harddisk internal) remap succeeded, the OS doesn't see the bad
> sector at all I believe.
True for ATA; in the SCSI case you may be told about the remap having
occurred, but it's a "by the way" type message, not an error proper.
> If you (th
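
Either way, the drive's own remap counters can be inspected from userspace with
smartctl (from smartmontools); a sketch, with /dev/sda and /dev/sdb as stand-in
device names:

  # ATA: the Reallocated_Sector_Ct attribute counts sectors the drive remapped
  smartctl -A /dev/sda

  # SCSI: the grown defect list serves a similar purpose
  smartctl -a /dev/sdb
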
Max Waterman wrote:
Still, it seems like it should be a solvable problem...if you order the
data differently on each disk; for example, in the two disk case,
putting odd and even numbered 'stripes' on different platters [or sides
of platters].
The only problem there is determining the int
linux-kernel snipped from cc list.
Sander wrote:
Michael Tokarev wrote (ao):
The most problematic case so far, which I have described numerous times (like,
"why linux raid isn't Raid really, why it can be worse than plain
disk") is when, after a single sector read failure, md kicks the whole
disk off the
Michael Tokarev wrote (ao):
> The most problematic case so far, which I have described numerous times (like,
> "why linux raid isn't Raid really, why it can be worse than plain
> disk") is when, after a single sector read failure, md kicks the whole
> disk off the array, and when you start resync (after repla
Mark Hahn wrote:
They seem to suggest RAID 0 is faster for reading than RAID 1, and I
can't figure out why.
with R0, streaming from two disks involves no seeks;
with R1, a single stream will have to read, say 0-64K from the first disk,
and 64-128K from the second. these could happen at the sam
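
A crude way to see the effect being described is to time a large sequential
read on each array type; a rough sketch, assuming the hypothetical devices
/dev/md0 (RAID0) and /dev/md1 (RAID1):

  # stream 2GB sequentially from each array and compare the reported throughput
  dd if=/dev/md0 of=/dev/null bs=1M count=2048
  dd if=/dev/md1 of=/dev/null bs=1M count=2048
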