ray down? I do realise that 2
} controller failures at the same time would lose everything.
Wow. Sounds like what I said a few months ago. I think I also recommended
RAID6.
Guy
}
} Steve.
}
or redundancy.
Or six 8-disk RAID6 arrays (using one disk from each controller). That way any 2
controllers can fail and your system will still be running. 12 disks will
be used for redundancy. Might be excessive!
Combine them into a RAID0 array.
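Something like this would do it (just a sketch, with made-up device names;
one disk from each of the 8 controllers per array):
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sda1 /dev/sdc1 /dev/sde1 /dev/sdg1 /dev/sdi1 /dev/sdk1 /dev/sdm1 /dev/sdo1
(repeat for /dev/md1 through /dev/md5, using the next disk on each controller)
mdadm --create /dev/md6 --level=0 --raid-devices=6 /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5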
Guy
man md
man mdadm
I use RAID6. Happy with it so far, but haven't had a disk failure yet.
RAID5 sucks because if you have 1 failed disk and 1 bad block on any other
disk, you are hosed.
Hope that helps.
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED]
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Janek Kozicki
} Sent: Monday, October 08, 2007 6:47 PM
} To: linux-raid@vger.kernel.org
} Subject: Re: very degraded RAID5, or increasing capacity by adding discs
}
} Janek Kozicki said:
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Richard Scobie
} Sent: Monday, October 08, 2007 3:27 PM
} To: linux-raid@vger.kernel.org
} Subject: Re: very degraded RAID5, or increasing capacity by adding discs
}
} Janek Kozicki wrote:
}
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Neil Brown
} Sent: Monday, June 04, 2007 2:59 AM
} To: Guy Watkins
} Cc: 'linux-raid'
} Subject: Re: RAID6 clean?
}
} On Monday June 4, [EMAIL PROTECTED] wrote:
} > I have a RA
ck the 1 disk. You must be
able to determine which disk was written to. I don't know how to do that
unless you have the output from "mdadm -D" during the create/syncing.
But please don't proceed until someone else confirms what I say or gives
better advice!
Guy
ig EMC array we had had enough battery power to power about 400 disks
while the 16 Gig of cache was flushed. I think EMC told me the batteries
would last about 20 minutes. I don't recall if the array was usable during
the 20 minutes. We never tested a power failure.
Guy
It would be nice if there were an array option to allow an "un-clean" array
to be started, an option that would be set in the md superblock.
Thanks,
Guy
reaches the barrier, corruption should be assumed.
It seems to me each block device that represents more than 2 other devices
must do a flush at a barrier so that all devices will cross the barrier at
the same time.
Guy
} -Original Message-
} From: Ruslan Sivak [mailto:[EMAIL PROTECTED]
} Sent: Friday, May 04, 2007 7:22 PM
} To: Guy Watkins
} Cc: linux-raid@vger.kernel.org
} Subject: Re: raid10 on centos 5
}
} Guy Watkins wrote:
} > } -Original Message-
} > } From: [EMAIL PROTECTED] [mailto
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Guy Watkins
} Sent: Saturday, April 28, 2007 8:52 PM
} To: linux-raid@vger.kernel.org
} Subject: RAID6 question
}
} I read in processor.com that Adaptec has a RAID 6/60 that is patented
s. With RAID1+RAID0, any one disk can fail; a second
failure has a 1 in 3 chance of vast data loss.
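(That 1 in 3 is my arithmetic, assuming a 4-disk layout of 2 mirrored pairs:
once one disk is dead, only its mirror partner is critical, so 1 of the 3
remaining disks.)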
I hope this helps,
Guy
I read in processor.com that Adaptec has a RAID 6/60 that is patented.
Does Linux RAID6 have a conflict?
Thanks,
Guy
Adaptec also has announced a new family of Unified Serial (meaning 3Gbps
SAS/SATA) RAID controllers for PCI Express. Five models include cards with
four, eight, 12, and 16
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of H. Peter Anvin
} Sent: Monday, April 23, 2007 1:49 PM
} To: Guy Watkins
} Cc: linux-raid@vger.kernel.org
} Subject: Re: mkinitrd and RAID6 on FC5
}
} Guy Watkins wrote:
} > Is this a RED
GB / 16 MB /?/ SATA
And 2 of these:
Seagate "Barracuda 7200.9" 300 GB / 16 MB /?/ SATA
No problems with above, very quiet and cool.
And many (17+) 18 Gig 10,000 RPM SCSI disks. 1 or 2 have failed in the last
2 years. But they are way out of warranty.
No problems, very
est to read from the
disk being replaced. You could even migrate many disks at the same time.
Your data would remain redundant throughout the process.
Guy
}
} --
} bill davidsen <[EMAIL PROTECTED]>
} CTO TMR Associates, Inc
} Doing interesting things with small computers since 1979
}
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of berk walker
} Sent: Saturday, September 09, 2006 4:09 PM
} To: Justin Piszcz
} Cc: Sandra L. McGrew; linux-raid@vger.kernel.org
} Subject: Re: 2 Hard Drives & RAID
}
} Justin Piszcz wrote:
ace), /dev/sdb would become the new /dev/sda anyway. And so fstab
} would be correct in pointing to /dev/sda.
You should mirror swap space! Otherwise you will have an outage if a disk
fails. Most places would not accept an outage for a simple disk failure.
IMO.
Guy
}
} Sound opinions welcome.
}
hat I want.
Btw, I plan to buy 3 or 4 Seagate ST3320620AS disks. Barracuda
7200.10 SATA 320G.
Thanks,
Guy
ing blkdev_issue_flush were
} appropriate.
}
} Whether filesystems actually do this, I am less certain.
What if a disk is hot-added while the filesystem is mounted, and the new
disk does not support barriers but the old disks do? Or you have a mix?
If the new disk can't be handled co
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Ming Zhang
} Sent: Tuesday, April 11, 2006 6:13 PM
} To: Andy Smith
} Cc: linux-raid@vger.kernel.org
} Subject: Re: mdadm + raid1 of 2 disks and now need to add more
}
} On Tue, 2006-04-11
v/md0" or "mdadm -E /dev/hda2".
Or the output from "cat /proc/mdstat", from before you re-created the array.
Guy
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Technomage
} Sent: Wednesday, March 29, 2006 11:15
this:
dd if=/dev/hdb2 of=/dev/null bs=64k
or
dd if=/dev/hdb of=/dev/null bs=64k
Guy
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Technomage
} Sent: Wednesday, March 29, 2006 12:09 AM
} To: linux-raid@vger.kernel.org
} Subject
Does test 1 have 4 processes?
Does test 2 have 1 process?
The number of testing processes should be the same in both tests.
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Raz Ben-Jehuda(caro)
} Sent: Monday, March 06, 2006 6:46 AM
} To:
Not the same amount! Match the size of the NV RAM disk with RAM at a
fraction of the cost. With the money saved, buy a computer for the kids.
:)
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Andy Smith
} Sent: Monday, February 13, 200
Don't forget, that speed is per disk! :)
In about 10 years we will laugh at how slow this is.
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Mike Hardy
} Sent: Friday, February 03, 2006 4:56 PM
} To: linux-raid@vger.kernel.org
} Subjec
that is overkill.
Guy
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Mario 'BitKoenig' Holbe
} Sent: Thursday, February 02, 2006 1:42 PM
} To: linux-raid@vger.kernel.org
} Subject: Re: RAID 16?
}
} Matthias Urlichs <[EM
s the CPU load? The above does not reflect CPU usage.
I believe it reports the average number of outstanding I/Os. With 6 arrays
syncing, I would have expected 6 or 12.
Run "top", or "sar -u 10 10" to see CPU usage.
Guy
}
} Does anybody have an idea what is the problem? Thank a lot in
en a light bulb fail while on, only during on/off cycles.
I have seen disks go bad while in use. But more often I see disks that were
fine until the power was cycled.
Guy
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Mark Hahn
} Sent:
The last disk is not excluded, since the excluded space is used for parity
(xor).
An equal part of all disks is excluded (the parity data).
However, the total is the same as if 1 disk were excluded. :)
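For example (my own numbers): four 100 GB disks in RAID5 give 300 GB of
usable space. The 100 GB of parity is spread across all four disks, 25 GB
each, but the total set aside is still exactly one disk's worth.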
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:linux-raid-
> [EMAIL PROTECTED] O
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:linux-raid-
> [EMAIL PROTECTED] On Behalf Of Raz Ben-Jehuda(caro)
> Sent: Monday, November 21, 2005 9:47 AM
> To: Guy
> Cc: Linux RAID Mailing List
> Subject: Re: comparing FreeBSD to linux
>
> fetching from
faster under Linux? :)
That would explain the 9.3 times increase in CPU load.
It is important that you are comparing the CPU load at the same disk rate,
or at least factor in the disk rate.
Guy
>
> I need to switch to linux from freebsd. I am using in linux 2.6.6 kernel .
> is problem
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:linux-raid-
> [EMAIL PROTECTED] On Behalf Of Carlos Carvalho
> Sent: Saturday, November 19, 2005 2:30 PM
> To: linux-raid@vger.kernel.org
> Subject: raid5 reliability (was raid5 write performance)
>
> Guy ([EM
> -Original Message-
> From: Mike Hardy [mailto:[EMAIL PROTECTED]
> Sent: Friday, November 18, 2005 11:57 PM
> To: Guy
> Cc: 'Dan Stromberg'; 'Jure Pečar'; linux-raid@vger.kernel.org
> Subject: Re: raid5 write performance
>
>
>
> Gu
t think the file system can tell
when a write is truly complete. I don't recall ever having a Linux system
crash, so I am not worried. But power failures cause the same risk, or
maybe more. I have seen power failures, even with a UPS!
Guy
>
> Dan Stromberg wrote:
> &
In most of your results, your CPU usage is very high. Once you get to about
90% usage, you really can't do much else, unless you can improve the CPU
usage.
Guy
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:linux-raid-
> [EMAIL PROTECTED] On Behalf Of Holge
ot data backup! It is hardware redundancy!!
Data loss or corruption can still occur with a RAID solution. RAID won't
help if someone fat-fingers an "rm" command.
Corruption of the filesystem can also cause major data loss, without a
failed disk.
If the data was lost, what would i
RAID10 will work with an odd number of disks.
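For example (just a sketch, made-up device names):
mdadm --create /dev/md0 --level=10 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
With the default near-2 layout that gives 1.5 disks worth of usable space
from the 3 disks.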
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:linux-raid-
> [EMAIL PROTECTED] On Behalf Of Gregory Seidman
> Sent: Monday, August 22, 2005 5:31 PM
> To: Linux RAID list
> Subject: RAID10 vs. LVM on RAID1
>
> Is there any advantage to
6k av, 508128k used, 7168k free, 0k shrd, 128412k buff
I think buff (128412k) is the amount of "unused" memory. But I am not sure. I
have never had a memory issue with Linux, so I have not researched this. But I
have on other Unixes.
Guy
> -Original Message-
> From: [EMAIL PR
> From: Martin K. Petersen [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, April 20, 2005 11:49 AM
> To: Guy
> Cc: 'Frank Wittig'; [EMAIL PROTECTED]; linux-raid@vger.kernel.org
> Subject: Re: Questions about software RAID
>
> >>>>> "Guy"
> Hervé Eychenne wrote:
>
> >Maybe you are an experienced guy so it seems so simple to you... but
> >I'm always amused when an experienced guy refuses to make things
> >simpler for those who aren't as much as he is. And sends them to
> >Microsoft. Great.
>
erformance data comparing the 2?
One bad point for EVMS, no RAID6. :(
One good point for EVMS, Bad Block Relocation (but only on writes).
Not sure how EVMS handles read errors.
I am getting on the mailing list(s). I must know more about this!!!
Guy
> -Original Message-
> F
or. It is common for the SCSI card to supply term power. Then all
of your disks would be configured the same.
Guy
>
>
> On Sat, 2 Apr 2005, Guy wrote:
>
> >
> >
> >> -Original Message-
> >> From: [EMAIL PROTECTED] [mailto:linux-raid-
> >&
it shouldn't have to be that way - and that's one
> reason I did not jump on the termination right away. Evidently I have
> cheap caddies .
You just said the cable has a terminator block after the last drive!! What
is that? It sounds like the terminato
es they keep
> > getting flagged as missing. If I do a "Read system config" the drives
> > show up and the flag goes away. A few minutes later they are flagged as
> >
> > missing again. This is a little discouraging as it appears I have a
> > problem. Any
nt values type these 2 lines:
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
To temporarily change the defaults use these 2 commands:
echo 1000 > /proc/sys/dev/raid/speed_limit_min
echo 10 > /proc/sys/dev/raid/speed_limit_max
Adjust above as required.
I am not sure what bd_claim is, but it is somewhat like open(). My guess is
your disk is in use, maybe mounted. Run this command and send the output.
df
Guy
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:linux-raid-
> [EMAIL PROTECTED] On Behalf Of rrk
> Sent: S
someone fixes the problem and you want to re-sync. Both "A" and "B" have
done disk I/O that the other does not know about. Both bitmaps must be used
to re-sync, or a 100% re-sync must be done.
I think what I have outlined above is quite reasonable.
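A concrete example (my own illustration): if A's bitmap shows chunks 1 and 5
were written and B's bitmap shows chunks 3 and 5 were written, then chunks 1,
3 and 5 (the union of the two bitmaps) must be copied from whichever side you
decide to keep onto the other side.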
Guy
-Original Message--
Oh! I never read it the way you just said. I had been reading it as copying
in both directions based on both bitmaps! What you said below seems
reasonable.
Guy
-Original Message-
From: Lars Marowsky-Bree [mailto:[EMAIL PROTECTED]
Sent: Saturday, March 19, 2005 12:54 PM
To: Guy; '
ge a superset at the block level? AND the 2
blocks and update both? :) I don't think a filesystem would like that. It
would be real bad to re-sync if the filesystem is mounted!
In the case of a split brain, I think one must be 100% voided, and a full
re-sync must be done.
Guy
-O
You said:
"If your write size is smaller than chunk_size*N (N = number of data blocks
in a stripe), in order to calculate correct parity you have to read data
from the remaining drives."
Neil explained it in this message:
http://marc.theaimsgroup.com/?l=linux-raid&m=1086821907
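A worked example (my own numbers, not from Neil's message): a 5-disk RAID5
with a 64k chunk has N = 4 data blocks per stripe, so a full-stripe write is
4 x 64k = 256k. A 64k write touches only one chunk, so md must first read
either the old data plus the old parity, or the other three data chunks,
before it can compute the new parity.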
the config file.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Erik Wasser
Sent: Sunday, March 13, 2005 7:22 AM
To: linux-raid@vger.kernel.org
Subject: 'Segmentation fault' after running 'mdadm --examine --brief --scan
--config=partitions&
I guess I need to know more history. Before your problems, was hdi1 a
spare?
Describe the array before you had problems. Then what went wrong.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ronny Plattner
Sent: Saturday, March 12, 2005 1:54 PM
To
It seems like a trick question! :)
You don't use "missing" with assemble; it is a keyword for the create command.
For assemble, just don't list that device.
mdadm --assemble --run --force /dev/md2 /dev/hdi1 /dev/hdk1 /dev/hdo1
mdadm will know which disk is which.
Guy
my opinion. My opinions are the best! :)
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of John McMonagle
Sent: Thursday, March 10, 2005 7:02 PM
To: Brad Campbell
Cc: linux-raid@vger.kernel.org
Subject: Re: Convert raid5 to raid1?
Brad
Not saying
The only problem I have is related to bad blocks. This problem is common to
all RAID types. RAID5 is more likely to have problems.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Brad Campbell
Sent: Thursday, March 10, 2005 6:04 PM
To: John
I have no idea, but...
Is the disk I/O reads or writes? If writes, scary! Maybe data destined
for the array goes to the spare sometimes. I hope not. I feel safe with my
2.4 kernel. :)
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Peter
0 bad block example would take almost 17 hours.
I think 1000 bad blocks at one time is an indication you have a head
failure. In that case, the disk is bad.
Does anyone know how many spare blocks are on a disk?
My worst disk has 28 relocated bad blocks.
Guy
-Original Message-
From:
That has not been my experience, but I have Seagate drives!
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Brad Campbell
Sent: Tuesday, March 01, 2005 11:57 PM
To: Robin Bowes
Cc: linux-raid@vger.kernel.org
Subject: Re: Joys of spare disks!
Robin
I think the overhead related to fixing the bad blocks would be insignificant
compared to the overhead of degraded mode.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Molle Bestefich
Sent: Tuesday, March 01, 2005 10:51 PM
To: linux-raid
It is a 7 drive array. If you use 6 of 7 drives, md will not try to
re-sync. But I have no idea how to re-use the previous RAID data.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Omri Schwarz
Sent: Tuesday, March 01, 2005 6:09 PM
To: linux
ory is needed, the Kernel should
be able to relocate as needed. Maybe no code exists to do that, but I think
it would be easier to do than to swap to disk (assuming you have enough free
memory).
Guy
-Original Message-
From: Andrew Walrond [mailto:[EMAIL PROTECTED]
Sent: Friday, January 0
must re-sync 3 disks.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Christian Schmid
Sent: Sunday, February 27, 2005 10:34 AM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]; linux-raid@vger.kernel.org
Subject: BUG (Deadlock) in 2.6.10
Hello.
Just for y
This is very good! But most of my disk space is RAID5. Any chance you have
similar plans for RAID5?
Thanks,
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Nagpure, Dinesh
Sent: Wednesday, February 23, 2005 2:56 PM
To: '[EMAIL PROTECTED
drives, SCSI cards
and terminators. But those disks still work today in U2W (LVD-80) mode.
My last attempt to mix SCSI disks and tapes was over 1 year ago, using RH9.
On previous attempts I would have used RH7. I don't recall ever using RH8.
Guy
-Original Message-
From: [EMAIL PROT
buses, 17 disk drives
and 2 tape drives. Works just fine. But no tape on the same SCSI bus as
disks.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Michael Tokarev
Sent: Tuesday, February 22, 2005 4:53 PM
To: linux-raid@vger.kernel.org
Subject: Re
3 tape (12-24 Gig). The
backup took more than 24 hours. That is better than 10 to 1. :) So, yes,
given the correct data, 2 to 1 can be done, but not in the real world.
IMO.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Gordon Henderson
Sen
last
time I checked, I got about 1.1 to 1. What a marketing scam!!!!
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Gordon Henderson
Sent: Tuesday, February 22, 2005 10:41 AM
To: Louis-David Mitterrand
Cc: linux-raid@vger.kernel.org
Subject: Re: [OT]
. mdadm does not use the
names from /proc/partitions but only the major and minor device
numbers. It scans /dev to find the name that matches the num-
bers.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Torsten E.
Sent: Thursday
in grown table.
Does anyone know how many defects are considered too many?
Guy
Maybe I am confused, but if you use the whole disk, I would expect the whole
disk could be overwritten! What am I missing?
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andrew Morton
Sent: Monday, February 14, 2005 1:17 PM
To: linux-raid
RAID6 array that they believe is stable and safe?
And please give some details about the array. Number of disks, sizes, LVM,
FS, SCSI, ATA and anything else you can think of? Also, details about any
disk failures and how well recovery went?
Thanks,
Guy
-Original Message-
From: [EMAIL
For future reference:
Everyone should do a nightly disk test to prevent bad blocks from hiding
undetected. smartd, badblocks or dd can be used. Example:
dd if=/dev/sda of=/dev/null bs=64k
Just create a nice little script that emails you the output. Put this
script in a nightly cron to run while
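Something like this would do (a rough sketch; the device name, script path
and mail command are my guesses):
#!/bin/sh
# read every block; dd prints its summary (and any read errors) on stderr
dd if=/dev/sda of=/dev/null bs=64k 2>&1 | mail -s "nightly read test of /dev/sda" root
And an /etc/crontab line such as:
0 3 * * * root /usr/local/bin/disktest.sh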
n Thu, Jan 27, 2005 at 12:16:31PM -0500, Guy wrote:
> > It rotates the pairs!
> > Assume 3 disks, A, B and C.
> > Each stripe would be on these disks:
> > A+B
> > C+A
> > B+C
> > A+B
> > C+A
> > B+C
> > ...
>
> Hmm, difficult to visu
, January 27, 2005 11:19 AM
To: 'linux-raid'
Subject: RAID-10 with odd number of disks (was Re: Software RAID 0+1 with
mdadm.)
On Thu, Jan 27, 2005 at 10:50:43AM -0500, Guy wrote:
> RAID10 will work with an odd number of disks! It really is cool!
It will? How? Does it just make the last
ut of the RAID1 arrays.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Holger Kiehl
Sent: Thursday, January 27, 2005 3:13 AM
To: Neil Brown
Cc: linux-raid
Subject: Re: Software RAID 0+1 with mdadm.
>>
>> I have since upgraded to mdad
Why would you fsck the failed member of a RAID5?
You said "format", please elaborate!
You should verify the disk is readable.
It looks like your disk is bad. But a read test would be reasonable.
Try this:
dd if=/dev/hda of=/dev/null bs=64k
It should complete without errors. It will do a full
-syncing 2 disks of data.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Brad Dameron
Sent: Wednesday, January 26, 2005 3:33 PM
To: linux-raid@vger.kernel.org
Subject: RE: Software RAID 0+1 with mdadm.
On Tue, 2005-01-25 at 15:04, Guy wrote:
> For a more sta
2.75TB into virtual disks that are all smaller than
2TB.
Just some ideas.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Carlos Knowlton
Sent: Wednesday, January 26, 2005 12:46 PM
To: linux-raid@vger.kernel.org
Subject: booting from a HW RAID volume
/dev/md1
/dev/md2
You can put a file system directly on /dev/md0
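For example (just a sketch; ext3 and the mount point are my guesses):
mkfs.ext3 /dev/md0
mount /dev/md0 /data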
Are all of the disks on the same cable?
Not sure about your booting issue.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Brad Dameron
Sent: Tuesday, January 25, 2005 5:28 PM
To
At least:
Different SCSI or IDE bus.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of David Dougall
Sent: Thursday, January 20, 2005 2:18 PM
To: Kanoa Withington
Cc: Mario Holbe; linux-raid@vger.kernel.org
Subject: Re: No response?
By "diff
Are you sure it is software RAID? Maybe it is hardware RAID?
Send the output of these commands:
cat /proc/mdstat
df
mdadm -D /dev/md?
If using LVM:
vgdisplay -v
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of David Dougall
Sent: Thursday, January 20, 2005 1:57
disk directly, unless you are
recovering from an abnormal failure. Doing so could cause your array to be
out of sync, and md would not know it has occurred. A normal failure would
allow you to have normal access to your data, just using 1 less disk,
without user intervention.
I hope this helps!
You should download "SeaTools Enterprise".
If this tool fails the drive, I think it is safe to return it.
The tool uses the sg devices. I am not sure, but I think these are for SCSI
devices.
Guy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of D
of md.
My /etc/sysctl.conf has a date of Dec 12, 2003. So, whatever kernel I had
over 1 year ago had a default of 10,000, or so.
Anyway, it has helped some people in the past. :)
I guess it depends on the kernel/md version.
I guess a default of no limit would be nice. But no support for that,
If your rebuild seems too slow, make sure you increase the speed limit!
Details in "man md".
echo 10 > /proc/sys/dev/raid/speed_limit_max
I added this to /etc/sysctl.conf
# RAID rebuild min/max speed K/Sec per device
dev.raid.speed_limit_min = 1000
dev.raid.speed_limit_max