RE: How many drives are bad?

2008-02-19 Thread Guy Watkins
I do realise that 2 controller failures at the same time would lose everything. Wow. Sounds like what I said a few months ago. I think I also recommended RAID6. Guy - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to [EMAIL PROT

RE: How many drives are bad?

2008-02-19 Thread Guy Watkins
ray down? I do realise that 2 controller failures at the same time would lose everything. Wow. Sounds like what I said a few months ago. I think I also recommended RAID6. Guy } } Steve. } } No virus found in this outgoing message. } Checked by AVG Free Edition. } Version: 7.5.516 / Virus Data

RE: Raid over 48 disks

2007-12-18 Thread Guy Watkins
or redundancy. Or six 8-disk RAID6 arrays using 1 disk from each controller). That way any 2 controllers can fail and your system will still be running. 12 disks will be used for redundancy. Might be too excessive! Combine them into a RAID0 array. Guy
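The scheme in this message (six 8-disk RAID6 sets, one disk per controller, striped together) could be sketched with mdadm roughly as below; every device name is hypothetical, and a real build must pick one disk from each of the 6 controllers for each RAID6 set:

```shell
# Hypothetical devices: 48 disks across 6 controllers. Each RAID6 set
# survives any two member failures, so any two controllers can die.
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[a-h]1
# ... create /dev/md1 .. /dev/md5 the same way from the remaining disks ...

# Stripe the six RAID6 arrays into one big RAID0 volume:
mdadm --create /dev/md6 --level=0 --raid-devices=6 /dev/md[0-5]
```

Each RAID6 set gives up 2 of its 8 disks to parity, which is where the 12 redundancy disks mentioned in the message come from.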

RE: Few questions

2007-12-07 Thread Guy Watkins
man md man mdadm I use RAID6. Happy with it so far, but haven't had a disk failure yet. RAID5 sucks because if you have 1 failed disk and 1 bad block on any other disk, you are hosed. Hope that helps. } -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED]

RE: very degraded RAID5, or increasing capacity by adding discs

2007-10-08 Thread Guy Watkins
} -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of Janek Kozicki } Sent: Monday, October 08, 2007 6:47 PM } To: linux-raid@vger.kernel.org } Subject: Re: very degraded RAID5, or increasing capacity by adding discs } } Janek Kozicki said:

RE: very degraded RAID5, or increasing capacity by adding discs

2007-10-08 Thread Guy Watkins
} -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of Richard Scobie } Sent: Monday, October 08, 2007 3:27 PM } To: linux-raid@vger.kernel.org } Subject: Re: very degraded RAID5, or increasing capacity by adding discs } } Janek Kozicki wrote: }

RE: RAID6 clean?

2007-08-17 Thread Guy Watkins
} -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of Neil Brown } Sent: Monday, June 04, 2007 2:59 AM } To: Guy Watkins } Cc: 'linux-raid' } Subject: Re: RAID6 clean? } } On Monday June 4, [EMAIL PROTECTED] wrote: } > I have a RA

RE: mdadm create to existing raid5

2007-07-12 Thread Guy Watkins
ck the 1 disk. You must be able to determine which disk was written to. I don't know how to do that unless you have the output from "mdadm -D" during the create/syncing. But please don't proceed until someone else confirms what I say or gives better advice! Guy

RE: [dm-devel] Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-07-12 Thread Guy Watkins
ig EMC array we had had enough battery power to power about 400 disks while the 16 Gig of cache was flushed. I think EMC told me the batteries would last about 20 minutes. I don't recall if the array was usable during the 20 minutes. We never tested a power failure. Guy

RAID6 clean?

2007-06-03 Thread Guy Watkins
It would be nice if there was an array option to allow an "un-clean" array to be started. An option that would be set in the md superblock. Thanks, Guy

RE: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-06-02 Thread Guy Watkins
reaches the barrier, corruption should be assumed. It seems to me each block device that represents more than 2 other devices must do a flush at a barrier so that all devices will cross the barrier at the same time. Guy

RE: raid10 on centos 5

2007-05-04 Thread Guy Watkins
} -Original Message- } From: Ruslan Sivak [mailto:[EMAIL PROTECTED] } Sent: Friday, May 04, 2007 7:22 PM } To: Guy Watkins } Cc: linux-raid@vger.kernel.org } Subject: Re: raid10 on centos 5 } } Guy Watkins wrote: } > } -Original Message- } > } From: [EMAIL PROTECTED] [mailto

RE: RAID6 question

2007-05-04 Thread Guy Watkins
} -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of Guy Watkins } Sent: Saturday, April 28, 2007 8:52 PM } To: linux-raid@vger.kernel.org } Subject: RAID6 question } } I read in processor.com that Adaptec has a RAID 6/60 that is patented

RE: raid10 on centos 5

2007-05-04 Thread Guy Watkins
s. With RAID1+RAID0, any one disk can fail, a second failure has a 1 in 3 chance of vast data loss. I hope this helps, Guy
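The 1-in-3 figure can be checked with a little arithmetic, assuming the 4-disk case (2 mirrored pairs): after one disk dies, data is lost only if the next failure hits that disk's mirror partner, which is 1 of the 3 survivors.

```python
from fractions import Fraction

def second_failure_loss_odds(pairs: int) -> Fraction:
    """Odds that a second random disk failure in a RAID1+0 set of
    `pairs` mirrored pairs destroys data: the second failure must hit
    the dead disk's mirror partner among the surviving disks."""
    survivors = 2 * pairs - 1
    return Fraction(1, survivors)

print(second_failure_loss_odds(2))  # 1/3 with 4 disks
print(second_failure_loss_odds(3))  # 1/5 with 6 disks
```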

RAID6 question

2007-04-28 Thread Guy Watkins
I read in processor.com that Adaptec has a RAID 6/60 that is patented. Does Linux RAID6 have a conflict? Thanks, Guy Adaptec also has announced a new family of Unified Serial (meaning 3Gbps SAS/SATA) RAID controllers for PCI Express. Five models include cards with four, eight, 12, and 16

RE: mkinitrd and RAID6 on FC5

2007-04-23 Thread Guy Watkins
} -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of H. Peter Anvin } Sent: Monday, April 23, 2007 1:49 PM } To: Guy Watkins } Cc: linux-raid@vger.kernel.org } Subject: Re: mkinitrd and RAID6 on FC5 } } Guy Watkins wrote: } > Is this a RED

RE: ATA cables and drives

2006-09-16 Thread Guy
GB / 16 MB /?/ SATA And 2 of these: Seagate "Barracuda 7200.9" 300 GB / 16 MB /?/ SATA No problems with above, very quiet and cool. And many (17+) 18 Gig 10,000 RPM SCSI disks. 1 or 2 have failed in the last 2 years. But they are way out of warranty. No problems, very

RE: proactive-raid-disk-replacement

2006-09-16 Thread Guy
est to read from the disk being replaced. You could even migrate many disks at the same time. Your data would remain redundant throughout the process. Guy } } -- } bill davidsen <[EMAIL PROTECTED]> } CTO TMR Associates, Inc } Doing interesting things with small computers since 1979 }

RE: 2 Hard Drives & RAID

2006-09-09 Thread Guy
} -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of berk walker } Sent: Saturday, September 09, 2006 4:09 PM } To: Justin Piszcz } Cc: Sandra L. McGrew; linux-raid@vger.kernel.org } Subject: Re: 2 Hard Drives & RAID } } Justin Piszcz wrote:

RE: Can you IMAGE Mirrored OS Drives?

2006-08-15 Thread Guy
ace), /dev/sdb would become the new /dev/sda anyway. And so fstab } would be correct in pointing to /dev/sda. You should mirror swap space! Otherwise you will have an outage if a disk fails. Most places would not accept an outage for a simple disk failure. IMO. Guy } } Sound opinions welcome. }

I need a PCI V2.1 4 port SATA card

2006-06-27 Thread Guy
hat I want. Btw, I plan to buy 3 or 4 Seagate ST3320620AS disks. Barracuda 7200.10 SATA 320G. Thanks, Guy

RE: [PATCH 003 of 5] md: Change ENOTSUPP to EOPNOTSUPP

2006-04-29 Thread Guy
ing blkdev_issue_flush were } appropriate. } } Whether filesystems actually do this, I am less certain. What if a disk is hot added while the filesystem is mounted. And the new disk does not support barriers but the old disks do? Or you have a mix? If the new disk can't be handled co

RE: mdadm + raid1 of 2 disks and now need to add more

2006-04-11 Thread Guy
} -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of Ming Zhang } Sent: Tuesday, April 11, 2006 6:13 PM } To: Andy Smith } Cc: linux-raid@vger.kernel.org } Subject: Re: mdadm + raid1 of 2 disks and now need to add more } } On Tue, 2006-04-11

RE: addendum: was Re: recovering data on a failed raid-0 installation

2006-03-29 Thread Guy
v/md0" or "mdadm -E /dev/hda2". Or the output from "cat /proc/mdstat", from before you re-created the array. Guy } -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of Technomage } Sent: Wednesday, March 29, 2006 11:15

RE: recovering data on a failed raid-0 installation

2006-03-28 Thread Guy
this: dd if=/dev/hdb2 of=/dev/null bs=64k or dd if=/dev/hdb of=/dev/null bs=64k Guy } -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of Technomage } Sent: Wednesday, March 29, 2006 12:09 AM } To: linux-raid@vger.kernel.org } Subject

RE: raid5 performance question

2006-03-06 Thread Guy
Does test 1 have 4 processes? Does test 2 have 1 process? The number of testing processes should be the same in both tests. } -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of Raz Ben-Jehuda(caro) } Sent: Monday, March 06, 2006 6:46 AM } To:

RE: NVRAM support

2006-02-13 Thread Guy
Not the same amount! Match the size of the NV RAM disk with RAM at a fraction of the cost. With the money saved, buy a computer for the kids. :) } -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of Andy Smith } Sent: Monday, February 13, 200

RE: ludicrous speed: raid6 reconstruction

2006-02-03 Thread Guy
Don't forget, that speed is per disk! :) In about 10 years we will laugh at how slow this is. } -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of Mike Hardy } Sent: Friday, February 03, 2006 4:56 PM } To: linux-raid@vger.kernel.org } Subjec

RE: RAID 16?

2006-02-02 Thread Guy
that is overkill. Guy } -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of Mario 'BitKoenig' Holbe } Sent: Thursday, February 02, 2006 1:42 PM } To: linux-raid@vger.kernel.org } Subject: Re: RAID 16? } } Matthias Urlichs <[EM

RE: cpu consumption

2006-01-23 Thread Guy
s the CPU load? The above does not reflect CPU usage. I believe it reports the average outstanding IOs. With 6 arrays syncing, I would have expected 6 or 12. Run "top", or "sar -u 10 10" to see CPU usage. Guy } } Does anybody have an idea what is the problem? Thank a lot in

RE: Save to use spindown?

2006-01-21 Thread Guy
en a light bulb fail while on, only during on/off cycles. I have seen disks go bad while in use. But more often I see disks that were fine, until the power is cycled. Guy } -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of Mark Hahn } Sent:

RE: array size

2005-12-13 Thread Guy
The last disk is not excluded, since the excluded space is used for parity (xor). An equal part of all disks is excluded (the parity data). However, the size is the same as 1 disk being excluded. :) > -Original Message- > From: [EMAIL PROTECTED] [mailto:linux-raid- > [EMAIL PROTECTED] O
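The point above — parity spread equally over all members, yet costing exactly one disk's worth of space — can be sketched as a toy calculation (disk count and size are made up):

```python
def raid5_layout(n_disks: int, disk_gb: float):
    """Return (parity share per disk, usable capacity) for a RAID5 array.
    Every disk gives up an equal slice to parity; the slices add up to
    one whole disk, so usable space is (n_disks - 1) * disk_gb."""
    parity_per_disk = disk_gb / n_disks
    usable = n_disks * disk_gb - n_disks * parity_per_disk
    return parity_per_disk, usable

share, usable = raid5_layout(5, 300)
print(share, usable)  # 60.0 GB of parity on each disk, 1200.0 GB usable
```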

RE: comparing FreeBSD to linux

2005-11-21 Thread Guy
> -Original Message- > From: [EMAIL PROTECTED] [mailto:linux-raid- > [EMAIL PROTECTED] On Behalf Of Raz Ben-Jehuda(caro) > Sent: Monday, November 21, 2005 9:47 AM > To: Guy > Cc: Linux RAID Mailing List > Subject: Re: comparing FreeBSD to linux > > fetching from

RE: comparing FreeBSD to linux

2005-11-20 Thread Guy
faster under Linux? :) That would explain the 9.3 times increase in CPU load. It is important that you are comparing the CPU load at the same disks rate, or at least factor in the disk rate. Guy > > I need to switch to linux from freebsd. I am using in linux 2.6.6 kernel . > is problem

RE: raid5 reliability (was raid5 write performance)

2005-11-19 Thread Guy
> -Original Message- > From: [EMAIL PROTECTED] [mailto:linux-raid- > [EMAIL PROTECTED] On Behalf Of Carlos Carvalho > Sent: Saturday, November 19, 2005 2:30 PM > To: linux-raid@vger.kernel.org > Subject: raid5 reliability (was raid5 write performance) > > Guy ([EM

RE: raid5 write performance

2005-11-18 Thread Guy
> -Original Message- > From: Mike Hardy [mailto:[EMAIL PROTECTED] > Sent: Friday, November 18, 2005 11:57 PM > To: Guy > Cc: 'Dan Stromberg'; 'Jure Pečar'; linux-raid@vger.kernel.org > Subject: Re: raid5 write performance > > > > Gu

RE: raid5 write performance

2005-11-18 Thread Guy
t think the file system can tell when a write is truly complete. I don't recall ever having a Linux system crash, so I am not worried. But power failures cause the same risk, or maybe more. I have seen power failures, even with a UPS! Guy > > Dan Stromberg wrote: > &

RE: Where is the performance bottleneck?

2005-08-30 Thread Guy
In most of your results, your CPU usage is very high. Once you get to about 90% usage, you really can't do much else, unless you can improve the CPU usage. Guy > -Original Message- > From: [EMAIL PROTECTED] [mailto:linux-raid- > [EMAIL PROTECTED] On Behalf Of Holge

RE: number of global spares?

2005-08-26 Thread Guy
ot data backup! It is hardware redundancy!! Data loss or corruption can still occur with a RAID solution. RAID won't help if someone fat fingers a "rm" command. Corruption of the filesystem can also cause major data loss, without a failed disk. If the data was lost, what would i

RE: RAID10 vs. LVM on RAID1

2005-08-22 Thread Guy
RAID10 will work with an odd number of disks. > -Original Message- > From: [EMAIL PROTECTED] [mailto:linux-raid- > [EMAIL PROTECTED] On Behalf Of Gregory Seidman > Sent: Monday, August 22, 2005 5:31 PM > To: Linux RAID list > Subject: RAID10 vs. LVM on RAID1 > > Is there any advantage to

RE: mdadm memory leak?

2005-07-04 Thread Guy
6k av, 508128k used, 7168k free, 0k shrd, 128412k buff I think buff (128412k) is the amount of "unused" memory. But not sure. I never had a memory issue with Linux, so have not researched this. But I have on other Unixes. Guy > -Original Message- > From: [EMAIL PR

RE: Questions about software RAID

2005-04-20 Thread Guy
> From: Martin K. Petersen [mailto:[EMAIL PROTECTED] > Sent: Wednesday, April 20, 2005 11:49 AM > To: Guy > Cc: 'Frank Wittig'; [EMAIL PROTECTED]; linux-raid@vger.kernel.org > Subject: Re: Questions about software RAID > > >>>>> "Guy"

RE: Questions about software RAID

2005-04-19 Thread Guy
> Hervé Eychenne wrote: > > >Maybe you are an experienced guy so it seems so simple to you... but > >I'm always amused when an experienced guy refuses to make things > >simpler for those who aren't as much as he is. And sends them to > >Microsoft. Great. >

EVMS or md?

2005-04-04 Thread Guy
erformance data comparing the 2? One bad point for EVMS, no RAID6. :( One good point for EVMS, bad Block Relocation (but only on writes). Not sure how EVMS handles read errors. I am getting on the mailing list(s). I must know more about this!!! Guy > -Original Message- > F

RE: Adaptec 3210S Problems

2005-04-03 Thread Guy
or. It is common for the SCSI card to supply term power. Then all of your disks would be configured the same. Guy > > > On Sat, 2 Apr 2005, Guy wrote: > > > > > > >> -Original Message- > >> From: [EMAIL PROTECTED] [mailto:linux-raid- > >&

RE: Adaptec 3210S Problems

2005-04-02 Thread Guy
it shouldn't have to be that way - and that's one > reason I did not jump on the termination right away. Evidently I have > cheap caddies . You just said the cable has a terminator block after the last drive!! What is that? It sounds like the terminato

RE: Adaptec 3210S Problems

2005-04-02 Thread Guy
es they keep > > getting flagged as missing. If I do a "Read system config" the drives > > show up and the flag goes away. A few minutes later they are flagged as > > > > missing again. This is a little discouraging as it appears I have a > > problem. Any

RE: syncing RAID1 with more than 10MB/sec

2005-03-30 Thread Guy
nt values type these 2 lines: cat /proc/sys/dev/raid/speed_limit_min cat /proc/sys/dev/raid/speed_limit_max To temporarily change the defaults use these 2 commands: echo 1000 > /proc/sys/dev/raid/speed_limit_min echo 10 > /proc/sys/dev/raid/speed_limit_max Adjust above as required.

RE: Raid1 problem can't add remove or mark faulty -- it did work

2005-03-27 Thread Guy
I am not sure what bd_claim is, but it is somewhat like open(). My guess is your disk is in use, maybe mounted. Run this command and send the output. df Guy > -Original Message- > From: [EMAIL PROTECTED] [mailto:linux-raid- > [EMAIL PROTECTED] On Behalf Of rrk > Sent: S

RE: [PATCH 1/2] md bitmap bug fixes

2005-03-21 Thread Guy
someone fixes the problem and you want to re-sync. Both "A" and "B" have done disk I/O that the other does not know about. Both bitmaps must be used to re-sync, or a 100% re-sync must be done. I think what I have outlined above is quite reasonable. Guy -Original Message--

RE: [PATCH 1/2] md bitmap bug fixes

2005-03-19 Thread Guy
Oh! I never read it like you just said. I have been reading it like copy in both directions based on both bitmaps! What you said below, seems reasonable. Guy -Original Message- From: Lars Marowsky-Bree [mailto:[EMAIL PROTECTED] Sent: Saturday, March 19, 2005 12:54 PM To: Guy; '

RE: [PATCH 1/2] md bitmap bug fixes

2005-03-19 Thread Guy
ge a superset at the block level? AND the 2 blocks and update both? :) I don't think a filesystem would like that. It would be real bad to re-sync if the filesystem is mounted! In the case of a split brain, I think one must be 100% voided, and a full re-sync must be done. Guy -O

RE: [PERFORM] Postgres on RAID5

2005-03-14 Thread Guy
You said: "If your write size is smaller than chunk_size*N (N = number of data blocks in a stripe), in order to calculate correct parity you have to read data from the remaining drives." Neil explained it in this message: http://marc.theaimsgroup.com/?l=linux-raid&m=1086821907
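The read-modify-write shortcut being discussed can be sketched in a few lines: for a small write you read only the old data block and the old parity, because new_parity = old_parity XOR old_data XOR new_data. A toy example with made-up 4-byte blocks on a 3-data-disk stripe:

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical data blocks and their parity on one stripe.
d0, d1, d2 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"
parity = xor(xor(d0, d1), d2)

# Small write replacing only d1: read just d1 and parity, not d0/d2.
new_d1 = b"\xff\xff\xff\xff"
new_parity = xor(xor(parity, d1), new_d1)

# Recomputing parity from all data disks gives the same answer:
assert new_parity == xor(xor(d0, new_d1), d2)
print("parity update verified")
```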

RE: 'Segmentation fault' after running 'mdadm --examine --brief --scan --config=partitions'

2005-03-13 Thread Guy
the config file. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Erik Wasser Sent: Sunday, March 13, 2005 7:22 AM To: linux-raid@vger.kernel.org Subject: 'Segmentation fault' after running 'mdadm --examine --brief --scan --config=partitions&

RE: sw_raid5-failed_disc(s)-2

2005-03-12 Thread Guy
I guess I need to know more history. Before your problems, was hdi1 a spare? Describe the array before you had problems. Then what went wrong. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Ronny Plattner Sent: Saturday, March 12, 2005 1:54 PM To

RE: sw_raid5-failed_disc(s)-2

2005-03-12 Thread Guy
It seems like a trick question! :) You don't use "missing" on assemble, it is a keyword for the create command. For assemble, just don't list that device. mdadm --assemble --run --force /dev/md2 /dev/hdi1 /dev/hdk1 /dev/hdo1 mdadm will know which disk is which. Guy -

RE: Convert raid5 to raid1?

2005-03-10 Thread Guy
my opinion. My opinions are the best! :) Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of John McMonagle Sent: Thursday, March 10, 2005 7:02 PM To: Brad Campbell Cc: linux-raid@vger.kernel.org Subject: Re: Convert raid5 to raid1? Brad Not saying

RE: Convert raid5 to raid1?

2005-03-10 Thread Guy
The only problem I have is related to bad blocks. This problem is common to all RAID types. RAID5 is more likely to have problems. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Brad Campbell Sent: Thursday, March 10, 2005 6:04 PM To: John

RE: Spare disk could not sleep / standby

2005-03-07 Thread Guy
I have no idea, but... Is the disk IO reads or writes? If writes, scary. Maybe data destined for the array goes to the spare sometimes. I hope not. I feel safe with my 2.4 kernel. :) Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Peter

RE: Joys of spare disks!

2005-03-02 Thread Guy
0 bad block example would take almost 17 hours. I think 1000 bad blocks at one time is an indication you have a head failure. In that case, the disk is bad. Does anyone know how many spare blocks are on a disk? My worst disk has 28 relocated bad blocks. Guy -Original Message- From:

RE: Joys of spare disks!

2005-03-01 Thread Guy
That has not been my experience, but I have Seagate drives! Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Brad Campbell Sent: Tuesday, March 01, 2005 11:57 PM To: Robin Bowes Cc: linux-raid@vger.kernel.org Subject: Re: Joys of spare disks! Robin

RE: Joys of spare disks!

2005-03-01 Thread Guy
I think the overhead related to fixing the bad blocks would be insignificant compared to the overhead of degraded mode. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Molle Bestefich Sent: Tuesday, March 01, 2005 10:51 PM To: linux-raid

RE: Using the md driver to look at a bad hardware RAID.

2005-03-01 Thread Guy
It is a 7 drive array. If you use 6 of 7 drives, md will not try to re-sync. But I have no idea how to re-use the previous RAID data. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Omri Schwarz Sent: Tuesday, March 01, 2005 6:09 PM To: linux

RE: No swap can be dangerous (was Re: swap on RAID (was Re: swp - Re: ext3 journal on software raid))

2005-03-01 Thread Guy
ory is needed, the Kernel should be able to relocate as needed. Maybe no code exists to do that, but I think it would be easier to do than to swap to disk (assuming you have enough free memory). Guy -Original Message- From: Andrew Walrond [mailto:[EMAIL PROTECTED] Sent: Friday, January 0

RE: BUG (Deadlock) in 2.6.10

2005-02-27 Thread Guy
must re-sync 3 disks. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Christian Schmid Sent: Sunday, February 27, 2005 10:34 AM To: [EMAIL PROTECTED]; [EMAIL PROTECTED]; linux-raid@vger.kernel.org Subject: BUG (Deadlock) in 2.6.10 Hello. Just for y

RE: RAID1 robust read and read/write correct and EVMS-BBR

2005-02-23 Thread Guy
This is very good! But most of my disk space is RAID5. Any chance you have similar plans for RAID5? Thanks, Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Nagpure, Dinesh Sent: Wednesday, February 23, 2005 2:56 PM To: '[EMAIL PROTECTED

RE: [OT] best tape backup system?

2005-02-23 Thread Guy
drives, SCSI cards and terminators. But those disks still work today in U2W (LVD-80) mode. My last attempt to mix SCSI disks and tapes was over 1 year ago, using RH9. On previous attempts I would have used RH7. I don't recall ever using RH8. Guy -Original Message- From: [EMAIL PROT

RE: [OT] best tape backup system?

2005-02-22 Thread Guy
buses, 17 disk drives and 2 tape drives. Works just fine. But no tape on the same SCSI bus as disks. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Michael Tokarev Sent: Tuesday, February 22, 2005 4:53 PM To: linux-raid@vger.kernel.org Subject: Re

RE: [OT] best tape backup system?

2005-02-22 Thread Guy
3 tape (12-24 Gig). The backup took more than 24 hours. That is better than 10 to 1. :) So, yes, given the correct data, 2 to 1 can be done, but not in the real world. IMO. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Gordon Henderson Sen

RE: [OT] best tape backup system?

2005-02-22 Thread Guy
last time I checked, I got about 1.1 to 1. What a marketing scam!!!! Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Gordon Henderson Sent: Tuesday, February 22, 2005 10:41 AM To: Louis-David Mitterrand Cc: linux-raid@vger.kernel.org Subject: Re: [OT]

RE: Question regarding mdadm.conf

2005-02-17 Thread Guy
. mdadm does not use the names from /proc/partitions but only the major and minor device numbers. It scans /dev to find the name that matches the numbers. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Torsten E. Sent: Thursday

Bad blocks

2005-02-17 Thread Guy
in grown table. Does anyone know how many defects is considered too many? Guy

RE: [Bugme-new] [Bug 4211] New: md configuration destroys disk GPT label

2005-02-14 Thread Guy
Maybe I am confused, but if you use the whole disk, I would expect the whole disk could be over-written! What am I missing? Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Andrew Morton Sent: Monday, February 14, 2005 1:17 PM To: linux-raid

RE: [PATCH md 2 of 4] Fix raid6 problem

2005-02-03 Thread Guy
RAID6 array that they believe is stable and safe? And please give some details about the array. Number of disks, sizes, LVM, FS, SCSI, ATA and anything else you can think of? Also, details about any disk failures and how well recovery went? Thanks, Guy -Original Message- From: [EMAIL

RE: Broken harddisk

2005-01-29 Thread Guy
For future reference: Everyone should do a nightly disk test to prevent bad blocks from hiding undetected. smartd, badblocks or dd can be used. Example: dd if=/dev/sda of=/dev/null bs=64k Just create a nice little script that emails you the output. Put this script in a nightly cron to run while
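A minimal sketch of such a script, building on the dd example above. The device list, script path and mail recipient are all assumptions; the demo at the end runs the check against an ordinary file so the sketch is safe to execute anywhere:

```shell
# check_disk: full sequential read of a device with dd; prints a
# one-line status suitable for mailing from cron.
check_disk() {
    if dd if="$1" of=/dev/null bs=64k 2>/dev/null; then
        echo "$1: OK"
    else
        echo "$1: READ ERRORS - test or replace this disk"
    fi
}

# A nightly cron entry might mail the results (hypothetical paths):
#   0 3 * * * /root/disk-read-test.sh 2>&1 | mail -s "disk read test" root
# where the script body is:
#   for d in /dev/sda /dev/sdb; do check_disk "$d"; done

# Safe demo on a regular file instead of a disk:
echo "some data" > /tmp/fake-disk
check_disk /tmp/fake-disk
```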

RE: RAID-10 with odd number of disks (was Re: Software RAID 0+1 with mdadm.)

2005-01-27 Thread Guy
n Thu, Jan 27, 2005 at 12:16:31PM -0500, Guy wrote: > > It rotates the pairs! > > Assume 3 disks, A, B and C. > > Each stripe would be on these disks: > > A+B > > C+A > > B+C > > A+B > > C+A > > B+C > > ... > > Hmm, difficult to visu
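The rotation listed in this message (A+B, C+A, B+C, repeating) falls out of placing each chunk's copies on consecutive disks, wrapping around; a sketch of that placement rule, assuming 2 copies per chunk as in md's raid10 "near" layout:

```python
def near_layout_disks(chunk: int, ndisks: int, copies: int = 2):
    """Disks holding the copies of `chunk` when copies are written to
    consecutive disks with wrap-around. With an odd disk count the
    pairings rotate instead of forming fixed mirror pairs."""
    return [(chunk * copies + c) % ndisks for c in range(copies)]

names = "ABC"
for chunk in range(3):
    print("+".join(names[d] for d in near_layout_disks(chunk, 3)))
# A+B
# C+A
# B+C
```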

RE: RAID-10 with odd number of disks (was Re: Software RAID 0+1 with mdadm.)

2005-01-27 Thread Guy
, January 27, 2005 11:19 AM To: 'linux-raid' Subject: RAID-10 with odd number of disks (was Re: Software RAID 0+1 with mdadm.) On Thu, Jan 27, 2005 at 10:50:43AM -0500, Guy wrote: > RAID10 will work with an odd number of disks! It really is cool! It will? How? Does it just make the last

RE: Software RAID 0+1 with mdadm.

2005-01-27 Thread Guy
ut of the RAID1 arrays. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Holger Kiehl Sent: Thursday, January 27, 2005 3:13 AM To: Neil Brown Cc: linux-raid Subject: Re: Software RAID 0+1 with mdadm. >> >> I have since upgraded to mdad

RE: irq timeout: status=0xd0 { Busy }

2005-01-26 Thread Guy
Why would you fsck the failed member of a RAID5? You said "format", please elaborate! You should verify the disk is readable. It looks like your disk is bad. But a read test would be reasonable. Try this: dd if=/dev/hda of=/dev/null bs=64k It should complete without errors. It will do a full

RE: Software RAID 0+1 with mdadm.

2005-01-26 Thread Guy
-syncing 2 disks of data. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Brad Dameron Sent: Wednesday, January 26, 2005 3:33 PM To: linux-raid@vger.kernel.org Subject: RE: Software RAID 0+1 with mdadm. On Tue, 2005-01-25 at 15:04, Guy wrote: > For a more sta

RE: booting from a HW RAID volume

2005-01-26 Thread Guy
2.75TB into virtual disks that are all smaller than 2TB. Just some ideas. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Carlos Knowlton Sent: Wednesday, January 26, 2005 12:46 PM To: linux-raid@vger.kernel.org Subject: booting from a HW RAID volume

RE: Software RAID 0+1 with mdadm.

2005-01-25 Thread Guy
/dev/md1 /dev/md2 You can put a file system directly on /dev/md0 Are all of the disks on the same cable? Not sure about your booting issue. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Brad Dameron Sent: Tuesday, January 25, 2005 5:28 PM To

RE: No response?

2005-01-20 Thread Guy
At least: Different SCSI or IDE bus. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of David Dougall Sent: Thursday, January 20, 2005 2:18 PM To: Kanoa Withington Cc: Mario Holbe; linux-raid@vger.kernel.org Subject: Re: No response? By "diff

RE: No response?

2005-01-20 Thread Guy
Are you sure it is RAID? Maybe hardware RAID? Send the output of these commands: cat /proc/mdstat df mdadm -D /dev/md? If using LVM: vgdisplay -v Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of David Dougall Sent: Thursday, January 20, 2005 1:57

RE: Checking if RAID does work?

2005-01-19 Thread Guy
disk directly, unless you are recovering from an abnormal failure. Doing so could cause your array to be out of sync, and md would not know it has occurred. A normal failure would allow you to have normal access to your data, just using 1 less disk, without user intervention. I hope this helps!

RE: RAID5 drive failure, please verify my commands

2005-01-18 Thread Guy
You should download "SeaTools Enterprise". If this tool fails the drive, I think it is safe to return. The tool uses the sg devices. I am not sure, but I think these are for SCSI devices. Guy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of D

RE: 4 questions. Chieftec chassis case CA-01B, resync times, selecting ide driver module loading, raid5 :2 drives on same ide channel

2005-01-16 Thread Guy
of md. My /etc/sysctl.conf has a date of Dec 12, 2003. So, whatever kernel I had over 1 year ago had a default of 10,000, or so. Anyway, it has helped some people in the past. :) I guess it depends on the kernel/md version. I guess a default of no limit would be nice. But no support for that,

RE: 4 questions. Chieftec chassis case CA-01B, resync times, selecting ide driver module loading, raid5 :2 drives on same ide channel

2005-01-16 Thread Guy
If your rebuild seems too slow, make sure you increase the speed limit! Details in "man md". echo 10 > /proc/sys/dev/raid/speed_limit_max I added this to /etc/sysctl.conf # RAID rebuild min/max speed K/Sec per device dev.raid.speed_limit_min = 1000 dev.raid.speed_limit_max