I tried to add a 6th disk to a RAID-5 with raidreconf 0.1.2
When it was almost done, raidreconf aborted with the error message:
raid5_map_global_to_local: disk 0 block out of range: 2442004 (2442004)
gblock = 7326012
aborted
After searching the web I believe this is due to different disk sizes. Because
On Wed, 2005-03-09 at 17:43 +0100, Luca Berra wrote:
> On Wed, Mar 09, 2005 at 11:28:48AM +0100, Jimmy Hedman wrote:
> >Is there any way I can make this work? Could it be doable with mdadm in
> >an initrd?
> >
> mdassemble was devised for this purpose.
>
> create an /etc/mdadm.conf with
> echo "DEV
[EMAIL PROTECTED] wrote:
After searching the web I believe this is due to different disk sizes. Because I
use different disks (vendor and type) with different geometries, it is not
possible to have partitions of exactly the same size. They match as well as
possible but some always have different amo
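For reference, a quick way to compare the exact sizes the kernel sees
(device names are placeholders):

   cat /proc/partitions              # sizes in 1 KiB blocks
   blockdev --getsize /dev/hda1      # size in 512-byte sectors

md sizes the array to the smallest member, so a small mismatch only costs
the difference in space.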
Neil Brown wrote:
Growing a raid5 or raid6 by adding another drive is conceptually
possible to do while the array is online, but I have no definite
plans to do this (I would like to). Growing a raid5 into a raid6
would also be useful.
These require moving lots of data around, and need to be able
Hi,
I have 6 WD800Jb disk drives. I used 4 of them in a RAID5 (using the
whole disk - no partitions) array.
I have mixed them all up, and now want to get some data off the array.
How best to find out which drives were in the array?
Here are the partition tables (obtained using fdisk on OS X):
WCA
On Thu, 2005-03-10 at 22:17 +0800, Max Waterman wrote:
> Hi,
>
> I have 6 WD800Jb disk drives. I used 4 of them in a RAID5 (using the
> whole disk - no partitions) array.
>
> I have mixed them all up, and now want to get some data off the array.
>
> How best to find out which drives were in the
Max Waterman wrote:
I have 6 WD800Jb disk drives. I used 4 of them in a RAID5 (using the
whole disk - no partitions) array.
I have mixed them all up, and now want to get some data off the array.
How best to find out which drives were in the array?
Put them in a Linux box and run "mdadm -E " on e
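For example, something like this (a sketch; the device names are
placeholders for however the disks show up):

   for d in /dev/sd[a-f]
   do
       echo "== $d =="
       mdadm -E $d | grep -E 'UUID|Raid Level|this'
   done

Disks reporting the same UUID were members of the same array, and the
"this" line of the superblock dump gives each disk's slot within it.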
Hi,
I have many problems with RAID in kernel 2.6.10.
First of all, I have md, raid1, ... compiled into the kernel, superblocks on
the RAIDs and "Linux RAID autodetect" as the partition types. Moreover,
I made an initrd. However, when the kernel boots, it doesn't recognize
the RAID disks:
md: raid1
Hmm.. for me:
> smartctl -A -d ata /dev/sda
On my work machine with Debian Sarge:
smartctl version 5.32 Copyright (C) 2002-4 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
Smartctl: Device Read Identity
Hmmm, yea.. I'm hoping I get a better one next time. I'll bore you to
tears, I mean, let you know when it comes in :D
Derek
PS: Make sure the 'saveauto' is set to on for SMART data to be saved
automatically through power-cycles
i.e.
> smartctl --saveauto=on /dev/hda
you might want to do this t
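e.g. to hit the whole set in one go (sketch; the device list is a
placeholder):

   for d in /dev/hd[a-f]; do smartctl --saveauto=on $d; done

and "smartctl -s on $d" first if SMART itself isn't enabled yet.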
On Thu, Mar 10, 2005 at 11:03:44AM +0100, Jimmy Hedman wrote:
> On Wed, 2005-03-09 at 17:43 +0100, Luca Berra wrote:
> > On Wed, Mar 09, 2005 at 11:28:48AM +0100, Jimmy Hedman wrote:
> > >Is there any way I can make this work? Could it be doable with mdadm in
> > >an initrd?
> > >
> > mdassemble was devised for this purp
Was planning on adding a hot spare to my 3 disk raid5 array and was
thinking if I go to 4 drives I would be better off as 2 raid1 arrays
considering the current state of raid5.
If you think that is wrong please speak up now :)
Thinking I would make a raid1 array for /.
The rest of the firs
John McMonagle wrote:
Just wondering what happens to the md sequence when I remove the original
raid arrays?
When I'm done will I have md0,md1 and md2 or md2,md3 and md4?
They will have the name you entered when you created the array.
After removing one array from the system, all arrays will still ha
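e.g. the name is fixed when you create it (sketch; devices are
placeholders):

   mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

and /dev/md2 stays /dev/md2 no matter what other arrays come or go.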
John McMonagle wrote:
Was planning on adding a hot spare to my 3 disk raid5 array and was
thinking if I go to 4 drives I would be better off as 2 raid1 arrays
considering the current state of raid5.
I just wonder about the comment "considering the current state of raid5". What might be wrong
The only problem I have is related to bad blocks. This problem is common to
all RAID types, but RAID5 is more likely to hit it: a rebuild has to read
every block of every remaining disk, so a latent bad block on any of them
surfaces during reconstruction.
Guy
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Brad Campbell
Sent: Thursday, March 10, 2005 6:04 PM
To: John McMonagl
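One way to surface latent bad blocks before a rebuild does (sketch; device
names are placeholders):

   for d in /dev/sd[a-d]; do dd if=$d of=/dev/null bs=1M; done

Reading every member end to end touches every sector, so a failing disk is
spotted while the array is still redundant.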
Brad Campbell wrote:
John McMonagle wrote:
I just wonder about the comment "considering the current state of
raid5". What might be wrong with raid5 currently?
Perhaps he's referring to the possibility of undetectable data
corruption that can occur with software raid5? Granted, there's a very
s
Brad
Not saying it's broken.
Part of my reasoning to go to raid5 was that I could expand.
While it can be done, I don't really see it as practical.
Also, it's looking like I probably will not need to expand.
raid5 with 3 drives and 1 spare
or two 2-drive raid1 arrays have the same space.
Which is less
You asked:
"raid5 with 3 drives and 1 spare
or two 2-drive raid1 arrays have the same space.
Which is less likely to have a failure cause data loss?"
Assume 4 drives.
With RAID5 using 3 drives and 1 spare...
==
If a disk is kicked out be
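A rough way to work the comparison (my sketch, counting only the window
before a rebuild finishes): with RAID5 3+1, after one disk fails the rebuild
must read both survivors end to end, so 2 of the 3 remaining disks are
critical until it completes. With two RAID1 pairs, only the failed disk's
partner is critical: 1 of the 3 remaining, and the resync reads just that
one disk. Same usable space either way (2 drives' worth), but the RAID1
layout exposes half as many disks during recovery.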
On Thursday March 10, [EMAIL PROTECTED] wrote:
> Hi,
>
> I have many problems with RAID in kernel 2.6.10.
..
> And dmesg says:
>
> md: raidstart(pid 2944) used deprecated START_ARRAY ioctl. This will not
> be supported beyond 2.6   <-- !!!
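The usual replacement (sketch; device names are placeholders) is to let
mdadm assemble the array instead:

   mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
   # or, with an /etc/mdadm.conf in place:
   mdadm --assemble --scan

run from the initrd in place of the deprecated START_ARRAY path.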