Bill Davidsen said (on Fri, 02 Nov 2007 09:01:05 -0400):
> So I would expect this to make a very large performance difference, so
> even if it worked it would do so slowly.
I had been trying to work out the stripe layout for a few hours, using
hexedit and dd, and I'm baffled:
md1 : activ
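A minimal, read-only way to do that kind of dd probing (a sketch, not taken from the thread; the member partitions, scan range and marker string below are illustrative assumptions). With a file containing a unique string already written to the filesystem on /dev/md1, grepping each member shows which disk holds that chunk and at what offset:

  # scan the first 64 MiB of each raid5 member for the marker and print byte offsets
  for d in /dev/sda2 /dev/sdb2 /dev/sdc2; do
      echo "== $d =="
      dd if="$d" bs=1M count=64 2>/dev/null | grep -abo 'MY-UNIQUE-MARKER'
  done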
On Fri, 2007-11-02 at 15:15 -0400, Doug Ledford wrote:
> It was tested; obviously it simply had a bug that you hit. Assuming that
> your particular failure situation is the only possible outcome for all
> the other people who used it would be an invalid assumption. There are
> lots of code paths in a
On Fri, 2007-11-02 at 13:21 -0500, Alberto Alonso wrote:
> On Fri, 2007-11-02 at 11:45 -0400, Doug Ledford wrote:
>
> > The key word here being "supported". That means if you run across a
> > problem, we fix it. It doesn't mean there will never be any problems.
>
> On hardware specs I normally
On Fri, 2007-11-02 at 11:45 -0400, Doug Ledford wrote:
> The key word here being "supported". That means if you run across a
> problem, we fix it. It doesn't mean there will never be any problems.
On hardware specs I normally read "supported" as "tested within that
OS version to work within spe
On Fri, 2007-11-02 at 11:09 +0000, David Greaves wrote:
> David
> PS I can't really contribute to your list - I'm only using cheap desktop
> hardware.
> -
If you had failures and it handled them properly, then you can
contribute to the good combinations; so far that's the list
that is kind of e
H. Peter Anvin wrote:
Doug Ledford wrote:
device /dev/sda (hd0)
root (hd0,0)
install --stage2=/boot/grub/stage2 /boot/grub/stage1 (hd0) /boot/grub/e2fs_stage1_5 p /boot/grub/stage2 /boot/grub/menu.lst
device /dev/hdc (hd0)
root (hd0,0)
install --stage2=/boot/grub/stage2 /boot/grub/stage1 (hd0
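A sketch of the same dual install driven non-interactively through grub-legacy's batch mode (disk names are assumptions; the setup command performs the same embed/install steps the quoted install lines do by hand):

# install grub-legacy into the MBR of both RAID1 members, assuming /boot
# is the first partition on each disk
grub --batch <<'EOF'
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd0) /dev/hdc
root (hd0,0)
setup (hd0)
quit
EOF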
On Thu, 2007-11-01 at 11:57 -0700, H. Peter Anvin wrote:
> Doug Ledford wrote:
> >
> > Correct, and that's what you want. The alternative is that if the BIOS
> > can see the first disk but it's broken and can't be used, and if you
> > have the boot sector on the second disk set to read from BIOS
On Thu, 2007-11-01 at 14:02 -0700, H. Peter Anvin wrote:
> Doug Ledford wrote:
> >>
> >> I would argue that ext[234] should be clearing those 512 bytes. Why
> >> aren't they cleared
> >
> > Actually, I didn't think msdos used the first 512 bytes for the same
> > reason ext3 doesn't: space for a
On Fri, 2007-11-02 at 03:41 -0500, Alberto Alonso wrote:
> On Thu, 2007-11-01 at 15:16 -0400, Doug Ledford wrote:
> > Not in the older kernel versions you were running, no.
>
> These "old versions" (specially the RHEL) are supposed to be
> the official versions supported by Redhat and the hardware
Any reason 0.9 is the default? Should I be worried about using 1.0
superblocks? And can I "upgrade" my array from 0.9 to 1.0 superblocks?
Thanks,
Greg
On 11/1/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Tuesday October 30, [EMAIL PROTECTED] wrote:
> > Which is the default type of superblock? 0
Janek Kozicki wrote:
Hello,
My three HDDs have the following speeds:
hda - speed 70 MB/sec
hdc - speed 27 MB/sec
sda - speed 60 MB/sec
Together they make up a raid1 array /dev/md0 and a raid5 array /dev/md1. I wanted to
ask whether mdadm tries to pick the fastest HDD during operation.
Maybe I can "tell" whic
Neil Brown wrote:
On Thursday November 1, [EMAIL PROTECTED] wrote:
Hello,
I have raid5 /dev/md1, --chunk=128 --metadata=1.1. On it I have
created LVM volume called 'raid5', and finally a logical volume
'backup'.
Then I formatted it with command:
mkfs.ext3 -b 4096 -E stride=32 -E resize=
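The stride arithmetic behind that command, spelled out (the 128KiB chunk and 4KiB block size come from the message; the data-disk count is an assumption used only for the stripe-width figure, and the device path is derived from the VG/LV names mentioned above):

# stride       = chunk size / block size = 128 KiB / 4 KiB = 32
# stripe-width = stride * data disks, e.g. 32 * 2 = 64 for a 3-disk RAID5
#                (stripe-width needs a reasonably recent e2fsprogs)
mkfs.ext3 -b 4096 -E stride=32 /dev/raid5/backup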
On 02.11.2007 at 12:43, Neil Brown wrote:
For now, you will have to live with a smallish bitmap, which probably
isn't a real problem.
Ok then.
Array Slot : 3 (0, 1, failed, 2, 3, 4)
Array State : uuUuu 1 failed
This time I'm getting nervous - Array State failed doesn't sound
go
Alberto Alonso wrote:
On Thu, 2007-11-01 at 15:16 -0400, Doug Ledford wrote:
Not in the older kernel versions you were running, no.
These "old versions" (specially the RHEL) are supposed to be
the official versions supported by Redhat and the hardware
vendors, as they were very specif
Neil Brown wrote:
On Tuesday October 30, [EMAIL PROTECTED] wrote:
Which is the default type of superblock? 0.90 or 1.0?
The default default is 0.90.
However a local default can be set in mdadm.conf with e.g.
CREATE metadata=1.0
If you change to 1.start, 1.end, 1.4k names for clar
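For anyone who wants 1.0 as the per-machine default, a sketch built around that CREATE line (the array, level and devices below are purely illustrative):

# /etc/mdadm.conf -- make v1.0 superblocks the local default for new arrays
CREATE metadata=1.0

# arrays created afterwards pick it up unless --metadata is given explicitly
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --examine /dev/sdc1 | grep Version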
Neil Brown wrote:
On Friday October 26, [EMAIL PROTECTED] wrote:
Perhaps you could have called them 1.start, 1.end, and 1.4k in the
beginning? Isn't hindsight wonderful?
Those names seem good to me. I wonder if it is safe to generate them
in "-Eb" output
If you agree that th
Hello,
My three HDDs have the following speeds:
hda - speed 70 MB/sec
hdc - speed 27 MB/sec
sda - speed 60 MB/sec
Together they make up a raid1 array /dev/md0 and a raid5 array /dev/md1. I wanted to
ask whether mdadm tries to pick the fastest HDD during operation.
Maybe I can "tell" which HDD is preferred?
Th
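Not something raised here, but md does have a per-device knob that influences raid1 read balancing: members flagged write-mostly are avoided for reads, so the slow disk can be marked that way. A sketch, assuming hdc1 is the slow raid1 member (without a write-intent bitmap the re-add triggers a full resync):

# re-add the slow member flagged write-mostly so raid1 prefers the
# faster disks for reads; write-mostly members show up as (W) in mdstat
mdadm /dev/md0 --fail /dev/hdc1 --remove /dev/hdc1
mdadm /dev/md0 --add --write-mostly /dev/hdc1
grep -A 2 md0 /proc/mdstat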
On Friday November 2, [EMAIL PROTECTED] wrote:
>
> On 02.11.2007 at 10:22, Neil Brown wrote:
>
> > On Friday November 2, [EMAIL PROTECTED] wrote:
> >> I have a 5 disk version 1.0 superblock RAID5 which had an internal
> >> bitmap that has been reported to have a size of 299 pages in /proc/
> >>
Janek Kozicki wrote:
And because LVM is putting its own metadata on /dev/md1, the ext3
partition is shifted by some (unknown to me) number of bytes from
the beginning of /dev/md1.
It seems to be a multiple of 64KiB. You can specify it during pvcreate, with
the --metadatasize option. It will be ro
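A sketch of nudging that offset onto a chunk boundary (the values are assumptions for illustration; the reliable part is checking pe_start afterwards, since LVM rounds the metadata area up):

# ask for slightly less than the desired alignment; LVM rounds the metadata
# area up, commonly leaving the data area (pe_start) at 256 KiB, i.e. a whole
# number of 128 KiB chunks
pvcreate --metadatasize 250k /dev/md1

# verify where the first physical extent actually begins
pvs -o +pe_start --units k /dev/md1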
Alberto Alonso wrote:
> On Thu, 2007-11-01 at 15:16 -0400, Doug Ledford wrote:
>> Not in the older kernel versions you were running, no.
>
> These "old versions" (specially the RHEL) are supposed to be
> the official versions supported by Redhat and the hardware
> vendors, as they were very speci
On 02.11.2007 at 11:22, Ralf Müller wrote:
# mdadm -E /dev/sdg1
/dev/sdg1:
Magic : a92b4efc
Version : 01
Feature Map : 0x1
Array UUID : e1a335a8:fc0f0626:d70687a6:5d9a9c19
Name : 1
Creation Time : Wed Oct 31 14:30:55 2007
Raid Level : raid5
Raid
On 02.11.2007 at 10:22, Neil Brown wrote:
On Friday November 2, [EMAIL PROTECTED] wrote:
I have a 5 disk version 1.0 superblock RAID5 which had an internal
bitmap that has been reported to have a size of 299 pages in /proc/
mdstat. For whatever reason I removed this bitmap (mdadm --grow --
bi
I have a 5 disk version 1.0 superblock RAID5 which had an internal
bitmap that has been reported to have a size of 299 pages in /proc/
mdstat. For whatever reason I removed this bitmap (mdadm --grow --
bitmap=none) and recreated it afterwards (mdadm --grow --
bitmap=internal). Now it has a rep
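For reference, a sketch of that remove-and-recreate cycle with an explicit bitmap chunk, which is the knob that decides how many bits (and hence pages) the bitmap needs; the device name and chunk value are illustrative assumptions:

# drop the internal bitmap, then recreate it with a larger bitmap chunk
# (value in KiB here, so 65536 = 64 MiB per bit) to get a smaller bitmap
mdadm --grow --bitmap=none /dev/md3
mdadm --grow --bitmap=internal --bitmap-chunk=65536 /dev/md3
grep -A 3 md3 /proc/mdstat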
On Sat, 2007-10-27 at 11:26 -0400, Bill Davidsen wrote:
> Alberto Alonso wrote:
> > On Fri, 2007-10-26 at 18:12 +0200, Goswin von Brederlow wrote:
> >
> >
> >> Depending on the hardware you can still access a different disk while
> >> another one is resetting. But since there is no timeout in md
On Thu, 2007-11-01 at 15:16 -0400, Doug Ledford wrote:
> I wasn't belittling them. I was trying to isolate the likely culprit in
> the situations. You seem to want the md stack to time things out. As
> has already been commented by several people, myself included, that's a
> band-aid and not a f
Hi Dan,
On Friday 02 November 2007 03:36, Dan Williams wrote:
> > This happens because of the specific implementation of
> > dma_wait_for_async_tx().
>
> So I take it you are not implementing interrupt-based callbacks in your
> driver?
Why not? I have interrupt-based callbacks in my dr