Re: stride / stripe alignment on LVM ?

2007-11-02 Janek Kozicki
Bill Davidsen said: (by the date of Fri, 02 Nov 2007 09:01:05 -0400) > So I would expect this to make a very large performance difference, so > even if it worked it would do so slowly. I was trying to find out the stripe layout for a few hours, using hexedit and dd. And I'm baffled: md1 : activ
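
A minimal sketch of that kind of probing, assuming a scratch array whose contents you can afford to overwrite (partition names are hypothetical; Janek's drives were hda, hdc and sda):

    # write a recognizable marker at the very start of the array
    printf 'CHUNK-MARKER' | dd of=/dev/md1 bs=512 count=1 conv=notrunc
    # then look for it near the start of each member device
    for d in /dev/hda2 /dev/hdc2 /dev/sda2; do
        echo "== $d =="
        dd if="$d" bs=1M count=1 2>/dev/null | grep -ac 'CHUNK-MARKER'
    done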

Re: Implementing low level timeouts within MD

2007-11-02 Alberto Alonso
On Fri, 2007-11-02 at 15:15 -0400, Doug Ledford wrote: > It was tested, it simply obviously had a bug you hit. Assuming that > your particular failure situation is the only possible outcome for all > the other people that used it would be an invalid assumption. There are > lots of code paths in a

Re: Implementing low level timeouts within MD

2007-11-02 Doug Ledford
On Fri, 2007-11-02 at 13:21 -0500, Alberto Alonso wrote: > On Fri, 2007-11-02 at 11:45 -0400, Doug Ledford wrote: > > > The key word here being "supported". That means if you run across a > > problem, we fix it. It doesn't mean there will never be any problems. > > On hardware specs I normally

Re: Implementing low level timeouts within MD

2007-11-02 Alberto Alonso
On Fri, 2007-11-02 at 11:45 -0400, Doug Ledford wrote: > The key word here being "supported". That means if you run across a > problem, we fix it. It doesn't mean there will never be any problems. On hardware specs I normally read "supported" as "tested within that OS version to work within spe

Re: Implementing low level timeouts within MD

2007-11-02 Alberto Alonso
On Fri, 2007-11-02 at 11:09 +0000, David Greaves wrote: > David > PS I can't really contribute to your list - I'm only using cheap desktop > hardware. > - If you had failures and it properly handled them, then you can contribute to the good combinations; so far that's the list that is kind of e

Re: switching root fs '/' to boot from RAID1 with grub

2007-11-02 berk walker
H. Peter Anvin wrote: Doug Ledford wrote: device /dev/sda (hd0) root (hd0,0) install --stage2=/boot/grub/stage2 /boot/grub/stage1 (hd0) /boot/grub/e2fs_stage1_5 p /boot/grub/stage2 /boot/grub/menu.lst device /dev/hdc (hd0) root (hd0,0) install --stage2=/boot/grub/stage2 /boot/grub/stage1 (hd0
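
The run quoted above maps each disk to (hd0) in turn so that whichever drive the BIOS actually boots can start the system by itself. A sketch of the same idea using grub's higher-level setup command (device and partition names follow the post and may need adjusting):

    grub --batch <<EOF
    device (hd0) /dev/sda
    root (hd0,0)
    setup (hd0)
    device (hd0) /dev/hdc
    root (hd0,0)
    setup (hd0)
    quit
    EOF

setup performs the same stage1/stage2 embedding that the raw install line spells out by hand.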

Re: switching root fs '/' to boot from RAID1 with grub

2007-11-02 Doug Ledford
On Thu, 2007-11-01 at 11:57 -0700, H. Peter Anvin wrote: > Doug Ledford wrote: > > > > Correct, and that's what you want. The alternative is that if the BIOS > > can see the first disk but it's broken and can't be used, and if you > > have the boot sector on the second disk set to read from BIOS

Re: Time to deprecate old RAID formats?

2007-11-02 Doug Ledford
On Thu, 2007-11-01 at 14:02 -0700, H. Peter Anvin wrote: > Doug Ledford wrote: > >> > >> I would argue that ext[234] should be clearing those 512 bytes. Why > >> aren't they cleared? > > > > Actually, I didn't think msdos used the first 512 bytes for the same > > reason ext3 doesn't: space for a

Re: Implementing low level timeouts within MD

2007-11-02 Doug Ledford
On Fri, 2007-11-02 at 03:41 -0500, Alberto Alonso wrote: > On Thu, 2007-11-01 at 15:16 -0400, Doug Ledford wrote: > > Not in the older kernel versions you were running, no. > > These "old versions" (especially the RHEL) are supposed to be > the official versions supported by Redhat and the hardware

Re: Superblocks

2007-11-02 Greg Cormier
Any reason 0.9 is the default? Should I be worried about using 1.0 superblocks? And can I "upgrade" my array from 0.9 to 1.0 superblocks? Thanks, Greg On 11/1/07, Neil Brown <[EMAIL PROTECTED]> wrote: > On Tuesday October 30, [EMAIL PROTECTED] wrote: > > Which is the default type of superblock? 0

Re: does mdadm try to use fastest HDD ?

2007-11-02 Bill Davidsen
Janek Kozicki wrote: Hello, My three HDDs have the following speeds: hda - speed 70 MB/sec hdc - speed 27 MB/sec sda - speed 60 MB/sec They create a raid1 /dev/md0 and raid5 /dev/md1 arrays. I wanted to ask if mdadm is trying to pick the fastest HDD during operation? Maybe I can "tell" whic

Re: stride / stripe alignment on LVM ?

2007-11-02 Bill Davidsen
Neil Brown wrote: On Thursday November 1, [EMAIL PROTECTED] wrote: Hello, I have raid5 /dev/md1, --chunk=128 --metadata=1.1. On it I have created LVM volume called 'raid5', and finally a logical volume 'backup'. Then I formatted it with command: mkfs.ext3 -b 4096 -E stride=32 -E resize=
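
With the numbers from this thread the ext3 layout options work out as follows; stripe-width needs a reasonably recent e2fsprogs, and the LV path is hypothetical:

    # chunk 128 KiB / block 4 KiB     ->  stride = 32
    # 3-disk RAID5 has 2 data disks   ->  stripe-width = 32 * 2 = 64
    mkfs.ext3 -b 4096 -E stride=32,stripe-width=64 /dev/raid5/backup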

Re: Very small internal bitmap after recreate

2007-11-02 Ralf Müller
On 02.11.2007 at 12:43, Neil Brown wrote: For now, you will have to live with a smallish bitmap, which probably isn't a real problem. Ok then. Array Slot : 3 (0, 1, failed, 2, 3, 4) Array State : uuUuu 1 failed This time I'm getting nervous - Array State failed doesn't sound go

Re: Implementing low level timeouts within MD

2007-11-02 Bill Davidsen
Alberto Alonso wrote: On Thu, 2007-11-01 at 15:16 -0400, Doug Ledford wrote: Not in the older kernel versions you were running, no. These "old versions" (especially the RHEL) are supposed to be the official versions supported by Redhat and the hardware vendors, as they were very specif

Re: Superblocks

2007-11-02 Bill Davidsen
Neil Brown wrote: On Tuesday October 30, [EMAIL PROTECTED] wrote: Which is the default type of superblock? 0.90 or 1.0? The default default is 0.90. However a local default can be set in mdadm.conf with e.g. CREATE metadata=1.0 If you change to 1.start, 1.end, 1.4k names for clar
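
In other words, the per-host default lives in the config file; a one-line sketch:

    # /etc/mdadm.conf -- newly created arrays default to a v1.0 superblock
    CREATE metadata=1.0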

Re: Time to deprecate old RAID formats?

2007-11-02 Bill Davidsen
Neil Brown wrote: On Friday October 26, [EMAIL PROTECTED] wrote: Perhaps you could have called them 1.start, 1.end, and 1.4k in the beginning? Isn't hindsight wonderful? Those names seem good to me. I wonder if it is safe to generate them in "-Eb" output If you agree that th
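
For reference, the variants those names would describe differ only in where the superblock sits; a sketch (level, device count and names are hypothetical):

    mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 \
          /dev/sda1 /dev/sdb1   # superblock at the end of the device ("1.end")
    # --metadata=1.1 puts it at the very start of the device ("1.start")
    # --metadata=1.2 puts it 4 KiB from the start            ("1.4k")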

does mdadm try to use fastest HDD ?

2007-11-02 Janek Kozicki
Hello, My three HDDs have the following speeds: hda - speed 70 MB/sec hdc - speed 27 MB/sec sda - speed 60 MB/sec They create a raid1 /dev/md0 and raid5 /dev/md1 arrays. I wanted to ask if mdadm is trying to pick the fastest HDD during operation? Maybe I can "tell" which HDD is preferred? Th
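
md's raid1 picks a disk per read based on idleness and head position rather than raw throughput, but it can be biased away from a slow member with the write-mostly flag. A sketch assuming hdc1 is the raid1 member to demote (partition names hypothetical; the re-add triggers a resync, which is quick if a write-intent bitmap is present):

    # re-add the slow disk as write-mostly so raid1 reads prefer the others
    mdadm /dev/md0 --fail /dev/hdc1 --remove /dev/hdc1
    mdadm /dev/md0 --add --write-mostly /dev/hdc1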

Re: Very small internal bitmap after recreate

2007-11-02 Neil Brown
On Friday November 2, [EMAIL PROTECTED] wrote: > > Am 02.11.2007 um 10:22 schrieb Neil Brown: > > > On Friday November 2, [EMAIL PROTECTED] wrote: > >> I have a 5 disk version 1.0 superblock RAID5 which had an internal > >> bitmap that has been reported to have a size of 299 pages in /proc/ > >>

Re: stride / stripe alignment on LVM ?

2007-11-02 Michal Soltys
Janek Kozicki wrote: And because LVM is putting its own metadata on /dev/md1, the ext3 partition is shifted by some (unknown to me) number of bytes from the beginning of /dev/md1. It seems to be a multiple of 64KiB. You can specify it during pvcreate, with the --metadatasize option. It will be ro
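
The rounding can be exploited: request slightly less than the alignment you want and the first physical extent lands on the boundary. A sketch for the 128 KiB-chunk, 3-disk RAID5 in this thread, where a full stripe is 256 KiB (exact rounding depends on the LVM2 version, so verify the result):

    pvcreate --metadatasize 250k /dev/md1   # rounded up: data area starts 256 KiB in
    pvs -o pv_name,pe_start                 # confirm where the first extent begins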

Re: Implementing low level timeouts within MD

2007-11-02 David Greaves
Alberto Alonso wrote: > On Thu, 2007-11-01 at 15:16 -0400, Doug Ledford wrote: >> Not in the older kernel versions you were running, no. > > These "old versions" (especially the RHEL) are supposed to be > the official versions supported by Redhat and the hardware > vendors, as they were very speci

Re: Very small internal bitmap after recreate

2007-11-02 Ralf Müller
On 02.11.2007 at 11:22, Ralf Müller wrote: # mdadm -E /dev/sdg1 /dev/sdg1: Magic : a92b4efc Version : 01 Feature Map : 0x1 Array UUID : e1a335a8:fc0f0626:d70687a6:5d9a9c19 Name : 1 Creation Time : Wed Oct 31 14:30:55 2007 Raid Level : raid5 Raid

Re: Very small internal bitmap after recreate

2007-11-02 Ralf Müller
On 02.11.2007 at 10:22, Neil Brown wrote: On Friday November 2, [EMAIL PROTECTED] wrote: I have a 5 disk version 1.0 superblock RAID5 which had an internal bitmap that has been reported to have a size of 299 pages in /proc/mdstat. For whatever reason I removed this bitmap (mdadm --grow --bi

Very small internal bitmap after recreate

2007-11-02 Ralf Müller
I have a 5 disk version 1.0 superblock RAID5 which had an internal bitmap that has been reported to have a size of 299 pages in /proc/mdstat. For whatever reason I removed this bitmap (mdadm --grow --bitmap=none) and recreated it afterwards (mdadm --grow --bitmap=internal). Now it has a rep
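
If the recreated bitmap comes out too coarse, the bitmap chunk can be given explicitly when adding it back; a smaller chunk means more bits and pages, provided the v1.0 superblock area has room for them. A sketch with an illustrative chunk size:

    mdadm --grow /dev/md1 --bitmap=none
    mdadm --grow /dev/md1 --bitmap=internal --bitmap-chunk=1024   # KiB per bit
    mdadm -X /dev/sdg1   # examine a member's on-disk bitmap to check the result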

Re: Software RAID when it works and when it doesn't

2007-11-02 Alberto Alonso
On Sat, 2007-10-27 at 11:26 -0400, Bill Davidsen wrote: > Alberto Alonso wrote: > > On Fri, 2007-10-26 at 18:12 +0200, Goswin von Brederlow wrote: > > > > > >> Depending on the hardware you can still access a different disk while > >> another one is resetting. But since there is no timeout in md

Re: Implementing low level timeouts within MD

2007-11-02 Alberto Alonso
On Thu, 2007-11-01 at 15:16 -0400, Doug Ledford wrote: > I wasn't belittling them. I was trying to isolate the likely culprit in > the situations. You seem to want the md stack to time things out. As > has already been commented by several people, myself included, that's a > band-aid and not a f
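
For what it's worth, the timeout the thread keeps pointing at already exists below md: the SCSI layer's per-command timeout is tunable through sysfs on 2.6 kernels (SCSI and libata disks only; the value is illustrative):

    # shorten the per-command timeout so a dying disk is failed out sooner
    echo 7 > /sys/block/sda/device/timeout
    cat /sys/block/sda/device/timeout   # confirm the new value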

Re: Bug in processing dependencies by async_tx_submit() ?

2007-11-02 Yuri Tikhonov
Hi Dan, On Friday 02 November 2007 03:36, Dan Williams wrote: > > This happens because of the specific implementation of > > dma_wait_for_async_tx(). > > So I take it you are not implementing interrupt based callbacks in your driver? Why not? I have interrupt based callbacks in my dr