MD sysfs/kobject issues in SAN environment.

2008-02-22 Thread greg
for the block device and its associated partitions to be removed. This tends to violate the premise of 'least surprise' when the systems staff logs in to recover the situation, can't remove the failed device from the RAID1 array, and is potentially forced to add a completely dif
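
A minimal sketch of the removal sequence that goes wrong in this scenario (device names illustrative, not the poster's exact commands):

  # mark the dead member faulty, then pull it from the array
  mdadm /dev/md0 --fail /dev/sdc1
  mdadm /dev/md0 --remove /dev/sdc1
  # add the replacement once the SAN device is back
  mdadm /dev/md0 --add /dev/sdc1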

Re: raid problem: after every reboot /dev/sdb1 is removed?

2008-02-01 Thread Greg Cormier
oting? Greg On Fri, Feb 1, 2008 at 7:52 AM, Berni <[EMAIL PROTECTED]> wrote: > Hi! > > I have the following problem with my softraid (raid 1). I'm running > Ubuntu 7.10 64bit with kernel 2.6.22-14-generic. > > After every reboot my first boot partition in md0 is not
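
A minimal diagnostic sequence for this kind of report (assuming /dev/md0 is the affected array; a sketch, not from the original thread):

  cat /proc/mdstat                  # is sdb1 currently listed as a member?
  mdadm --examine /dev/sdb1         # does its superblock still match the array?
  mdadm /dev/md0 --add /dev/sdb1    # re-attach it; a resync will follow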

Re: Raid over 48 disks ... for real now

2008-01-18 Thread Greg Cormier
> I wonder how long it would take to run an fsck on one large filesystem? :) I would imagine you'd have time to order a new system, build it, and restore the backups before the fsck was done! - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to [EMAI

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-18 Thread Greg Cormier
> Also, don't use ext*, XFS can be up to 2-3x faster (in many of the > benchmarks). I'm going to swap file systems and give it a shot right now! :) How is the stability of XFS? I hear recovery is easier with ext2/3, since more people use it and more tools are available? Greg - To

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-18 Thread Greg Cormier
save time (mke2fs takes a while on 1.5TB). Next step will be playing with read-ahead and stripe cache sizes, I guess! I'm open to any comments/suggestions you guys have! Greg - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to [EMAIL
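
For reference, the two knobs mentioned are exposed roughly like this (values illustrative; measure before and after):

  # md raid5/6 stripe cache, in cache entries (one page per member device each)
  echo 8192 > /sys/block/md0/md/stripe_cache_size
  # read-ahead on the array device, in 512-byte sectors
  blockdev --setra 16384 /dev/md0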

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-16 Thread Greg Cormier
What sort of tools are you using to get these benchmarks, and can I use them for ext3? Very interested in running this on my server. Thanks, Greg On Jan 16, 2008 11:13 AM, Justin Piszcz <[EMAIL PROTECTED]> wrote: > For these benchmarks I timed how long it takes to extract a standard
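
The benchmark style Justin describes (timing an untar) is filesystem-agnostic, so it works on ext3 unchanged; a sketch, with the tarball name invented:

  sync
  time sh -c 'tar xf linux-2.6.23.tar; sync'   # include the final flush in the timing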

Replacing an active drive.

2008-01-14 Thread Greg Cormier
I'd have FOUR drives listed with the old drive always missing. So I guess I wasn't properly removing the current drive? Or maybe I marked it as failed but didn't remove it? In any case it drove me nuts having that array working but having that phantom old drive haunting me in the d
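
The 'phantom drive' symptom usually means the failed member was never actually removed; a sketch of checking and correcting that (device name illustrative):

  mdadm --detail /dev/md0             # look for a slot marked faulty/removed
  mdadm /dev/md0 --remove /dev/sdd1   # clear the failed member before adding its replacement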

Change Stripe size?

2007-12-31 Thread Greg Cormier
back up the array and re-create it with a larger stripe size? Thanks and happy new year! Greg - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html
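
mdadm of this era could not change the chunk size in place, so the answer was indeed back up and re-create; a hedged outline (level, device count, and chunk value must match your setup):

  # after a full backup of the array's contents:
  mdadm --stop /dev/md0
  mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=256 /dev/sd[bcd]1
  # then mkfs and restore from the backup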

Auto assembly errors with mdadm and 64K aligned partitions.

2007-12-13 Thread greg
Good morning to Neil and everyone on the list, hope your respective days are going well. Quick overview. We've isolated what appears to be a failure mode with mdadm assembling RAID1 (and presumably other-level) volumes which kernel-based RAID autostart is able to do correctly. We picked up on th
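
The overlap is easy to see by hand: a 0.90 superblock lives in the last 64K-aligned block of its member, so a suitably aligned partition can make the same superblock visible from the whole-disk node too (a sketch of the check, not the poster's commands):

  mdadm --examine /dev/sda1   # expected: the RAID1 member's superblock
  mdadm --examine /dev/sda    # if this also finds one, auto-assembly can pick the wrong device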

Re: [stable] [PATCH 000 of 2] md: Fixes for md in 2.6.23

2007-11-14 Thread Greg KH
On Tue, Nov 13, 2007 at 10:36:30PM -0700, Dan Williams wrote: > On Nov 13, 2007 8:43 PM, Greg KH <[EMAIL PROTECTED]> wrote: > > > > > > Careful, it looks like you cherry picked commit 4ae3f847 "md: raid5: > > > fix clearing of biofill operations"

Re: [stable] [PATCH 000 of 2] md: Fixes for md in 2.6.23

2007-11-13 Thread Greg KH
On Tue, Nov 13, 2007 at 08:36:05PM -0700, Dan Williams wrote: > On Nov 13, 2007 5:23 PM, Greg KH <[EMAIL PROTECTED]> wrote: > > On Tue, Nov 13, 2007 at 04:22:14PM -0800, Greg KH wrote: > > > On Mon, Oct 22, 2007 at 05:15:27PM +1000, NeilBrown wrote: > > > > >

Re: [stable] [PATCH 000 of 2] md: Fixes for md in 2.6.23

2007-11-13 Thread Greg KH
On Tue, Nov 13, 2007 at 04:22:14PM -0800, Greg KH wrote: > On Mon, Oct 22, 2007 at 05:15:27PM +1000, NeilBrown wrote: > > > > It appears that a couple of bugs slipped in to md for 2.6.23. > > These two patches fix them and are appropriate for 2.6.23.y as well > > as

Re: [stable] [PATCH 000 of 2] md: Fixes for md in 2.6.23

2007-11-13 Thread Greg KH
n unsigned compare to allow creation of bitmaps > with v1.0 metadata. > [PATCH 002 of 2] md: raid5: fix clearing of biofill operations I don't see these patches in 2.6.24-rcX, are they there under some other subject? thanks, greg k-h - To unsubscribe from this list: send the line "unsubscribe linux-raid"

Re: Superblocks

2007-11-02 Thread Greg Cormier
Any reason 0.9 is the default? Should I be worried about using 1.0 superblocks? And can I "upgrade" my array from 0.9 to 1.0 superblocks? Thanks, Greg On 11/1/07, Neil Brown <[EMAIL PROTECTED]> wrote: > On Tuesday October 30, [EMAIL PROTECTED] wrote: > > Which is the d
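
There was no in-place conversion tool at the time; checking what an array uses, and creating a new one with an explicit format, looks like this (sketch):

  mdadm --detail /dev/md0 | grep Version          # e.g. "Version : 00.90.03"
  mdadm --create /dev/md1 --metadata=1.0 --level=5 --raid-devices=3 /dev/sd[bcd]1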

Re: Superblocks

2007-10-30 Thread Greg Cormier
Which is the default type of superblock? 0.90 or 1.0? On 10/30/07, Neil Brown <[EMAIL PROTECTED]> wrote: > On Friday October 26, [EMAIL PROTECTED] wrote: > > Can someone help me understand superblocks and MD a little bit? > > > > I've got a raid5 array with 3 disks - sdb1, sdc1, sdd1. > > > > --ex

Superblocks

2007-10-26 Thread Greg Cormier
e I start on my next troubleshooting of why my array rebuilds every time I reboot :) Thanks, Greg - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: Raid5 Reshape gone wrong, please help

2007-08-23 Thread Greg Nicholson
On 8/23/07, Greg Nicholson <[EMAIL PROTECTED]> wrote: > > > OK I've reproduced the original issue on a separate box. > 2.6.23-rc3 does not like to grow RAID5 arrays. mdadm 2.6.3 > > mdadm --add /dev/md0 /dev/sda1 > mdadm -G --backup-file=/root/backup.raid.fil

Re: Raid5 Reshape gone wrong, please help

2007-08-23 Thread Greg Nicholson
OK I've reproduced the original issue on a separate box. 2.6.23-rc3 does not like to grow RAID5 arrays. mdadm 2.6.3 mdadm --add /dev/md0 /dev/sda1 mdadm -G --backup-file=/root/backup.raid.file /dev/md0 (Yes, I added the backup-file this time... just to be sure.) mdadm began the grow, and
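
If a grow like this dies partway, the backup file is consumed again at assembly time, roughly as follows (a sketch, not from the thread):

  mdadm --assemble /dev/md0 --backup-file=/root/backup.raid.file /dev/sd[a-e]1
  cat /proc/mdstat    # the reshape should resume from the recorded checkpoint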

Re: Raid5 Reshape gone wrong, please help

2007-08-20 Thread Greg Nicholson
On 8/19/07, Greg Nicholson <[EMAIL PROTECTED]> wrote: > On 8/19/07, Greg Nicholson <[EMAIL PROTECTED]> wrote: > > On 8/19/07, Neil Brown <[EMAIL PROTECTED]> wrote: > > > On Saturday August 18, [EMAIL PROTECTED] wrote: > > > > > > > > That

Re: Raid5 Reshape gone wrong, please help

2007-08-19 Thread Greg Nicholson
On 8/19/07, Greg Nicholson <[EMAIL PROTECTED]> wrote: > On 8/19/07, Neil Brown <[EMAIL PROTECTED]> wrote: > > On Saturday August 18, [EMAIL PROTECTED] wrote: > > > > > > That looks to me like the first 2 gig is completely empty on the > > > drive.

Re: Raid5 Reshape gone wrong, please help

2007-08-19 Thread Greg Nicholson
On 8/19/07, Neil Brown <[EMAIL PROTECTED]> wrote: > On Saturday August 18, [EMAIL PROTECTED] wrote: > > > > That looks to me like the first 2 gig is completely empty on the > > drive. I really don't think it actually started to do anything. > > The backup data is near the end of the device. If yo

Re: Raid5 Reshape gone wrong, please help

2007-08-18 Thread Greg Nicholson
On 8/18/07, Neil Brown <[EMAIL PROTECTED]> wrote: > On Friday August 17, [EMAIL PROTECTED] wrote: > > I was trying to resize a Raid 5 array of 4 500G drives to 5. Kernel > > version 2.6.23-rc3 was the kernel I STARTED on this. > > > > I added the device to the array : > > mdadm --add /dev/md0 /d

Raid5 Reshape gone wrong, please help

2007-08-17 Thread Greg Nicholson
I was trying to resize a Raid 5 array of 4 500G drives to 5. Kernel version 2.6.23-rc3 was the kernel I STARTED on this. I added the device to the array : mdadm --add /dev/md0 /dev/sdb1 Then I started to grow the array : mdadm --grow /dev/md0 --raid-devices=5 At this point the machine locked
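
For comparison, the commonly recommended form of this procedure carries a backup file from the start (a sketch using the poster's device names):

  mdadm --add /dev/md0 /dev/sdb1
  mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-grow.backup
  watch cat /proc/mdstat    # the critical section is the first few stripes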

[patch 08/26] md: Fix bug in error handling during raid1 repair.

2007-07-30 Thread Greg KH
t data back, so the read error isn't fixed, and the device probably gets a zero-length write which it might complain about. Signed-off-by: Neil Brown <[EMAIL PROTECTED]> Signed-off-by: Chris Wright <[EMAIL PROTECTED]> Signed-off-by: Greg Kroah-Hartman <[EMAIL PROTECTED]>

[patch 07/26] md: Fix two raid10 bugs.

2007-07-30 Thread Greg KH
can cause confusion. Signed-off-by: Neil Brown <[EMAIL PROTECTED]> Signed-off-by: Chris Wright <[EMAIL PROTECTED]> Signed-off-by: Greg Kroah-Hartman <[EMAIL PROTECTED]> --- drivers/md/raid10.c |6 ++ 1 file changed, 6 insertions(+) diff .prev/drivers/md/raid10.c ./d

Software RAID across multiple SATA host controllers - OK?

2007-07-16 Thread Greg Neumarke
that one controller? Is this a bad idea, or no problem? Thanks -Greg - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html
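
md itself does not care which controller a member sits behind; the member list of an array can freely mix controllers (sketch, names invented):

  # sdb-sde on one controller, sdf-sdi on the other
  mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]1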

[patch 11/31] Fix calculation for size of filemap_attr array in md/bitmap.

2007-04-11 Thread Greg KH
ed long' shorter than required. We need a round-up in there. Signed-off-by: Neil Brown <[EMAIL PROTECTED]> Signed-off-by: Greg Kroah-Hartman <[EMAIL PROTECTED]> --- drivers/md/bitmap.c |4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) --- a/drivers/md/bitmap.c +++

[patch 021/101] md: Avoid possible BUG_ON in md bitmap handling.

2007-03-07 Thread Greg KH
ead of calling BUG_ON we now wait for the count to drop down a bit. This requires a new wait_queue_head, some waiting code, and a wakeup call. Tested by limiting the counter to 20 instead of 16383 (writes go a lot slower in that case...). Signed-off-by: Neil Brown <[EMAIL PROTECTED]> Signed

[patch 045/101] md: Fix raid10 recovery problem.

2007-03-07 Thread Greg KH
block address and probably a device error. The code for working with device sizes was fairly confused and spread out, so this has been tidied up a bit. Signed-off-by: Neil Brown <[EMAIL PROTECTED]> Signed-off-by: Greg Kroah-Hartman <[EMAIL PROTECTED]> --- drivers/md/raid10.c | 38

Re: 2.6.19.2, cp 18gb_file 18gb_file.2 = OOM killer, 100% reproducible (multi-threaded USB no go)

2007-01-21 Thread Greg KH
> > > Justin. > > - > > Running 2.6.20-rc5 now, the multi-threaded USB probing causes my UPS not > to work, probably because udev has problems or something, it is also the > only USB device I have plugged into the machine. multi-threaded USB is about to go away as it c

Re: RAID5 refuses to accept replacement drive.

2006-11-03 Thread greg
On Oct 26, 7:25am, Neil Brown wrote: } Subject: Re: RAID5 refuses to accept replacement drive. Hi Neil, hope the end of the week is going well for you. > On Wednesday October 25, [EMAIL PROTECTED] wrote: > > Good morning to everyone, hope everyone's day is going well. > > > > Neil, I sent this

Re: [PATCH 000 of 6] md: udev events and cache bypass for reads

2006-10-31 Thread Greg KH
uitable for 2.6.19 (if it isn't too late and gregkh thinks it > is good). Rest are for 2.6.20. No objections from me. thanks, greg k-h - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: [PATCH 001 of 6] md: Send online/offline uevents when an md array starts/stops.

2006-10-31 Thread Greg KH
On Tue, Oct 31, 2006 at 05:00:46PM +1100, NeilBrown wrote: > > This allows udev to do something intelligent when an > array becomes available. > > cc: [EMAIL PROTECTED] > Signed-off-by: Neil Brown <[EMAIL PROTECTED]> Acked-by: Greg Kroah-Hartman <[EMAIL PROTECTED]>

Re: RAID5 refuses to accept replacement drive.

2006-10-31 Thread greg
esultant binary would always exit complaining that a device had not been specified. I remember the dietlibc documentation noting that the GNU folks had an inconsistent world view when it came to getopt processing semantics... :-) I suspect there is a common thread involved in both cases. > NeilB

RAID5 refuses to accept replacement drive.

2006-10-25 Thread greg
We've handled a lot of MD failures, first time anything like this has happened. I feel like there is probably a 'brown paper bag' solution to this but I can't see it. Thoughts? Greg --- /dev/md3: Ve
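
One classic 'brown paper bag' cause is a stale superblock on the replacement disk; clearing it first often lets the add go through (a sketch; double-check the device name before zeroing anything):

  mdadm --zero-superblock /dev/sdX1
  mdadm /dev/md3 --add /dev/sdX1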

Re: PERC5 - MegaRaid-SAS problems..

2006-10-17 Thread Greg Dickie
Never lost an XFS filesystem completely. Can't say the same about ext3. just my 2 cents, Greg On Tue, 2006-10-17 at 13:58 -0400, Andrew Moise wrote: > On 10/17/06, Gordon Henderson <[EMAIL PROTECTED]> wrote: > > Anyway, it's currently in a RAID-1 configuration (which I

[patch 56/67] MD: Fix problem where hot-added drives are not resynced.

2006-10-11 Thread Greg KH
y without a resync. From: Neil Brown <[EMAIL PROTECTED]> Cc: <[EMAIL PROTECTED]> Cc: Richard Bollinger <[EMAIL PROTECTED]> Signed-off-by: Greg Kroah-Hartman <[EMAIL PROTECTED]> --- drivers/md/md.c |1 + 1 file changed, 1 insertion(+) --- linux-2.6.18.orig/drivers/md/md

Re: libata hotplug and md raid?

2006-09-14 Thread Greg KH
hotplug devices? The answer to both of these questions is, "not very well." Kay and I have been talking with Neil Brown about this and he agrees that it needs to be fixed up. That md device needs to have proper lifetime rules and go away properly. Hopefully it gets fixed soon. thanks, greg k-h - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html

[patch 19/20] MD: Fix a potential NULL dereference in md/raid1

2006-08-21 Thread Greg KH
really the wrong place for the add. We could add to the count of corrected errors once we are sure it was corrected, not before trying to correct it. Signed-off-by: Neil Brown <[EMAIL PROTECTED]> Signed-off-by: Greg Kroah-Hartman <[EMAIL PROTECTED]> diff .prev/drivers/md/raid1.c ./dri

Re: [stable] [PATCH] Fix a potential NULL dereference in md/raid1

2006-08-21 Thread Greg KH
next > statement. > Further, it is really the wrong place for the add. > We could add to the count of corrected errors > once we are sure it was corrected, not before > trying to correct it. > > Signed-off-by: Neil Brown <[EMAIL PROTECTED]> Queued for -stable, thanks

Re: [stable] [PATCH 001 of 5] md: Avoid oops when attempting to fix read errors on raid10

2006-05-01 Thread Greg KH
On Fri, Apr 28, 2006 at 12:50:15PM +1000, NeilBrown wrote: > > We should add to the counter for the rdev *after* checking > if the rdev is NULL !!! > > Signed-off-by: Neil Brown <[EMAIL PROTECTED]> Queued to -stable, thanks. greg k-h - To unsubscribe from this list: send

Re: [klibc] Exporting which partitions to md-configure

2006-01-30 Thread Greg KH
On Mon, Jan 30, 2006 at 07:21:33PM -0800, Greg KH wrote: > On Mon, Jan 30, 2006 at 04:52:08PM -0800, H. Peter Anvin wrote: > > I'm putting the final touches on kinit, which is the user-space > > replacement (based on klibc) for the whole in-kernel root-mount complex. >

Re: [klibc] Exporting which partitions to md-configure

2006-01-30 Thread Greg KH
> presumably through sysfs. What are you looking for exactly? udev has a great helper program, volume_id, that identifies any type of filesystem that Linux knows about (it was based on the ext2 lib code, but smaller, and much more sane, and works better.) Would that help out here? thanks, greg k
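
The helper shipped as vol_id in udev's extras; the invocation below is from memory, so treat the exact flags as an assumption and check your udev version:

  vol_id /dev/sda1           # prints ID_FS_TYPE=, ID_FS_UUID=, ID_FS_LABEL=, ...
  vol_id --type /dev/sda1    # just the filesystem type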

Re: [PATCH md 009 of 10] Improve 'scan_mode' and rename it to 'sync_action'

2005-11-02 Thread Greg KH
;recovery); > + md_unregister_thread(mddev->sync_thread); > + mddev->sync_thread = NULL; > + mddev->recovery = 0; > + printk("stop at %llu\n", (unsigned long > long)mddev->recovery_cp); Your printk() needs a KERN_ level. Or is this debugging code left over? thanks, greg k-h - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: [PATCH md 004 of 10] Fix some locking and module refcounting issues with md's use of sysfs.

2005-11-02 Thread Greg KH
rted back again. > > > > Signed-off-by: Neil Brown <[EMAIL PROTECTED]> Acked-by: Greg Kroah-Hartman <[EMAIL PROTECTED]> - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: [PATCH md 001 of 10] Make sure /block link in /sys/.../md/ goes to correct devices.

2005-11-02 Thread Greg KH
ect object (whether partition or > not), we need to respect this difference... > (Thus current code shows a link to the whole device, whether we are > using a partition or not, which is wrong). > > Signed-off-by: Neil Brown <[EMAIL PROTECTED]> Acked-by: Greg Kroah-Hartman <

Re: [PATCH md 010 of 10] Document sysfs usage of md, and make a couple of small refinements.

2005-11-02 Thread Greg KH
. > > Signed-off-by: Neil Brown <[EMAIL PROTECTED]> Nice, gotta love documentation :) Acked-by: Greg Kroah-Hartman <[EMAIL PROTECTED]> - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: [PATCH md 005 of 10] Split off some md attributes in sysfs to a separate group.

2005-11-02 Thread Greg KH
ed-off-by: Neil Brown <[EMAIL PROTECTED]> Acked-by: Greg Kroah-Hartman <[EMAIL PROTECTED]> - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html

Anyone clueful on forcing 3ware drive discovery?

2005-09-08 Thread Dr. Greg Wettstein
Good afternoon, I hope the day is going well for everyone. We are using a number of 3Ware 75xx and 8xxx controller cards to build high capacity storage solutions on top of Linux with commodity drives. We use the cards in JBOD mode and build software RAID5 on top of the individual disks. As a resu
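
For kernels of this vintage, the generic rescan hook was the sysfs scan attribute; whether the 3w-xxxx driver honors it is exactly the open question here, so treat this as a sketch only:

  # ask host0 to rescan all channels, targets, and LUNs
  echo "- - -" > /sys/class/scsi_host/host0/scan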

Re: [PERFORM] Postgres on RAID5

2005-03-14 Thread Greg Stark
l accessible > size up. Well, many drives also cut average latency. So even if you have no need for more bandwidth you still benefit from a lower average response time by adding more drives. -- greg - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the bo
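
A back-of-envelope illustration of the queueing effect (numbers invented):

  10 outstanding 8 ms requests on 1 spindle: the last completes after ~80 ms.
  The same 10 requests spread over 10 spindles: each completes in ~8 ms.
  Per-request service time is unchanged; the time spent waiting in queue shrinks.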

Re: [PERFORM] Postgres on RAID5

2005-03-13 Thread Greg Stark
whether postgres performs differently with fsync=off. This would even be a reasonable mode to run under for initial database loads. It shouldn't make much of a difference with hardware like this though. And you should be aware that running under this mode in production would put your data at