for the block
device and its associated partitions to be removed.
This tends to violate the principle of 'least surprise' when the systems
staff log in to recover the situation, can't remove the failed
device from the RAID1 array, and are potentially forced to add a
completely dif
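For reference, the normal two-step removal that gets blocked in the
situation described above looks roughly like this; device names are only
illustrative:
mdadm /dev/md0 --fail /dev/sdb1      # mark the member faulty
mdadm /dev/md0 --remove /dev/sdb1    # then pull it out of the array
cat /proc/mdstat                     # the slot should now show the array as degraded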
oting?
Greg
On Fri, Feb 1, 2008 at 7:52 AM, Berni <[EMAIL PROTECTED]> wrote:
> Hi!
>
> I have the following problem with my softraid (raid 1). I'm running
> Ubuntu 7.10 64bit with kernel 2.6.22-14-generic.
>
> After every reboot my first boot partition in md0 is not
> I wonder how long it would take to run an fsck on one large filesystem?
:)
I would imagine you'd have time to order a new system, build it, and
restore the backups before the fsck was done!
> Also, don't use ext*, XFS can be up to 2-3x faster (in many of the
> benchmarks).
I'm going to swap file systems and give it a shot right now! :)
How is the stability of XFS? I heard recovery is easier with ext2/3
because more people use it and more tools are available?
Greg
save time (mke2fs takes a while on 1.5TB)
Next step will be playing with read-ahead and stripe cache sizes, I
guess! I'm open to any comments/suggestions you guys have!
Greg
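For anyone following along, the read-ahead and stripe cache knobs
mentioned above boil down to roughly this; the values are only starting
points to experiment with, and /dev/md0 is an example device:
blockdev --getra /dev/md0                        # current read-ahead, in 512-byte sectors
blockdev --setra 16384 /dev/md0                  # raise read-ahead on the array
echo 4096 > /sys/block/md0/md/stripe_cache_size  # number of entries in the RAID5/6 stripe cache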
What sort of tools are you using to get these benchmarks, and can I
use them for ext3?
Very interested in running this on my server.
Thanks,
Greg
On Jan 16, 2008 11:13 AM, Justin Piszcz <[EMAIL PROTECTED]> wrote:
> For these benchmarks I timed how long it takes to extract a standard
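The message above is truncated, so the exact archive being extracted
isn't shown; as a rough sketch, a timing run of that kind (assuming a
kernel tarball purely for illustration) works the same on ext3 or XFS:
sync && echo 3 > /proc/sys/vm/drop_caches    # drop caches so repeated runs are comparable
time tar xjf linux-2.6.22.tar.bz2 -C /mnt/test
time sync                                    # include the final flush in the measurement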
I'd have FOUR
drives listed with the old drive always missing. So I guess I wasn't
properly removing the current drive? Or maybe I marked it as failed
but didn't remove it?
In any case it drove me nuts having that array working but having that
phantom old drive haunting me in the d
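One common way to clear a phantom member like that is to drop the slot
and wipe the stale md superblock on the old drive; device names here are
only examples, and the superblock should only be zeroed once the disk is
definitely out of every array:
mdadm /dev/md0 --remove /dev/sdc1    # drop the failed/stale slot if it is still listed
mdadm --zero-superblock /dev/sdc1    # wipe the old md superblock so it stops reappearing
mdadm --detail /dev/md0              # confirm only the active members remain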
backup the array and re-create it with a larger stripe size?
Thanks and happy new year!
Greg
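A backup-and-recreate approach like the one asked about would look
roughly like this; level, member devices and chunk size are placeholders,
and since re-creating destroys the contents it assumes a full backup is
restored afterwards:
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=256 /dev/sd[bcd]1
mkfs.xfs /dev/md0    # new filesystem on the new layout, then restore the backup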
Good morning to Neil and everyone on the list, hope your respective
days are going well.
Quick overview. We've isolated what appears to be a failure mode in
which mdadm fails to assemble RAID1 (and presumably other-level)
volumes that kernel-based RAID autostart is able to assemble correctly.
We picked up on th
On Tue, Nov 13, 2007 at 10:36:30PM -0700, Dan Williams wrote:
> On Nov 13, 2007 8:43 PM, Greg KH <[EMAIL PROTECTED]> wrote:
> > >
> > > Careful, it looks like you cherry picked commit 4ae3f847 "md: raid5:
> > > fix clearing of biofill operations"
On Tue, Nov 13, 2007 at 08:36:05PM -0700, Dan Williams wrote:
> On Nov 13, 2007 5:23 PM, Greg KH <[EMAIL PROTECTED]> wrote:
> > On Tue, Nov 13, 2007 at 04:22:14PM -0800, Greg KH wrote:
> > > On Mon, Oct 22, 2007 at 05:15:27PM +1000, NeilBrown wrote:
> > > >
>
On Tue, Nov 13, 2007 at 04:22:14PM -0800, Greg KH wrote:
> On Mon, Oct 22, 2007 at 05:15:27PM +1000, NeilBrown wrote:
> >
> > It appears that a couple of bugs slipped in to md for 2.6.23.
> > These two patches fix them and are appropriate for 2.6.23.y as well
> > as
n unsigned compare to allow creation of bitmaps
> with v1.0 metadata.
> [PATCH 002 of 2] md: raid5: fix clearing of biofill operations
I don't see these patches in 2.6.24-rcX, are they there under some other
subject?
thanks,
greg k-h
Any reason 0.9 is the default? Should I be worried about using 1.0
superblocks? And can I "upgrade" my array from 0.9 to 1.0 superblocks?
Thanks,
Greg
On 11/1/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Tuesday October 30, [EMAIL PROTECTED] wrote:
> > Which is the d
Which is the default type of superblock? 0.90 or 1.0?
On 10/30/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Friday October 26, [EMAIL PROTECTED] wrote:
> > Can someone help me understand superblocks and MD a little bit?
> >
> > I've got a raid5 array with 3 disks - sdb1, sdc1, sdd1.
> >
> > --ex
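As a rough sketch of how the superblock (metadata) version is chosen and
inspected; device names are examples, and 1.x metadata is only used when
requested at creation time:
mdadm --create /dev/md0 --metadata=1.0 --level=5 --raid-devices=3 /dev/sd[bcd]1
mdadm --detail /dev/md0      # reports the superblock version of a running array
mdadm --examine /dev/sdb1    # dumps the superblock stored on an individual member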
e I start on my next
troubleshooting of why my array rebuilds every time I reboot :)
Thanks,
Greg
On 8/23/07, Greg Nicholson <[EMAIL PROTECTED]> wrote:
>
>
> OK, I've reproduced the original issue on a separate box.
> 2.6.23-rc3 does not like to grow RAID 5 arrays. mdadm 2.6.3
>
> mdadm --add /dev/md0 /dev/sda1
> mdadm -G --backup-file=/root/backup.raid.fil
OK, I've reproduced the original issue on a separate box.
2.6.23-rc3 does not like to grow RAID 5 arrays. mdadm 2.6.3
mdadm --add /dev/md0 /dev/sda1
mdadm -G --backup-file=/root/backup.raid.file /dev/md0
(Yes, I added the backup-file this time... just to be sure.)
Mdadm began the grow, and
On 8/19/07, Greg Nicholson <[EMAIL PROTECTED]> wrote:
> On 8/19/07, Greg Nicholson <[EMAIL PROTECTED]> wrote:
> > On 8/19/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> > > On Saturday August 18, [EMAIL PROTECTED] wrote:
> > > >
> > > > That
On 8/19/07, Greg Nicholson <[EMAIL PROTECTED]> wrote:
> On 8/19/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> > On Saturday August 18, [EMAIL PROTECTED] wrote:
> > >
> > > That looks to me like the first 2 gig is completely empty on the
> > > drive.
On 8/19/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Saturday August 18, [EMAIL PROTECTED] wrote:
> >
> > That looks to me like the first 2 gig is completely empty on the
> > drive. I really don't think it actually started to do anything.
>
> The backup data is near the end of the device. If yo
On 8/18/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Friday August 17, [EMAIL PROTECTED] wrote:
> > I was trying to resize a RAID 5 array from 4 500G drives to 5. Kernel
> > version 2.6.23-rc3 was the kernel I STARTED this on.
> >
> > I added the device to the array :
> > mdadm --add /dev/md0 /d
I was trying to resize a RAID 5 array from 4 500G drives to 5. Kernel
version 2.6.23-rc3 was the kernel I STARTED this on.
I added the device to the array :
mdadm --add /dev/md0 /dev/sdb1
Then I started to grow the array :
mdadm --grow /dev/md0 --raid-devices=5
At this point the machine locked
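For reference, the grow sequence being attempted, with the --backup-file
safety net added, looks roughly like this; device names and the backup
path are examples:
mdadm --add /dev/md0 /dev/sdb1
mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-grow.backup
watch cat /proc/mdstat    # reshape progress
# once the reshape finishes the filesystem still needs growing, e.g.
# resize2fs /dev/md0 for ext3, or xfs_growfs <mountpoint> for XFS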
t data back, so the read error isn't fixed,
and the device probably gets a zero-length write which it might
complain about.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Signed-off-by: Chris Wright <[EMAIL PROTECTED]>
Signed-off-by: Greg Kroah-Hartman <[EMAIL PROTECTED]&g
can cause confusion.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Signed-off-by: Chris Wright <[EMAIL PROTECTED]>
Signed-off-by: Greg Kroah-Hartman <[EMAIL PROTECTED]>
---
drivers/md/raid10.c |    6 ++++++
1 file changed, 6 insertions(+)
diff .prev/drivers/md/raid10.c ./d
that one controller? Is this a bad
idea, or no problem?
Thanks
-Greg
ed long' shorter than required. We need a round-up in there.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Signed-off-by: Greg Kroah-Hartman <[EMAIL PROTECTED]>
---
drivers/md/bitmap.c |4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
--- a/drivers/md/bitmap.c
+++
ead of calling BUG_ON we now wait
for the count to drop down a bit. This requires a new wait_queue_head,
some waiting code, and a wakeup call.
Tested by limiting the counter to 20 instead of 16383 (writes go a lot slower
in that case...).
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Signed
block address and probably
a device error.
The code for working with device sizes was fairly confused and spread
out, so this has been tidied up a bit.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Signed-off-by: Greg Kroah-Hartman <[EMAIL PROTECTED]>
---
drivers/md/raid10.c | 38
gt;
> > Justin.
> > -
>
> Running 2.6.20-rc5 now, the multi-threaded USB probing causes my UPS not
> to work, probably because udev has problems or something; it is also the
> only USB device I have plugged into the machine.
multi-threaded USB is about to go away as it c
On Oct 26, 7:25am, Neil Brown wrote:
} Subject: Re: RAID5 refuses to accept replacement drive.
Hi Neil, hope the end of the week is going well for you.
> On Wednesday October 25, [EMAIL PROTECTED] wrote:
> > Good morning to everyone, hope everyone's day is going well.
> >
> > Neil, I sent this
uitable for 2.6.19 (if it isn't too late and gregkh thinks it
> is good). Rest are for 2.6.20.
No objections from me.
thanks,
greg k-h
On Tue, Oct 31, 2006 at 05:00:46PM +1100, NeilBrown wrote:
>
> This allows udev to do something intelligent when an
> array becomes available.
>
> cc: [EMAIL PROTECTED]
> Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Acked-by: Greg Kroah-Hartman <[EMAIL PROTECTED]&
esultant binary
would always exit complaining that a device had not been specified. I
remember the dietlibc documentation noting that the GNU folks had an
inconsistent world view when it came to getopt processing
semantics... :-)
I suspect there is a common thread involved in both cases.
> NeilB
We've handled a lot of MD failures; this is the first time anything like
this has happened. I feel like there is probably a 'brown paper bag'
solution to this, but I can't see it.
Thoughts?
Greg
---
/dev/md3:
Ve
Never lost an XFS filesystem completely. Can't say the same about ext3.
just my 2 cents,
Greg
On Tue, 2006-10-17 at 13:58 -0400, Andrew Moise wrote:
> On 10/17/06, Gordon Henderson <[EMAIL PROTECTED]> wrote:
> > Anyway, it's currently in a RAID-1 configuration (which I
y without a resync.
From: Neil Brown <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Cc: Richard Bollinger <[EMAIL PROTECTED]>
Signed-off-by: Greg Kroah-Hartman <[EMAIL PROTECTED]>
---
drivers/md/md.c |1 +
1 file changed, 1 insertion(+)
--- linux-2.6.18.orig/drivers/md/md
hotplug devices?
The answer to both of these questions is, "not very well." Kay and I
have been talking with Neil Brown about this and he agrees that it needs
to be fixed up. That md device needs to have proper lifetime rules and
go away properly. Hopefully it gets fixed soon.
thanks,
greg k-h
really the wrong place for the add. We could add to the
count of corrected errors once we are sure it was corrected, not before
trying to correct it.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Signed-off-by: Greg Kroah-Hartman <[EMAIL PROTECTED]>
diff .prev/drivers/md/raid1.c ./dri
next
> statement.
> Further, it is really the wrong place for the add.
> We could add to the count of corrected errors
> once we are sure it was corrected, not before
> trying to correct it.
>
> Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Queued for -stable, thanks
On Fri, Apr 28, 2006 at 12:50:15PM +1000, NeilBrown wrote:
>
> We should add to the counter for the rdev *after* checking
> if the rdev is NULL !!!
>
> Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Queued to -stable, thanks.
greg k-h
On Mon, Jan 30, 2006 at 07:21:33PM -0800, Greg KH wrote:
> On Mon, Jan 30, 2006 at 04:52:08PM -0800, H. Peter Anvin wrote:
> > I'm putting the final touches on kinit, which is the user-space
> > replacement (based on klibc) for the whole in-kernel root-mount complex.
>
> presumably through sysfs.
What are you looking for exactly? udev has a great helper program,
volume_id, that identifies any type of filesystem that Linux knows about
(it was based on the ext2 lib code, but smaller, and much more sane, and
works better.)
Would that help out here?
thanks,
greg k
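As a rough sketch of that kind of filesystem identification from
userspace: blkid (from e2fsprogs) is shown because its interface is
widely known, while the udev helper mentioned above shipped as a vol_id
binary on many systems whose exact path and options varied:
blkid /dev/sda1                 # prints TYPE=, UUID= and LABEL= for the device
# /lib/udev/vol_id /dev/sda1    # udev's own helper, where available; path may differ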
;recovery);
> + md_unregister_thread(mddev->sync_thread);
> + mddev->sync_thread = NULL;
> + mddev->recovery = 0;
> + printk("stop at %llu\n", (unsigned long long)mddev->recovery_cp);
Your printk() needs a KERN_ level. Or is this debugging code left over?
thanks,
greg k-h
rted back again.
>
>
>
> Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Acked-by: Greg Kroah-Hartman <[EMAIL PROTECTED]>
ect object (whether partition or
> not), we need to respect this difference...
> (Thus current code shows a link to the whole device, whether we are
> using a partition or not, which is wrong).
>
> Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Acked-by: Greg Kroah-Hartman <
.
>
> Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Nice, gotta love documentation :)
Acked-by: Greg Kroah-Hartman <[EMAIL PROTECTED]>
ed-off-by: Neil Brown <[EMAIL PROTECTED]>
Acked-by: Greg Kroah-Hartman <[EMAIL PROTECTED]>
Good afternoon, I hope the day is going well for everyone.
We are using a number of 3Ware 75xx and 8xxx controller cards to build
high capacity storage solutions on top of Linux with commodity drives.
We use the cards in JBOD mode and build software RAID5 on top of the
individual disks. As a resu
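A minimal sketch of that layout, with device names and chunk size purely
illustrative:
mdadm --create /dev/md0 --level=5 --raid-devices=8 --chunk=128 /dev/sd[b-i]1
mdadm --detail --scan >> /etc/mdadm.conf    # record the array for assembly at boot; conf path varies by distro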
l accessible
> size up.
Well, using many drives also cuts average latency. So even if you have no need
for more bandwidth, you still benefit from a lower average response time by
adding more drives.
--
greg
whether postgres performs differently with
fsync=off. This would even be a reasonable mode to run under for initial
database loads. It shouldn't make much of a difference with hardware like this
though. And you should be aware that running under this mode in production
would put your data at
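As a hedged sketch only, for an initial bulk load on a scratch instance
(the PGDATA path is an assumption, and running production this way risks
exactly the data loss warned about above):
echo "fsync = off" >> "$PGDATA/postgresql.conf"
pg_ctl restart -D "$PGDATA"
# run the bulk load, then remove the line and restart with fsync back on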