Re: Kernel 2.6.23.9 + mdadm 2.6.2-2 + Auto rebuild RAID1?

2007-12-06 Thread Nix
On 6 Dec 2007, Jan Engelhardt verbalised: > On Dec 5 2007 19:29, Nix wrote: >>> >>> On Dec 1 2007 06:19, Justin Piszcz wrote: >>> >>>> RAID1, 0.90.03 superblocks (in order to be compatible with LILO, if >>>> you use 1.x superblocks with

Re: Kernel 2.6.23.9 + mdadm 2.6.2-2 + Auto rebuild RAID1?

2007-12-05 Thread Nix
On 1 Dec 2007, Jan Engelhardt uttered the following: > > On Dec 1 2007 06:19, Justin Piszcz wrote: > >> RAID1, 0.90.03 superblocks (in order to be compatible with LILO, if >> you use 1.x superblocks with LILO you can't boot) > > Says who? (Don't use LILO ;-) Well, your kernels must be on a 0.90-s

Re: md device naming question

2007-09-24 Thread Nix
On 19 Sep 2007, maximilian attems said: > hello, > > working on initramfs i'd be curious to know what the /sys/block > entry of a /dev/md/NN device is. have a user request to support > it and no handy box using it. > > i presume it may also be /sys/block/mdNN ? That's it, e.g. /sys/block/md0. Not
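A quick way to confirm the mapping, a sketch assuming an array assembled as md0: the sysfs entry is keyed on the canonical mdNN name, whatever /dev/md/NN node you assembled it as.
    ls /sys/block/md0/md/             # per-array md attributes
    cat /sys/block/md0/md/array_state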

Re: Software based SATA RAID-5 expandable arrays?

2007-07-11 Thread Nix
On 11 Jul 2007, Michael stated: > I am running Suse, and the check program is not available `check' isn't a program. The line suggested has a typo: it should be something like this: 30 2 * * Mon echo check > /sys/block/md0/md/sync_action The only program that line needs is `echo' and I'm sure
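Spelled out, a sketch assuming the array is md0 (the Monday 02:30 slot is just the example from the thread):
    # user crontab entry: start a consistency check every Monday at 02:30
    30 2 * * Mon echo check > /sys/block/md0/md/sync_action
    # afterwards, watch progress and see whether mismatches were found
    cat /proc/mdstat
    cat /sys/block/md0/md/mismatch_cnt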

Re: spare not becoming active

2007-07-03 Thread Nix
On 3 Jul 2007, Simon spake thusly: > I have 3 identical drives, fresh made, I issue the command: > mdadm --create --verbose /dev/md1 --level=raid5 --raid-devices=3 > /dev/sdb2 /dev/sdc2 /dev/sdd2 OK... > /dev/md1: > Version : 00.90.03 > Creation Time : Tue Jul 3 16:29:38 2007 > Raid
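One thing worth knowing when reading that output: mdadm creates a new RAID-5 degraded and then reconstructs onto the final device, which is listed as a spare until the initial resync finishes. A quick way to watch it, using the /dev/md1 from the thread:
    cat /proc/mdstat          # shows "recovery = NN%" while the build runs
    mdadm --detail /dev/md1   # the last disk reads "spare rebuilding" until done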

Re: limits on raid

2007-06-21 Thread Nix
On 21 Jun 2007, Neil Brown stated: > I have that - apparently naive - idea that drives use strong checksum, > and will never return bad data, only good data or an error. If this > isn't right, then it would really help to understand what the cause of > other failures are before working out how to

Re: Software based SATA RAID-5 expandable arrays?

2007-06-19 Thread Nix
On 19 Jun 2007, Michael outgrape: [regarding `welcome to my killfile'] > Grow up man, and I thanks for the threat. I will take that into > account if anything bad happens to my computer system. Read and learn. All he's saying is `I am automatically ignoring

Re: below 10MB/s write on raid5

2007-06-13 Thread Nix
On 12 Jun 2007, Jon Nelson told this: > On Mon, 11 Jun 2007, Nix wrote: > >> On 11 Jun 2007, Justin Piszcz told this: >> loki:~# time dd if=/dev/md1 bs=1000 count=502400 of=/dev/null >> 502400+0 records in >> 502400+0 records out >> 502400000 bytes (502 MB) copi

Re: below 10MB/s write on raid5

2007-06-11 Thread Nix
On 11 Jun 2007, Justin Piszcz told this: > You can do a read test. > > 10gb read test: > > dd if=/dev/md0 bs=1M count=10240 of=/dev/null > > What is the result? > > I've read that LVM can incur a 30-50% slowdown. FWIW I see a much smaller penalty than that. loki:~# lvs -o +devices LV
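To measure the penalty yourself, a rough sketch assuming a hypothetical LV /dev/myvg/mylv that lives on /dev/md0:
    # 1GB sequential read from the raw md device
    dd if=/dev/md0 bs=1M count=1024 of=/dev/null
    # empty the page cache so the next run isn't served from RAM (2.6.16+)
    echo 3 > /proc/sys/vm/drop_caches
    # the same read through device-mapper
    dd if=/dev/myvg/mylv bs=1M count=1024 of=/dev/null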

Re: RAID SB 1.x autodetection

2007-05-30 Thread Nix
On 30 May 2007, Bill Davidsen stated: > Nix wrote: >> On 29 May 2007, Jan Engelhardt uttered the following: >> >> >>> from your post at >>> http://www.mail-archive.com/linux-raid@vger.kernel.org/msg07384.html I read >>> that autodetecting arrays w

Re: RAID SB 1.x autodetection

2007-05-30 Thread Nix
On 29 May 2007, Jan Engelhardt uttered the following: > from your post at > http://www.mail-archive.com/linux-raid@vger.kernel.org/msg07384.html I > read that autodetecting arrays with a 1.x superblock is currently > impossible. Does it at least work to force the kernel to always assume a > 1.
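(The in-kernel autodetector only understands 0.90 superblocks on type-fd partitions, so with 1.x metadata assembly has to happen in userspace; in an initramfs a single line suffices:)
    mdadm --assemble --scan --auto=md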

Re: Recovery of software RAID5 using FC6 rescue?

2007-05-09 Thread Nix
On 9 May 2007, Michael Tokarev spake thusly: > Nix wrote: >> On 8 May 2007, Michael Tokarev told this: >>> BTW, for such recovery purposes, I use initrd (initramfs really, but >>> does not matter) with a normal (but tiny) set of commands inside, >>> thanks to

Re: Recovery of software RAID5 using FC6 rescue?

2007-05-09 Thread Nix
On 8 May 2007, Michael Tokarev told this: > BTW, for such recovery purposes, I use initrd (initramfs really, but > does not matter) with a normal (but tiny) set of commands inside, > thanks to busybox. So everything can be done without any help from > external "recovery CD". Very handy at times,
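A minimal sketch of building such an initramfs, assuming a static busybox and a statically linked mdadm (called mdadm.static here):
    mkdir -p initramfs/bin initramfs/dev initramfs/proc initramfs/sys
    mknod initramfs/dev/console c 5 1
    cp busybox mdadm.static initramfs/bin/
    cat > initramfs/init <<'EOF'
    #!/bin/busybox sh
    /bin/busybox --install -s /bin
    mount -t proc proc /proc
    mount -t sysfs sysfs /sys
    /bin/mdadm.static --assemble --scan --auto=md
    exec sh     # a recovery shell, with the arrays already up
    EOF
    chmod +x initramfs/init
    (cd initramfs && find . | cpio -o -H newc | gzip) > initrd.img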

Re: raid6 array , part id 'fd' not assembling at boot .

2007-03-31 Thread Nix
On 19 Mar 2007, James W. Laferriere outgrabe: > What I don't see is the reasoning behind the use of initrd. It's a > kernel run to put the dev tree in order, start up devices, ... Just to > start the kernel again? That's not what initrds do. No second kernel is started, and

Re: mdadm file system type check

2007-03-17 Thread Nix
On 17 Mar 2007, Chris Lindley told this: > What I think the OP is getting at is that MDADM will create an array > with partitions whose type is not set to FD (Linux Raid Auto), but are > perhaps 83. > > The issue with that is that upon a reboot mdadm will not be able to > start the array. I think

Re: PATA/SATA Disk Reliability paper

2007-02-22 Thread Nix
On 22 Feb 2007, [EMAIL PROTECTED] uttered the following: > On 20 Feb 2007, Al Boldi outgrape: >> Eyal Lebedinsky wrote: >>> Disks are sealed, and a desiccant is present in each to keep humidity >>> down. If you ever open a disk drive (e.g. for the magnets, or the mirror >>> quality platters, or fo

Re: PATA/SATA Disk Reliability paper

2007-02-22 Thread Nix
On 20 Feb 2007, Al Boldi outgrape: > Eyal Lebedinsky wrote: >> Disks are sealed, and a desiccant is present in each to keep humidity >> down. If you ever open a disk drive (e.g. for the magnets, or the mirror >> quality platters, or for fun) then you can see the desiccant sachet. > > Actually, they

Re: Ooops on read-only raid5 while unmounting as xfs

2007-01-24 Thread Nix
On 23 Jan 2007, Neil Brown said: > On Tuesday January 23, [EMAIL PROTECTED] wrote: >> >> My question is then: what prevents the upper layer from opening the array >> read-write, submitting a write and making the md code BUG_ON()? > > The theory is that when you tell an md array to become read-only, it > t
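(For reference, the transition is driven from userspace, e.g.:)
    mdadm --readonly /dev/md0    # no further writes accepted
    mdadm --readwrite /dev/md0   # back to normal operation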

Re: bad performance on RAID 5

2007-01-21 Thread Nix
On 18 Jan 2007, Bill Davidsen spake thusly: > Steve Cousins wrote: >> time dd if=/dev/zero of=/mount-point/test.dat bs=1024k count=1024 > That doesn't give valid (repeatable) results due to caching issues. Go > back to the thread I started on RAID-5 write, and see my results. More > important, th

Re: FailSpare event?

2007-01-15 Thread Nix
On 14 Jan 2007, Neil Brown told this: > A quick look suggests that the following patch might make a > difference, but there is more to it than that. I think there are > subtle differences due to the use of version-1 superblocks. That > might be just another one-line change, but I want to make sur

Re: FailSpare event?

2007-01-15 Thread Nix
On 15 Jan 2007, Bill Davidsen told this: > Nix wrote: >> Number Major Minor RaidDevice State >> 0 8 6 0 active sync /dev/sda6 >> 1 8 22 1 active sync /dev/sdb6 >> 3 225

Re: FailSpare event?

2007-01-14 Thread Nix
On 13 Jan 2007, [EMAIL PROTECTED] uttered the following: > mdadm-2.6 bug, I fear. I haven't tracked it down yet but will look > shortly: I can't afford to not run mdadm --monitor... odd, that > code hasn't changed during 2.6 development. Whoo! Compile Monitor.c without optimization and the problem
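One way to repeat the experiment, relying on make's rule that a variable set on the command line overrides the Makefile's own setting:
    rm -f Monitor.o
    make CFLAGS="-Wall -g -O0" mdadm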

Re: FailSpare event?

2007-01-13 Thread Nix
On 13 Jan 2007, [EMAIL PROTECTED] uttered the following: > On 12 Jan 2007, Ernst Herzberg told this: >> Then about every 60 sec, 4 times >> >> event=SpareActive >> mddev=/dev/md3 > > I see exactly this on both my RAID-5 arrays, neither of which have any > spare device --- nor have any active device

Re: FailSpare event?

2007-01-13 Thread Nix
On 13 Jan 2007, [EMAIL PROTECTED] spake thusly: > On 12 Jan 2007, Ernst Herzberg told this: >> Then about every 60 sec, 4 times >> >> event=SpareActive >> mddev=/dev/md3 > > I see exactly this on both my RAID-5 arrays, neither of which have any > spare device --- nor have any active devices transit

Re: FailSpare event?

2007-01-13 Thread Nix
On 12 Jan 2007, Ernst Herzberg told this: > Then about every 60 sec, 4 times > > event=SpareActive > mddev=/dev/md3 I see exactly this on both my RAID-5 arrays, neither of which have any spare device --- nor have any active devices transitioned to spare (which is what that event is actually suppose
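For anyone trying to reproduce this, the events come from the monitor; a typical invocation is:
    # poll every 60s, run as a daemon, mail alerts to root
    mdadm --monitor --scan --daemonise --delay=60 --mail=root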

Re: Linux RAID version question

2006-11-27 Thread Nix
On 27 Nov 2006, Dragan Marinkovic stated: > On 11/26/06, Nix <[EMAIL PROTECTED]> wrote: >> Well, I assemble my arrays with the command >> >> /sbin/mdadm --assemble --scan --auto=md [...] >> No metadata versions needed anywhere. [...] > But you do have to
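(The matching mdadm.conf is nothing more than a device list plus per-array UUIDs; a sketch with placeholder UUIDs:)
    DEVICE partitions
    ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
    ARRAY /dev/md1 UUID=yyyyyyyy:yyyyyyyy:yyyyyyyy:yyyyyyyy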

Re: Linux RAID version question

2006-11-26 Thread Nix
On 25 Nov 2006, Dragan Marinkovic stated: > Hm, I was playing with RAID 5 with one spare (3 + 1) and metadata > version 1.2 . If I let it build to some 10% and cleanly reboot it does > not start where it left off -- basically it starts from scratch. I was > under the impression that RAID with metad

Re: invalid (zero) superblock magic upon creation of a new RAID-1 array

2006-11-06 Thread Nix
On 6 Nov 2006, Thomas Andrews uttered the following: > Thanks Neil, I fixed my problem by creating the raid set using the "-e" > option: > > mdadm -C /dev/md0 -e 0.90 --level=raid1 --raid-devices=2 /dev/sda1 > /dev/sdb1 > > Your suggestion to use mdadm to assemble the array is not an option

Re: why partition arrays?

2006-10-22 Thread Nix
On 21 Oct 2006, Bodo Thiesen yowled: > was hdb and what was hdd? And hde? Hmmm ...), so we decided the following > structure: > > hda -> vg called raida -> creating LVs called raida1..raida4 > hdb -> vg called raidb -> creating LVs called raidb1..raidb4 I'm interested: why two VGs? Why not have

Re: Starting point of the actual RAID data area

2006-10-11 Thread Nix
On 8 Oct 2006, Daniel Pittman said: > Jyri Hovila <[EMAIL PROTECTED]> writes: >> I would appreciate it a lot if somebody could give me a hand here. All >> I need to understand right now is how I can find out the first sector >> of the actual RAID data. I'm starting with a simple configuration, >> w

Re: Recipe for Mirrored OS Drives

2006-10-03 Thread Nix
On Tue, 03 Oct 2006, David Greaves prattled cheerily: > FYI I've done quite a bit on the Howto section: > http://linux-raid.osdl.org/index.php/Overview Ka wow. > It still needs a lot of work I think but it's getting there... Yeah: the `booting on RAID' and RAID_Boot could be merged, and it certa

Re: Recipe for Mirrored OS Drives

2006-10-02 Thread Nix
On 2 Oct 2006, David Greaves spake: > I suggest you link from http://linux-raid.osdl.org/index.php/RAID_Boot The pages don't really have the same purpose. RAID_Boot is `how to boot your RAID system using initramfs'; this is `how to set up a RAID system in the first place', i.e., setup. I'll give

Re: Care and feeding of RAID?

2006-09-09 Thread Nix
On 5 Sep 2006, Paul Waldo uttered the following: > What about bitmaps? Nobody has mentioned them. It is my > understanding that you just turn them on with "mdadm /dev/mdX -b > internal". Any caveats for this? Notably, how many additional writes does it incur? I have some RAID arrays using drive
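The switch itself is a one-liner in grow mode (which is what the -b shorthand above amounts to); a larger bitmap chunk means fewer bitmap updates at the cost of coarser resyncs. A sketch assuming md0:
    mdadm --grow --bitmap=internal /dev/md0
    # to change the chunk size, remove the bitmap and re-add it
    mdadm --grow --bitmap=none /dev/md0
    mdadm --grow --bitmap=internal --bitmap-chunk=65536 /dev/md0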

Re: Care and feeding of RAID?

2006-09-09 Thread Nix
On 6 Sep 2006, Mario Holbe spake: > You don't necessarily need one. However, since Neil considers in-kernel > RAID-autodetection a bad thing and since mdadm typically relies on > mdadm.conf for RAID-assembly You can specify the UUID on the command-line too (although I don't). The advantage of the
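e.g., with a placeholder UUID (mdadm scans the devices named by mdadm.conf's DEVICE line, or every partition if there is none):
    mdadm --assemble /dev/md0 --auto=md --uuid=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx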

Re: Making bootable SATA RAID1 array in Mandriva 2006

2006-08-16 Thread Nix
On 16 Aug 2006, Justin Piszcz murmured woefully: > > -- snip -- > > If you are using a custom compiled kernel, why on earth would you want to use > an initrd?

Re: remark and RFC

2006-08-16 Thread Nix
On 16 Aug 2006, Molle Bestefich murmured woefully: > Peter T. Breuer wrote: >> > The comm channel and "hey, I'm OK" message you propose doesn't seem >> > that different from just hot-adding the disks from a shell script >> > using 'mdadm'. >> >> [snip speculations on possible blocking calls] > > Y

Re: raid5/lvm setup questions

2006-08-07 Thread Nix
On 5 Aug 2006, David Greaves prattled cheerily: > As an example of the cons: I've just set up lvm2 over my raid5 and whilst > testing snapshots, the first thing that happened was a kernel BUG and an > oops... I've been backing up using writable snapshots on LVM2 over RAID-5 for some time. No BUGs
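The pattern, sketched with hypothetical VG/LV names:
    # writable snapshot of the volume being backed up
    lvcreate --snapshot --size 1G --name homesnap /dev/vg0/home
    mount /dev/vg0/homesnap /mnt/snap
    rsync -a /mnt/snap/ /backup/home/
    umount /mnt/snap
    lvremove -f /dev/vg0/homesnap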

Re: md reports: unknown partition table - fixed.

2006-07-22 Thread Nix
On 20 Jul 2006, Neil Brown uttered the following: > On Tuesday July 18, [EMAIL PROTECTED] wrote: >> >> I think there's a bug here somewhere. I wonder/suspect that the >> superblock should contain the fact that it's a partitioned/able md device? > > I've thought about that and am not in favour. >

Re: only 4 spares and no access to my data

2006-07-18 Thread Nix
On 18 Jul 2006, Neil Brown moaned: > The superblock locations for sda and sda1 can only be 'one and the > same' if sda1 is at an offset in sda which is a multiple of 64K, and > if sda1 ends near the end of sda. This certainly can happen, but it > is by no means certain. > > For this reason, versi
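This is easy to see with --examine: a 0.90 superblock sits at a 64K-aligned offset near the end of the device, so the whole disk and its last partition can both appear to carry one, while 1.1 and 1.2 superblocks sit at (or 4K from) the start of the device:
    mdadm --examine /dev/sda     # with 0.90, may 'find' the superblock that belongs to sda1
    mdadm --examine /dev/sda1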

Re: Still can't get md arrays that were started from an initrd to shutdown

2006-07-17 Thread Nix
On 17 Jul 2006, Christian Pernegger suggested tentatively: > I'm still having problems with some md arrays not shutting down > cleanly on halt / reboot. > > The problem seems to affect only arrays that are started via an > initrd, even if they do not have the root filesystem on them. > That's all

Re: mdadm 2.5.2 - Static built , Interesting warnings when

2006-06-27 Thread Nix
On 27 Jun 2006, James W. Laferriere uttered the following: > Hello All, What change in Glibc makes this necessary? Is there a glibc 2.x has always had the requirement that lookups that use the NSS mechanism require the use of dynamically linked libraries. This is not new. Solaris (from

Re: Multiple raids on one machine?

2006-06-27 Thread Nix
On Tue, 27 Jun 2006, Chris Allen wondered: > Nix wrote: >> There is a third alternative which can be useful if you have a mess of >> drives of widely-differing capacities: make several RAID arrays so as to >> tessellate >> space across all the drives, and then pile an LV
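Concretely, a sketch with hypothetical arrays:
    # two arrays tessellated across drives of differing sizes
    pvcreate /dev/md1 /dev/md2
    vgcreate vg0 /dev/md1 /dev/md2
    # a single LV can now span both arrays
    lvcreate --size 500G --name bulk vg0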

Re: [PATCH*2] mdadm works with uClibc from SVN

2006-06-27 Thread Nix
On Tue, 27 Jun 2006, Neil Brown prattled cheerily: > On Tuesday June 27, [EMAIL PROTECTED] wrote: >> ,[ config.c:load_partitions() ] >> | name = map_dev(major, minor, 1); >> | >> | d = malloc(sizeof(*d)); >> | d->devname = strdup(name); >> ` >> > > Ahh.. uhmmm... Oh yes. I've fixed that

Re: Multiple raids on one machine?

2006-06-27 Thread Nix
On 25 Jun 2006, Chris Allen uttered the following: > Back to my 12 terabyte fileserver, I have decided to split the storage > into four partitions each of 3TB. This way I can choose between XFS > and EXT3 later on. > > So now, my options are between the following: > > 1. Single 12TB /dev/md0, par
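Option 1 in mdadm terms is a partitionable array; a sketch assuming the underlying disks are /dev/sd[a-l]1 (hypothetical):
    # md_d0 is a partitionable ("mdp") array; --auto=part creates the nodes
    mdadm --create /dev/md_d0 --auto=part --level=raid5 --raid-devices=12 /dev/sd[a-l]1
    fdisk /dev/md_d0    # carve the four partitions: /dev/md_d0p1 .. /dev/md_d0p4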

Re: [PATCH*2] mdadm works with uClibc from SVN

2006-06-27 Thread Nix
On 26 Jun 2006, Neil Brown said: > On Tuesday June 20, [EMAIL PROTECTED] wrote: >> For some time, mdadm's been dumping core on me in my uClibc-built >> initramfs. As you might imagine this is somewhat frustrating, not least >> since my root filesystem's in LVM on RAID. Half an hour ago I got around

Re: [PATCH*2] mdadm works with uClibc from SVN

2006-06-24 Thread Nix
On Sat, 24 Jun 2006, Luca Berra said: > On Fri, Jun 23, 2006 at 08:45:47PM +0100, Nix wrote: >>On Fri, 23 Jun 2006, Neil Brown mused: >>> Is there some #define in an include file which will allow me to tell >>> if the current uclibc supports ftw or not? > > it is

Re: Large single raid and XFS or two small ones and EXT3?

2006-06-23 Thread Nix
On 23 Jun 2006, Francois Barre uttered the following: >> The problem is that there is no cost effective backup available. > > One-liner questions : > - How does Google make backups ? Replication across huge numbers of cheap machines on a massively distributed filesystem.

Re: Large single raid and XFS or two small ones and EXT3?

2006-06-23 Thread Nix
On 23 Jun 2006, PFC suggested tentatively: > - ext3 is slow if you have many files in one directory, but has > more mature tools (resize, recovery etc) This is much less true if you turn on the dir_index feature.
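Turning it on for an existing filesystem (md0 here is just an example device), then rebuilding the hashes for directories that predate the flag:
    tune2fs -O dir_index /dev/md0
    umount /dev/md0 && e2fsck -fD /dev/md0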

Re: Large single raid and XFS or two small ones and EXT3?

2006-06-23 Thread Nix
On 23 Jun 2006, Christian Pedaschus said: > and my main points for using ext3 is still: "it's a very mature fs, > nobody will tell you such horrible stories about data-lossage with ext3 > as with any other filesystem." Actually I can, but it required bad RAM *and* a broken disk controller *and* a

Re: [PATCH*2] mdadm works with uClibc from SVN

2006-06-23 Thread Nix
On Fri, 23 Jun 2006, Neil Brown mused: > On Friday June 23, [EMAIL PROTECTED] wrote: >> On 20 Jun 2006, [EMAIL PROTECTED] prattled cheerily: >> > For some time, mdadm's been dumping core on me in my uClibc-built >> > initramfs. As you might imagine this is somewhat frustrating, not least >> > since

Re: [PATCH*2] mdadm works with uClibc from SVN

2006-06-23 Thread Nix
On 20 Jun 2006, [EMAIL PROTECTED] prattled cheerily: > For some time, mdadm's been dumping core on me in my uClibc-built > initramfs. As you might imagine this is somewhat frustrating, not least > since my root filesystem's in LVM on RAID. Half an hour ago I got around > to debugging this. Ping?

Re: [PATCH*2] mdadm works with uClibc from SVN

2006-06-20 Thread Nix
On 20 Jun 2006, [EMAIL PROTECTED] suggested tentatively: > Imagine my surprise when I found that it was effectively guaranteed to > crash: map_dev() in util.c is stubbed out for uClibc builds, and > returns -1 at all times. Um, that is, returns NULL. Obviously.

[PATCH*2] mdadm works with uClibc from SVN

2006-06-20 Thread Nix
For some time, mdadm's been dumping core on me in my uClibc-built initramfs. As you might imagine this is somewhat frustrating, not least since my root filesystem's in LVM on RAID. Half an hour ago I got around to debugging this. Imagine my surprise when I found that it was effectively guaranteed

Re: RAID tuning?

2006-06-14 Thread Nix
On 13 Jun 2006, Gordon Henderson said: > On Tue, 13 Jun 2006, Adam Talbot wrote: >> Can any one give me more info on this error? Pulled from >> /var/log/messages. >> "raid6: read error corrected!!" > > Not seen that one!!! The message is pretty easy to figure out and the code (in drivers/md/raid

Re: Problems with device-mapper on top of RAID-5 and RAID-6

2006-06-05 Thread Nix
On 2 Jun 2006, Uwe Meyer-Gruhl uttered the following: > Neil's suggestion indicates that there may be a race condition > stacking md and dm over each other, but I have not yet tested that > patch. I once had problems stacking cryptoloop over RAID-6, so it > might really be a stacking problem. We do

Re: [PATCH] mdadm 2.5

2006-06-05 Thread Nix
On 29 May 2006, Neil Brown suggested tentatively: > On Sunday May 28, [EMAIL PROTECTED] wrote: >> - mdadm-2.4-strict-aliasing.patch >> fix for another strict-aliasing problem, you can typecast a reference to a >> void pointer to anything, you cannot typecast a reference to a >> struct. > > Why can'

Re: Does software RAID take advantage of SMP, or 64 bit CPU(s)?

2006-05-25 Thread Nix
On 23 May 2006, Neil Brown noted: > On Monday May 22, [EMAIL PROTECTED] wrote: >> A few simple questions about the 2.6.16+ kernel and software RAID. >> Does software RAID in the 2.6.16 kernel take advantage of SMP? > > Not exactly. RAID5/6 tends to use just one cpu for parity > calculations, but
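(You can see the single-threaded parity code picking its routine in the boot log; the throughput figure below is illustrative:)
    dmesg | grep -i 'raid[56]'
    # raid5: automatically using best checksumming function: pIII_sse
    # raid6: using algorithm sse2x2 (4019 MB/s)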

Re: problems with raid=noautodetect - solved

2006-05-25 Thread Nix
On 24 May 2006, Florian Dazinger uttered the following: > Neil Brown wrote: >> Presumably you have a 'DEVICE' line in mdadm.conf too? What is it. >> My first guess is that it isn't listing /dev/sdd? somehow. >> Otherwise, can you add a '-v' to the mdadm command that assembles the >> array, and cap

Re: xfs or ext3?

2006-05-10 Thread Nix
On 10 May 2006, Dexter Filmore wrote: > Do I have to provide stride parameter like for ext2? Yes, definitely. -- `On a scale of 1-10, X's "brokenness rating" is 1.1, but that's only because bringing Windows into the picture rescaled "brokenness" by a factor of 10.' --- Peter da Silva
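For reference, stride is the md chunk size divided by the filesystem block size; e.g. a 64KiB chunk with 4KiB blocks gives stride=16:
    # hypothetical: 64KiB chunk, 4KiB ext3 blocks
    mke2fs -j -b 4096 -E stride=16 /dev/md0    # older e2fsprogs spell it -R stride=16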

Re: disks becoming slow but not explicitly failing anyone?

2006-04-24 Thread Nix
On 23 Apr 2006, Mark Hahn stipulated: >> I've seen a lot of cheap disks say (generally deep in the data sheet >> that's only available online after much searching and that nobody ever >> reads) that they are only reliable if used for a maximum of twelve hours >> a day, or 90 hours a week, or someth

Re: disks becoming slow but not explicitly failing anyone?

2006-04-23 Thread Nix
On 23 Apr 2006, Mark Hahn said: > some people claim that if you put a normal (desktop) > drive into a 24x7 server (with real round-the-clock load), you should > expect failures quite promptly. I'm inclined to believe that with > MTBF's upwards of 1M hour, vendors would not clai

Re: naming of md devices

2006-03-24 Thread Nix
On 23 Mar 2006, Dan Christensen moaned: > To answer myself, the boot parameter raid=noautodetect is supposed > to turn off autodetection. However, it doesn't seem to have an > effect with Debian's 2.6.16 kernel. It does disable autodetection > for my self-compiled kernel, but since that kernel ha

Re: naming of md devices

2006-03-24 Thread Nix
On 23 Mar 2006, Daniel Pittman uttered the following: > The initramfs tool, which is mostly shared with Ubuntu, is less stupid. > It uses mdadm and a loop to scan through the devices found on the > machine and find what RAID levels are required, then builds the RAID > arrays with mdrun. That's muc

Re: naming of md devices

2006-03-22 Thread Nix
On 22 Mar 2006, Dan Christensen prattled cheerily: > I currently use kernel autodetection of my raid devices. I'm finding > that if I use a stock Debian kernel versus a self-compiled kernel > (2.6.15.6), the arrays md0 and md1 are switched, which creates a > problem mounting my root filesystem. >

Re: A random initramfs script

2006-03-17 Thread Nix
On Fri, 17 Mar 2006, Andre Noll murmured woefully: > On 00:41, Nix wrote: > >> > So I downloaded iproute2-2.4.7-now-ss020116-try.tar.gz, but there >> > seems to be a problem with errno.h: >> >> Holy meatballs that's ancient. > > It is the most re

Re: A random initramfs script

2006-03-16 Thread Nix
On Fri, 17 Mar 2006, Andre Noll stated: > On 07:50, Nix wrote: >> If / was a ramfs (as rootfs is), you'd run out of memory... > > Yes, it's an additional piece of rope, and I already used it to shoot > myself in the foot by doing a backup with "rsync -a /home /mn

Re: A random initramfs script

2006-03-15 Thread Nix
On Thu, 16 Mar 2006, Neil Brown wrote: > On Wednesday March 15, [EMAIL PROTECTED] wrote: >> On 08:29, Nix wrote: >> > Yeah, that would work. Neil's very *emphatic* about hardwiring the UUIDs of >> > your arrays, though I'll admit that given the existence of

Re: A random initramfs script

2006-03-15 Thread Nix
On Wed, 15 Mar 2006, Andre Noll gibbered uncontrollably: > On 21:37, Nix wrote: > >> In the interests of pushing people away from in-kernel autodetection, >> I thought I'd provide the initramfs script I just knocked up to boot >> my RAID+LVM system. It's had

A random initramfs script

2006-03-14 Thread Nix
algorithm 2 [...] raid5: device sdb7 operational as raid disk 0 raid5: device hda5 operational as raid disk 2 raid5: device sda7 operational as raid disk 1 raid5: allocated 3155kB for md2 raid5: raid level 5 set md2 active with 3 out of 3 devices, algorithm 2 Anyway, without further ado,