On Wednesday 28 July 2004 10:14, Richard Marshall wrote:
> Hi Justin,
>
> > Out of curiosity, what does the line look like?
>
> I have tried various options, but right now it looks like this:
>
> proc       /proc            proc     defaults                    0 0
> /dev/md0   /                ext3     defaults,errors=remount-ro  0 1
> /dev/sda1  /boot            ext3     defaults                    0 2
> /dev/sdb1  /boot2           ext3     defaults                    0 2
> /dev/md1   /share           xfs      defaults                    0 2
> /dev/sda2  none             swap     sw                          0 0
> /dev/sdb2  none             swap     sw                          0 0
> /dev/hdc   /media/cdrom0    iso9660  ro,user,noauto              0 0
> /dev/fd0   /media/floppy0   auto     rw,user,noauto              0 0

OK, this looks fine.
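As a quick aside (not something from the thread itself), a malformed fstab line is easy to catch mechanically: every non-comment entry should have exactly six fields. A small awk sketch, run here against a few of the entries posted above:

```shell
# Sanity check: each non-blank, non-comment fstab line should have exactly
# six fields (device, mountpoint, fstype, options, dump, pass).
# The sample lines below are copied from the fstab posted above.
awk '!/^#/ && NF > 0 && NF != 6 { bad = 1; print "bad entry on line " NR }
     END { exit bad }' <<'EOF' && echo "fstab fields look OK"
proc       /proc    proc  defaults                    0 0
/dev/md0   /        ext3  defaults,errors=remount-ro  0 1
/dev/md1   /share   xfs   defaults                    0 2
EOF
```

This would have flagged the original wrapped paste, where the pass number of the /dev/md0 line spilled onto the next line.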
> > Since you've got your raid going (after the manual start), you can
> > create your config file automatically. Just issue the command "mdadm
> > --detail --scan" and redirect the output to /etc/mdadm.conf. You'll
> > then have to manually add the device line, and even though you can, I
> > wouldn't use wildcards; I'd spell out each device involved in the raid.
> > In your case, I'm guessing the file could end up looking like this (at
> > least for md1):
>
> I had tried that, but kept coming across conflicting info on whether the
> devices should be explicitly defined. Today I have modified it so that it
> is pretty much as you have suggested below (but without the devices line,
> which I will now add). However, I have the device and array keywords in
> CAPS; should I use lowercase instead?

You should probably use caps. I was just too lazy to press caps-lock.
Sorry for the confusion.

> > device /dev/sda5 /dev/sdb5 /dev/sda3 /dev/sdb3
> > array /dev/md1 UUID=263f5308:d2877768:142f22b5:c434a317
> > devices=/dev/sda5,/dev/sdb5
> >
> > Of course, the last two lines are actually one line in the config file.
> >
> > Note that while you don't need a config file, without one "you'd need to
> > specify more detailed information about an array on the command line in
> > order to activate it." [1] Also note that you only need one device line
> > for the config file. You don't need one per array.
> >
> > You can confirm that md is a module by either looking
> > at /boot/config-2.6.7-1-686 or by looking at the output of lsmod.
> > Either way, the raid capability is there, as your first raid array is
> > built and mounted correctly upon boot.
>
> OK, can confirm that md is a module, and also that I am using an initrd
> image to get the machine to mount the / filesystem.

I wonder if this is an order problem?
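Putting Justin's pieces together, the config file sketched in this exchange would look something like the fragment below. The UUID shown is Justin's earlier guess; it must be replaced with whatever "mdadm -E" actually reports for the member partitions, and (as the mdadm.conf man page notes) the keywords are case-insensitive, so DEVICE/ARRAY and device/array both work:

```
# /etc/mdadm.conf -- sketch only, based on this thread.
# One DEVICE line covers all arrays; each array gets its own ARRAY line.
DEVICE /dev/sda5 /dev/sdb5 /dev/sda3 /dev/sdb3
ARRAY /dev/md1 UUID=263f5308:d2877768:142f22b5:c434a317 devices=/dev/sda5,/dev/sdb5
```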
It seems that your config file is set up properly for the first array, and
I'd guess it's OK for the second array, especially if "mdadm -As /dev/md1"
works after a reboot. If it does, that leaves the file system as the only
difference. Did you add the XFS module to the initial ram disk? It probably
needs to be in there: /share is likely mounted at the same time as /, so
you can't count on the XFS driver being loaded before the kernel tries to
mount the file systems. If, on the other hand, you can't start the array
without specifying the partitions involved, then see below -- your config
file may have a problem.

> > While you're searching for the md module, you might also check the xfs
> > module. I'm pretty sure it's loaded, as the last three lines of dmesg
> > seem to indicate, but you should probably check just to be sure.
>
> yup, it's there.

OK, but I wonder if your problem will go away if XFS is in the initrd.

> > If setting up your config file fails, post the output of "mdadm
> > -E /dev/sda5" and "mdadm -E /dev/sdb5".
>
> will try the config file and reboot in a sec.
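For reference, on Debian of this era the initrd was built by initrd-tools' mkinitrd, which reads extra modules from /etc/mkinitrd/modules (an assumption about Richard's setup; other initrd generators use different files). Getting XFS and the raid modules into the image would then be a config change plus a rebuild:

```
# /etc/mkinitrd/modules -- list modules needed before the root (and /share)
# file systems are mounted; one module name per line
md
raid1
xfs
```

After editing, rebuild the image for the kernel mentioned earlier in the thread, e.g. "mkinitrd -o /boot/initrd.img-2.6.7-1-686 2.6.7-1-686", and make sure the bootloader entry points at the new initrd.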
> In the meantime, here's the output anyway:
>
> file-srvdeb:/home/rich# mdadm -E /dev/sda5
> /dev/sda5:
>           Magic : a92b4efc
>         Version : 00.90.00
>            UUID : 5c158e6e:dc2e6e64:ad9592f8:c72b0868
>   Creation Time : Tue Jul 27 15:45:41 2004
>      Raid Level : raid1
>     Device Size : 60934400 (58.11 GiB 62.40 GB)
>    Raid Devices : 2
>   Total Devices : 2
> Preferred Minor : 1
>
>     Update Time : Wed Jul 28 16:45:50 2004
>           State : clean, no-errors
>  Active Devices : 2
> Working Devices : 2
>  Failed Devices : 0
>   Spare Devices : 0
>        Checksum : dbdfe8b0 - correct
>          Events : 0.390
>
>       Number   Major   Minor   RaidDevice State
> this     0       8        5        0      active sync   /dev/sda5
>    0     0       8        5        0      active sync   /dev/sda5
>    1     1       8       21        1      active sync   /dev/sdb5
>
> file-srvdeb:/home/rich# mdadm -E /dev/sdb5
> /dev/sdb5:
>           Magic : a92b4efc
>         Version : 00.90.00
>            UUID : 5c158e6e:dc2e6e64:ad9592f8:c72b0868
>   Creation Time : Tue Jul 27 15:45:41 2004
>      Raid Level : raid1
>     Device Size : 60934400 (58.11 GiB 62.40 GB)
>    Raid Devices : 2
>   Total Devices : 2
> Preferred Minor : 1
>
>     Update Time : Wed Jul 28 16:45:50 2004
>           State : clean, no-errors
>  Active Devices : 2
> Working Devices : 2
>  Failed Devices : 0
>   Spare Devices : 0
>        Checksum : dbdfe8c2 - correct
>          Events : 0.390
>
>       Number   Major   Minor   RaidDevice State
> this     1       8       21        1      active sync   /dev/sdb5
>    0     0       8        5        0      active sync   /dev/sda5
>    1     1       8       21        1      active sync   /dev/sdb5
>
> thanks again,
>
> Rich

Hmm, the UUIDs that mdadm reports from the running array do not match the
one in the config file snippet you posted. That is definitely a problem:
change the value in the config file to match the UUID reported by the
actual array.

HTH

Justin Guerin
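The mismatch Justin points out can be checked mechanically. A minimal sketch, with the values pasted from the -E output and the config snippet above (on a live system you would pull them with something like `mdadm -E /dev/sda5 | awk '/UUID/ {print $3}'`):

```shell
# UUIDs pasted from the "mdadm -E" output above
uuid_sda5="5c158e6e:dc2e6e64:ad9592f8:c72b0868"
uuid_sdb5="5c158e6e:dc2e6e64:ad9592f8:c72b0868"
# UUID from the earlier mdadm.conf snippet
conf_uuid="263f5308:d2877768:142f22b5:c434a317"

# Both members of a raid1 pair must carry the same UUID...
[ "$uuid_sda5" = "$uuid_sdb5" ] && echo "members agree"

# ...and the config file must use that same UUID, or assembling from the
# config file alone ("mdadm -As") will not find the array.
if [ "$conf_uuid" != "$uuid_sda5" ]; then
    echo "mdadm.conf UUID is stale -- update it to $uuid_sda5"
fi
```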