I upgraded my server to beowulf. After rebooting, all home directories except root's are inaccessible.
They are all on an LVM on top of software RAID. The problem seems to be that two of my three RAID1 arrays are not starting up properly. What can I do about it?

hendrik@april:/$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : inactive sda2[3](S)
      2391296000 blocks super 1.2

md2 : inactive sda3[0](S)
      1048512 blocks

md0 : active raid1 sdf4[1]
      706337792 blocks [2/1] [_U]

unused devices: <none>
hendrik@april:/$

hendrik@april:/$ cat /etc/mdadm/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=4dc189ba:e7a12d38:e6262cdf:db1beda2
ARRAY /dev/md1 metadata=1.2 name=april:1 UUID=c328565c:16dce536:f16da6e2:db603645
ARRAY /dev/md2 UUID=5d63f486:183fd2ea:c2a3a88f:cb2b61de
MAILADDR root
hendrik@april:/$

The standard recommendation seems to be to replace the ARRAY lines in /etc/mdadm/mdadm.conf with the lines produced by mdadm --examine --scan:

april:~# mdadm --examine --scan
ARRAY /dev/md/1  metadata=1.2 UUID=c328565c:16dce536:f16da6e2:db603645 name=april:1
ARRAY /dev/md2 UUID=5d63f486:183fd2ea:c2a3a88f:cb2b61de
ARRAY /dev/md0 UUID=4dc189ba:e7a12d38:e6262cdf:db1beda2
april:~#

But this replacement involves changing a line that does work (md0), leaving untouched one that does not (md2), and changing another that does not work (md1). As far as I can tell, the UUIDs in the scan output match the ones already in mdadm.conf, and the differences look mostly cosmetic (e.g. /dev/md/1 versus /dev/md1). Since --examine's suggested changes seem uncorrelated with which arrays came up active or inactive, I have little faith in this alleged fix without first gaining more understanding.
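One thing I can do in the meantime is compare the on-disk superblocks with what the kernel currently thinks. A minimal diagnostic pass would be something like the following (device names taken from the mdstat output above; the exact output format depends on the mdadm version beowulf ships):

  # Examine the superblocks of the members that came up as (S) spares
  mdadm --examine /dev/sda2
  mdadm --examine /dev/sda3

  # Compare with the kernel's current view of the half-assembled arrays
  mdadm --detail /dev/md1
  mdadm --detail /dev/md2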
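If the superblocks look sane, my understanding (untested on beowulf, so treat this as a sketch rather than a recipe) is that an inactive array whose members show up as (S) can often be recovered by stopping it and letting mdadm assemble it again from scratch:

  # Release the members of the half-assembled arrays
  mdadm --stop /dev/md1
  mdadm --stop /dev/md2

  # Reassemble; -v reports which members are picked up and why
  mdadm --assemble --scan -v

  # If md1 and md2 come back active, reactivate the LVM on top of them
  vgchange -ay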
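And if the mdadm.conf replacement really is the fix, the part the standard recommendation leaves out is that on Debian-derived systems the copy of mdadm.conf consulted at boot lives inside the initramfs, so editing the file alone changes nothing at boot time. Assuming beowulf ships the usual Debian mdadm packaging (including the /usr/share/mdadm/mkconf helper), the full procedure would be roughly:

  # Keep a copy of the current configuration
  cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak

  # Regenerate the ARRAY lines from the on-disk superblocks
  /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

  # Rebuild the initramfs so the boot-time copy matches
  update-initramfs -u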
-- hendrik