Ad 1: Yes, the SATA controller has to support hot-swap. You _can_ remove the
stale device nodes by running (as root):

    # echo 1 > /sys/block/<device>/device/delete
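A minimal sketch of that cleanup, assuming the pulled drive is the
hypothetical /dev/sdX (pick the real name from /proc/partitions first).
The command is only printed unless the sysfs knob is actually writable,
since it must run as root on the affected machine:

```shell
DEV=sdX                                # hypothetical: the pulled drive
NODE=/sys/block/$DEV/device/delete     # sysfs knob that drops the device

if [ -w "$NODE" ]; then
    echo 1 > "$NODE"                   # kernel removes the stale nodes
else
    echo "would run as root: echo 1 > $NODE"
fi
```

After that, the stale entries should disappear from /proc/partitions and
the scanning errors from LVM tools and fdisk -l should stop.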
Ad 2: Depends on the controller, see 1. It might recognize the new drive,
or not, and it might see the correct device, or not.

Ad 3: As long as the second HDD is in the BIOS boot order, that should work.

Regards,
/peter

On 19.07.2016 at 16:01, Urs Thuermann wrote:
> In my RAID 1 array /dev/md0, consisting of the two SATA drives /dev/sda1
> and /dev/sdb1, the first drive /dev/sda has failed.  I have called
> mdadm --fail and mdadm --remove on that drive and then pulled the
> cables and removed the drive.  The RAID array continues to work fine,
> but in degraded mode.
>
> I have some questions:
>
> 1. The block device nodes /dev/sda and /dev/sda1 still exist and the
>    partitions are still listed in /proc/partitions.
>
>    That causes I/O errors when running LVM tools, fdisk -l, or other
>    tools that try to access/scan all block devices.
>
>    Shouldn't the device nodes and entries in /proc/partitions
>    disappear when the drive is pulled?  Or does the BIOS or the SATA
>    controller have to support this?
>
> 2. Can I hotplug the new drive and rebuild the RAID array?  Since
>    removal of the old drive seems not to have been detected, I wonder
>    whether the new drive will be detected correctly.  Will the kernel
>    continue with the old drive's size and partitioning, as still found
>    in /proc/partitions?  Will a call
>
>        blockdev --rereadpt /dev/sda
>
>    help?
>
> 3. Alternatively, I could reboot the system.  I have called
>
>        grub-install /dev/sdb
>
>    and hope this suffices to make the system bootable again.
>    Would that be safer?
>
> Any other suggestions?
>
>
> urs
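For question 2, the usual rebuild sequence looks roughly like the sketch
below, assuming the replacement shows up as /dev/sda again and the
surviving member is /dev/sdb (both names hypothetical; verify with dmesg
and /proc/partitions first). The commands are printed for review rather
than executed, since they are destructive and need root:

```shell
DEV=sda        # hypothetical name of the replacement drive
SURVIVOR=sdb   # surviving array member
MD=/dev/md0

# 1. Rescan the SCSI/SATA hosts so the hotplugged disk is detected:
RESCAN='for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done'

# 2. Copy the partition table from the survivor (MBR; use sgdisk -R for GPT):
CLONE="sfdisk -d /dev/$SURVIVOR | sfdisk /dev/$DEV"

# 3. Add the new partition to the array; md resyncs in the background:
ADD="mdadm --manage $MD --add /dev/${DEV}1"

# 4. Install the boot loader on the replacement as well:
BOOT="grub-install /dev/$DEV"

# Print the plan; run each step as root once the device names are verified:
printf '%s\n' "$RESCAN" "$CLONE" "$ADD" "$BOOT"
```

Resync progress can then be watched in /proc/mdstat.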