I am glad that you phrased your request as "It would better if it managed to say it failed doing the requested operation.".
Because it did indeed successfully perform the operation, exactly as the output indicated. That is, it DID set the MD_DISK_FAULTY attribute on the /dev/sdb2 device of the /dev/md0 array. To be more precise, it set the attribute via an ioctl() call to the kernel 'md' driver (around lines 980-995 of Manage.c).

Unfortunately (or rather, fortunately, for your data as well as your blood pressure), when the kernel 'md' driver receives this request it sets a flag to initiate a recovery, or, if a recovery is already in progress (as in your case), it sets the MD_RECOVERY_RECOVER flag. I have not attempted to understand all the possibilities in the kernel driver. However, it appears that, at least for RAID-1, the FAULTY flag on the (sdb2) device is cleared when the recovery completes and the RECOVERY_RECOVER pass finds nothing more to do.

At this point, I believe this is a "won't fix" issue; one could potentially ask for mdadm to do some before/after status-check magic and "handle" this and other such cases in some "better" way, but doing so would raise many more problems than it solves. OTOH, if you believe the bug can be closed .... :-)