On 19 May 2025, Ryan Petris via PLUG-discuss wrote:
Yeah, I powered off the drive bay to see what would happen :).
And I'm guessing it was in-use when you powered it off, i.e. part of a
running mdadm array.
I powered off all the disks in the array after stopping it.
Granted, powering off w/o stopping is part of my use case, as the computer
is on a UPS but the drive housing is not :).
I'm not removing individual drives. I'm removing all the drives in the
array. I have one external housing and several sets of drives that need to
go in it, depending on what I'm doing.
I'm not changing them out that often, but might need to on occasion for
projects.
ciao,
der.hans
but I'm not rebooting for hot-pluggable
drives
The problem is that if hot-pluggable drives are still in use at the time you
remove them, the device can land in an invalid state where it seemingly
doesn't exist and yet still somehow does. I've had this happen in
particular with not-so-great NVMe-to-USB controllers, where lots of
activity causes the drive to fall offline.
My hunch is that you needed to fail/remove the drive before it would get cleaned up, and
you'd also have to zero the superblock on the drive you removed before you can add it as
a "new" drive. See here:
https://wiki.archlinux.org/title/RAID#Removing_devices_from_an_array
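For concreteness, the sequence described there looks roughly like this (a
sketch; /dev/md0 and /dev/sdd1 are stand-ins for your actual array and
member partition):

  # Mark the member failed, then pull it out of the array
  mdadm /dev/md0 --fail /dev/sdd1
  mdadm /dev/md0 --remove /dev/sdd1

  # Wipe the old md superblock so the partition counts as a "new" drive;
  # this erases the array metadata on it, so double-check the device name
  mdadm --zero-superblock /dev/sdd1

  # Add it back and let the array rebuild onto it
  mdadm /dev/md0 --add /dev/sdd1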
I need it to work if I remove the disks to do something, then put
them back in.
You shouldn't be doing this... if you're removing disks with the intent of
adding them back and having the array just continue like nothing happened,
you need to unmount, then stop/disassemble the array first, or just shut
the machine down; when you're done, power it back on, or reassemble and
mount the array again.
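A minimal sketch of that clean-swap procedure, assuming the array is
/dev/md0 and mounted at /mnt/raid (both names are placeholders):

  # Quiesce everything before cutting power to the enclosure
  umount /mnt/raid
  mdadm --stop /dev/md0

  # ...swap or power-cycle the drive set here...

  # Reassemble from whatever device names the members came back on;
  # --scan matches members by the UUIDs in their superblocks
  mdadm --assemble --scan
  mount /dev/md0 /mnt/raid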
On Mon, May 19, 2025, at 12:01 AM, der.hans wrote:
On 18 May 2025, Ryan Petris via PLUG-discuss wrote:
moin moin,
It picked up the wrong devices when powering the drive bay down and up
again. mdadm was showing and trying to use the original sd names rather
than the newly assigned names.
That doesn't sound right, something else must have been going on.
What likely happened is that if the drive was lost unexpectedly, the old
device names were never cleaned up properly, so when it got plugged back
in there were in effect duplicates, which may have confused mdadm. If you
had restarted, it would likely have been fine.

Yeah, I powered off the drive bay to see what would happen :).
Rebooting will likely fix it, but I'm not rebooting for hot-pluggable
drives. I need it to work if I remove the disks to do something, then put
them back in.
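One way to check whether leftover device nodes are what's confusing mdadm
after an unexpected removal (a sketch; /dev/md0 is a placeholder for the
array):

  # What the kernel and mdadm currently believe about the array
  cat /proc/mdstat
  mdadm --detail /dev/md0

  # What names the disks actually came back on after the power cycle
  ls -l /dev/disk/by-partlabel/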
I'm certain I can make it work, but I'd like it to be less annoying.
I'm also trying to refresh my memory a bit and learn the current state of
"make it work like I want".
ciao,
der.hans
..."fine" assuming that the array immediately went offline and then
assembled properly once the lost device reappeared. However, if the array
stayed online and kept working, mdadm would likely reject the drive when it
came back anyway, and you'd have to "remove" and re-add it for it to
rebuild.
On Sat, May 17, 2025, at 9:54 PM, der.hans wrote:
On 17 May 2025, Ryan Petris via PLUG-discuss wrote:
You can't, unfortunately. The only things in /dev you can technically
rename are network interfaces; for the rest you can only add symlinks,
which is what, for instance, /dev/disk/by-partlabel/* is.
A shame, I like long path names :)
Why do you care what mdadm shows for device names anyway? If you're
concerned that it will pick up the wrong disk on reboot, it won't, because
internally it's using its own identifiers to find the right drives, and
will show you whatever /dev/sd* it happens to end up on when running mdadm
commands. If you want to figure out which /dev/sd* device belongs to your
disk label, you can just run `readlink /dev/disk/by-partlabel/raid...`.

It picked up the wrong devices when powering the drive bay down and up
again. mdadm was showing and trying to use the original sd names rather
than the newly assigned names.

I will experiment more with mdadm. It's been a while since I used it, so I
was expecting some reacquaintance exercises to be necessary.

ciao,
der.hans
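To illustrate the readlink suggestion above, using the raid0 partlabel from
the original post:

  # by-partlabel entries are just symlinks to the current kernel name
  ls -l /dev/disk/by-partlabel/raid0
  readlink -f /dev/disk/by-partlabel/raid0   # prints e.g. /dev/sdd1,
                                             # or wherever it landed this time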
On Sat, May 17, 2025, at 6:46 PM, der.hans via PLUG-discuss wrote:
moin moin,
I'm using a USB JBOD enclosure to build a RAID set.
Restarting the enclosure ends up with the drives on new names, e.g. sda,
sdb and sdc come back as sdd, sde and sdf.
I added labels to my disk partitions and was hoping to use them, e.g.
/dev/disk/by-partlabel/raid{0,1,2}, but mdadm turned them back to
/dev/sd{d,e,f} names as members of the array.
Anyone know how I can either get /dev/sd names not to change for removable
media or get mdadm to accept the names I want to use?
I'd rather the latter.
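One possibility, given that mdadm tracks members by superblock UUID rather
than by path: hand it the stable symlinks at assemble time and record the
array by UUID in mdadm.conf. A sketch (the /dev/md/jbod name is an
assumption):

  # Assemble via the stable symlinks; mdadm resolves them to whatever
  # sd* names the enclosure handed out this time
  mdadm --assemble /dev/md/jbod /dev/disk/by-partlabel/raid{0,1,2}

  # Pin the array by UUID so a plain "mdadm --assemble --scan" finds it
  # regardless of the kernel names
  mdadm --detail --scan >> /etc/mdadm.conf

Note that `mdadm --detail` will still display the resolved /dev/sd* names,
but which disks end up in the array no longer depends on them.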
ciao,
der.hans
--
# https://www.SpiralArray.com https://www.PhxLinux.org
# "The only thing necessary for evil to triumph is for good men to do
# nothing" -- falsely attributed to Edmund Burke
---------------------------------------------------
PLUG-discuss mailing list: PLUG-discuss@lists.phxlinux.org
To subscribe, unsubscribe, or to change your mail settings:
https://lists.phxlinux.org/mailman/listinfo/plug-discuss