[ceph-users] Re: nodes with high density of OSDs

2025-04-12 Thread Gregory Orange
On 12/4/25 20:56, Tim Holloway wrote:
> Which brings up something I've wondered about for some time. Shouldn't it be possible for OSDs to be portable? That is, if a box goes bad, in theory I should be able to remove the drive and jack it into a hot-swap bay on another server and have that ser

[ceph-users] Re: nodes with high density of OSDs

2025-04-12 Thread Anthony D'Atri
> Apparently those UUIDs aren't as reliable as I thought.
>
> I've had problems with a server box that hosts a ceph VM.

VM?

> Looks like the mobo disk controller is unreliable

Lemme guess, it is an IR / RoC / RAID type? As opposed to JBOD / IT? If the former and it’s an LSI SKU as most are,
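If you're unsure which kind of controller a box has, a quick check might look like this (a sketch only, assuming a Broadcom/LSI SAS HBA and the vendor's sas3flash/storcli utilities installed):

    # Identify the SAS/RAID controller model
    lspci -nn | grep -iE 'lsi|broadcom|megaraid|sas'

    # On SAS3 HBAs, the firmware listing shows whether the card is
    # running IT (plain passthrough) or IR (integrated RAID) firmware
    sas3flash -listall

    # On MegaRAID/RoC cards, storcli reports the product and JBOD settings
    storcli64 /c0 show all | grep -iE 'product|personality|jbod'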

[ceph-users] Re: FS not mount after update to quincy

2025-04-12 Thread Iban Cabrillo
Hi Konstantine,

Perfect!!! It works.

Regards, I
--
Ibán Cabrillo Bartolomé
Instituto de Física de Cantabria (IFCA-CSIC)
Santander, Spain
Tel: +34942200969 / +34669930421
Responsible for advanced computing service (RSC)

[ceph-users] Re: nodes with high density of OSDs

2025-04-12 Thread Paul Mezzanini
OSDs are absolutely portable. I've moved them around by simply migrating the journal back into the spinner, moving the drive, pulling the journal back out again, and then running ceph-volume lvm activate --all. The OSD directories under /var/lib/ceph/ are all tmpfs mounts generated on boot. This is for "physical" setups and not c
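For a BlueStore OSD with a separate DB/WAL device, the relocation might look roughly like this (a sketch only; osd.12, the fsid, and the VG/LV names are placeholders, and the migrate step is unnecessary if everything already lives on the one device):

    # On the old host: stop the OSD, then fold the separate DB/WAL
    # back onto the data LV before pulling the drive
    systemctl stop ceph-osd@12
    ceph-volume lvm migrate --osd-id 12 --osd-fsid <osd-fsid> --from db wal --target <data-vg>/<data-lv>

    # Physically move the drive, then on the new host let ceph-volume
    # rediscover and start every OSD LV it can find
    ceph-volume lvm activate --all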

[ceph-users] Re: nodes with high density of OSDs

2025-04-12 Thread Tim Holloway
For administered (container) OSDs, the setup would likely be similar. If my experience is indicative, the mere presence of an OSD's metadata directory under /var/lib/ceph/ should be enough to cause ceph to generate the container. So all that's necessary is to move the OSD metadata over there
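With cephadm-managed clusters there is also an orchestrator path for this: once the relocated disk sits in a host the cluster already knows about, the existing OSD can be reactivated from the LVM metadata on the device. A rough sketch (the host name is a placeholder):

    # See which ceph LVs/OSDs are present on the new host
    cephadm ceph-volume lvm list

    # Ask the orchestrator to recreate containers for any existing OSDs it finds there
    ceph cephadm osd activate <new-host>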

[ceph-users] Re: nodes with high density of OSDs

2025-04-12 Thread Tim Holloway
Apparently those UUIDs aren't as reliable as I thought. I've had problems with a server box that hosts a ceph VM. Looks like the mobo disk controller is unreliable AND one of the disks passes SMART but has interface problems. So I moved the disks to an alternate box. Between relocation and dr
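For a drive that passes SMART yet misbehaves, the interface-related counters are often more telling than the overall verdict. A sketch, assuming smartmontools and a SATA device at /dev/sdX:

    # The overall verdict can read PASSED even when the link is flaky
    smartctl -H /dev/sdX

    # A climbing UDMA_CRC_Error_Count (attribute 199) usually points at the
    # cable/backplane/controller path rather than the disk media
    smartctl -A /dev/sdX | grep -i crc

    # Kernel-side link resets and I/O errors for the same device
    dmesg | grep -iE 'ata[0-9]|sdX'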

[ceph-users] Re: nodes with high density of OSDs

2025-04-12 Thread Tim Holloway
One possibility would be to have ceph simply set aside space on the OSD and write the metadata there automatically. Then a mechanism could scan for un-adopted drives and import them as needed. So even a dead host would be OK as long as the device/LV was still usable. I've migrated non-ceph LVs, after
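Worth noting that ceph-volume already stamps the OSD's identity onto the logical volume as LVM tags, which is what a "scan and adopt" mechanism could key off. A quick way to inspect them (a sketch; tag names assumed from current ceph-volume behaviour):

    # Show the ceph.* LVM tags (osd id, osd fsid, cluster fsid, ...)
    # that ceph-volume records on each OSD's logical volume
    lvs -o lv_name,vg_name,lv_tags | grep ceph.osd_fsid

    # Or let ceph-volume decode them
    ceph-volume lvm list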

[ceph-users] Re: nodes with high density of OSDs

2025-04-12 Thread Tim Holloway
When I first migrated to Ceph, my servers were all running CentOS 7, which I (wrongly) thought could not handle anything above Octopus, and on top of that, I initially did legacy installs. So in order to run Pacific and to keep the overall clutter in the physical box configuration down, I made