Do you perhaps have leftover state in /var/lib/ceph complicating the deployment?
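For reference, a minimal sketch of how one might check for leftover OSD state before retrying. The paths are the standard ceph-volume locations, and the `vgsdd/sdd1` LV name is taken from the log below; adjust both to your setup.

```shell
# Hedged sketch: look for leftover OSD state from a previous deployment.
# OSD_DIR is the standard location; the zap target vgsdd/sdd1 is taken
# from the log below -- adjust both to your environment.
OSD_DIR=/var/lib/ceph/osd

# Any stale ceph-N directories or lingering tmpfs mounts?
[ -d "$OSD_DIR" ] && ls -l "$OSD_DIR"
mount | grep "$OSD_DIR" || true

# If e.g. ceph-1 is stale, unmount and remove it before retrying:
#   umount "$OSD_DIR/ceph-1" && rm -rf "$OSD_DIR/ceph-1"
# Then wipe the LV completely, including the bdev label area:
#   ceph-volume lvm zap --destroy vgsdd/sdd1
```

The destructive steps are deliberately commented out; only the read-only checks run as written.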
> On May 16, 2025, at 9:39 AM, Konold, Martin <martin.kon...@konsec.com> wrote:
>
> Am 2025-05-16 18:01, schrieb Anthony D'Atri:
>> I wouldn't think blkdiscard would necessarily fully clean. I would try sgdisk --zap-all or ceph-volume lvm zap
>
> I gave this a try, in addition to a reboot, but nothing changed: the BlueStore OSD is still not created as intended.
>
> I guess this is the culprit:
>
> 2025-05-16T16:28:14.865+0000 7be7486fb880 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label /var/lib/ceph/osd/ceph-1//block at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
>
> Maybe someone has a hint?
>
> Yours
> --martin
>
> ------------------------------------------------------------------------------
> # ceph-volume lvm prepare --bluestore --data vgsdd/sdd1
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a5ae5940-26f2-44c8-bf16-bfdfa6de20ba
> Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
> --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
> Running command: /usr/bin/chown -h ceph:ceph /dev/vgsdd/sdd1
> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-7
> Running command: /usr/bin/ln -s /dev/vgsdd/sdd1 /var/lib/ceph/osd/ceph-1/block
> Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
> stderr: got monmap epoch 20
> --> Creating keyring file for osd.1
> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
> Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid a5ae5940-26f2-44c8-bf16-bfdfa6de20ba --setuser ceph --setgroup ceph
> stderr: 2025-05-16T16:28:14.865+0000 7be7486fb880 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label /var/lib/ceph/osd/ceph-1//block at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
> stderr: 2025-05-16T16:28:14.865+0000 7be7486fb880 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label /var/lib/ceph/osd/ceph-1//block at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
> stderr: 2025-05-16T16:28:14.866+0000 7be7486fb880 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label /var/lib/ceph/osd/ceph-1//block at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
> stderr: 2025-05-16T16:28:14.867+0000 7be7486fb880 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label /var/lib/ceph/osd/ceph-1//block at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
> stderr: 2025-05-16T16:28:14.867+0000 7be7486fb880 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
> stderr: 2025-05-16T16:28:15.127+0000 7be7486fb880 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
> --> ceph-volume lvm prepare successful for: vgsdd/sdd1
> root@pve-03 (pve-03.t3):~# ceph-volume lvm activate --all
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/ceph-authtool --gen-print-key
> --> OSD ID 0 FSID a764e57e-ad3b-4823-8849-454f133526f6 process is active. Skipping activation
> --> Activating OSD ID 1 FSID a5ae5940-26f2-44c8-bf16-bfdfa6de20ba
> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/vgsdd/sdd1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
> Running command: /usr/bin/ln -snf /dev/vgsdd/sdd1 /var/lib/ceph/osd/ceph-1/block
> Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-7
> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
> Running command: /usr/bin/systemctl enable ceph-volume@lvm-1-a5ae5940-26f2-44c8-bf16-bfdfa6de20ba
> stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-a5ae5940-26f2-44c8-bf16-bfdfa6de20ba.service → /lib/systemd/system/ceph-volume@.service.
> Running command: /usr/bin/systemctl enable --runtime ceph-osd@1
> stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@1.service → /lib/systemd/system/ceph-osd@.service.
> Running command: /usr/bin/systemctl start ceph-osd@1
> --> ceph-volume lvm activate successful for osd ID: 1
>
> ceph-volume-systemd.log has:
>
> [2025-05-16 16:23:43,600][ceph_volume.process][INFO ] stderr --> RuntimeError: could not find osd.1 with osd_fsid a2a1cb42-006f-4659-a461-bb0ffbf190ff
>
> ps:
> ceph 11572 0.5 0.0 689688 83000 ? Ssl 16:29 0:02 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
>
> # systemctl status ceph-osd@1.service
> ● ceph-osd@1.service - Ceph object storage daemon osd.1
> Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled-runtime; preset: enabled)
> Drop-In: /usr/lib/systemd/system/ceph-osd@.service.d
> └─ceph-after-pve-cluster.conf
> Active: active (running) since Fri 2025-05-16 16:29:11 UTC; 7min ago
> Process: 11567 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id 1 (code=exited, status=0/SUCCESS)
> Main PID: 11572 (ceph-osd)
> Tasks: 68
> Memory: 49.4M
> CPU: 2.239s
> CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@1.service
> └─11572 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
>
> May 16 16:29:11 pve-03 systemd[1]: Starting ceph-osd@1.service - Ceph object storage daemon osd.1...
> May 16 16:29:11 pve-03 systemd[1]: Started ceph-osd@1.service - Ceph object storage daemon osd.1.
> May 16 16:29:15 pve-03 ceph-osd[11572]: 2025-05-16T16:29:15.066+0000 7b3558e37880 -1 osd.1 0 log_to_monitors true
> May 16 16:29:16 pve-03 ceph-osd[11572]: 2025-05-16T16:29:16.351+0000 7b3554dd76c0 -1 osd.1 0 waiting for initial osdmap
> May 16 16:29:16 pve-03 ceph-osd[11572]: 2025-05-16T16:29:16.352+0000 7b354c6166c0 -1 osd.1 0 failed to load OSD map for epoch 32827, got 0 bytes
> May 16 16:29:16 pve-03 ceph-osd[11572]: 2025-05-16T16:29:16.657+0000 7b3548b646c0 -1 osd.1 33967 osdmap NOUP flag is set, waiting for it to clear
> May 16 16:29:17 pve-03 ceph-osd[11572]: 2025-05-16T16:29:17.354+0000 7b3548b646c0 -1 osd.1 33968 osdmap NOUP flag is set, waiting for it to clear
> May 16 16:29:18 pve-03 ceph-osd[11572]: 2025-05-16T16:29:18.362+0000 7b3548b646c0 -1 osd.1 33969 osdmap NOUP flag is set, waiting for it to clear
>
> --
> Martin Konold - Prokurist, CTO
> KONSEC GmbH - make things real
> Amtsgericht Stuttgart, HRB 23690
> Geschäftsführer: Andreas Mack
> Im Köller 3, 70794 Filderstadt, Germany
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io