I followed this doc from Proxmox to test I/O on my OSDs:

https://www.proxmox.com/images/download/pve/docs/Proxmox-VE-Ceph-Benchmark-202312-rev0.pdf


I ran it on 3 OSDs to benchmark the disks on a test Ceph cluster, and one of them went down.


I wanted to see the difference between rados bench and fio.
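
For the rados side of that comparison, a minimal sketch (the pool name "testbench" and PG count are placeholders; run it against a throwaway pool, never a production one):

```shell
# Create a disposable benchmark pool (placeholder name and PG count)
ceph osd pool create testbench 64 64

# 60 s of 4 MiB writes; --no-cleanup keeps the objects for the read phases
rados bench -p testbench 60 write --no-cleanup

# Sequential and random reads against the objects written above
rados bench -p testbench 60 seq
rados bench -p testbench 60 rand

# Remove the benchmark objects and the pool
rados -p testbench cleanup
ceph osd pool delete testbench testbench --yes-i-really-really-mean-it
```

Unlike fio on the raw disk, rados bench goes through the whole Ceph stack, so it never touches an OSD's on-disk metadata.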


Vivien

________________________________
From: Anthony D'Atri <a...@dreamsnake.net>
Sent: Wednesday, July 23, 2025 14:06:55
To: GLE, Vivien
Cc: Sinan Polat; ceph-users@ceph.io
Subject: Re: [ceph-users] Ceph OSD down (unable to mount object store)

You ran a write fio job on the underlying device.  That would make any software 
unhappy.

Did you mean to run a read test?  Or to test on an RBD volume or a filesystem 
built within an RBD volume?
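
For reference, a read-only job on the raw device leaves the BlueStore label intact, and write tests can go against a scratch RBD image instead (the pool and image names below are placeholders; the rbd engine requires fio built with librbd support):

```shell
# Read-only test on the raw device: safe, since nothing is overwritten
fio --ioengine=libaio --filename=/dev/sda --direct=1 --rw=read \
    --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fio-read

# Write test against a scratch RBD image rather than the OSD's backing disk
rbd create testpool/fio-test --size 10G
fio --ioengine=rbd --pool=testpool --rbdname=fio-test --clientname=admin \
    --direct=1 --rw=write --bs=4M --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based --name=fio-rbd-write
rbd rm testpool/fio-test
```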

>
>
> Thanks for your answer !
>
>
> Do you know by any chance why fio does this? Are fio and Ceph incompatible?
>
>
> Let's destroy this OSD, then.
>
>
> Vivien
>
> ________________________________
> From: Sinan Polat <sinan86po...@gmail.com>
> Sent: Wednesday, July 23, 2025 13:57:07
> To: GLE, Vivien
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] Ceph OSD down (unable to mount object store)
>
> Hi Vivien,
>
> Your fio test has very likely destroyed the Ceph OSD block device. The 
> problem is not just the symlink; it's data corruption on the underlying 
> device.
>
> Zap the drive, recreate the OSD and let your cluster rebalance.
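>
> A minimal sketch of that recovery, assuming the failed OSD is osd.2 on 
> /dev/sda and a ceph-volume deployment (the ID and device path are taken 
> from this thread; adjust to your setup):
>
> ```shell
> # Remove the failed OSD from the cluster
> ceph osd out 2
> ceph osd purge 2 --yes-i-really-mean-it
>
> # Wipe the old LVM/BlueStore remnants from the disk
> ceph-volume lvm zap /dev/sda --destroy
>
> # Recreate the OSD; the cluster will backfill onto it
> ceph-volume lvm create --data /dev/sda
> ```
>
> Wait for `ceph -s` to report HEALTH_OK before benchmarking again.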
>
> Sinan
>
> On Wed, Jul 23, 2025 at 14:10, GLE, Vivien <vivien....@inist.fr> wrote:
> Hi,
>
>
> I ran a fio benchmark and believe it destroyed one of my OSDs. These are 
> the commands I used:
>
>
> fio --ioengine=libaio --filename=/dev/sda --direct=1 --sync=1 --rw=write \
>     --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fio
> fio --ioengine=libaio --filename=/dev/sda --direct=1 --sync=1 --rw=write \
>     --bs=4M --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fio
>
>
> Log file after the commands:
>
> 2025-07-23T08:41:44.448+0000 734bf7f6e680  1 bdev(0x59ba86d4ee00 /var/lib/ceph/osd/ceph-2/block) close
> 2025-07-23T08:41:44.719+0000 734bf7f6e680  1 bdev(0x59ba86d4ee00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
> 2025-07-23T08:41:44.719+0000 734bf7f6e680  0 bdev(0x59ba86d4ee00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
> 2025-07-23T08:41:44.720+0000 734bf7f6e680  1 bdev(0x59ba86d4ee00 /var/lib/ceph/osd/ceph-2/block) open size 1000203091968 (0xe8e0c00000, 932 GiB) block_size 4096 (4 KiB) rotational device, discard supported
> 2025-07-23T08:41:44.723+0000 734bf7f6e680 -1 bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label unable to decode label /var/lib/ceph/osd/ceph-2/block at offset 66: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
> 2025-07-23T08:41:44.724+0000 734bf7f6e680 -1 bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label unable to decode label /var/lib/ceph/osd/ceph-2/block at offset 4096: End of buffer [buffer:2]
> 2025-07-23T08:41:44.724+0000 734bf7f6e680 -1 bluestore(/var/lib/ceph/osd/ceph-2) _check_main_bdev_label not all labels read properly
> 2025-07-23T08:41:44.724+0000 734bf7f6e680  1 bdev(0x59ba86d4ee00 /var/lib/ceph/osd/ceph-2/block) close
> 2025-07-23T08:41:44.983+0000 734bf7f6e680 -1 osd.2 0 OSD:init: unable to mount object store
> 2025-07-23T08:41:44.983+0000 734bf7f6e680 -1  ** ERROR: osd init failed: (5) Input/output error
>
>
>
> After checking, I saw that the block symlink might be wrong.
>
> On a healthy OSD:
>
> # ll /var/lib/ceph/cluster-id/osd.5/
> total 72
> drwx------  2 167 167 4096 Jul 22 10:36 ./
> drwx------ 12 167 167 4096 Jul 22 10:30 ../
> lrwxrwxrwx  1 167 167   93 Jul 15 14:39 block -> 
> /dev/ceph-c31f0e16-0460-4bc5-9470-468270b4c68a/osd-block-72aa3074-e2f9-45f8-a468-03c02d36f1de
>
>
> On my broken OSD:
>
> /var/lib/ceph/cluster-id/osd.2# ll
> total 72
> drwx------  2 167 167 4096 Jul 23 10:41 ./
> drwx------ 11 167 167 4096 Jul 22 10:35 ../
> lrwxrwxrwx  1 167 167  111 Jul 23 10:41 block -> 
> /dev/mapper/ceph--bd0ca671--2b89--4530--be2b--f41443822a91-osd--block--342578c3--4603--4023--a564--8fca6dcc1040
>
> Is there a correct way to change it, or am I going in the wrong direction?
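>
> A possible way to tell a bad symlink apart from a corrupted label would be 
> to read the label directly with ceph-bluestore-tool (a sketch, using the 
> path from the log above):
>
> ```shell
> # On a healthy OSD this prints the label as JSON; on the broken one it
> # should fail with the same decode error seen in the log above
> ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-2/block
> ```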
>
> Thanks
>
> Vivien
>
>
>
>
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io

