[ceph-users] Re: ceph orch issue: lsblk: /dev/vg_osd/lvm_osd: not a block device

2024-05-26 Thread Eugen Block
Can you try the vg/lv syntax instead?
ceph orch daemon add osd ceph1:vg_osd/lvm_osd
Although both ways work in my small test cluster with 18.2.2 (as far as I know, 18.2.3 hasn't been released yet):
# ceph orch daemon add osd soc9-ceph:/dev/test-ceph/lv_osd
Created osd(s) 0 on host 'soc9-ceph'
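For reference, a minimal sketch of the two accepted device specifications (the host name ceph1 and the vg_osd/lvm_osd names are simply the ones used in this thread, not a verified reproduction):
  ceph orch daemon add osd ceph1:vg_osd/lvm_osd        # LV given as vg/lv
  ceph orch daemon add osd ceph1:/dev/vg_osd/lvm_osd   # LV given as a full device path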

[ceph-users] Re: ceph orch issue: lsblk: /dev/vg_osd/lvm_osd: not a block device

2024-05-26 Thread duluxoz
Nope, tried that, it didn't work (similar error messages). Thanks for the input :-) So, still looking for ideas on this one - thanks in advance.

[ceph-users] Problem in changing monitor address and public_network

2024-05-26 Thread farhad kh
Hello, according to Ceph's own documentation and the article I sent the link to, I tried to change the address of the Ceph machines and their public network. But when I tried to set the machines to the new address (ceph orch host set-addr opcrgfpsksa0101 10.248.35.213), the command was n
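For reference, the usual cephadm sequence for this kind of change looks roughly like the sketch below (the 10.248.35.0/24 subnet is only inferred from the address above, and the monitor placement is a placeholder):
  ceph config set mon public_network 10.248.35.0/24       # new public network in the config database
  ceph orch host set-addr opcrgfpsksa0101 10.248.35.213   # new management address for the host
  ceph orch apply mon --placement="<mon-hosts>"           # redeploy mons so they bind to the new network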

[ceph-users] Re: ceph orch issue: lsblk: /dev/vg_osd/lvm_osd: not a block device

2024-05-26 Thread Cedric
Not sure you need to (or should) prepare the block device manually; ceph can handle these tasks. Did you try to clean up and retry by providing /dev/sda6 to ceph orch daemon add?
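A minimal sketch of that clean-up-and-retry approach (the host name ceph1 is an assumption; substitute the real one):
  ceph orch device zap ceph1 /dev/sda6 --force   # wipe leftover LVM/partition metadata
  ceph orch daemon add osd ceph1:/dev/sda6       # let ceph-volume create the PV/VG/LV itself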

[ceph-users] Re: Lousy recovery for mclock and reef

2024-05-26 Thread Cedric
Also, osd_max_backfills and osd_recovery_max_active can play a role, but I wonder if they still have an effect with the new mpq feature.
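If it helps, a sketch of how those settings interact with mClock on Reef (the values are only examples, and the override knob is only relevant while mclock_scheduler is the active op queue):
  ceph config set osd osd_mclock_override_recovery_settings true   # allow manual backfill/recovery limits under mClock
  ceph config set osd osd_max_backfills 4
  ceph config set osd osd_recovery_max_active 8
  ceph config set osd osd_mclock_profile high_recovery_ops         # or just switch the profile to favour recovery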

[ceph-users] Re: Lousy recovery for mclock and reef

2024-05-26 Thread Cedric
What about drive IOPS? HDDs top out at an average of 150; you can use iostat -xmt to get these values (the last column also shows disk utilization, which is very useful).
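For example, a typical invocation (the 5-second interval is arbitrary):
  iostat -xmt 5   # extended stats, MB, timestamps; watch r/s, w/s and %util in the last column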

[ceph-users] ceph orch issue: lsblk: /dev/vg_osd/lvm_osd: not a block device

2024-05-26 Thread duluxoz
Hi All, Is the following a bug or some other problem (I can't tell) :-) Brand new Ceph (Reef v18.2.3) install on Rocky Linux v9.4 - basically, it's a brand new box. Ran the following commands (in order; no issues until the final command):
1. pvcreate /dev/sda6
2. vgcreate vg_osd /dev/sda6
3. lvc
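Presumably the full sequence was roughly the sketch below - only the first two commands and the vg_osd/lvm_osd names appear in the thread, so the lvcreate options and the host name are guesses:
  pvcreate /dev/sda6
  vgcreate vg_osd /dev/sda6
  lvcreate -l 100%FREE -n lvm_osd vg_osd                  # exact lvcreate options unknown
  ceph orch daemon add osd <host>:/dev/vg_osd/lvm_osd     # the step that fails with the lsblk error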

[ceph-users] Re: Lousy recovery for mclock and reef

2024-05-26 Thread Sake Ceph
Hi, Isn't this just the limit of the one HDD, or of the other HDDs providing the data? Don't forget, recovery will drop even more for the last few objects. At least I noticed this when replacing a drive in my (little) cluster. Kind regards, Sake

[ceph-users] Re: Lousy recovery for mclock and reef

2024-05-26 Thread Mazzystr
I can't explain the problem. I have to recover three discs that are HDDs. I figured on just replacing one to give the full recovery capacity of the cluster to that one disc. I was never able to achieve a higher recovery rate than about 22 MiB/sec, so I just added the other two discs. Recovery bou