[ceph-users] Cannot mount RBD on client

2024-06-21 Thread service . plant
Hi everyone! I've encountered a situation I cannot even google. In a nutshell, ```rbd map test/kek --id test``` hangs forever on a ```futex(0x7ffdfa73d748, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY``` call in strace. Of course, I have all the keyrings and ceph.conf …
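For anyone hitting a similar hang, a first thing to check is whether the client can reach the monitors at all with that identity, independent of RBD. A minimal sketch, assuming the `test` user from the post and a placeholder monitor hostname `mon.host`:

```
# Can the client talk to the cluster with this identity at all?
# A hang here points at monitor connectivity or keyring/caps, not RBD.
ceph -s --id test

# Are the monitor ports reachable from the client? (3300 = msgr2, 6789 = msgr1)
nc -zv mon.host 3300
nc -zv mon.host 6789

# On a node with admin access, check the caps of the client.test key.
ceph auth get client.test
```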

[ceph-users] Re: Cannot mount RBD on client

2024-06-21 Thread service . plant
Hi Etienne, indeed, even ```rados ls --pool test``` hangs on the same instruction: ```futex(0x7ffc2de0cb10, FUTEX_WAIT_BITSET_PRIVATE, 0, {tv_sec=10215, tv_nsec=619004859}, FUTEX_BITSET_MATCH_ANY``` Yes, I have checked with netcat from the client side and connections to all OSD ports succeed.
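One way to see where librados actually stalls is to rerun the hanging command with client-side debug options (Ceph CLI tools accept config overrides on the command line). A sketch, again assuming the `test` pool and user:

```
# Messenger and librados debugging show whether the client is stuck
# contacting the monitors, fetching maps, or talking to OSDs.
rados ls --pool test --id test --debug-ms 1 --debug-rados 20
```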

[ceph-users] ceph commands on host cannot connect to cluster after cephx disabling

2024-03-06 Thread service . plant
Hello everybody, I've suddenly faced a problem with (probably) authorization while playing with cephx. So, long story short: 1) Rolled out a completely new testing cluster by cephadm with only one node. 2) According to the docs I've set this in /etc/ceph/ceph.conf: auth_cluster_required = none auth_serv…
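For reference, disabling cephx usually means setting all three auth options consistently and restarting the daemons; on a cephadm-managed cluster the config database is a more reliable place for this than a hand-edited /etc/ceph/ceph.conf. A sketch, assuming authentication really is meant to be off cluster-wide:

```
# Set all three auth options in the cluster configuration database
ceph config set global auth_cluster_required none
ceph config set global auth_service_required none
ceph config set global auth_client_required none

# Daemons only pick the change up after a restart
ceph orch restart mon
ceph orch restart mgr
ceph orch restart osd
```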

[ceph-users] ceph-volume fails when adding separate DATA and DATA.DB volumes

2024-03-06 Thread service . plant
Hi all! I've faced an issue I couldn't even google. Trying to create an OSD with two separate LVMs for data.db and data gives me an interesting error ``` root@ceph-uvm2:/# ceph-volume lvm prepare --bluestore --data ceph-block-0/block-0 --block.db ceph-db-0/db-0 --> Incompatible flags were found, some v…
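For context, the LV layout this command expects looks roughly like the following; the VG/LV names ceph-block-0/block-0 and ceph-db-0/db-0 come from the command above, while the block devices and the DB size are placeholders:

```
# Data LV on the slow device (placeholder /dev/sdb)
vgcreate ceph-block-0 /dev/sdb
lvcreate -l 100%FREE -n block-0 ceph-block-0

# DB LV on the fast device (placeholder /dev/nvme0n1, size is an example)
vgcreate ceph-db-0 /dev/nvme0n1
lvcreate -L 50G -n db-0 ceph-db-0

# Prepare the OSD with data and block.db on the separate LVs
ceph-volume lvm prepare --bluestore \
    --data ceph-block-0/block-0 \
    --block.db ceph-db-0/db-0
```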

[ceph-users] Re: ceph-volume fails when adding separate DATA and DATA.DB volumes

2024-03-07 Thread service . plant
Oh, dude! You opened my eyes! I thought (it is written this way in the documentation) that all the commands needed to be executed under cephadm shell. That is why I always ran 'cephadm shell' first, dropping into the container env, and then ran all the rest. Where can I read about proper usage of cephadm t…
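If I read the resolution correctly, ceph-volume on a cephadm host is meant to be invoked through the cephadm wrapper from the host rather than from an interactive container shell. A sketch of that pattern, reusing the prepare arguments from this thread:

```
# Run ceph-volume through the cephadm wrapper on the host;
# everything after -- is passed to ceph-volume inside its container.
cephadm ceph-volume -- lvm list
cephadm ceph-volume -- lvm prepare --bluestore \
    --data ceph-block-0/block-0 --block.db ceph-db-0/db-0

# cephadm shell remains fine for plain client commands, e.g.:
cephadm shell -- ceph -s
```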

[ceph-users] "ceph orch daemon add osd" deploys broken OSD

2024-04-02 Thread service . plant
Hi everybody. I've faced a situation where I cannot redeploy an OSD on a new disk. So, I need to replace osd.30 because the disk keeps reporting I/O problems. I do `ceph orch daemon osd.30 --replace`. Then I zap the DB ``` root@server-2:/# ceph-volume lvm zap /dev/ceph-db/db-88 --> Zapping: /dev/ceph…
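For comparison, the replacement flow the orchestrator documentation describes goes roughly like this; the OSD id 30, host server-2 and the DB LV path come from the post, while the raw device path is a placeholder:

```
# Remove the OSD but keep its id reserved for the replacement disk
ceph orch osd rm 30 --replace

# After it has drained, clean the old devices before redeploying
ceph orch device zap server-2 /dev/sdX --force    # placeholder device
ceph-volume lvm zap /dev/ceph-db/db-88 --destroy

# Either let an existing OSD spec pick up the new disk, or add it explicitly
ceph orch daemon add osd server-2:/dev/sdX
```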

[ceph-users] Re: cephadm: daemon osd.x on yyy is in error state

2024-04-02 Thread service . plant
Probably `ceph mgr fail` will help.

[ceph-users] Re: "ceph orch daemon add osd" deploys broken OSD

2024-04-06 Thread service . plant
Hello everyone, any ideas? Even small hints would help a lot!

[ceph-users] Re: cephadm: daemon osd.x on yyy is in error state

2024-04-06 Thread service . plant
Did it help? Maybe you found a better solution?