[ceph-users] Re: set proxy for ceph installation

2023-09-26 Thread Dario Graña
Hi Majid, You can try to manually execute the command /usr/bin/podman pull quay.io/ceph/ceph:v17 and start debugging the problem from there. Regards! On Tue, Sep 26, 2023 at 3:42 PM Majid Varzideh wrote: > Hi friends > I have deployed my first node in the cluster.
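A minimal sketch of that manual test, assuming a proxy reachable at http://proxy.example.com:3128 (placeholder address); podman honours the standard proxy environment variables, so exporting them inline helps confirm whether the proxy itself is the problem:

  # placeholder proxy address; adjust to your environment
  export HTTPS_PROXY=http://proxy.example.com:3128
  export HTTP_PROXY=http://proxy.example.com:3128
  export NO_PROXY=localhost,127.0.0.1
  /usr/bin/podman pull quay.io/ceph/ceph:v17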

[ceph-users] Re: Quincy NFS ingress failover

2023-09-26 Thread Thorne Lawler
Thanks Christoph! I had just been assuming that the connectivity check was elsewhere, or was implicit in some way. I have certainly not seen any evidence of Quincy trying to move the IP address when the node fails. On 26/09/2023 8:00 pm, Ackermann, Christoph wrote: Dear list members, aft

[ceph-users] Re: After upgrading from 17.2.6 to 18.2.0, OSDs are very frequently restarting due to livenessprobe failures

2023-09-26 Thread Igor Fedotov
Hi Sudhin, any publicly available cloud storage, e.g. Google Drive, should work. Thanks, Igor On 26/09/2023 22:52, sbeng...@gmail.com wrote: Hi Igor, Please let me know where I can upload the OSD logs. Thanks. Sudhin

[ceph-users] Re: After upgrading from 17.2.6 to 18.2.0, OSDs are very frequently restarting due to livenessprobe failures

2023-09-26 Thread sbengeri
Hi Igor, Please let me know where I can upload the OSD logs. Thanks. Sudhin

[ceph-users] Re: CEPH zero iops after upgrade to Reef and manual read balancer

2023-09-26 Thread Laura Flores
Hi Mosharaf, Thanks for the update. If you can reproduce the issue, it would be most helpful to us if you provided the output of `ceph -s`, along with a copy of your osdmap file. If you have this information, you can update the tracker here: https://tracker.ceph.com/issues/62836 Thanks, Laura O
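For reference, a sketch of how that information is usually gathered (standard Ceph CLI, assuming admin access on a mon/admin node):

  ceph -s > ceph-status.txt
  # dump the current binary osdmap so it can be attached to the tracker
  ceph osd getmap -o osdmap.bin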

[ceph-users] Re: replacing storage server host (not drives)

2023-09-26 Thread Konstantin Shalygin
Hi, The procedure is simple: get another host and put the current disks into the new host. Set up boot and networking and you're back in business. k > On Sep 26, 2023, at 17:38, Wyll Ingersoll > wrote: > > What is the recommended procedure for replacing the host itself without > destroying
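In cephadm/ceph orch terms (the cluster in question runs the orchestrator, per the original post), a hedged sketch of the swap, assuming the replacement host is called newhost (placeholder) and the original OSD drives are moved into it:

  # make the new host reachable by cephadm and add it to the cluster
  ceph cephadm get-pub-key > ceph.pub
  ssh-copy-id -f -i ceph.pub root@newhost
  ceph orch host add newhost
  # scan the moved drives and bring the existing OSDs back up on the new host
  ceph cephadm osd activate newhost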

[ceph-users] replacing storage server host (not drives)

2023-09-26 Thread Wyll Ingersoll
We have a storage node that is failing, but the disks themselves are not. What is the recommended procedure for replacing the host itself without destroying the OSDs or losing data? This cluster is running ceph 16.2.11 using ceph orchestrator with docker containers on Ubuntu 20.04 (focal).

[ceph-users] set proxy for ceph installation

2023-09-26 Thread Majid Varzideh
Hi friends, I have deployed my first node in the cluster. We don't have direct internet access on the server, so I have to set a proxy for it. I set it in /etc/environment and /etc/profile, but I get the below error: 2023-09-26 17:09:38,254 7f04058b4b80 DEBUG ---

[ceph-users] Re: Balancer blocked as autoscaler not acting on scaling change

2023-09-26 Thread Anthony D'Atri
Note that this will adjust override reweight values, which will conflict with balancer upmaps. > On Sep 26, 2023, at 3:51 AM, c...@elchaka.de wrote: > > Hi, an idea is to see what > > ceph osd test-reweight-by-utilization > shows. > If it looks useful you can run the above command without "
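A quick, hedged way to see which mechanism is currently managing data placement before reweighting anything (standard CLI):

  # shows whether the balancer is active and in upmap mode
  ceph balancer status
  # the REWEIGHT column lists any override reweights already set
  ceph osd df tree
  # an override can be cleared per OSD (back to 1.0) if you stay with upmap
  ceph osd reweight <osd-id> 1.0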

[ceph-users] cephfs health warn

2023-09-26 Thread Ben
Hi, see below for details of the warnings. The cluster is running 17.2.5 and the warnings have been around for a while. One concern of mine is num_segments growing over time. The number of clients warned about under MDS_CLIENT_OLDEST_TID has increased from 18 to 25 as well. The nodes are on kernel 4.19.0-91.82.42.uelc20.x86_64.
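For readers chasing the same warnings, a hedged sketch of commands that expose the per-client state and the MDS journal segment count (the MDS name is a placeholder):

  ceph health detail
  # list client sessions, including oldest client/flush tid information
  ceph tell mds.<name> session ls
  # the mds_log section of the perf counters includes the segment count
  ceph tell mds.<name> perf dump mds_log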

[ceph-users] Re: pgs inconsistent every day same osd

2023-09-26 Thread Jorge JP
Hello, thank you. I think the steps are: 1. Mark the failed OSD out. 2. Wait for the data to rebalance and for the cluster to return to OK status. 3. Mark the OSD down. 4. Delete the OSD. 5. Replace the device with a new one. 6. Add the new OSD. Is that correct?
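Expressed as commands (a hedged sketch for a cephadm-managed cluster; OSD id 12 is just an example), the sequence would look roughly like:

  ceph osd out 12
  # wait until ceph -s shows recovery finished and the cluster is healthy again
  ceph orch daemon stop osd.12
  ceph osd purge 12 --yes-i-really-mean-it
  # swap the physical device, then add the new OSD (e.g. via ceph orch daemon add osd or an OSD service spec)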

[ceph-users] Re: pgs inconsistent every day same osd

2023-09-26 Thread Janek Bevendorff
Yes. If you've seen this recur multiple times, you can expect it will only get worse with time. You should replace the disk soon. Very often such disks also start to slow down other operations in the cluster as read times increase. On 26/09/2023 13:17, Jorge JP wrote: Hello, F

[ceph-users] pgs inconsistent every day same osd

2023-09-26 Thread Jorge JP
Hello, first, sorry for my English... For a few weeks I have been receiving daily notifications of HEALTH_ERR in my Ceph cluster. The notifications are related to inconsistent PGs and they are always on the same OSD. I ran a smartctl test on the disk assigned to that OSD and the result is "passed". Should I replace the disk with another ne
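For the recurring HEALTH_ERR itself, the usual triage (a hedged sketch using standard commands; <pgid> is a placeholder) is to identify the inconsistent PG and repair it while the disk is investigated:

  # list the PGs currently flagged inconsistent
  ceph health detail | grep inconsistent
  # show which objects and shards are affected
  rados list-inconsistent-obj <pgid> --format=json-pretty
  # ask the primary OSD to repair the PG
  ceph pg repair <pgid>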

[ceph-users] Re: Quincy NFS ingress failover

2023-09-26 Thread Ackermann, Christoph
Dear list members, after upgrading to Reef (18.2.0) I spent some time with CephFS, NFS & HA (ingress). I can confirm that ingress (count either 1 or 2) works well IF only ONE backend server is configured. But this is, of course, no HA. ;-) Two or more backend servers won't work because there isn'
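For context, the setup under discussion is typically created with something like the following (hedged sketch; cluster name, placement and virtual IP are placeholders, and flag spelling may differ slightly between releases):

  ceph nfs cluster create mynfs "2 host1 host2" --ingress --virtual_ip 192.0.2.10/24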

[ceph-users] rbd rados cephfs libs compilation

2023-09-26 Thread Arnaud Morin
Hey ceph users! I'd like to compile the different ceph libraries (rados, rbd, cephfs) with ENABLE_SHARED=OFF (I want static libraries). For a few days I have been struggling to build the whole ceph repo on Debian 12. Is there any way to build only the libraries? I don't need ceph, but only the client
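A hedged sketch of a libraries-only build from a ceph checkout; the ninja target names are assumptions that can vary between releases, so check `ninja -t targets` in the build directory first:

  ./install-deps.sh
  ./do_cmake.sh -DENABLE_SHARED=OFF -DCMAKE_BUILD_TYPE=RelWithDebInfo
  cd build
  # build only the client library targets instead of the whole tree
  ninja librados librbd cephfs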

[ceph-users] Re: Libcephfs : ceph_readdirplus_r() with ceph_ll_lookup_vino() : ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)

2023-09-26 Thread Joseph Fernandes
CC'ing ceph-users (apologies, I forgot last time I replied). On Tue, Sep 26, 2023 at 1:11 PM Joseph Fernandes wrote: > Hello Venky, > > Did you get a chance to look into the updated program? Am I > missing something? > I suppose it's something trivial I am missing, as I see these APIs used in > NFS Ganes

[ceph-users] Re: Balancer blocked as autoscaler not acting on scaling change

2023-09-26 Thread ceph
Hi, an idea is to see what ceph osd test-reweight-by-utilization shows. If it looks useful you can run the above command without "test". HTH, Mehmet. On 22 September 2023 11:22:39 CEST, b...@sanger.ac.uk wrote: >Hi Folks, > >We are currently running with one nearfull OSD and 15 nearfull pools.
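The commands being referred to (hedged sketch; both are standard Ceph CLI):

  # dry run: reports which OSDs would be reweighted and by how much
  ceph osd test-reweight-by-utilization
  # apply it for real once the dry-run output looks sane
  ceph osd reweight-by-utilization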

[ceph-users] Re: CephFS warning: clients laggy due to laggy OSDs

2023-09-26 Thread Janek Bevendorff
I have had defer_client_eviction_on_laggy_osds set to false for a while and I haven't had any further warnings so far (obviously), but also all the other problems with laggy clients bringing our MDS to a crawl over time seem to have gone. So at least on our cluster, the new configurable seems t
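For anyone wanting to reproduce this, the option Janek mentions can be toggled with a config command along these lines (sketch; it is an MDS-level option):

  ceph config set mds defer_client_eviction_on_laggy_osds false
  # confirm the value in effect
  ceph config get mds defer_client_eviction_on_laggy_osds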