[ceph-users] Re: [EXTERNAL] Re: Renaming a ceph node

2023-02-16 Thread Eugen Block
I'm glad it worked for you. We used the 'swap-bucket' command when we needed to replace an OSD node without waiting for the draining of the old one to finish and then for the backfilling of the new one to complete. I created a temporary bucket where I moved the old host (ceph osd crush move ), th
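For readers finding this thread later, one possible sequence for the swap-bucket approach described above might look like this. This is only a hedged sketch; the host and bucket names (temp, old-host, new-host) are placeholders, not taken from the thread, and the exact choreography depends on your CRUSH tree:

```shell
# 1. Create a temporary root bucket and park the old host there,
#    taking it out of the default CRUSH hierarchy
ceph osd crush add-bucket temp root
ceph osd crush move old-host root=temp

# 2. Swap the old host bucket with the freshly deployed new host bucket,
#    so the new OSDs take over the old host's CRUSH position in one step
#    (avoids draining the old node first and backfilling the new one after)
ceph osd crush swap-bucket old-host new-host --yes-i-really-mean-it

# 3. Clean up the now-empty buckets once data has settled
ceph osd crush remove old-host
ceph osd crush remove temp
```

These commands only manipulate the CRUSH map; verify the resulting tree with `ceph osd tree` before and after each step.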

[ceph-users] how to sync data on two site CephFS

2023-02-16 Thread zxcs
Hi, Experts, we already have a CephFS cluster, called A, and now we want to set up another CephFS cluster (called B) at another site. We need to synchronize data between them for some directories (ideally all directories). That means when we write a file in A cluste

[ceph-users] Re: how to sync data on two site CephFS

2023-02-16 Thread Eugen Block
Hi, if you have Pacific or later running you might want to look into CephFS mirroring [1]. Basically, it's about (asynchronous) snapshot mirroring: For a given snapshot pair in a directory, the cephfs-mirror daemon will rely on readdir diff to identify changes in a directory tree. The diffs are
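A rough sketch of the mirroring setup, following the documented steps; filesystem names (cephfs-a, cephfs-b), the peer user name, the site name, and the mirrored path are placeholders, and `<token>` stands for the bootstrap token produced on the target cluster:

```shell
# On both clusters: enable the mirroring mgr module
ceph mgr module enable mirroring

# On the target (site B) cluster: enable mirroring for the filesystem
# and create a bootstrap token for the source to import
ceph fs snapshot mirror enable cephfs-b
ceph fs snapshot mirror peer_bootstrap create cephfs-b client.mirror-remote site-b

# On the source (site A) cluster: deploy a cephfs-mirror daemon,
# enable mirroring, import the peer token, and add directories to mirror
ceph orch apply cephfs-mirror
ceph fs snapshot mirror enable cephfs-a
ceph fs snapshot mirror peer_bootstrap import cephfs-a <token>
ceph fs snapshot mirror add cephfs-a /some/directory
```

Snapshots created under the added directory on A are then replicated asynchronously to B; note this is one-way mirroring per configured peer, not bidirectional sync.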

[ceph-users] Re: how to sync data on two site CephFS

2023-02-16 Thread Robert Sander
Hi, On 16.02.23 12:53, zxcs wrote: we already have a CephFS cluster, called A, and now we want to set up another CephFS cluster (called B) at another site. We need to synchronize data between them for some directories (ideally all directories). That means when we

[ceph-users] Re: RGW archive zone lifecycle

2023-02-16 Thread J. Eric Ivancich
> On Feb 7, 2023, at 6:07 AM, ond...@kuuk.la wrote: > > Hi, > > I have two Ceph clusters in a multi-zone setup. The first one (master zone) > would be accessible to users for their interaction using RGW. > The second one is set to sync from the master zone with the tier type of the > zone set a

[ceph-users] Re: User + Dev monthly meeting happening tomorrow, Feb. 16th!

2023-02-16 Thread Laura Flores
There are no topics on the agenda, so I'm cancelling the meeting. On Wed, Feb 15, 2023 at 11:55 AM Laura Flores wrote: > Hi Ceph Users, > > The User + Dev monthly meeting is coming up tomorrow, Thursday, Feb. 16th > at 3:00 PM UTC. > > Please add any topics you'd like to discuss to the agenda: >

[ceph-users] Re: clt meeting summary [15/02/2023]

2023-02-16 Thread Nizamudeen A
Maybe an etherpad and pinning that to #sepia channel. On Wed, Feb 15, 2023, 23:32 Laura Flores wrote: > I would be interested in helping catalogue errors and fixes we experience > in the lab. Do we have a preferred platform for this cheatsheet? > > On Wed, Feb 15, 2023 at 11:54 AM Nizamudeen A

[ceph-users] RGW Service SSL HAProxy.cfg

2023-02-16 Thread Jimmy Spets
Hi, I am trying to set up the “High availability service for RGW” using SSL both to the HAProxy and from the HAProxy to the RGW backend. The SSL certificate gets applied to both HAProxy and the RGW. If I use the RGW instances directly they work as expected. The RGW config is as follows: servic

[ceph-users] Re: RGW Service SSL HAProxy.cfg

2023-02-16 Thread Marc
> > # This file is generated by cephadm. > global > log 127.0.0.1 local2 > chroot /var/lib/haproxy > pidfile /var/lib/haproxy/haproxy.pid > maxconn 8000 > daemon > stats socket /var/lib/haproxy/stats > > defaults > mode http > log global > option httplog > option dontlognull > option http-server-close >
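For the SSL-to-backend part being discussed, the relevant frontend/backend pieces of such a haproxy.cfg would look roughly like this. This is an illustrative sketch, not the generated file from the thread; the certificate path, server names, hosts, and port are placeholders:

```
frontend frontend
    bind *:443 ssl crt /var/lib/haproxy/haproxy.pem
    default_backend backend

backend backend
    option forwardfor
    # "ssl verify none" makes HAProxy re-encrypt traffic to the RGW backends;
    # without it HAProxy speaks plain HTTP to an HTTPS-only RGW and health
    # checks as well as requests fail
    server rgw0 host1:8443 ssl verify none check
    server rgw1 host2:8443 ssl verify none check
```

With cephadm-managed ingress the file is regenerated, so such changes belong in the ingress service spec rather than hand edits to the container's config.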

[ceph-users] forever stuck "slow ops" osd

2023-02-16 Thread Arvid Picciani
Hi, today our entire cluster froze, or to be specific, anything that uses librbd. ceph version 16.2.10. The message that saved me was "256 slow ops, oldest one blocked for 2893 sec, osd.7 has slow ops", because it makes it immediately clear that this OSD is the issue. I stopped the osd, which mad

[ceph-users] Re: forever stuck "slow ops" osd

2023-02-16 Thread Eugen Block
Have you tried to dump the stuck ops from that OSD? It could point to a misbehaving client, I believe there was a thread about that recently in this list. I don’t have the exact command right now but check (within cephadm shell) ‚ceph daemon osd.7 help‘ for the ‚dump‘ options. Zitat von Arv
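The dump commands alluded to above are, roughly, the following admin-socket calls (run on the node hosting the OSD, inside `cephadm shell` on containerized deployments):

```shell
# List operations currently in flight on the OSD, with their age and state
ceph daemon osd.7 dump_ops_in_flight

# Ops blocked right now, and recently recorded slow ops; the client id in
# each entry can point to a misbehaving client
ceph daemon osd.7 dump_blocked_ops
ceph daemon osd.7 dump_historic_slow_ops
```

`ceph daemon osd.7 help` lists the full set of dump commands available on that socket.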

[ceph-users] RGW cannot list or create openidconnect providers

2023-02-16 Thread mat
Hello, I'm attempting to setup an OpenIDConnect provider with RGW. I'm doing this using the boto3 API & Python. However it seems that the APIs are failing in some unexpected ways because radosgw was not setup correctly. There is sample code below, and yes, I know there are "secrets" in it - but

[ceph-users] Re: Extremely need help. Openshift cluster is down :c

2023-02-16 Thread kreept . sama
Here we enable MDS debug logging to stdout: ceph tell mds.gml-okd-cephfs-a config set debug_mds 20/0 ... debug 2023-02-16T09:49:56.265+ 7f0462329700 10 mds.0.server reply to stat on client_request(client.66426408:170 lookup #0x101/csi-vol-91510028-3e45-11ec-9461-0a580a82014a 202

[ceph-users] Re: Extremely need help. Openshift cluster is down :c

2023-02-16 Thread kreept . sama
And one more for memory ceph tell mds.gml-okd-cephfs-a config set debug_mds 0/20 This logs from active mds ... debug 2023-02-16T09:54:39.906+ 7f0460b26700 10 mds.0.cache |__ 0auth [dir 0x100 ~mds0/ [2,head] auth v=1619006913 cv=1619006913/1619006913 dir_auth=0 state=1073741825|complete f

[ceph-users] Re: Extremely need help. Openshift cluster is down :c

2023-02-16 Thread kreept . sama
And we found this when the active mds starts booting. conf: [mds] debug_mds = 0/20 debug_mds_balancer = 1 debug 2023-02-16T10:25:15.393+ 7fd58cbc6780 0 set uid:gid to 167:167 (ceph:ceph) debug 2023-02-16T10:25:15.393+ 7fd58cbc6780 0 ceph version 16.2.4 (3cbe25cde3cfa028984618ad32de9edc4c1e

[ceph-users] Re: RGW Service SSL HAProxy.cfg

2023-02-16 Thread Jimmy Spets
I forgot to add that the Ceph version is 17.2.5 managed with cephadm. /Jimmy ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: ceph noout vs ceph norebalance, which is better for minor maintenance

2023-02-16 Thread Konstantin Shalygin
Hi Will, All our clusters have had the noout flag set by default since cluster birth. The reasons: * if a rebalance starts due to EDAC or SFP degradation, it is faster to fix the issue via DC engineers and put the node back to work * noout prevents unwanted OSD fills and running out of space => outage of serv
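For minor maintenance the usual flag handling is a simple set/unset pair; a minimal sketch:

```shell
# Before maintenance: prevent down OSDs from being marked "out",
# so no rebalancing is triggered while the node is offline
ceph osd set noout

# ... perform the maintenance, reboot the node, etc. ...

# Afterwards: clear the flag so normal out-marking resumes
ceph osd unset noout

# Current flags are visible in the health/status output
ceph osd stat
```

Note the thread's point: some operators keep noout set permanently instead of toggling it per maintenance window.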

[ceph-users] ceph-osd@86.service crashed at a random time.

2023-02-16 Thread luckydog xf
Hello, lists. I have a 108-OSD Ceph cluster. All OSDs work fine except one, OSD-86. ceph-osd@86.service stopped working at a random time. The disk is normal by checking with `smartctl -a`. It could be fine for a few days after I restart it. Then it goes wrong again. I paste the related log
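To correlate such intermittent crashes, checking the unit journal and Ceph's crash module is a reasonable first step; a sketch (the `--since` window and `<crash-id>` are placeholders):

```shell
# Recent log lines from the failing OSD's systemd unit
journalctl -u ceph-osd@86 --since "-2 days" | tail -n 200

# Crashes recorded by the ceph crash module, if enabled
ceph crash ls
ceph crash info <crash-id>
```

The backtrace in `ceph crash info` usually distinguishes an assert/bug from hardware-triggered failures that smartctl alone may not show.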

[ceph-users] ceph-iscsi-cli: cannot remove duplicated gateways.

2023-02-16 Thread luckydog xf
Hi, please see the output below. ceph-iscsi-gw-1.ipa.pthl.hklocalhost.localdomain is the one that got messed up with a wrong hostname. I want to delete it. /iscsi-target...-igw/gateways> ls o- gateways ...

[ceph-users] Re: RGW cannot list or create openidconnect providers

2023-02-16 Thread Pritha Srivastava
Hi, Have you added oidc-provider caps to the user that is trying to create / list the OpenID Connect providers, in your case the user which has the access key 'L70QT3LN71SQXWHS97Y4'? ( https://docs.ceph.com/en/quincy/radosgw/oidc/) Thanks, Pritha On Fri, Feb 17, 2023 at
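The caps referred to above are granted with radosgw-admin; a minimal sketch, where the uid is a placeholder for the user owning that access key:

```shell
# Allow the user to create, list and delete OpenID Connect providers via
# the IAM-style API (boto3 create_open_id_connect_provider etc.)
radosgw-admin caps add --uid="demo-user" --caps="oidc-provider=*"
```

Without this cap the boto3 calls fail with access-denied style errors even though the access/secret keys themselves are valid.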