I'm glad it worked for you. We used the 'swap-bucket' command when we
needed to replace an OSD node without waiting for the old one to drain
and then for the new one to backfill. I
created a temporary bucket where I moved the old host (ceph osd crush
move ), th
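For anyone finding this in the archives, one way to combine those commands
(the exact sequence above is cut off; names like 'staging', 'old-host' and
'new-host' are hypothetical):

ceph osd crush add-bucket staging root          # temporary bucket outside the data root
ceph osd crush move new-host root=staging       # new host's OSDs come up without taking data
ceph osd crush swap-bucket new-host old-host    # new host takes the old host's place in one step

Depending on the release, swap-bucket may also want --yes-i-really-mean-it.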
Hi, Experts,
we already have a CephFS cluster, called A, and now we want to set up another
CephFS cluster (called B) at another site.
We need to synchronize data between them for some directories (if all
directories can be synchronized, even better). That means when we write a file in
A cluste
Hi,
if you have Pacific or later running you might want to look into CephFS
mirroring [1]. Basically, it's about (asynchronous) snapshot mirroring:
for a given snapshot pair in a directory, the cephfs-mirror daemon will
rely on readdir diff to identify changes in a directory tree. The
diffs are
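For reference, a minimal sketch of the moving parts, assuming a filesystem
named 'cephfs' on both sites and a peer user 'client.mirror_remote' (all
names are placeholders):

# on cluster B (target): enable the module and create a bootstrap token
ceph mgr module enable mirroring
ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote site-b

# on cluster A (source): run the mirror daemon, enable mirroring, import the token
ceph orch apply cephfs-mirror
ceph mgr module enable mirroring
ceph fs snapshot mirror enable cephfs
ceph fs snapshot mirror peer_bootstrap import cephfs <token>
ceph fs snapshot mirror add cephfs /some/directory

Snapshots taken under that directory on A are then replayed to B; note this
is one-way, not a bidirectional sync.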
Hi,
On 16.02.23 12:53, zxcs wrote:
we already have a CephFS cluster, called A, and now we want to set up another
CephFS cluster (called B) at another site.
We need to synchronize data between them for some directories (if all
directories can be synchronized, even better). That means when we
> On Feb 7, 2023, at 6:07 AM, ond...@kuuk.la wrote:
>
> Hi,
>
> I have two Ceph clusters in a multi-zone setup. The first one (master zone)
> would be accessible to users for their interaction using RGW.
> The second one is set to sync from the master zone with the tier type of the
> zone set a
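Assuming the tier type meant here is the archive zone (the message is cut
off, so that is a guess), the secondary zone is usually created along these
lines, with placeholder names and endpoints:

radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=archive-zone \
    --endpoints=https://rgw-site2.example.com:443 --tier-type=archive \
    --access-key=<system-access-key> --secret=<system-secret>
radosgw-admin period update --commit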
There are no topics on the agenda, so I'm cancelling the meeting.
On Wed, Feb 15, 2023 at 11:55 AM Laura Flores wrote:
> Hi Ceph Users,
>
> The User + Dev monthly meeting is coming up tomorrow, Thursday, Feb. 16th
> at 3:00 PM UTC.
>
> Please add any topics you'd like to discuss to the agenda:
>
Maybe an etherpad, pinned to the #sepia channel.
On Wed, Feb 15, 2023, 23:32 Laura Flores wrote:
> I would be interested in helping catalogue errors and fixes we experience
> in the lab. Do we have a preferred platform for this cheatsheet?
>
> On Wed, Feb 15, 2023 at 11:54 AM Nizamudeen A
Hi
I am trying to set up the “High availability service for RGW” using SSL
both to the HAProxy and from the HAProxy to the RGW backend.
The SSL certificate gets applied to both HAProxy and the RGW. If I use
the RGW instances directly, they work as expected.
The RGW config is as follows:
servic
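For context, a cephadm ingress spec for RGW with SSL terminated at HAProxy
usually looks roughly like the following (every value below is a placeholder):

cat > rgw-ingress.yaml <<'EOF'
service_type: ingress
service_id: rgw.default
placement:
  count: 2
spec:
  backend_service: rgw.default
  virtual_ip: 192.168.1.100/24
  frontend_port: 443
  monitor_port: 1967
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
EOF
ceph orch apply -i rgw-ingress.yaml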
>
> # This file is generated by cephadm.
> global
> log 127.0.0.1 local2
> chroot /var/lib/haproxy
> pidfile /var/lib/haproxy/haproxy.pid
> maxconn 8000
> daemon
> stats socket /var/lib/haproxy/stats
>
> defaults
> mode http
> log global
> option httplog
> option dontlognull
> option http-server-close
>
Hi,
today our entire cluster froze, or rather everything that uses librbd, to be specific.
ceph version 16.2.10
The message that saved me was "256 slow ops, oldest one blocked for
2893 sec, osd.7 has slow ops", because it makes it immediately clear
that this OSD is the issue.
I stopped the OSD, which mad
Have you tried to dump the stuck ops from that OSD? It could point to
a misbehaving client; I believe there was a thread about that recently
on this list. I don't have the exact command right now, but check
(within the cephadm shell) 'ceph daemon osd.7 help' for the 'dump' options.
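For reference, the admin socket commands that usually help here, run inside
the cephadm shell on the host carrying osd.7:

ceph daemon osd.7 dump_ops_in_flight       # ops currently in flight on the OSD
ceph daemon osd.7 dump_blocked_ops         # only the blocked ones
ceph daemon osd.7 dump_historic_slow_ops   # recently completed slow ops

Each op entry includes the client id and the object it touches, which
usually points at the misbehaving client.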
Quoting Arv
Hello,
I'm attempting to set up an OpenID Connect provider with RGW. I'm doing this
using the boto3 API and Python. However, it seems that the APIs are failing in
some unexpected ways because radosgw was not set up correctly. There is sample
code below, and yes, I know there are "secrets" in it - but
Here we enable MDS debug logging to stdout:
ceph tell mds.gml-okd-cephfs-a config set debug_mds 20/0
...
debug 2023-02-16T09:49:56.265+ 7f0462329700 10 mds.0.server reply to stat
on client_request(client.66426408:170 lookup
#0x101/csi-vol-91510028-3e45-11ec-9461-0a580a82014a
202
And one more for the in-memory log:
ceph tell mds.gml-okd-cephfs-a config set debug_mds 0/20
These logs are from the active MDS:
...
debug 2023-02-16T09:54:39.906+ 7f0460b26700 10 mds.0.cache |__ 0auth
[dir 0x100 ~mds0/ [2,head] auth v=1619006913 cv=1619006913/1619006913
dir_auth=0 state=1073741825|complete f
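As a side note on the 20/0 and 0/20 syntax: the first number is the level
written to the log, the second the level kept in the in-memory ring buffer.
To get the buffered level-20 entries out later, something like this should
work (same daemon name as above):

ceph daemon mds.gml-okd-cephfs-a log dump   # write the in-memory entries to the MDS log file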
And we found this when the active MDS starts booting.
conf:
[mds]
debug_mds = 0/20
debug_mds_balancer = 1
debug 2023-02-16T10:25:15.393+ 7fd58cbc6780 0 set uid:gid to 167:167
(ceph:ceph)
debug 2023-02-16T10:25:15.393+ 7fd58cbc6780 0 ceph version 16.2.4
(3cbe25cde3cfa028984618ad32de9edc4c1e
I forgot to add that the Ceph version is 17.2.5, managed with cephadm.
/Jimmy
Hi Will,
All our clusters have had the noout flag set by default since cluster birth. The reasons:
* if a rebalance would start due to EDAC or SFP degradation, it is faster to have the
DC engineers fix the issue and put the node back to work
* noout prevents unwanted OSD fills and running out of space => outage of
serv
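For completeness, the flag itself is toggled cluster-wide with:

ceph osd set noout
ceph osd unset noout

Since Nautilus it can also be scoped to a subtree, e.g.
'ceph osd set-group noout <host>' (check your release before relying on it).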
Hello, lists.
I have a 108 OSD Ceph cluster. All OSDs work fine except one, OSD-86.
ceph-osd@86.service stopped working at a random time.
The disk looks healthy when checked with smartctl -a.
It can run fine for a few days after I restart it, then it goes wrong
again.
I paste the related log
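Besides smartctl, it usually helps to attach the service log and any crash
reports the cluster has recorded, roughly:

journalctl -u ceph-osd@86 --since yesterday   # systemd log around the failure
ceph crash ls                                 # crash reports known to the cluster
ceph crash info <crash-id>                    # backtrace of a specific crash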
Hi, please see the output below.
ceph-iscsi-gw-1.ipa.pthl.hklocalhost.localdomain is the one that got
messed up with a wrong hostname. I want to delete it.
/iscsi-target...-igw/gateways> ls
o- gateways
...
Hi,
Have you added oidc-provider caps to the user that is trying to create or
list OpenID Connect providers, in your case the user with the access key
'L70QT3LN71SQXWHS97Y4'? (
https://docs.ceph.com/en/quincy/radosgw/oidc/)
Thanks,
Pritha
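In case it helps anyone searching later, the caps are added with
radosgw-admin (uid below is a placeholder):

radosgw-admin caps add --uid="someuser" --caps="oidc-provider=*"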
On Fri, Feb 17, 2023 at