Good evening!
The following problem occurred.
We have a Ceph 16.2.10 cluster.
The cluster was operating normally on Friday. To shut the cluster down we:
- Disconnected all clients
- Executed the commands:
ceph osd set noout
ceph osd set nobackfill
ceph osd set norecover
ceph osd set norebalance
ceph osd set nodown
cep
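For reference, the flag sequence above can be scripted, together with the matching unset commands for bringing the cluster back up afterwards. This is a minimal sketch assuming admin keyring access; it covers only the five flags listed above:

```shell
# Quiesce the cluster before shutdown: keep the monitors from marking
# stopped OSDs out or down, and stop all data movement.
for flag in noout nobackfill norecover norebalance nodown; do
    ceph osd set "$flag"
done

# After powering the cluster back on, clear the same flags so that
# peering and recovery can resume normally.
for flag in noout nobackfill norecover norebalance nodown; do
    ceph osd unset "$flag"
done
```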
Hi!
Currently I have limited the output of the RGW log to syslog via rsyslog (as
suggested by
Anthony), and limited the Docker logs via daemon.json.
I still get ops logs written to both the logs pool and the ops log file
(ops-log-ceph-client.rgw.hostname.log).
How can I stop logging the ops log on the RGW disk and keep logs
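If the goal is to stop the ops log entirely, or to keep it out of the RADOS log pool, the relevant options can be set per daemon. A sketch, assuming the daemon name is client.rgw.hostname (adjust to your instance, and verify the option names against your release's documentation):

```shell
# Disable ops logging entirely for this RGW instance
ceph config set client.rgw.hostname rgw_enable_ops_log false

# Alternatively, keep the ops log but stop writing it to the RADOS log pool
ceph config set client.rgw.hostname rgw_ops_log_rados false

# Restart the gateway so the change is guaranteed to take effect
# (the unit name depends on how the daemon was deployed)
systemctl restart ceph-radosgw@rgw.hostname
```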
Hi, just remove them from the load balancer.
Also, there are two configs you want set to true on the RGWs that do the LC and GC
processing, and false on the RGWs that are exposed to clients:
rgw_enable_lc_threads = true
rgw_enable_gc_threads = true
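Applied per daemon with `ceph config set`, this might look like the following sketch (the daemon names client.rgw.internal and client.rgw.frontend are placeholders for your actual instances):

```shell
# Dedicated gateway that runs lifecycle (LC) and garbage collection (GC)
ceph config set client.rgw.internal rgw_enable_lc_threads true
ceph config set client.rgw.internal rgw_enable_gc_threads true

# Client-facing gateway behind the load balancer: serves S3 traffic only
ceph config set client.rgw.frontend rgw_enable_lc_threads false
ceph config set client.rgw.frontend rgw_enable_gc_threads false
```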
--
Paul
On Mon, Nov 25, 2024 at 8:40 AM Szabo, Istvan (Agoda) wrote:
Good afternoon,
We tried leaving only one MDS: we stopped the others, even deleted one, and
turned off the standby MDS requirement. Nothing helped; the MDS remained in
the replay state.
Current situation: we now have two active MDSs in the replay state, and one
standby.
At the same time,
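When MDSs are stuck in up:replay, these commands show the current ranks, states, and health warnings, and can help judge whether replay is progressing. A sketch; mds.<name> is a placeholder for the actual daemon name:

```shell
# File system overview: ranks, MDS states (e.g. up:replay), standbys
ceph fs status
ceph health detail

# On the host running the MDS, via its admin socket:
# journal counters that indicate whether replay is advancing
ceph daemon mds.<name> perf dump mds_log
```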
Hi,
I remember a discussion where someone was using separate gateways for bucket
lifecycle processing, but I couldn't find it.
How can that be possible?
Thank you