Thanks, Konstantin. Will try.
> From: "Konstantin Shalygin"
> To: "Andrei Mikhailovsky"
> Cc: "ceph-users"
> Sent: Thursday, 29 December, 2022 03:42:56
> Subject: Re: [ceph-users] radosgw not working after upgrade to Quincy
> Hi,
> Just try to read your logs:
>> 2022-12-29T02:07:38.953+ 7
Hi all,
Old topic, but the problem still exists: Ceph 17.2.5, dockerized, Ubuntu 20.04, OSDs on HDD with WAL/DB on SSD. I tested it extensively, with osd_op_queue set either to mclock_scheduler (with the profile set to high recovery) or wpq together with the well-known options (sleep_time, max_backfill) from htt
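For completeness, the knobs in question can be applied roughly like this (a sketch; the values are illustrative, not a recommendation, and changing osd_op_queue only takes effect after an OSD restart):

  ceph config set osd osd_op_queue mclock_scheduler
  ceph config set osd osd_mclock_profile high_recovery_ops
  # or, alternatively, the wpq scheduler with manual tuning:
  ceph config set osd osd_op_queue wpq
  ceph config set osd osd_recovery_sleep_hdd 0.0
  ceph config set osd osd_max_backfills 4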
Hi,
I am suddenly facing an issue with my Ceph cluster, running ceph version 16.2.6.
I couldn't find any solution for the issue below.
Any suggestions?
health: HEALTH_WARN
1 clients failing to respond to capability release
1 clients failing to advance oldest client/flush tid
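In case it helps, a sketch of the commands commonly used to track down the offending client (the MDS name is a placeholder):

  ceph health detail                  # lists the implicated client ids
  ceph tell mds.<name> session ls     # maps client ids to hosts/mounts

Evicting the session (ceph tell mds.<name> session evict id=<client-id>) is sometimes used as a last resort, but remounting on the client is usually the gentler fix.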
Hello,
I seem to not have fully removed an old OSD. Now I have:
root@ceph07:/tmp# ceph orch ps |grep -e error -e stopped |grep ceph07
_osd.33     ceph07  stopped  2h ago  2y  quay.io/ceph/ceph:v15.2.17
mon.ceph01  ceph07  error    2h ago  2y  quay.io/ce
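If these are just stale cephadm daemons, something along these lines is typically used to clean them up (a sketch; the fsid is a placeholder, double-check the names against ceph orch ps, and be very careful with anything mon-related):

  # on host ceph07, remove the stray (underscore-prefixed) daemon directory:
  cephadm rm-daemon --name osd.33 --fsid <cluster-fsid> --force
  # for the errored mon, a redeploy is often tried first:
  ceph orch daemon redeploy mon.ceph01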
Daniel,
Could you, for a brief moment, turn on the debug logs for the mgr and the MDS and then attempt to create the subvolume?
I'd like to see what gets dumped in the logs when the EINVAL is
returned.
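Something like this should do (a sketch; level 20 is the usual maximum, and ceph config rm restores the defaults afterwards):

  ceph config set mgr debug_mgr 20
  ceph config set mds debug_mds 20
  ceph fs subvolume create <volume> <subvolume>   # reproduce the EINVAL
  ceph config rm mgr debug_mgr
  ceph config rm mds debug_mds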
On Wed, Dec 28, 2022 at 10:13 PM Daniel Kovacs wrote:
> We are on: 17.2.4
>
> Ceph fs volume ls output