Hi,
please share more details about your cluster, such as the output of:
ceph -s
ceph osd df tree
ceph pg ls-by-pool <pool> | head
If the client load is not too high, you could increase the
osd_max_scrubs config from 1 to 3 and see if anything improves (what
is the current value?). If the client load is high during the day,
you could instead restrict scrubbing to off-peak hours with
osd_scrub_begin_hour and osd_scrub_end_hour.
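For reference, a quick way to check and change it (the value 3 is just
an example to experiment with; revert if client I/O suffers):

# check the current value
ceph config get osd osd_max_scrubs
# raise it for all OSDs
ceph config set osd osd_max_scrubs 3
# revert if needed
ceph config set osd osd_max_scrubs 1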
Ilya, thanks for the clarification.
On Thu, May 4, 2023 at 1:12 PM Ilya Dryomov wrote:
> On Thu, May 4, 2023 at 11:27 AM Kamil Madac wrote:
> >
> > Thanks for the info.
> >
> > As a solution we used rbd-nbd, which works fine without any issues.
> > If we have time, we will also try to disable ipv4.
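In case it is useful to others, the rbd-nbd mapping itself is
straightforward (pool and image names below are placeholders):

# map an image through the NBD driver
rbd-nbd map <pool>/<image>
# the same thing via the generic rbd device interface
rbd device map -t nbd <pool>/<image>
# list current mappings and unmap
rbd-nbd list-mapped
rbd-nbd unmap /dev/nbd0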
Hi,
FYI - This might be pedantic, but there does not seem to be any difference
between using these two sets of commands:
- ceph osd pause / ceph osd unpause
- ceph osd set pause / ceph osd unset pause
I can see that they both set/unset the pauserd,pausewr flags, but since
they don't report which flags they changed, the equivalence isn't
obvious from the output.
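One way to confirm the equivalence is to check the cluster flags after
each variant:

ceph osd pause
ceph osd dump | grep flags   # shows pauserd,pausewr
ceph osd unpause
ceph osd set pause
ceph osd dump | grep flags   # same pauserd,pausewr
ceph osd unset pause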
Hi,
thanks for the pointer. I'll definitely look into upgrading our cluster and
patching it.
As a temporary fix: as stated at line -3 of the dump, the client
'client.96913903:2156912' was causing the crash. After we evicted it,
connected to the machine running this client, and rebooted it, the
problem went away.
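For the record, the eviction was along these lines (assuming this was
a CephFS client; the mds name is a placeholder, and 96913903 is the
client id from the dump above):

# find the offending session
ceph tell mds.<name> session ls
# evict it by client id
ceph tell mds.<name> session evict id=96913903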
I got verbal approvals for the listed PRs:
https://github.com/ceph/ceph/pull/51232 -- Venky approved
https://github.com/ceph/ceph/pull/51344 -- Venky approved
https://github.com/ceph/ceph/pull/51200 -- Casey approved
https://github.com/ceph/ceph/pull/50894 -- Radek approved
Suites: rados and fs
Hello Frank.
>If your only tool is a hammer ...
>Sometimes it's worth looking around.
You are absolutely right! But I have constraints: my customer is a
startup, and they want to build a hybrid system on their current
hardware for all their needs. That's why I'm spending time trying to
find a workaround.