Hi Yuri,
rados and upgrade/pacific-p2p look good to go.
On Tue, May 10, 2022 at 5:46 AM Benoît Knecht wrote:
>
> On Mon, May 09, 2022 at 07:32:59PM +1000, Brad Hubbard wrote:
> > It's the current HEAD of the pacific branch or, alternatively,
> > https://github.com/ceph/ceph-ci/tree/pacific-16.2.
Hi there, newcomer here.
I've been trying to figure out if it's possible to repair or recover cephfs
after some unfortunate issues a couple of months ago; these couple of nodes
have been offline most of the time since the incident.
I'm sure the problem is that I lack the ceph expertise to quite
In my experience:
"No scrub information available for pg 11.2b5
error 2: (2) No such file or directory"
is the output you get from the command when the up or acting OSD set has
changed since the last deep scrub. Have you tried running a deep scrub (ceph
pg deep-scrub 11.2b5) on the PG and then try
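In concrete terms, something like this (a sketch only; I'm assuming the error
above came from rados list-inconsistent-obj, which the message doesn't
actually say):

# trigger a fresh deep scrub of the PG
ceph pg deep-scrub 11.2b5
# wait for it to finish, e.g. watch the cluster log or the scrub stamp
ceph -w
ceph pg 11.2b5 query | grep deep_scrub_stamp
# then re-run the query that returned ENOENT
rados list-inconsistent-obj 11.2b5 --format=json-pretty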
Hi,
We've got an outstanding issue with one of our Ceph clusters here at RAL:
'Echo', our 40PB cluster. We found an object from an 8+3 EC RGW pool
in the failed_repair state. We aren't sure how the object got into this state,
but it doesn't appear to be a case of correlated drive
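For context, the sort of commands one would typically use to find and inspect
such an object (a sketch only; <pgid> is a placeholder, not the actual PG):

# list PGs currently flagged by scrub/repair
ceph health detail | grep -i -E 'inconsistent|repair'
# inspect the inconsistent object(s) in a given PG; for EC pools the
# per-shard detail shows up in the "shards" array of the JSON output
rados list-inconsistent-obj <pgid> --format=json-pretty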
Yo, I’m having the same problem and can easily reproduce it.
See
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/XOQXZYOWYMMQBWFXMHYDQUJ7LZZPFLSU/
And similar ones.
The problem still exists in Quincy 17.2.0, but it looks like it's too low a
priority to be fixed.
Ciao, Uli
> On 09.
Hi,
there's a profile "crash" for that. In a lab setup with Nautilus
there's one crash client with these caps:
admin:~ # ceph auth get client.crash
[client.crash]
    key =
    caps mgr = "allow profile crash"
    caps mon = "allow profile crash"
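Such a client can be created with something like this (a sketch; the entity
name just mirrors the output above, adjust as needed):

ceph auth get-or-create client.crash \
    mon 'allow profile crash' \
    mgr 'allow profile crash'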
On an Octopus cluster deployed
Hi,
I just stumbled over some log messages regarding ceph-crash:
May 10 09:32:19 bigfoot60775 ceph-crash[2756]: WARNING:ceph-crash:post
/var/lib/ceph/crash/2022-05-10T07:10:55.837665Z_7f3b726e-0368-4149-8834-6cafd92fb13f
as client.admin failed: b'2022-05-10T09:32:19.099+0200 7f911ad92700 -1
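A few things worth checking here (assuming a default package-based install;
paths and unit names differ a bit under cephadm):

# does a key with the crash profile exist at all?
ceph auth get client.crash
# have any reports made it into the cluster?
ceph crash ls
# reports that ceph-crash managed to post get moved into this subdirectory
ls /var/lib/ceph/crash/posted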
On Mon, May 9, 2022 at 21:04, Vladimir Brik wrote:
> Hello
> Does osd_scrub_auto_repair default to false because it's
> dangerous? I assume `ceph pg repair` is also dangerous then?
>
> In what kinds of situations do they cause problems?
With filestore, there were fewer (or no?) checksums, so the clu
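For reference, the knobs in question (a sketch; <pgid> is a placeholder):

# current default for automatic repair during (deep-)scrub
ceph config get osd osd_scrub_auto_repair
# turn it on cluster-wide for OSDs
ceph config set osd osd_scrub_auto_repair true
# or repair a single PG by hand after reviewing the inconsistency
ceph pg repair <pgid>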