On 26.09.22 21:27, Dhairya Parmar wrote:
Can you provide some more information on this? Can you show exactly what
error you get while trying to start the cluster?
I fixed the IP/hostname part, but I cannot get the cloned cluster to start
(monitor issues).
That means that you changed the IP
On 26.09.22 21:00, Frank Schilder wrote:
I wonder if it might be a good idea to collect such experience somewhere in the
ceph documentation, for example a link under Hardware Recommendations -> Solid
State Drives in the docs. Are there legal implications with creating a list of
drives showing
No PG recovery starts automatically when the OSD starts.
So you mean that you still have inactive PGs although your OSDs are
all up? In that case, try 'ceph pg repeer <pg-id>' to activate the
PGs; maybe your RGWs will start then.
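A hedged sketch of how that might look on the command line (the PG ID below is a made-up placeholder, use the IDs your cluster actually reports):

# list PGs that are not active+clean
ceph pg ls | grep -v 'active+clean'
# re-peer one inactive PG (12.1f is just an example ID)
ceph pg repeer 12.1f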
I'm using an erasure coded pool for RGW. In that rule we have k=
Hello Robert,
I changed ceph.conf and all the files containing IPs or hostname.
How can I change the mon map inside the MON DB ?
Best Regards,
Ahmed.
-----Original Message-----
From: Robert Sander
Sent: Tuesday, 27 September 2022 08:28
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Ceph Cluster
Hi Eugen,
Thanks for your reply.
Can you suggest a good recovery option for an erasure coded pool? k is the data
chunk count (11) and m the parity count (4). I thought that means that with
15 hosts, 3 hosts may go down and we can still migrate the data.
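For reference, a hedged sketch of how such a profile is usually created (the profile name, pool name and pg count below are made up; with crush-failure-domain=host, a k=11/m=4 pool should in principle stay readable with up to 4 hosts down):

# define the EC profile (names are examples only)
ceph osd erasure-code-profile set rgw-ec-profile k=11 m=4 crush-failure-domain=host
ceph osd erasure-code-profile get rgw-ec-profile
# create a pool using that profile (pg counts are examples)
ceph osd pool create rgw-ec-pool 128 128 erasure rgw-ec-profile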
If I set 'ceph osd set nodown', what will happen to the cluster? For example,
the migration is going on and I enable this flag. Will it cause any issue
while migrating the data?
Well, since we don't really know what is going on there, it's hard to
tell. But that flag basically prevents the MONs from marking OSDs down.
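For illustration, a rough sketch of setting, checking and clearing the flag:

ceph osd set nodown          # MONs stop marking OSDs down
ceph osd dump | grep flags   # check which cluster flags are currently set
ceph osd unset nodown        # remove the flag again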
I don’t believe there is any tooling to find and clean orphaned bucket index
shards. So if you’re certain they’re no longer needed, you can use `rados`
commands to remove the objects.
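If it helps, a hedged sketch of how that could be approached (the pool name and the .dir. object name are placeholders; compare every candidate against the radosgw-admin output before removing anything):

# list bucket index objects in the index pool (pool name is an example)
rados -p default.rgw.buckets.index ls | grep '^\.dir\.'
# cross-check the bucket IDs of the buckets that still exist
radosgw-admin bucket stats | grep '"id"'
# remove one shard object you have confirmed is orphaned (object name is a placeholder)
rados -p default.rgw.buckets.index rm .dir.<bucket-id>.0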
Eric
(he/him)
> On Sep 27, 2022, at 2:37 AM, Yuji Ito (伊藤 祐司) wrote:
>
> Hi,
>
> I have encountered a problem
You probably need to start here:
https://docs.ceph.com/en/latest/man/8/monmaptool/
It's used to recover from a MON failure [1].
But may I ask why you want to recover the cloned cluster? Wouldn't it
be easier to just bootstrap a new one and wipe everything? Anyway, the
monmaptool can help you with that.
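In case it helps, a rough, hedged sketch of what that procedure can look like (mon name, paths and IP are placeholders; back up the mon store first):

# with the mon stopped, extract the current monmap from its store
ceph-mon -i mon-a --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap
# swap the old address for the new one (name/IP are examples)
monmaptool --rm mon-a /tmp/monmap
monmaptool --add mon-a 192.168.1.10:6789 /tmp/monmap
# inject the edited map and start the mon again
ceph-mon -i mon-a --inject-monmap /tmp/monmap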
Hi,
I have an error in the Ansible Ceph installation.
The root cause of the error is that the keyring is not generated.
It happens for me only if the cluster network, monitor network and public
network are all different.
If the monitor network and public network are the same, everything works and Ceph
installs successfully.
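For comparison, a hedged sketch of how the three networks are typically declared in ceph-ansible's group_vars/all.yml (the subnets are made-up examples; variable names taken from the ceph-ansible defaults, so double-check them against your version):

public_network: "192.168.10.0/24"         # clients and MONs
cluster_network: "192.168.20.0/24"        # OSD replication/backfill traffic
monitor_address_block: "192.168.10.0/24"  # subnet the MON addresses are taken from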
Hi Matthew,
You just have to take two steps when writing your crush rule: first you
want to get 3 different hosts, then you need 2 OSDs from each host.
ceph osd getcrushmap -o /tmp/crush
crushtool -d /tmp/crush -o /tmp/crush.txt
#edit it / make new rule
rule custom-ec-ruleset {
id 3
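For what it's worth, a hedged sketch of how the complete rule could look for 3 hosts with 2 OSDs each (the rule id and the 'default' root are assumptions), plus recompiling and injecting the edited map:

rule custom-ec-ruleset {
        id 3
        type erasure
        step take default
        step choose indep 3 type host
        step chooseleaf indep 2 type osd
        step emit
}

crushtool -c /tmp/crush.txt -o /tmp/crush.new
ceph osd setcrushmap -i /tmp/crush.new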
Thank you Tyler,
That looks like exactly what I was looking for (now to test it in a test
rig) :-)
Cheers
Dulux-Oz
On 28/09/2022 07:16, Tyler Brekke wrote:
Hi Matthew,
You just have to take two steps when writing your crush rule. First
you want to get 3 different hosts, then you need 2 OSDs from each host.
Hi Eugen,
The OSD fails because RAM/CPU is overloaded, or whatever it is. After the OSD
fails it starts again; that's not the problem.
I need to know why the RGW fails when the OSD goes down.
The rgw log output below,
2022-09-07T12:03:42.893+ 7fdd23fdc5c0 0 framework: beast
2022-09-07T12:03:42.893+