[ceph-users] Re: Ceph Cluster clone

2022-09-27 Thread Robert Sander
On 26.09.22 21:27, Dhairya Parmar wrote: Can you provide some more information on this? Can you show exactly what error you get while trying to start the cluster? I fixed the IP/hostname part, but I cannot get the cloned cluster to start (monitor issues). That means that you changed the IP

[ceph-users] Re: weird performance issue on ceph

2022-09-27 Thread Robert Sander
On 26.09.22 21:00, Frank Schilder wrote: I wonder if it might be a good idea to collect such experience somewhere in the ceph documentation, for example, a link under hardware recommendations -> solid state drives in the docs. Are there legal implications with creating a list of drives showing

[ceph-users] Re: External RGW always down

2022-09-27 Thread Eugen Block
No pg recovery starts automatically when the osd starts. So you mean that you still have inactive PGs although your OSDs are all up? In that case try 'ceph pg repeer <pg_id>' to activate the PGs; maybe your RGWs will start then. I'm using an erasure coded pool for rgw. In that rule we have k=
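For reference, the repeer step Eugen suggests looks like this on the CLI (the PG id 7.1a is a placeholder, not taken from the thread):

    # list PGs stuck in an inactive state
    ceph pg dump_stuck inactive
    # re-trigger peering for one affected PG
    ceph pg repeer 7.1a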

[ceph-users] Re: Ceph Cluster clone

2022-09-27 Thread Ahmed Bessaidi
Hello Robert, I changed ceph.conf and all the files containing IPs or hostnames. How can I change the mon map inside the MON DB? Best Regards, Ahmed. -Original Message- From: Robert Sander Sent: Tuesday, 27 September 2022 08:28 To: ceph-users@ceph.io Subject: [ceph-users] Re: Ceph Cluster clone

[ceph-users] Re: External RGW always down

2022-09-27 Thread Monish Selvaraj
Hi Eugen, Thanks for your reply. Can you suggest a good recovery option for an erasure coded pool? Because k means the copy value (11) and m the parity value (4), I thought that means that out of 15 hosts, 3 hosts may go down while we also migrate the data. If I set ceph osd set nodown, what will happen to the cluster
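For context on the k/m arithmetic discussed here: with k=11 data chunks and m=4 coding chunks, every object is spread over k+m = 15 shards, and the pool can lose at most m = 4 shards (so up to 4 hosts with a host failure domain, not 3), at a raw-space overhead of (k+m)/k = 15/11, roughly 1.36x. These figures follow from the standard erasure-coding definitions rather than from anything confirmed in the thread.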

[ceph-users] Re: External RGW always down

2022-09-27 Thread Eugen Block
If I set ceph osd set nodown, what will happen to the cluster? For example, the migration goes on and I enable this parameter. Will it cause any issue while migrating the data? Well, since we don't really know what is going on there it's hard to tell. But that flag basically prevents the MONs from
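For reference, the flag in question is toggled like this (standard Ceph CLI):

    ceph osd set nodown     # MONs stop marking unresponsive OSDs down
    ceph osd unset nodown   # restore normal failure detection

The flag only suppresses the down state in the OSD map; it does not make flapping OSDs any healthier.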

[ceph-users] Re: How to remove remaining bucket index shard objects

2022-09-27 Thread J. Eric Ivancich
I don’t believe there is any tooling to find and clean orphaned bucket index shards. So if you’re certain they’re no longer needed, you can use `rados` commands to remove the objects. Eric (he/him) > On Sep 27, 2022, at 2:37 AM, Yuji Ito (伊藤 祐司) wrote: > > Hi, > > I have encountered a problem
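A minimal sketch of the cleanup Eric describes; the pool is the default RGW index pool, and the object name is a placeholder for a shard you have verified to be orphaned:

    # list index shard objects in the bucket index pool
    rados -p default.rgw.buckets.index ls
    # remove one orphaned shard object (irreversible, double-check first)
    rados -p default.rgw.buckets.index rm .dir.<bucket-id>.<shard-num>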

[ceph-users] Re: Ceph Cluster clone

2022-09-27 Thread Eugen Block
You probably need to start here: https://docs.ceph.com/en/latest/man/8/monmaptool/ It's used to recover from a MON failure [1]. But may I ask why you want to recover the cloned cluster? Wouldn't it be easier to just bootstrap a new one and wipe everything? Anyway, the monmaptool can help you
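A hedged sketch of the monmap-editing workflow the linked man page covers (the mon name mon1, the path /tmp/monmap, and the address are placeholders):

    # with the mon stopped, extract its current monmap
    ceph-mon -i mon1 --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap
    # swap the old address for the new one
    monmaptool --rm mon1 /tmp/monmap
    monmaptool --add mon1 192.168.1.10:6789 /tmp/monmap
    # inject the edited map back, then start the mon
    ceph-mon -i mon1 --inject-monmap /tmp/monmap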

[ceph-users] waiting for the monitor(s) to form the quorum.

2022-09-27 Thread Dmitriy Trubov
Hi, I have an error in the Ansible Ceph installation. The root cause of the error is that the keyring is not generated. It happens for me only if the cluster network, monitor network and public network are different. If the monitor network and public network are the same, everything works and Ceph installs successfully
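For reference, the split-network layout being described boils down to two ceph.conf options; the subnets below are invented examples:

    [global]
    public_network  = 10.0.1.0/24   # MON and client traffic
    cluster_network = 10.0.2.0/24   # OSD replication/heartbeat traffic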

[ceph-users] Re: 2-Layer CRUSH Map Rule?

2022-09-27 Thread Tyler Brekke
Hi Matthew, You just have to take two steps when writing your crush rule. First you want to get 3 different hosts, then you need 2 osd from each host.

    ceph osd getcrushmap -o /tmp/crush
    crushtool -d /tmp/crush -o /tmp/crush.txt
    # edit it / make new rule
    rule custom-ec-ruleset {
        id 3
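The digest cuts the rule off after "id 3"; a plausible completion matching the 3-hosts-times-2-OSDs layout described above (the step values are assumptions, not quoted from the thread):

    rule custom-ec-ruleset {
        id 3
        type erasure
        step set_chooseleaf_tries 5
        step take default
        step choose indep 3 type host
        step chooseleaf indep 2 type osd
        step emit
    }

Recompile and apply with: crushtool -c /tmp/crush.txt -o /tmp/crush.new && ceph osd setcrushmap -i /tmp/crush.new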

[ceph-users] Re: 2-Layer CRUSH Map Rule?

2022-09-27 Thread duluxoz
Thank you Tyler, That looks like exactly what I was looking for (now to test it in a test rig) :-) Cheers Dulux-Oz On 28/09/2022 07:16, Tyler Brekke wrote: Hi Matthew, You just have to take two steps when writing your crush rule. First you want to get 3 different hosts, then you need 2 osd

[ceph-users] Re: External RGW always down

2022-09-27 Thread Monish Selvaraj
Hi Eugen, The OSDs fail because of RAM/CPU overload, whatever it is. After an OSD fails, it starts again. That's not the problem. I need to know why the RGW fails when the OSDs go down. The rgw log output is below:

2022-09-07T12:03:42.893+ 7fdd23fdc5c0 0 framework: beast
2022-09-07T12:03:42.893+