[ceph-users] WG: Multisite sync issue

2022-02-25 Thread Poß, Julian
Hi, I set up multisite with 2 Ceph clusters and multiple RGWs and realms/zonegroups. This setup was installed using the ceph-ansible branch "stable-5.0", with focal+octopus. During some testing, I noticed that the replication somehow does not seem to work as expected. With s3cmd, I put a small fil
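For context, a minimal sketch (not taken from the original post) of the usual first checks on an RGW host when multisite replication looks stalled; the bucket name is a placeholder:

    radosgw-admin sync status                           # overall metadata/data sync state for this zone
    radosgw-admin sync error list                       # any recorded sync errors
    radosgw-admin bucket sync status --bucket=<bucket>  # per-bucket sync view; <bucket> is a placeholder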

[ceph-users] Re: Archive in Ceph similar to Hadoop Archive Utility (HAR)

2022-02-25 Thread Janne Johansson
On Fri, 25 Feb 2022 at 08:49, Anthony D'Atri wrote: > There was a similar discussion last year around Software Heritage’s archive project, suggest digging up that thread. > Some ideas: > * Pack them into (optionally compressed) tarballs - from a quick search it sorta looks like HAR uses a
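As a rough illustration of the "pack into tarballs" idea (file and bucket names below are made up, not from the thread), many small files become one larger RGW object, which keeps per-object metadata overhead down:

    tar -czf batch-0001.tar.gz ./small-files/            # bundle many small files into one compressed archive
    s3cmd put batch-0001.tar.gz s3://archive-bucket/     # store it as a single object in RGW
    tar -tzf batch-0001.tar.gz                           # list members later without unpacking everything
    tar -xzf batch-0001.tar.gz path/to/one/file          # extract a single member on demand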

[ceph-users] Re: WG: Multisite sync issue

2022-02-25 Thread Eugen Block
Hi, I would stop all RGWs except one in each cluster to limit the places and logs to look at. Do you have a loadbalancer as endpoint, or do you have a list of all RGWs as endpoints? Quoting "Poß, Julian": Hi, I set up multisite with 2 Ceph clusters and multiple RGWs and realms/
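A hedged sketch of that suggestion; the zonegroup name and the systemd instance are placeholders that depend on how ceph-ansible named the daemons:

    radosgw-admin zonegroup get --rgw-zonegroup=<zonegroup>   # shows which endpoints each zone advertises
    systemctl stop ceph-radosgw@rgw.<instance>                # stop all but one RGW per cluster while debugging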

[ceph-users] Re: WG: Multisite sync issue

2022-02-25 Thread Poß, Julian
Hi Eugen, there is currently only one RGW installed for each region+realm. So the places to look at are already pretty much limited. As of now, the RGWs themselves are the endpoints. So far no loadbalancer has been put into place there. Best, Julian -----Original Message----- From: Eugen Blo

[ceph-users] taking out ssd osd's, having backfilling with hdd's?

2022-02-25 Thread Marc
I am taking out SSDs and get backfilling on HDDs; how is this possible? 2 active+remapped+backfill_wait 1 active+remapped+backfilling pools 51, 53, 20 are backfilling, these pools have the crush rule replicated_ruleset "steps": [ { "op
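A minimal sketch for checking whether that rule is restricted to a device class (the rule name is taken from the post, the pool name is a placeholder); a rule whose "take" step names no device class spans both SSD and HDD OSDs, so removing SSDs can legitimately trigger backfill onto HDDs:

    ceph osd crush rule ls                       # list all CRUSH rules
    ceph osd crush rule dump replicated_ruleset  # look for "item_name": "default" vs "default~hdd" / "default~ssd"
    ceph osd pool get <pool> crush_rule          # confirm which rule each affected pool uses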

[ceph-users] Using NFS-Ganesha V4 with current ceph docker image V16.2.7 ?

2022-02-25 Thread Uwe Richter
Hello all, I want to use NFS-Ganesha V4 for its "POSIX ACL support for FSAL_CEPH" (=> https://github.com/nfs-ganesha/nfs-ganesha/wiki/ReleaseNotes_4 ) with a docker container from quay.io/ceph/ceph in our running cluster. For example, for tag v16.2.7 in the manifest ContainerConfig.Cmd (=> https://qu
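A hypothetical way to check which Ganesha version a given image ships, assuming the nfs-ganesha packages are included in that image (binary and package names may differ):

    docker run --rm --entrypoint ganesha.nfsd quay.io/ceph/ceph:v16.2.7 -v    # print the Ganesha version
    docker run --rm --entrypoint rpm quay.io/ceph/ceph:v16.2.7 -q nfs-ganesha # query the installed package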

[ceph-users] Re: WG: Multisite sync issue

2022-02-25 Thread Eugen Block
I see, then I misread your statement about multiple RGWs: It also worries me that replication won't work with multiple RGWs in one zone but one of them being unavailable, for instance during maintenance. Is there anything other than the RGW logs pointing to any issues? I find it strange t

[ceph-users] removing osd, reweight 0, backfilling done, after purge, again backfilling.

2022-02-25 Thread Marc
I have a clean cluster state, with the OSDs that I am going to remove at a reweight of 0. And then after executing 'ceph osd purge 19', I again have remapping+backfilling? Is this indeed the correct procedure, or is this outdated? https://docs.ceph.com/en/latest/rados/operations/add-or-rm-osds/#
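For reference, a hedged sketch of the commands involved and the distinction that usually explains extra backfill after a purge (osd.19 as in the post; this is generic behaviour, not the thread's answer):

    ceph osd reweight 19 0                      # override weight only; the CRUSH map is unchanged
    ceph osd crush reweight osd.19 0            # removes the OSD's weight from its host bucket in the CRUSH map
    ceph osd purge 19 --yes-i-really-mean-it    # removes the OSD entirely, including its CRUSH item
    # Purging an OSD that still carries CRUSH weight changes the map, so PGs are remapped again.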

[ceph-users] Re: removing osd, reweight 0, backfilling done, after purge, again backfilling.

2022-02-25 Thread Janne Johansson
On Fri, 25 Feb 2022 at 13:00, Marc wrote: > I have a clean cluster state, with the OSDs that I am going to remove at a reweight of 0. And then after executing 'ceph osd purge 19', I again have remapping+backfilling? > Is this indeed the correct procedure, or is this outdated? > https://docs.

[ceph-users] Re: Archive in Ceph similar to Hadoop Archive Utility (HAR)

2022-02-25 Thread Bobby
Thanks Anthony and Janne... exactly what I have been looking for! On Fri, Feb 25, 2022 at 9:25 AM Janne Johansson wrote: > On Fri, 25 Feb 2022 at 08:49, Anthony D'Atri <anthony.da...@gmail.com> wrote: > > There was a similar discussion last year around Software Heritage’s archive project, s

[ceph-users] Re: Archive in Ceph similar to Hadoop Archive Utility (HAR)

2022-02-25 Thread Anthony D'Atri
You bet, glad to help. Zillions of small files indeed present a relatively higher metadata overhead, and can be problematic in multiple ways. When using RGW, indexless buckets may be advantageous. Another phenomenon is space amplification — with say a 1 GB file/object, a partially full la
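On the space-amplification point, a minimal sketch (not from the thread) of checking the BlueStore allocation unit that drives it; osd.0 is a placeholder, and the daemon command must be run on that OSD's host:

    ceph config get osd.0 bluestore_min_alloc_size_hdd        # configured/default value from the mon config db
    ceph daemon osd.0 config get bluestore_min_alloc_size_hdd # value the running OSD actually uses
    # With e.g. a 64 KiB allocation unit, a 1 KiB object still occupies 64 KiB per replica,
    # so 3x replication turns 1 KiB of data into roughly 192 KiB on disk.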

[ceph-users] Re: WG: Multisite sync issue

2022-02-25 Thread Poß, Julian
As far as I can tell, it can be reproduced every time, yes. That statement was actually about two RGWs in one zone. That is also something that I tested, because I felt like Ceph should be able to handle that in an HA-like fashion on its own. But for the main issue, there is indeed only one RGW in each zone ru

[ceph-users] quay.io image no longer existing, required for node add to repair cluster

2022-02-25 Thread Kai Börnert
Hi, what would be the correct way to move forward? I have a 3-node cephadm-installed cluster; one node died, the other two are fine and work as expected, so no data loss, but a lot of remapped/degraded PGs. The dead node was replaced and I wanted to add it to the cluster using "ceph orch host a
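A hedged sketch of re-adding a rebuilt host with cephadm (hostname and IP are placeholders); the orchestrator's SSH key has to be on the new node before the add:

    ceph cephadm get-pub-key > ~/ceph.pub            # export the cluster's cephadm SSH public key
    ssh-copy-id -f -i ~/ceph.pub root@<new-host>     # install it on the rebuilt node
    ceph orch host add <new-host> <ip-address>       # re-register the host with the orchestrator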

[ceph-users] Re: 3 OSDs can not be started after a server reboot - rocksdb Corruption

2022-02-25 Thread Igor Fedotov
Hi Sebastian, I submitted a ticket, https://tracker.ceph.com/issues/54409, which shows my analysis based on your previous log (from 21-02-2022), which wasn't verbose enough at the debug-bluestore level to reach a final conclusion. Unfortunately, the last logs (from 24-02-2022) you shared don't incl

[ceph-users] Re: quay.io image no longer existing, required for node add to repair cluster

2022-02-25 Thread Adam King
For the last question, cephadm has a config option for whether or not it tries to convert image tags to a repo digest (ceph config set mgr mgr/cephadm/use_repo_digest true/false). I'm not sure if setting it to false helps if the tag has already been converted, though. In terms of getting the cluster

[ceph-users] Re: quay.io image no longer existing, required for node add to repair cluster

2022-02-25 Thread Kai Börnert
Thank you very much :) "ceph config set global container_image" was the solution to get the new node to deploy fully, and with "ceph config set mgr mgr/cephadm/use_repo_digest true/false" it will hopefully never happen again. Now let's hope the recovery proceeds without further trouble. Greetings, Kai On
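Roughly how the two settings mentioned in this thread would be applied; the image tag below is a placeholder and should match the cluster's actual release:

    ceph config set global container_image quay.io/ceph/ceph:<tag>   # pin the image cephadm deploys, e.g. v16.2.7
    ceph config set mgr mgr/cephadm/use_repo_digest false            # keep the tag instead of converting it to a digest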

[ceph-users] Re: quay.io image no longer existing, required for node add to repair cluster

2022-02-25 Thread Robert Sander
On 25.02.22 16:43, Adam King wrote: > ceph config set mgr mgr/cephadm/use_repo_digest false Nice to know. The other question is: Why is the digest changing for a released Ceph image with a specific version tag? What changes are made to the container image that are not in the release notes?

[ceph-users] Re: quay.io image no longer existing, required for node add to repair cluster

2022-02-25 Thread Adam King
I don't know for sure, but it's possibly a result of the CentOS 8 EOL stuff from a few weeks ago (they removed some repos and a lot of our build stuff broke). I think we had to update some of our container images to deal with that. - Adam King On Fri, Feb 25, 2022 at 10:55 AM Robert Sander wrote

[ceph-users] Re: quay.io image no longer existing, required for node add to repair cluster

2022-02-25 Thread Robert Sander
On 25.02.22 17:24, Adam King wrote: > I don't know for sure, but it's possibly a result of the CentOS 8 EOL stuff from a few weeks ago (they removed some repos and a lot of our build stuff broke). I think we had to update some of our container images to deal with that. IMHO container image cha

[ceph-users] Re: Multisite sync issue

2022-02-25 Thread Mule Te (TWL007)
We have the same issue on Ceph 15.2.15. In our testing cluster, it seems that Ceph 16 solved this issue. The PR https://github.com/ceph/ceph/pull/41316 seems to fix it, but I do not know why it was not merged back to Ceph 15. Also, here is a new

[ceph-users] Quincy release candidate v17.1.0 is available

2022-02-25 Thread Josh Durgin
This is the first release candidate for Quincy. The final release is slated for the end of March. This release has been through large-scale testing thanks to several organizations, including Pawsey Supercomputing Centre, who allowed us to harden cephadm and the ceph dashboard on their 4000-OSD clu