[ceph-users] Multisite RGW - stuck metadata shards (metadata is behind on X shards)

2019-09-11 Thread P. O.
Hi all, in my environment with two replicated (Mimic 13.2.6) clusters I have a problem with stuck metadata shards. [Master root@rgw-1]$ radosgw-admin sync status realm b144111d-8176-47e5-aa3a-85c65032e8a9 (realm) zonegroup 2ead77cb-f5c2-4d62-9959-12912828fb4b (1_zonegroup)
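
For readers hitting the same symptom, a minimal sketch of how lagging metadata shards are usually inspected on the secondary zone; the shard id below is a placeholder, not a value from this thread:

  # Overall sync state; lists which metadata shards are reported behind
  radosgw-admin sync status
  radosgw-admin metadata sync status

  # Inspect the metadata log for one of the lagging shards (shard id is illustrative)
  radosgw-admin mdlog list --shard-id=12

  # Check whether replication errors are pinning a shard
  radosgw-admin sync error list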

[ceph-users] Multisite RGW - Large omap objects related with bilogs

2019-08-09 Thread P. O.
Hi all, I have two Ceph clusters in an RGW multisite environment, with ~1500 buckets (500M objects, 70 TB). Some of the buckets are very dynamic (objects are constantly changing). I have problems with large omap objects in the bucket indexes, related to these "dynamic buckets". For example: [root@rgw ~]#
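
A hedged sketch of the checks commonly used to see which index objects are large and how much bucket index log (bilog) data a bucket is carrying; the bucket name below is a placeholder:

  # Shows the LARGE_OMAP_OBJECTS warning and which pool raised it
  ceph health detail

  # Per-bucket index shard layout and object counts
  radosgw-admin bucket stats --bucket=mybucket
  radosgw-admin bucket limit check

  # Bucket index log entries kept for multisite replication of this bucket
  radosgw-admin bilog list --bucket=mybucket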

Re: [ceph-users] Multisite RGW - endpoints configuration

2019-07-17 Thread P. O.
)? On Wednesday, 17 July 2019, P. O. wrote: > Hi, > > > Is there any mechanism inside the rgw that can detect faulty endpoints for a > configuration with multiple endpoints? > > Is there any advantage related to the number of replication endpoints? Can > I exp
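
On the multiple-endpoints part of the question, a minimal sketch of how several endpoints are normally attached to a zone and zonegroup, using the zone and zonegroup names from this thread; the hostnames are illustrative, not from this setup:

  # Several radosgw instances can be listed per zone; sync requests are spread across them
  radosgw-admin zone modify --rgw-zone=primary_1 \
      --endpoints=http://rgw-a.example:80,http://rgw-b.example:80
  radosgw-admin zonegroup modify --rgw-zonegroup=1_zonegroup \
      --endpoints=http://rgw-a.example:80,http://rgw-b.example:80

  # Changes only take effect after committing a new period
  radosgw-admin period update --commit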

Re: [ceph-users] Multisite RGW - endpoints configuration

2019-07-17 Thread P. O.
isite configuration. > > On 7/16/19 2:52 PM, P. O. wrote: > >> Hi all, >> >> I have a multisite RGW setup with one zonegroup and two zones. Each zone >> has one endpoint configured like below: >> >> "zonegroups": [ >> {

[ceph-users] Multisite RGW - endpoints configuration

2019-07-16 Thread P. O.
Hi all, I have a multisite RGW setup with one zonegroup and two zones. Each zone has one endpoint configured like below: "zonegroups": [ { ... "is_master": "true", "endpoints": ["http://192.168.100.1:80"], "zones": [ { "name": "primary_1", "endpoints": ["http://192.168.100.1:80"]
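
For comparison, a rough sketch of how the same zonegroup section might look once each zone lists two endpoints (the second address is hypothetical), roughly as returned by radosgw-admin zonegroup get:

  radosgw-admin zonegroup get
  {
      ...
      "is_master": "true",
      "endpoints": ["http://192.168.100.1:80", "http://192.168.100.2:80"],
      "zones": [
          {
              "name": "primary_1",
              "endpoints": ["http://192.168.100.1:80", "http://192.168.100.2:80"],
              ...
          }
      ]
  }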