Hi all,
In my environment with two replicated clusters (mimic 13.2.6) I have a
problem with stuck metadata shards.
[Master root@rgw-1]$ radosgw-admin sync status
realm b144111d-8176-47e5-aa3a-85c65032e8a9 (realm)
zonegroup 2ead77cb-f5c2-4d62-9959-12912828fb4b (1_zonegroup)
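
A minimal way to see which metadata shards are behind and whether any sync
errors were recorded is the following (standard radosgw-admin subcommands;
typically run on the zone whose metadata sync is behind, usually the
non-master one):

# overall metadata sync state and the shards marked behind/recovering
radosgw-admin metadata sync status

# any errors recorded while syncing those shards
radosgw-admin sync error list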
Hi all,
I have two Ceph clusters in an RGW multisite environment, with ~1500 buckets
(500M objects, 70 TB).
Some of the buckets are very dynamic (objects are constantly changing).
I have problems with large omap objects in the bucket indexes, related to
these "dynamic" buckets.
For example:
[root@rgw ~]#
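
As a general sketch of the kind of checks that surface these warnings (the
index pool and object names below follow the default naming for this zone
and are assumptions):

# per-bucket object counts versus shard count and fill status
radosgw-admin bucket limit check

# count the omap keys in a single index shard object directly; the pool name
# (primary_1.rgw.buckets.index) and the .dir.<bucket_id>.<shard> object name
# follow the default naming and are assumptions here
rados -p primary_1.rgw.buckets.index listomapkeys .dir.<bucket_id>.0 | wc -l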
On Wednesday, July 17, 2019, P. O. wrote:
> Hi,
>
>
> Is there any mechanism inside the rgw that can detect faulty endpoints for a
> configuration with multiple endpoints?
>
> Is there any advantage related to the number of replication endpoints? Can
> I exp
multisite configuration.
>
> On 7/16/19 2:52 PM, P. O. wrote:
>
>> Hi all,
>>
>> I have a multisite RGW setup with one zonegroup and two zones. Each zone
>> has one endpoint configured as below:
>>
>> "zonegroups": [
>> {
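
Regarding the faulty-endpoint question quoted above: a rough external probe
is to poll each endpoint and treat anything other than HTTP 200 as suspect
(RGW normally answers an anonymous GET / with 200). This is only an external
check, not an RGW-internal mechanism, and the second address below is
hypothetical:

# poll each gateway endpoint; a non-200 answer marks it as suspect
# (curl reports 000 when it cannot connect at all)
for ep in http://192.168.100.1:80 http://192.168.100.2:80; do
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$ep/")
    echo "$ep -> $code"
done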
Hi all,
I have a multisite RGW setup with one zonegroup and two zones. Each zone has
one endpoint configured as below:
"zonegroups": [
{
...
"is_master": "true",
"endpoints": ["http://192.168.100.1:80";],
"zones": [
{
"name": "primary_1",
"endpoints": ["http://192.168.100.1:80";]