Any input would be much appreciated.
Thanks.
> George Yil wrote (9 Feb 2021 16:41):
>
> Hi,
>
> I am sort of a newbie to RGW multisite. I guess there is an important limitation
> about bucket index sharding if you run multisite. I would like to learn
> better or
Hi,
I am sort of a newbie to RGW multisite. I guess there is an important limitation
about bucket index sharding if you run multisite. I would like to learn better
or correct myself. And I also want to leave a bookmark here for future cephers
if possible. I apologize if this has been asked before, however
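For the bookmark: a quick way to check where a bucket's index shards stand,
sketched from how I understand the Nautilus-era tooling (<bucket> and
<bucket-id> are placeholders):

  # per-bucket shard fill status (objects per shard)
  radosgw-admin bucket limit check

  # the current num_shards is also visible in the bucket instance metadata
  radosgw-admin metadata get bucket.instance:<bucket>:<bucket-id>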
Hi Marc,
Thanks for participating. At first I thought this was an incorrect report and
that maybe I needed to upgrade for a bugfix.
But I couldn't find such a report, so I asked here.
When people shared their experiences, it appeared there may be two causes:
unbalanced OSDs or storage amplification.
As f
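For anyone chasing the unbalanced-OSD possibility, this is the check and fix
as I understand it on Nautilus (a sketch; enabling the balancer is a change
you should verify against your own cluster first):

  ceph osd df | sort -nk8    # spot utilization outliers (Anthony's tip, quoted below)
  ceph balancer status
  ceph balancer mode upmap
  ceph balancer on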
May I ask if enabling pool compression would help against future space amplification?
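To sketch what I mean, with the pool name from my earlier mails (snappy is
just one algorithm choice, and as far as I know compression only applies to
data written after it is enabled):

  ceph osd pool set secondaryzone.rgw.buckets.data compression_algorithm snappy
  ceph osd pool set secondaryzone.rgw.buckets.data compression_mode aggressive

My understanding is that this would not help where the amplification comes
from min_alloc_size padding, since allocation still happens in min_alloc_size
units.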
> George Yil wrote (27 Jan 2021 18:57):
>
> Thank you. This helps a lot.
>
>> Josh Baergen wrote (27 Jan 2021 17:08):
>>
>> On Wed, Jan 27, 2021 at 12:24 AM George Yil wrote:
Thank you. This helps a lot.
> Josh Baergen wrote (27 Jan 2021 17:08):
>
> On Wed, Jan 27, 2021 at 12:24 AM George Yil wrote:
>> May I ask if it can be dynamically changed and whether any disadvantages
>> should be expected?
>
> Unless there's some magic I'
I did not. Honestly, I was not aware of such a thing. Thanks for the
notification. I hope this is not bad news.
May I ask if it can be dynamically changed and whether any disadvantages
should be expected?
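For future readers, the manual route as I understand it (a sketch; <bucket>
and <N> are placeholders, and on multisite the documentation describes extra
steps around sync before and after resharding, so check it first):

  radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<N>
  radosgw-admin reshard status --bucket=<bucket>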
> On 27 Jan 2021, at 01:33, Josh Baergen wrote:
>
> > I created radosgw pools. secondaryzone
https://pastebin.ubuntu.com/p/c2KQD5CGMV/
#crush rules
https://pastebin.ubuntu.com/p/X6WsZhV3Zz/
Thanks.
> On 26 Jan 2021, at 11:18, Anthony D'Atri wrote:
>
> ceph osd df | sort -nk8
>
>> On Jan 25, 2021,
Hi,
I have a Ceph Nautilus (14.2.9) cluster with 10 nodes. Each node has
19x16TB disks attached.
I created the radosgw pools. The secondaryzone.rgw.buckets.data pool is
configured as EC 8+2 (jerasure).
ceph df shows 2.1PiB MAX AVAIL space.
Then I configured radosgw as a secondary zone and 100TiB of S3 data
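For context on the storage-amplification explanation discussed earlier in
this digest, a back-of-the-envelope sketch (assuming the Nautilus default
bluestore_min_alloc_size_hdd of 64KiB): with EC 8+2, each rados object is
cut into 8 data chunks plus 2 coding chunks, and every chunk is rounded up
to min_alloc_size on disk. A 128KiB tail object becomes 8 chunks of 16KiB,
each padded to 64KiB, so 10 x 64KiB = 640KiB gets allocated where the ideal
EC footprint would be 128KiB x 10/8 = 160KiB, i.e. 4x amplification for that
object. Large objects amortize the padding; many small tail objects do not.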