FYI, ceph-ansible currently has a problem with ceph-volume, so don't run the
upgrade via the rolling_update.yml playbook.
-Brent
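(Editor's note: a minimal sketch of the manual per-node alternative while the
playbook is broken; these are the standard one-node-at-a-time upgrade steps,
assumed here rather than taken from Brent's note, and the package-manager
command will vary by distro:)
# ceph osd set noout                 (stop CRUSH from rebalancing mid-upgrade)
# dnf update ceph\*                  (on one node at a time)
# systemctl restart ceph-osd.target
# ceph -s                            (wait for HEALTH_OK before the next node)
# ceph osd unset noout               (once every node is done)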
-----Original Message-----
From: David Galloway
Sent: Wednesday, November 18, 2020 9:39 PM
To: ceph-annou...@ceph.io; ceph-users@ceph.io; d...@ceph.io;
ceph-de...@vger.kernel.org; ceph
Hi guys,
I'll have a future Ceph deployment with the following setup:
- 7 powerful nodes running Ceph 15.2.x with mon, rgw and osd daemons colocated
- 100+ SATA drives with EC 4+2
- every OSD will have a large NVMe partition (300 GB) for rocksdb
- the storage will be dedicated to RGW traffic
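(Editor's note: a minimal provisioning sketch of that layout; the device paths
/dev/sd[b-p] and /dev/nvme0n1, the profile name rgw-ec42, and the PG count are
illustrative assumptions, not from the poster's message:)
# ceph osd erasure-code-profile set rgw-ec42 k=4 m=2 crush-failure-domain=host
# ceph osd pool create default.rgw.buckets.data 2048 2048 erasure rgw-ec42
# ceph-volume lvm batch --bluestore /dev/sd[b-p] --db-devices /dev/nvme0n1 --block-db-size 300G
The batch call carves one 300 GB rocksdb LV per OSD out of the shared NVMe
device, matching the layout described above.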
It seems this sharding needs to be planned carefully from the beginning. I'm
thinking of setting the default shard number to the maximum, which is 64k, and
leaving it as is, so we would only hit the limit if we reach the maximum
number of objects.
Would be interesting to know what is the sid
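(Editor's note: a minimal sketch of pinning that default in ceph.conf;
rgw_override_bucket_index_max_shards is the real option, but the section
placement and the 64k value, taken from the poster, are otherwise
illustrative:)
[global]
# applies only to buckets created after the change; existing buckets
# still need an explicit "radosgw-admin bucket reshard"
rgw_override_bucket_index_max_shards = 65536
Each shard is a separate bucket-index RADOS object, so a very high default has
its own cost and can slow bucket listings.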
Hello Community.
I need your help. A few days ago I started a manual reshard of one bucket with
large objects. Unfortunately, I interrupted it with Ctrl+C. Now I can't start
the process again.
I get this message:
# radosgw-admin bucket reshard --bucket objects --num-shards 2
ERROR: the bucket is
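(Editor's note: the error and the thread's fix are cut off above. A sketch of
the usual recovery path for an interrupted reshard; these subcommands exist in
recent radosgw-admin releases, but the exact sequence is an assumption, not
the confirmed fix from this thread:)
# radosgw-admin reshard list                     (look for a stuck entry for the bucket)
# radosgw-admin reshard cancel --bucket objects  (clear the in-progress flag)
# radosgw-admin reshard stale-instances list     (Nautilus+: find leftover index instances)
# radosgw-admin reshard stale-instances rm
# radosgw-admin bucket reshard --bucket objects --num-shards 2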
Hello,
Thank you for the help. This is done and everything is working now.
Best Regards
Mateusz Skała
> On 13.10.2020, at 14:59, Gaël THEROND wrote:
>