[ceph-users] Re: Adding datacenter level to CRUSH tree causes rebalancing

2023-07-16 Thread Michel Jouvin
Hi Niklas, I am not sure why you are surprised. In a large cluster, you should expect some rebalancing on every crush map or crush map rule change. Ceph doesn't just enforce the failure domain, it also wants to have a "perfect" pseudo-random distribution across the cluster based on the crus
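For reference, a rough sketch of how a datacenter level is often added; the exact commands used in this thread are not shown in the archive, and the bucket names dc1/rack1 are hypothetical:

    ceph osd set norebalance                  # optionally pause data movement while restructuring
    ceph osd getcrushmap -o before.bin        # keep a copy of the current CRUSH map
    ceph osd crush add-bucket dc1 datacenter  # create the new datacenter-level bucket
    ceph osd crush move dc1 root=default      # place it under the root
    ceph osd crush move rack1 datacenter=dc1  # move an existing rack underneath it
    ceph osd unset norebalance                # allow rebalancing to proceed when ready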

[ceph-users] Re: RBD with PWL cache shows poor performance compared to cache device

2023-07-16 Thread Mark Nelson
On 7/10/23 11:19 AM, Matthew Booth wrote: On Thu, 6 Jul 2023 at 12:54, Mark Nelson wrote: On 7/6/23 06:02, Matthew Booth wrote: On Wed, 5 Jul 2023 at 15:18, Mark Nelson wrote: I'm sort of amazed that it gave you symbols without the debuginfo packages installed. I'll need to figure out a w
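The exact profiling workflow discussed in the thread is not shown in this preview; a generic, hedged example of collecting a CPU profile with call graphs from a running process would be something like:

    perf record -g -p <pid> -- sleep 30   # sample call graphs from the target process for 30 seconds
    perf report                           # browse the recorded profile (symbols resolve if debuginfo is available)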

[ceph-users] Re: Adding datacenter level to CRUSH tree causes rebalancing

2023-07-16 Thread Christian Wuerdig
Based on my understanding of CRUSH, it basically works down the hierarchy and randomly (but deterministically for a given CRUSH map) picks buckets (based on the specific selection rule) at that level for the object, and then does this recursively until it ends up at the leaf nodes. Given that
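One way to observe this deterministic mapping, assuming rule id 0 and 3 replicas (adjust the rule id and input range to your cluster):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -i crushmap.bin --test --show-mappings --rule 0 --num-rep 3 --min-x 0 --max-x 9

Repeated runs against the same map print identical OSD sets for each input, which is the determinism described above.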

[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-16 Thread Luis Domingues
Hi, Thanks for your hints. I tried to play a little bit with the configs, and now I want to set the 0.7 value as the default. So I configured ceph: mgr  advanced  mgr/cephadm/autotune_memory_target_ratio  0.70
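A hedged sketch of setting and verifying that option via the CLI (the exact commands used in the thread are not shown):

    ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.7
    ceph config get mgr mgr/cephadm/autotune_memory_target_ratio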

[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-16 Thread Sridhar Seshasayee
Hello Luis, Please see my response below: But when I took a look at the memory usage of my OSDs, I was below that > value by quite a bit. Looking at the OSDs themselves, I have: > > "bluestore-pricache": { > "target_bytes": 4294967296, > "mapped_bytes": 1343455232, >
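Note that target_bytes of 4294967296 is 4 GiB, the default osd_memory_target. A hedged sketch for checking which value actually applies to a given OSD, assuming OSD id 0:

    ceph config get osd.0 osd_memory_target   # value stored in the config database
    ceph config show osd.0 osd_memory_target  # value the running daemon is using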