Hi Niklas,
I am not sure why you are surprised. In a large cluster, you should
expect some rebalancing on every crush map or crush rule change.
Ceph doesn't just enforce the failure domain, it also wants to have a
"perfect" pseudo-random distribution across the cluster based on the
crush
On 7/10/23 11:19 AM, Matthew Booth wrote:
On Thu, 6 Jul 2023 at 12:54, Mark Nelson wrote:
On 7/6/23 06:02, Matthew Booth wrote:
On Wed, 5 Jul 2023 at 15:18, Mark Nelson wrote:
I'm sort of amazed that it gave you symbols without the debuginfo
packages installed. I'll need to figure out a w
Based on my understanding of CRUSH, it basically works down the hierarchy
and then randomly (but deterministically for a given CRUSH map) picks
buckets (based on the specific selection rule) at that level for the object,
and it does this recursively until it ends up at the leaf nodes.
Given that
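
That selection step can be sketched in a few lines of Python. This is a toy
model only: the "straw" draw here is an arbitrary hash, and the hierarchy,
bucket names and OSD names are invented, not Ceph's real straw2 algorithm or
any real map. It just shows the idea of a deterministic pseudo-random walk
down the hierarchy to a leaf:

import hashlib

# Toy hierarchy: all bucket and OSD names below are invented for illustration.
HIERARCHY = {
    "root":  ["rack1", "rack2"],
    "rack1": ["host1", "host2"],
    "rack2": ["host3", "host4"],
    "host1": ["osd.0", "osd.1"],
    "host2": ["osd.2", "osd.3"],
    "host3": ["osd.4", "osd.5"],
    "host4": ["osd.6", "osd.7"],
}

def draw(obj, bucket, child):
    # Deterministic pseudo-random "straw length" for one candidate child.
    digest = hashlib.sha256(f"{obj}/{bucket}/{child}".encode()).hexdigest()
    return int(digest, 16)

def place(obj, bucket="root"):
    # Walk down the hierarchy, at each level picking the child with the
    # longest straw, until a leaf (an OSD) is reached.
    children = HIERARCHY.get(bucket)
    if not children:
        return bucket
    best = max(children, key=lambda child: draw(obj, bucket, child))
    return place(obj, best)

# The same object name always maps to the same OSD for this (fixed) hierarchy;
# changing the hierarchy changes many of the mappings, hence the rebalancing.
print(place("rbd_data.1234"))
print(place("rbd_data.1234"))  # identical to the line above
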
Hi,
Thanks for your hints. I tried to play around with the configs a bit, and now
I want to set 0.7 as the default value.
So I configured ceph:

mgr    advanced    mgr/cephadm/autotune_memory_target_ratio    0.70
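
For reference, roughly what that ratio does (a simplified model, not
cephadm's exact autotuning code, and the host size, OSD count and reserve in
the example are made up): cephadm budgets about total host memory times the
ratio for Ceph, reserves some of that for the non-OSD daemons on the host,
and splits the rest evenly across the OSDs as their osd_memory_target:

def autotuned_osd_memory_target(total_mem_bytes, ratio, num_osds,
                                non_osd_reserve_bytes=0):
    # Simplified model: budget = total host memory * ratio, minus what the
    # non-OSD daemons on the host need, split evenly across the OSDs.
    budget = total_mem_bytes * ratio - non_osd_reserve_bytes
    return int(budget // num_osds)

# Hypothetical host: 64 GiB RAM, ratio 0.70, 10 OSDs, ~1 GiB for mon/mgr/etc.
target = autotuned_osd_memory_target(64 * 2**30, 0.70, 10, 1 * 2**30)
print(f"osd_memory_target ~ {target / 2**30:.2f} GiB per OSD")
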
Hello Luis,
Please see my response below:
> But when I took a look at the memory usage of my OSDs, I was below that
> value by quite a bit. Looking at the OSDs themselves, I have:
>
> "bluestore-pricache": {
> "target_bytes": 4294967296,
> "mapped_bytes": 1343455232,
>
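
For scale, plain arithmetic on the two values quoted above shows the mapped
memory sitting at roughly a third of the configured target:

target_bytes = 4294967296   # 4.00 GiB target from bluestore-pricache
mapped_bytes = 1343455232   # ~1.25 GiB actually mapped

print(f"target: {target_bytes / 2**30:.2f} GiB, "
      f"mapped: {mapped_bytes / 2**30:.2f} GiB "
      f"({mapped_bytes / target_bytes:.0%} of target)")
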