root@adminnode:~# ceph osd tree
ID  CLASS WEIGHT   TYPE NAME                 STATUS REWEIGHT PRI-AFF
 -1       30.82903 root default
-16       30.82903     datacenter dc01
-19       30.82903         pod dc01-agg01
-10       17.43365             rack dc01-rack02
 -4        7.20665
Hello everyone,
We are running a small cluster on 5 machines with 48 OSDs / 5 MDSs / 5 MONs,
based on Luminous 12.2.10 and Debian Stretch 9.6. With a single-MDS
configuration everything works fine, and looking at the active MDS's memory,
it uses ~1 GByte of memory for cache, as configured:
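(For reference, a minimal sketch of how that cache limit is typically
configured on Luminous; the 1 GiB value below just mirrors the ~1 GByte
mentioned above, and the daemon name "mds.a" is a placeholder:)

    # ceph.conf excerpt -- mds_cache_memory_limit is in bytes
    [mds]
    mds_cache_memory_limit = 1073741824

    # verify on the active MDS
    ceph daemon mds.a cache status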
Hi.
What makes us struggle / wonder again and again is the absence of Ceph __man
pages__. On *NIX systems, man pages are always the first place to go for help,
right? Or is this considered "old school" by the Ceph makers / community? :O
And, as many people complain again and again, the same here as
I have straw2, balancer=on, crush-compat, and it gives a poor spread over
my SSD drives (only 4), which are used by only 2 pools. One of these pools
has pg_num 8. Should I increase this to 16 to get a better result, or
will it never be any better?
For now I'd like to stick to crush-compat, so I can use
If I understand the balancer correctly, it balances PGs, not data.
This worked perfectly fine in your case.
I prefer a PG count of ~100 per OSD; you are at 30. Maybe it would
help to bump the PGs.
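Something like this would do it (pool name "ssd-pool" is just a
placeholder; note that on Luminous pg_num can only be increased, never
decreased, and pgp_num has to follow for the data to actually move):

    ceph osd pool set ssd-pool pg_num 16
    ceph osd pool set ssd-pool pgp_num 16
    ceph balancer status    # check that the balancer picks up the new layout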
Kevin
On Sat, 5 Jan 2019 at 14:39, Marc Roos wrote:
>
>
> I have straw2, balancer=on, crush-co
Thanks for tracking this down. It appears that libvirt needs to check
whether or not the fast-diff map is invalid before attempting to use
it. However, assuming the map is valid, I don't immediately see a
difference between the libvirt and "rbd du" implementations. Can you
provide a pastebin "debug r
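In the meantime, one way to check and repair the flags yourself (the
image name "rbd/vm-disk-1" is just an example):

    rbd info rbd/vm-disk-1                 # flags line shows "object map invalid" / "fast diff invalid"
    rbd object-map rebuild rbd/vm-disk-1   # rebuild if the map is flagged invalid
    rbd du rbd/vm-disk-1                   # then compare against what libvirt reports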
On Sat, 5 Jan 2019, 13:38 Marc Roos wrote:
> I have straw2, balancer=on, crush-compat, and it gives a poor spread over
> my SSD drives (only 4), which are used by only 2 pools. One of these pools
> has pg_num 8. Should I increase this to 16 to get a better result, or
> will it never be any better?
>
> For now I
On 1/5/19 4:17 PM, Kevin Olbrich wrote:
root@adminnode:~# ceph osd tree
ID  CLASS WEIGHT   TYPE NAME                 STATUS REWEIGHT PRI-AFF
 -1       30.82903 root default
-16       30.82903     datacenter dc01
-19       30.82903         pod dc01-agg01
-10       17.43365             rack dc