[ceph-users] Re: OSD swapping on Pacific

2021-08-16 Thread i...@z1storage.com
Hi, Global swappiness and per-cgroup swappiness are managed separately. When you change the vm.swappiness sysctl, only /sys/fs/cgroup/memory/memory.swappiness changes, not the memory.swappiness of services under separate slices (such as system.slice, where the ceph services run). Check https:
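
[Archive note: a minimal sketch of how to inspect and adjust the per-cgroup values the message describes, assuming cgroup v1 with the memory controller mounted at /sys/fs/cgroup/memory; the unit name in the last command is a hypothetical example and will differ per deployment:]

    # Global swappiness, as set by the vm.swappiness sysctl
    cat /proc/sys/vm/swappiness

    # Root memory cgroup -- per the message above, this is the only
    # per-cgroup value the sysctl updates
    cat /sys/fs/cgroup/memory/memory.swappiness

    # Per-service values under system.slice, where the ceph units live
    for f in /sys/fs/cgroup/memory/system.slice/*/memory.swappiness; do
        printf '%s %s\n' "$(cat "$f")" "$f"
    done

    # Lower it for one unit's cgroup (example unit name; the change is
    # not persistent across service restarts)
    echo 10 > /sys/fs/cgroup/memory/system.slice/ceph-osd@0.service/memory.swappiness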

[ceph-users] Re: cephadm: how to create more than 1 rgw per host

2021-04-22 Thread i...@z1storage.com
Does anyone know how to create more than 1 rgw per host? Surely it's not a rare configuration. On 2021/04/19 17:09, i...@z1storage.com wrote: Hi Sebastian, Thank you. Is there a way to create more than 1 rgw per host until this new feature is released? On 2021/04/19 11:39, Seba

[ceph-users] Re: cephadm: how to create more than 1 rgw per host

2021-04-19 Thread i...@z1storage.com
. Sebastian On Fri, Apr 16, 2021 at 10:58 PM i...@z1storage.com wrote: Hello, According to the documentation, there's a count-per-host key to 'ceph orch', but it does not work for me: :~# ceph orch

[ceph-users] cephadm: how to create more than 1 rgw per host

2021-04-16 Thread i...@z1storage.com
Hello, According to the documentation, there's a count-per-host key to 'ceph orch', but it does not work for me: :~# ceph orch apply rgw z1 sa-1 --placement='label:rgw count-per-host:2' --port=8000 --dry-run Error EINVAL: Host and label are mutually exclusive Why it says anything about Host i
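
[Archive note: a service spec file is another way to express the same placement. A minimal, untested sketch, assuming a release whose orchestrator supports count_per_host; the realm, zone, label, and port are taken from the command above, and whether each daemon needs its own frontend port depends on the release:]

    # Apply the same rgw service via a spec file instead of inline --placement
    cat > rgw.yaml <<'EOF'
    service_type: rgw
    service_id: z1.sa-1
    placement:
      label: rgw
      count_per_host: 2
    spec:
      rgw_realm: z1
      rgw_zone: sa-1
      rgw_frontend_port: 8000
    EOF
    ceph orch apply -i rgw.yaml --dry-run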