[ceph-users] SPECIFYING EXPECTED POOL SIZE

2021-10-26 Thread Сергей Цаболов
Hello to the community! I need advice about SPECIFYING EXPECTED POOL SIZE. On the page https://docs.ceph.com/en/latest/rados/operations/placement-groups/ I found the part SPECIFYING EXPECTED POOL SIZE, and I can increase some of my pools to have more TB; the command is ceph osd pool set mypool tar
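The truncated command above presumably refers to the target_size_bytes / target_size_ratio settings from the linked documentation page; a minimal sketch, assuming a pool named mypool (name and sizes are placeholders):

    # tell the autoscaler the pool is expected to grow to roughly 100 TiB
    ceph osd pool set mypool target_size_bytes 100T
    # alternatively, express it as a fraction of the overall cluster capacity
    ceph osd pool set mypool target_size_ratio 0.2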

[ceph-users] Re: SPECIFYING EXPECTED POOL SIZE

2021-10-26 Thread Yury Kirsanov
Yes, you can; it will rescale the PGs the same way the autoscaler works. You can also change the policy to start from a low number of PGs. Regards, Yury. On Tue, 26 Oct 2021, 20:20 Сергей Цаболов wrote: > Hello to the community! > > I need advice about SPECIFYING EXPECTED POOL SIZE > On the page > https://do

[ceph-users] Re: mismatch between min-compat-client and connected clients

2021-10-26 Thread Konstantin Shalygin
Hi, yes, this is fine. The human-readable name is only for humans; the software uses the feature bits. k Sent from my iPhone > On 24 Oct 2021, at 23:38, gustavo panizzo wrote: > > hello > > in a cluster running octopus, i've set the upmap balancer; in order to > do so i had to set the set-require-
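To illustrate the point about names versus feature bits, what connected clients actually report can be inspected with standard commands (output abbreviated here):

    # required minimum client release recorded in the osdmap
    ceph osd dump | grep min_compat_client
    # feature bits actually reported by connected clients and daemons
    ceph features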

[ceph-users] Re: SPECIFYING EXPECTED POOL SIZE

2021-10-26 Thread Сергей Цаболов
Thanks for the answer. The pool where I keep the VM disks currently has 510 PGs; the optimal # of PGs is 256. Can I change the PGs from 510 to 256? On 26.10.2021 12:21, Yury Kirsanov wrote: Yes, you can; it will rescale the PGs the same way the autoscaler works. You can also change the policy to start from a low number of
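A sketch of how such a manual change could look, assuming the pool is named mypool and the autoscaler will not immediately undo it:

    # lower the PG count towards the autoscaler's suggestion
    ceph osd pool set mypool pg_num 256
    # Nautilus and later adjust pgp_num automatically; older releases need it set explicitly
    ceph osd pool set mypool pgp_num 256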

[ceph-users] Re: SPECIFYING EXPECTED POOL SIZE

2021-10-26 Thread Yury Kirsanov
Yes, you can do that, but if the autoscaler is on and its mode is set to "scale-down", it will increase the number of PGs back to the maximum. See the autoscaler mode "scale-up"; in this case it is sufficient to change to that mode, and the autoscaler will shrink all PGs to the minimal required sizes first, and then after you apply max
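For reference, a hedged sketch of the knobs being discussed (the global autoscale-profile command is the Pacific-era form and may differ in other releases; the pool name is a placeholder):

    # switch the global autoscaler profile
    ceph osd pool set autoscale-profile scale-up
    # or control the autoscaler per pool instead
    ceph osd pool set mypool pg_autoscale_mode warn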

[ceph-users] Re: Consul as load balancer

2021-10-26 Thread Javier Cacheiro
Hi, I am using something similar for internal clients: consul balances between the different RGW nodes, and on each node haproxy balances between the different rgw instances. It is working fine. On Sun, 24 Oct 2021 at 23:31, gustavo panizzo wrote: > Hi > > On Tue, Oct 06, 2020 at 06:36:43AM +0
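Purely as an illustration of the layout described above (all names, ports and addresses are hypothetical), the per-node haproxy piece could look roughly like this fragment:

    # /etc/haproxy/haproxy.cfg (fragment): balance across the local rgw instances
    frontend rgw_in
        bind *:80
        mode http
        default_backend rgw_local
    backend rgw_local
        mode http
        balance roundrobin
        server rgw0 127.0.0.1:8080 check
        server rgw1 127.0.0.1:8081 check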

[ceph-users] MDS and OSD Problems with cephadm@rockylinux solved

2021-10-26 Thread Magnus Harlander
Hi, I solved all my problems mentioned earlier. It boiled down to a minimal ceph.conf that was created by cephadm without any network information. After replacing the minimal config for the osd and mds daemons in /var/lib/ceph/UUID/*/config everything was fine and the osd and mds containers came up clean and workin
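For anyone hitting the same issue, a minimal sketch of the kind of entries such a per-daemon config would need (UUID, addresses and subnets are placeholders):

    # /var/lib/ceph/UUID/<daemon>/config (fragment)
    [global]
        fsid = <cluster-uuid>
        mon_host = 192.168.1.11,192.168.1.12,192.168.1.13
        public_network = 192.168.1.0/24
        cluster_network = 192.168.2.0/24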

[ceph-users] ceph-osd iodepth for high-performance SSD OSDs

2021-10-26 Thread Frank Schilder
Hi all, we deployed a pool with high-performance SSDs and I'm testing aggregated performance. We seem to hit a bottleneck that is not caused by drive performance. My best guess at the moment is that the effective iodepth of the OSD daemons is too low for these drives. I have 4 OSDs per drive a
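One common way to generate aggregated load with a controllable queue depth is rados bench against the pool (pool name, runtime and thread count below are only examples):

    # 64 concurrent 4 MiB writes for 60 s, keeping the objects for a read pass
    rados bench -p testpool 60 write -t 64 --no-cleanup
    # matching random-read pass against the objects written above
    rados bench -p testpool 60 rand -t 64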

[ceph-users] Re: Rebooting one node immediately blocks IO via RGW

2021-10-26 Thread Troels Hansen
All pools are: replicated, size 3, min_size 2, failure domain host. On Mon, Oct 25, 2021 at 11:07 AM Eugen Block wrote: > Hi, > > what's the pool's min_size? > > ceph osd pool ls detail > > > Quoting Troels Hansen: > > > I have a strange issue.. > > It's a 3 node cluster, deployed on Ubuntu,

[ceph-users] Re: ceph-osd iodepth for high-performance SSD OSDs

2021-10-26 Thread Frank Schilder
It looks like the bottleneck is the bstore_kv_sync thread; there seems to be only one running per OSD daemon, independent of the shard number. This would imply a rather low effective queue depth per OSD daemon. Are there ways to improve this other than deploying even more OSD daemons per drive? Thanks

[ceph-users] Re: ceph-osd iodepth for high-performance SSD OSDs

2021-10-26 Thread Szabo, Istvan (Agoda)
Isn't 4 OSDs per SSD too much? Normally NVMe is what's suitable for 4 OSDs, isn't it? Istvan Szabo Senior Infrastructure Engineer --- Agoda Services Co., Ltd. e: istvan.sz...@agoda.com --

[ceph-users] Re: ceph-osd iodepth for high-performance SSD OSDs

2021-10-26 Thread Frank Schilder
Performance tests I did with recent SAS NVMe SSD drives indicate that these require a very high degree of concurrency to get close to spec performance. I agree that with standard data SSD drives 1-2 OSD daemons are enough. With high-performance drives it is a different story. Pushing a reasonabl

[ceph-users] Re: ceph-osd iodepth for high-performance SSD OSDs

2021-10-26 Thread Stefan Kooman
On 10/26/21 10:22, Frank Schilder wrote: It looks like the bottleneck is the bstore_kv_sync thread; there seems to be only one running per OSD daemon, independent of the shard number. This would imply a rather low effective queue depth per OSD daemon. Are there ways to improve this other than deplo
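The options under discussion can be adjusted roughly as follows (a sketch; the values are examples, the *_ssd variants are the ones that apply to SSD-backed OSDs, and the OSDs have to be restarted for the change to take effect):

    # defaults for SSD OSDs are osd_op_num_shards_ssd=8 and osd_op_num_threads_per_shard_ssd=2
    ceph config set osd osd_op_num_threads_per_shard_ssd 4
    ceph config set osd osd_op_num_shards_ssd 8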

[ceph-users] upgrade OSDs before mon

2021-10-26 Thread Boris Behrens
Hi, I just added new storage to our s3 cluster and saw that Ubuntu didn't prioritize the nautilus package over the octopus package. Now I have 10 OSDs with octopus in a pure nautilus cluster. Can I leave it this way, or should I remove the OSDs and first upgrade the mons? Cheers Boris -- Die S
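Had a nautilus repository actually been available for this Ubuntu release (the follow-ups below show it wasn't), apt pinning would be the usual way to prefer it; an illustrative fragment:

    # /etc/apt/preferences.d/ceph (illustrative)
    Package: ceph*
    Pin: version 14.2.*
    Pin-Priority: 1001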

[ceph-users] Re: ceph-osd iodepth for high-performance SSD OSDs

2021-10-26 Thread Frank Schilder
Hi Stefan, thanks a lot for this information. I increased osd_op_num_threads_per_shard with little effect (I did restart and checked with config show that the value is applied). I'm afraid I'm bound by the bstore_kv_sync thread, as explained in a thread discussing this in great detail (http://li
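For reference, the kind of check described here, querying the running daemon to confirm the value took effect (the OSD id is a placeholder):

    # ask the running OSD for its effective value
    ceph daemon osd.0 config show | grep osd_op_num_threads_per_shard
    # or query it through the central config interface
    ceph config show osd.0 osd_op_num_threads_per_shard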

[ceph-users] Re: upgrade OSDs before mon

2021-10-26 Thread Yury Kirsanov
You can downgrade any Ceph packages if you want to. Just specify the version number you'd like to go to. On Wed, Oct 27, 2021 at 12:36 AM Boris Behrens wrote: > Hi, > I just added new storage to our s3 cluster and saw that Ubuntu didn't > prioritize the nautilus package over the octopus package. > > Now
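A sketch of the mechanics being referred to on Ubuntu/Debian (the version string is only an example; check what apt actually offers first):

    # see which versions the configured repositories provide
    apt-cache policy ceph-osd
    # request a specific version explicitly
    apt install ceph-osd=14.2.22-1bionic ceph-common=14.2.22-1bionic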

[ceph-users] Re: upgrade OSDs before mon

2021-10-26 Thread Boris Behrens
Hi Yury, unfortunately not. It's a package installation and there are no nautilus packages for Ubuntu 20.04 (just realised this). Now the question: downgrade Ubuntu to 18.04 and start over, or keep the octopus OSDs in the nautilus cluster? Would be cool if the latter works properly. On Tue., 26

[ceph-users] Re: 16.2.6 OSD down, out but container running....

2021-10-26 Thread Marco Pizzolo
Thanks for the responses so far. Putting the OSD in is not a problem, but it will not report in the mgr as being online. podman ps does show the container for osd.13 as running, though. I've tried killing it and it relaunches, but the OSD does not report as up in the mgr. ceph orch restart osd.all-av
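For completeness, the cephadm-level commands typically used to poke at a single daemon like this (the daemon name follows what ceph orch ps reports):

    # list the managed daemons and the state cephadm reports for them
    ceph orch ps
    # restart just the one daemon rather than the whole service
    ceph orch daemon restart osd.13
    # show only the OSDs the cluster itself considers down
    ceph osd tree down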

[ceph-users] Re: upgrade OSDs before mon

2021-10-26 Thread Gregory Farnum
On Tue, Oct 26, 2021 at 7:05 AM Boris Behrens wrote: > > Hi Yury, > unfortunately not. It's a package installation and there are no nautilus > packages for Ubuntu 20.04 (just realised this). > > Now the question: downgrade Ubuntu to 18.04 and start over, or keep the > octopus OSDs in a nautilus clus

[ceph-users] Re: mismatch between min-compat-client and connected clients

2021-10-26 Thread Gregory Farnum
We should probably figure out how to resolve that display bug, if somebody can create an issue. I think what's going on here is just that the Jewel->Luminous transition added a bunch of server-side feature bits, and the kernel only implemented (and reports) the client-side ones. So when trying to match the features to a version, i

[ceph-users] Re: Rebooting one node immediately blocks IO via RGW

2021-10-26 Thread Eugen Block
Can you share more details about that cluster, like the applied CRUSH rules and the output of 'ceph -s' and 'ceph osd tree'? Quoting Troels Hansen: All pools are: replicated, size 3, min_size 2, failure domain host. On Mon, Oct 25, 2021 at 11:07 AM Eugen Block wrote: Hi, what's the pool's min_size?