Hi Johansson-san,
Thank you very much for your detailed explanation. I read some documents from
the Ceph community, so I now have a general understanding. Thank you very much
for all the useful advice. The characteristics of distributed storage seem to
be quite complex, so I will investigate further when I have time.
On Mon, 15 Apr 2024 at 13:09, Mitsumasa KONDO wrote:
> Hi Menguy-san,
>
> Thank you for your reply. Users who run large IO against tiny volumes are a
> nuisance for cloud providers.
>
> I checked my Ceph cluster, which has 40 SSDs. Each OSD on a 1TB SSD holds
> about 50 placement groups, so each PG covers approximately 20GB of space.
Hi Anthony-san,
Thank you for your advice. I checked the settings of my Ceph cluster.
Autoscaler mode is on, so I had assumed the PG counts were already optimal.
But the autoscaler doesn't directly control the number of PGs per OSD; it
only sets pg_num for each storage pool. Is that right?
Regards,
--
Mitsumasa KONDO
Mon, Apr 15, 2024, 22
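A quick way to see the difference (a minimal sketch; these are standard Ceph
CLI commands, run against whatever pools exist in the cluster):

    # Per-pool view: current pg_num, the autoscaler's target, and its mode
    ceph osd pool autoscale-status
    ceph osd pool ls detail

    # Per-OSD view: the PGS column shows how many PG replicas each OSD carries
    ceph osd df

The autoscaler only chooses pg_num per pool; the per-OSD count then follows
from each pool's pg_num, its replication factor, and how CRUSH spreads the
PGs across the OSDs.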
If you're using SATA/SAS SSDs, I would aim for 150-200 PGs per OSD, as shown by
`ceph osd df`.
If they're NVMe, 200-300 unless you're starved for RAM.
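If the autoscaler stays on, one way to steer it toward targets like these,
rather than setting pg_num per pool by hand, is to raise its per-OSD target
(a sketch; the value 200 below is only an example):

    # Ask the pg_autoscaler to aim for roughly 200 PG replicas per OSD
    ceph config set global mon_target_pg_per_osd 200

    # Then watch the per-pool recommendations and the per-OSD PGS column
    ceph osd pool autoscale-status
    ceph osd df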
> On Apr 15, 2024, at 07:07, Mitsumasa KONDO wrote:
>
> Hi Menguy-san,
>
> Thank you for your reply. Users who run large IO against tiny volumes are a
> nuisance for cloud providers.
Hi Menguy-san,
Thank you for your reply. Users who run large IO against tiny volumes are a
nuisance for cloud providers.
I checked my Ceph cluster, which has 40 SSDs. Each OSD on a 1TB SSD holds
about 50 placement groups, so each PG covers approximately 20GB of space.
If we create a small
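As a rough sanity check on those numbers (a sketch using the figures from this
message; the jq field names are assumed from the JSON form of `ceph osd df`):

    # ~50 PGs on a 1 TB OSD means each PG maps to about
    #   1024 GB / 50 ≈ 20 GB of that OSD's capacity.
    # The per-OSD PG count is the PGS column of:
    ceph osd df
    # Average PGs per OSD across the cluster (field names assumed):
    ceph osd df -f json | jq '[.nodes[].pgs] | add / length'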
Hi,
Volume size doesn't affect performance; cloud providers apply limits to ensure
they can deliver the expected performance to all their customers.
Étienne
From: Mitsumasa KONDO
Sent: Monday, 15 April 2024 06:06
To: ceph-users@ceph.io
Subject: [ceph-users] Perfo