Hi,
Thank you so much for your kind information. We will review the settings.
One thing: if we want to use replica size=2 for the ssd pool, then since the
failure domain is host, it should ensure that the two replicas are placed on
two different hosts.
Is there any drawback to this?
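For reference, one way to double-check that the rule really spreads the two
replicas across hosts is to dump it (the rule name "ssd_rule" below is only an
assumed example, not necessarily the name used here):

  # "ssd_rule" is an assumed example name; use the actual rule name
  ceph osd crush rule dump ssd_rule
  # A step like the following in the output confirms host-level placement:
  #   { "op": "chooseleaf_firstn", "num": 0, "type": "host" }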
Regards,
Munna
On Thu, 9 Dec 2021, 20:35 Stefan Kooman wrote:
Hi,
This is the ceph.conf used when the cluster was deployed. The ceph version is Mimic.
osd pool default size = 3
osd pool default min size = 1
osd pool default pg num = 1024
osd pool default pgp num = 1024
osd crush chooseleaf type = 1
mon_max_pg_per_osd = 2048
mon_allow_pool_delete = true
mon_pg_warn_min_per_o
On Thu, 9 Dec 2021 at 09:31, Md. Hejbul Tawhid MUNNA wrote:
> Yes, min_size=1 and size=2 for ssd
>
> for hdd it is min_size=1 and size=3
>
> Could you please advise about using hdd and ssd in the same ceph cluster. Is
> it okay for production-grade openstack?
Mixing ssd and hdd in production is f
Hi,
Yes, min_size=1 and size=2 for ssd
for hdd it is min_size=1 and size=3
Could you please advise about using hdd and ssd in the same ceph cluster. Is
it okay for production-grade openstack?
We have created a new replicated rule for ssd, a separate pool for ssd, and
marked the new disks with the ssd device class.
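For context, that kind of setup typically looks roughly like this (a sketch
only; the rule/pool names, pg counts and osd ids below are examples, not the
exact values from our cluster):

  # Tag the new disks with the ssd device class (osd ids are examples)
  ceph osd crush set-device-class ssd osd.10 osd.11
  # Create a replicated rule restricted to the ssd class, failure domain = host
  ceph osd crush rule create-replicated ssd_rule default host ssd
  # Create a pool that uses the ssd rule
  ceph osd pool create ssd_pool 128 128 replicated ssd_rule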
On Thu, 9 Dec 2021 at 03:12, Md. Hejbul Tawhid MUNNA wrote:
>
> Hi,
>
> Yes, we have added new OSDs. Previously we had only one type of disk, hdd;
> now we have added ssd disks and separated them with a replicated rule and
> device class.
>
> ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
> 0
Hi,
Yes, we have added new OSDs. Previously we had only one type of disk, hdd;
now we have added ssd disks and separated them with a replicated rule and
device class.
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
0 hdd 5.57100 1.0 5.6 TiB 1.8 TiB 3.8 TiB 31.61 1.04 850
1 hdd 5.5
You should probably stop all client mounts to avoid any more writes,
temporarily raise full ratios just enough to get it online, then
delete something. Never let it get this full.
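For illustration, raising the ratios can look something like this (the values
are examples only; raise them just enough, and lower them back once space has
been freed):

  # Check current utilization and the configured ratios first
  ceph osd df
  ceph osd dump | grep ratio
  # Temporarily bump the ratios a little (example values)
  ceph osd set-nearfull-ratio 0.90
  ceph osd set-backfillfull-ratio 0.92
  ceph osd set-full-ratio 0.96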
On Wed, Dec 8, 2021 at 1:27 PM Md. Hejbul Tawhid MUNNA
wrote:
>
> Hi,
>
> Overall status: HEALTH_ERR
> PG_DEGRADED_FU
Hi Munna,
Have you added OSDs to the cluster recently?
If yes, I think you have to re-weight the OSDs you have added to lower
values and then slowly increase their weight one by one, for example as
sketched below.
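A rough sketch of that reweight approach (the osd id and weights are purely
illustrative):

  # Start the new OSD at a low CRUSH weight (osd.12 is an example id)
  ceph osd crush reweight osd.12 0.5
  # Let backfill settle, then raise the weight step by step towards the
  # disk's full weight (e.g. 5.571 for a ~5.6 TiB drive)
  ceph osd crush reweight osd.12 1.0
  ceph osd crush reweight osd.12 2.0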
Also, please share the output of 'ceph osd df' and 'ceph health detail'.
On Wed, 8 Dec 2021 at 11:56 PM, Md. Hejbul Tawhid MUNNA wrote:
>
> Overall status: HEALTH_ERR
> PG_DEGRADED_FULL: Degraded data redundancy (low space): 19 pgs
> backfill_toofull
> OBJECT_MISPLACED: 12359314/17705640 objects misplaced (69.804%)
> PG_DEGRADED: Degraded data redundancy: 1707105/17705640 objects degraded
> (9.642%), 1979 pgs degraded, 1155 pgs