On 12/03/2021 18:25, Philip Brown wrote:
Well, that is a very interesting statistic.
Where did you come up with the 30 GB partition size threshold?
I believe it is currently using 28 GB of SSD per HDD :-/
So you are implying that if I "throw away" 1/8 of my HDDs, so that I can get
that magic 30 GB+ per HDD, things will magically improve?
Before I do that kind of rework, I would like to better understand the theory
behind it, please :)
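
(Back-of-the-envelope, assuming the DB partitions would get rebuilt evenly across
the remaining disks: at ~28 GB per HDD today, dropping 1 HDD in 8 and spreading
the same SSD space over the remaining 7 works out to roughly 28 GB x 8/7 ≈ 32 GB
per HDD, which is where the "30 GB+" would come from.)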

I vaguely recall reading something about WAL, SSD, and "db mostly".
I believe there is some way to check the status of that, but Google search is
being difficult without a more specific search term.
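
(Note to self -- the commands I believe should show whether the DB has spilled
onto the HDDs; not yet tried on this cluster, and <osd-id> is whatever OSD
number I pick:

    ceph health detail
        # should list a BLUEFS_SPILLOVER / "BlueFS spillover detected" warning
        # if any OSD's DB has spilled onto its slow device
    ceph daemon osd.<osd-id> perf dump bluefs
        # run on the host (or in the container) where that OSD lives; in the
        # bluefs section, compare db_used_bytes with slow_used_bytes -- a
        # nonzero slow_used_bytes means part of the DB is on the HDD
)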


----- Original Message -----
From: "Maged Mokhtar" <mmokh...@petasan.org>
To: "Philip Brown" <pbr...@medata.com>
Cc: "ceph-users" <ceph-users@ceph.io>
Sent: Friday, March 12, 2021 8:04:06 AM
Subject: Re: [ceph-users] Question about delayed write IOs, octopus, mixed 
storage



As a side issue, I do not know how cephadm would configure the 2 x 100
GB SSDs for WAL/DB serving the 8 HDDs. You need a partition size of over
30 GB, else the DB would end up mostly on the slow HDDs.

I do not believe you are currently being slowed down by this yet, else your
cluster would show a "WARN: BlueFS spillover detected",
but it will eventually happen as you write more and your DB expands.

You can read more in this post by Nick Fisk
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030913.html

/maged

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
