[ceph-users] Abysmal performance in Ceph cluster

2020-08-05 Thread Loschwitz,Martin Gerhard
Folks, we’re building a Ceph cluster based on HDDs with SSDs for WAL/DB. We have four nodes with 8 TB disks and two SSDs, and four nodes with many small HDDs (1.4-2.7 TB) and four SSDs for the journals. The HDDs are configured as single-disk RAID 0 on the controllers with writethrough enabled. I am writing […]
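A layout like the one described is typically deployed with ceph-volume, pointing --data at the HDD and --block.db at a partition or logical volume on the SSD. A minimal sketch, with hypothetical device paths and volume-group names that must be adapted to the actual hardware:

```shell
# Sketch only: /dev/sdb and the ceph-db-vg/db-sdb LV are hypothetical.
# Creates one BlueStore OSD on the HDD, with its RocksDB (block.db) on an SSD LV;
# the WAL lands on the DB device automatically unless placed separately.
ceph-volume lvm create \
    --bluestore \
    --data /dev/sdb \
    --block.db /dev/ceph-db-vg/db-sdb
```

Repeat once per HDD, carving one DB logical volume per OSD out of each SSD.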

[ceph-users] Performance issues in newly deployed Ceph cluster

2020-05-26 Thread Loschwitz,Martin Gerhard
Folks, I am running into a very strange issue with a brand new Ceph cluster during initial testing. The cluster consists of 12 nodes: 4 of them have SSDs only, the other eight have a mixture of SSDs and HDDs. The latter nodes are configured so that three or four HDDs share one SSD for their blockdb.
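With three or four HDDs sharing one SSD, a quick sanity check is whether the SSD is large enough for all the block.db partitions it hosts; the BlueStore docs suggest sizing block.db at roughly 4% of the data device. A small sketch of that check, with illustrative (assumed) device sizes:

```python
# Sketch: check whether a shared SSD can host block.db partitions for the
# HDDs behind it, using the ~4%-of-data-device guideline from the BlueStore
# docs. All sizes are in GB; the concrete values below are assumptions.

def min_db_size_gb(hdd_size_gb, ratio=0.04):
    """Recommended minimum block.db size for one HDD (~4% of the data device)."""
    return hdd_size_gb * ratio

def ssd_fits(ssd_size_gb, hdd_sizes_gb, ratio=0.04):
    """Return (fits, needed_gb): whether one SSD can hold a block.db
    partition for every listed HDD, and the total space required."""
    needed = sum(min_db_size_gb(s, ratio) for s in hdd_sizes_gb)
    return needed <= ssd_size_gb, needed

# Example: four 2.7 TB HDDs sharing one 480 GB SSD (hypothetical sizes).
fits, needed = ssd_fits(480, [2700] * 4)
print(fits, round(needed))  # 4 x ~108 GB needed
```

If the SSD comes up short, the DB spills over onto the HDD and performance drops to spinning-disk speeds, which is one of the first things worth ruling out in a setup like this.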