Christian;

What is your failure domain?  If your failure domain is set to OSD / drive, and 
2 OSDs share a DB / WAL device, then when that DB / WAL device dies both of those 
OSDs go down at once, and portions of the data could drop to read-only (or be 
lost...).
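
For example, you can check which failure domain your replicated rule actually 
uses with something like this ("replicated_rule" is just the common default 
name; yours may differ):

    # Look at the chooseleaf step's "type" (host vs. osd) in the rule dump.
    ceph osd crush rule dump replicated_rule

    # Double-check how the OSDs map onto hosts in the CRUSH tree.
    ceph osd tree
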
Ceph is really set up to own the storage hardware directly.  It doesn't 
(usually) make sense to put any kind of RAID / JBOD between Ceph and the 
hardware.
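
If you do give each HDD its own slice of the NVMe directly (no RAID layer in 
between), a rough ceph-volume sketch would look like the lines below; the 
device names (/dev/sdb, /dev/nvme0n1p1, ...) are only placeholders for your 
own layout:

    # One HDD as the data device, one NVMe partition (or LV) as block.db.
    # With block.db on the fast device the WAL lands there too, so a
    # separate --block.wal is normally not needed.
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1
    ceph-volume lvm create --data /dev/sdc --block.db /dev/nvme0n1p2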

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
dhils...@performair.com 
www.PerformAir.com



-----Original Message-----
From: Christian Wahl [mailto:w...@teco.edu] 
Sent: Thursday, February 27, 2020 12:09 PM
To: ceph-users@ceph.io
Subject: [ceph-users] SSD considerations for block.db and WAL


Hi everyone,

we currently have 6 OSDs with 8TB HDDs split across 3 hosts.
The main usage is KVM images.

To improve speed we planned on putting the block.db and WAL onto NVMe SSDs.
The plan was to put 2x 1 TB NVMe SSDs in each host.

One option I thought of was to RAID 1 them for better redundancy; I don't know 
how high the risk of corrupting the block.db is if a single SSD fails.
Or should I just use one for WAL+block.db and use the other one as fast storage?

Thank you all very much!

Christian
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io