> Yeah, didn't think about a RAID10 really, although there wouldn't be
> enough space for 8x300GB = 2400GB WAL/DBs.

300GB is overkill for many applications anyway.

> Also, using a RAID10 for WAL/DBs will:
> - make OSDs less movable between hosts (they'd have to be moved all
>   together - with 2 OSDs per NVMe you can move them around in pairs)

Why would you want to move them between hosts?

> - You must really be sure your RAID card is dependable. (Sorry, but I
>   have seen so many management problems with top-tier RAID cards that
>   I avoid them like the plague.)

This.
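For reference, the two-OSDs-per-NVMe layout described above can be provisioned with ceph-volume. A minimal sketch, assuming a recent release (Nautilus or later) and example device names:

    # Create two OSDs on /dev/sda and /dev/sdb, with their DB/WALs
    # carved out of a single shared NVMe, 300G of DB space each:
    ceph-volume lvm batch /dev/sda /dev/sdb \
        --db-devices /dev/nvme0n1 --block-db-size 300G

    # After physically moving an HDD+NVMe pair to another host,
    # bring the OSDs back up there:
    ceph-volume lvm activate --all

Since both OSDs' DB LVs live on the same NVMe, the pair has to travel together - which is the movability trade-off being discussed.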