[ceph-users] Re: small cluster HW upgrade

2020-02-01 Thread mrxlazuardin
Hi Philipp, More nodes are better: more availability, more CPU, and more RAM. But I agree that your 1GbE link will be the most limiting factor, especially if there are some SSDs. I suggest you upgrade your networking to 10GbE (or 25GbE, since it will cost you nearly the same as 10GbE). Upgrading you

[ceph-users] Re: Changing failure domain

2020-02-01 Thread mrxlazuardin
Hi Francois, I'm afraid that you need more rooms to have such availability. For the data pool, you will need 5 rooms due to your 3+2 erasure profile, and for metadata you will need 3 rooms due to your 3-replica rule. If you have only 2 rooms, there is a possibility of corrupted data whenever you l
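
The room counts above follow a common rule of thumb: each erasure-coded chunk (k data + m coding) or each replica should land in its own failure domain. A minimal sketch of that arithmetic (illustrative only, not from the thread; function names are my own):

```python
# Rule of thumb: to keep each chunk/replica in a distinct failure
# domain (here, a room), you need at least as many domains as
# chunks or replicas.

def min_domains_erasure(k: int, m: int) -> int:
    """An erasure-coded pool with profile k+m needs k+m distinct domains."""
    return k + m

def min_domains_replicated(size: int) -> int:
    """A replicated pool with `size` replicas needs `size` distinct domains."""
    return size

print(min_domains_erasure(3, 2))   # 3+2 profile -> 5 rooms
print(min_domains_replicated(3))   # size=3 pool -> 3 rooms
```

With only 2 rooms, a 3+2 profile forces multiple chunks into the same room, so losing one room can drop the pool below the k chunks needed to reconstruct data.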

[ceph-users] Re: General question CephFS or RBD

2020-02-01 Thread mrxlazuardin
Hi Willi, Since you still need iSCSI/NFS/Samba for Windows clients, I think it is better to have virtual ZFS storage backed by Ceph (RBD). I have experience running FreeNAS virtually with some volumes making up a ZFS pool. The performance is quite satisfying, almost 10Gbps iSCSI throughput on 2