[ceph-users] Re: CephFS performance

2022-11-23 Thread Robert W. Eckert
-Rob From: quag...@bol.com.br Sent: Wednesday, November 23, 2022 12:28 PM To: gfar...@redhat.com; dcsysengin...@gmail.com Cc: ceph-users@ceph.io Subject: [ceph-users] Re: CephFS performance Hi Gregory, Thanks for your reply! We are evaluating possibilities to increase storage

[ceph-users] Re: CephFS performance

2022-11-23 Thread quaglio
;Gregory Farnum" Enviada: 2022/11/22 14:49:12 Para: dcsysengin...@gmail.com Cc: quag...@bol.com.br, ceph-users@ceph.io Assunto: Re: [ceph-users] Re: CephFS performance   In addition to not having resiliency by default, my recollection is that BeeGFS also doesn't guarantee metadata durability i

[ceph-users] Re: CephFS performance

2022-11-23 Thread quag...@bol.com.br
Hi David, First of all, thanks for your reply! BeeGFS gets its resiliency from RAID on the disks (hardware or software) within the same storage node. If greater resilience is needed, the most it can offer is buddy mirroring (which would be another storage machine as a fail

[ceph-users] Re: CephFS performance

2022-11-22 Thread Gregory Farnum
In addition to not having resiliency by default, my recollection is that BeeGFS also doesn't guarantee metadata durability in the event of a crash or hardware failure like CephFS does. There's not really a way for us to catch up to their "in-memory metadata IOPS" with our "on-disk metadata IOPS". :

[ceph-users] Re: CephFS performance

2022-11-22 Thread David C
My understanding is that BeeGFS doesn't offer data redundancy by default; you have to configure mirroring. You've not said how your Ceph cluster is configured, but my guess is you have the recommended 3x replication - I wouldn't be surprised if BeeGFS was much faster than Ceph in this case. I'd be intere
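
For reference, a quick way to confirm which replication factor is actually in effect is to query the "size" setting of the pools backing the filesystem. Below is a minimal sketch using the ceph CLI from Python; the pool names cephfs_metadata and cephfs_data are assumptions, substitute whatever `ceph fs ls` reports for your cluster.

    #!/usr/bin/env python3
    # Minimal sketch: print the replication factor ("size") of the pools
    # backing a CephFS filesystem. Pool names are assumptions; check
    # `ceph fs ls` for the ones your cluster actually uses.
    import json
    import subprocess

    POOLS = ["cephfs_metadata", "cephfs_data"]  # hypothetical names

    def pool_size(pool: str) -> int:
        # `ceph osd pool get <pool> size -f json` returns JSON containing a "size" field
        out = subprocess.run(
            ["ceph", "osd", "pool", "get", pool, "size", "-f", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["size"]

    if __name__ == "__main__":
        for pool in POOLS:
            print(f"{pool}: size={pool_size(pool)}")

With size=3, a client write is only acknowledged once all three replicas are durable, which accounts for much of the latency gap against an unreplicated BeeGFS setup.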