From: quag...@bol.com.br
Sent: Wednesday, November 23, 2022 12:28 PM
To: gfar...@redhat.com; dcsysengin...@gmail.com
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: CephFS performance
Hi Gregory,
Thanks for your reply!
We are evaluating possibilities to increase storage
From: "Gregory Farnum"
Sent: 2022/11/22 14:49:12
To: dcsysengin...@gmail.com
Cc: quag...@bol.com.br, ceph-users@ceph.io
Subject: Re: [ceph-users] Re: CephFS performance
Hi David,
First of all, thanks for your reply!
The resiliency of BeeGFS comes from RAID on the disks (hardware or software) within the same storage node. If greater resilience is needed, the most it offers is buddy mirroring (which pairs each storage target with one on another machine as a failover).
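(As a rough illustration of what buddy-mirroring setup involves: a minimal sketch that shells out to beegfs-ctl from Python. The target and group IDs are hypothetical placeholders, and the exact flags should be checked against the beegfs-ctl documentation for your BeeGFS version.)

    import subprocess

    # Pair two storage targets into a mirror buddy group so one can take
    # over if the other fails. The IDs below are made-up placeholders.
    subprocess.run(
        ["beegfs-ctl", "--addmirrorbuddygroup", "--nodetype=storage",
         "--primary=101", "--secondary=201", "--groupid=1"],
        check=True,
    )

    # List the configured buddy groups to verify the pairing took effect.
    subprocess.run(
        ["beegfs-ctl", "--listmirrorbuddygroups", "--nodetype=storage"],
        check=True,
    )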
In addition to not having resiliency by default, my recollection is
that BeeGFS also doesn't guarantee metadata durability in the event of
a crash or hardware failure like CephFS does. There's not really a way
for us to catch up to their "in-memory metadata IOPS" with our
"on-disk metadata IOPS". :
My understanding is BeeGFS doesn't offer data redundancy by default,
you have to configure mirroring. You've not said how your Ceph cluster
is configured but my guess is you have the recommended 3x replication
- I wouldn't be surprised if BeeGFS was much faster than Ceph in this
case. I'd be interested
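(If it helps to check: the pool replication factor is visible through the standard ceph CLI. A quick sketch in Python; the pool names cephfs_data and cephfs_metadata are assumptions, so substitute your own.)

    import json
    import subprocess

    # Query the replication factor ("size") of each CephFS pool via the
    # standard "ceph osd pool get <pool> size" command, in JSON form.
    for pool in ("cephfs_data", "cephfs_metadata"):
        out = subprocess.run(
            ["ceph", "osd", "pool", "get", pool, "size", "-f", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        print(pool, "size =", json.loads(out)["size"])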