>> Can you suggest a good CephFS design?

One that uses copious complements of my employer’s components, naturally ;)

>> I've never used it; we only have rgw and rbd, but I want to give it a try.
>> However, on the mailing list I saw a huge number of issues with CephFS,

Something to remember about the list is that people are far more likely to post 
when they have a problem than when things are running fine, so it’s easy to 
mistake that for instability.  For every issue posted, there are a bunch of 
clusters humming right along.

>> so I would like to go with some, let's say,
>> bulletproof best practices.
>> 
>> Like separating the MDS from the mon and mgr?
>> Need a lot of memory?
>> Should it be on SSD or NVMe?
>> How many CPUs/disks ...

Like Peter wrote, that’s very dependent on the scale and nature of your 
workload.
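
That said, purely as a sketch and not a sizing recommendation (the hostnames, 
service_id, and cache size below are placeholders): on a cephadm deployment, 
one common pattern is to pin the MDS daemons to their own hosts, away from the 
mons and mgrs, and give them a generous cache:

    # mds-spec.yaml -- run MDS daemons on dedicated hosts
    service_type: mds
    service_id: myfs          # the name of your CephFS filesystem
    placement:
      hosts:
        - mds-host-1
        - mds-host-2

    $ ceph orch apply -i mds-spec.yaml

    # Give the MDS a bigger cache if the RAM is there, e.g. 16 GiB:
    $ ceph config set mds mds_cache_memory_limit 17179869184

One thing that isn't workload-dependent: the metadata pool is small but 
latency-sensitive, so put it on SSD/NVMe even if the data pool lives on 
spinners.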
