No problem. We are a functional MRI research institute with a fairly
mixed workload. But, I can tell you that we see 60+ Gbps of throughput when
multiple clients are reading sequentially from large files (1+ GB) with 1-4 MB
block sizes. IO involving small files and small block sizes is not very
good. SSDs would help a lot with small IO, but our hardware architecture is
not designed for that, and we don't care too much about throughput when a
person opens a spreadsheet.

One of the greatest benefits we've gained from CephFS, one that wasn't
expected to be as consequential as it was, is the xattrs. Specifically, we
use the ceph.dir.* attributes to track usage, and they have dramatically
reduced the number of metadata operations we perform while trying to
determine statistics about a directory.
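
To give a rough idea of what that looks like from a client, here is a
minimal sketch assuming a Linux machine with the filesystem mounted and
Python's os.getxattr; the mount point and study directory below are made
up for illustration:

import os

def dir_usage(path):
    """Return (recursive bytes, recursive file count) for a CephFS directory."""
    rbytes = int(os.getxattr(path, "ceph.dir.rbytes").decode().strip())
    rfiles = int(os.getxattr(path, "ceph.dir.rfiles").decode().strip())
    return rbytes, rfiles

# hypothetical path on our mount
used, files = dir_usage("/mnt/cephfs/projects/study01")
print("%.2f TiB across %d files" % (used / 2.0**40, files))

One getxattr call per directory replaces what used to be a full recursive
du-style walk.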

But, we very much miss the ability to perform nightly snapshots. I believe
snapshots are supposed to be marked stable soon, but my understanding is
that for now they are still not considered stable. The xattrs have
indirectly given us a partial workaround, but it isn't as convenient as a
filesystem snapshot.
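
For anyone curious what we hope to do once they are stable: CephFS exposes
snapshots as directories you mkdir inside the hidden .snap directory. A
hedged sketch of the nightly job we'd like to run (the path and naming
scheme are assumptions, not something we run today):

import os
from datetime import date

def nightly_snapshot(path):
    # CephFS creates a snapshot when a directory is created under .snap
    snap_name = "nightly-" + date.today().isoformat()
    os.mkdir(os.path.join(path, ".snap", snap_name))
    return snap_name

# nightly_snapshot("/mnt/cephfs/projects")  # hypothetical mount point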

All of that said, you could also consider using RBD with ZFS or whatever
filesystem you like. That would allow you to gain the benefits of scale-out
while still getting a feature-rich filesystem. But there are some downsides
to that architecture too.
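
The basic shape of that alternative is just an RBD image mapped on a client
with ZFS on top. A rough sketch driving the standard rbd and zpool CLIs from
Python; the pool name, image name, and size are placeholders:

import subprocess

def run(*cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

run("rbd", "create", "zfs-backing", "--pool", "rbd", "--size", "10485760")  # size in MB (10 TiB)
run("rbd", "map", "rbd/zfs-backing")         # prints the mapped device, e.g. /dev/rbd0
run("zpool", "create", "tank", "/dev/rbd0")  # assumes it mapped to /dev/rbd0
run("zfs", "create", "tank/home")            # feature-rich filesystem on scale-out storage

You get ZFS snapshots, compression, and quotas that way, but the image can
only safely be mapped on one client at a time, which is one of the downsides
alluded to above.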

On Jul 17, 2017 10:21 PM, "许雪寒" <xuxue...@360.cn> wrote:

Thanks, sir ☺
You've been a great help ☺

May I ask what kind of business you are using CephFS for? What's the IO
pattern? :-)

If answering this would involve any business secrets, I completely understand
if you don't answer. :-)

Thanks again:-)

From: Brady Deetz [mailto:bde...@gmail.com]
Sent: July 18, 2017 8:01
To: 许雪寒
Cc: ceph-users
Subject: Re: [ceph-users] How's cephfs going?

I feel that the correct answer to this question is: it depends.

I've been running a 1.75 PB Jewel-based CephFS cluster in production for
about 2 years at the Laureate Institute for Brain Research. Before that we
had a good 6-8 month planning and evaluation phase. I'm running with
active/standby dedicated MDS servers, 3x dedicated mons, and 12 OSD nodes
with 24 disks in each server. Every group of 12 disks has its journals mapped
to 1x Intel P3700. Each OSD node has dual 40 Gbps Ethernet bonded with LACP.
In our evaluation we did find that the rumors are true: your CPU choice
will influence performance.

Here's why my answer is "it depends." If you expect to get the same
complete feature set as you do with Isilon, ScaleIO, Gluster, or other
more established scale-out systems, it is not production ready. But in
terms of stability, it is. Over the course of the past 2 years I've
triggered one MDS bug that put my filesystem into read-only mode. That bug
was patched in 8 hours thanks to this community. Also, that bug was triggered
by a stupid mistake on my part that the application did not validate before
the action was performed.

If you have a couple of people with a strong background in Linux,
networking, and architecture, I'd say Ceph may be a good fit for you. If
not, maybe not.

On Jul 16, 2017 9:59 PM, "许雪寒" <xuxue...@360.cn> wrote:
Hi, everyone.

We intend to use CephFS on the Jewel release, but we don't know its
status. Is it production ready in Jewel? Does it still have lots of bugs?
Is it a major focus of current Ceph development? And who is using
CephFS now?

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com