Unfortunately, no. The FUSE client was discarded due to poor performance, so we have no data points for it.
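For reference, the comparison at the time was simply kernel mount vs. ceph-fuse; a minimal sketch of the two mount paths (monitor address, credentials and mount point below are placeholders, not our real configuration):

  # kernel client
  mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
  # FUSE client
  ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs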

> On 19 July 2017, at 13:45, Blair Bethwaite <blair.bethwa...@gmail.com> 
> wrote:
> 
> Interesting. Any FUSE client data-points?
> 
> On 19 July 2017 at 20:21, Дмитрий Глушенок <gl...@jet.msk.su> wrote:
>> RBD (via krbd) was in use at the same time - no problems there.
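>> For reference, a kernel-mapped image is set up roughly like this (pool and
>> image names are placeholders):
>> 
>>   rbd create rbd/testimg --size 10240
>>   rbd map rbd/testimg              # kernel rbd module exposes the device, typically /dev/rbd0
>>   mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /mnt/rbd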
>> 
>> On 19 July 2017, at 12:54, Blair Bethwaite <blair.bethwa...@gmail.com>
>> wrote:
>> 
>> It would be worthwhile to repeat the first test (crashing/killing an
>> OSD host) with plain RADOS clients (e.g. rados bench) and/or RBD.
>> It's not clear whether your issue is specific to CephFS or actually
>> something else.
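>> For example, something along these lines would exercise plain RADOS while
>> you kill the OSD host (pool name and runtime are only placeholders):
>> 
>>   rados -p testpool bench 60 write --no-cleanup
>>   rados -p testpool bench 60 seq
>>   rados -p testpool cleanup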
>> 
>> Cheers,
>> 
>> On 19 July 2017 at 19:32, Дмитрий Глушенок <gl...@jet.msk.su> wrote:
>> 
>> Hi,
>> 
>> I can share negative test results (on Jewel 10.2.6). All tests were
>> performed while actively writing to CephFS from a single client (about 1300
>> MB/s). The cluster consists of 8 nodes with 8 OSDs each (2 SSDs for journals
>> and metadata, 6 HDDs in RAID6 for data); MON/MDS run on dedicated nodes.
>> There are two MDS in total, in an active/standby configuration.
>> - Crashing one node resulted in writes hanging for 17 minutes. Repeating the
>> test left CephFS hung indefinitely.
>> - Restarting the active MDS resulted in a successful failover to the standby.
>> Then, after the standby became active and the restarted MDS became standby,
>> the new active was restarted. CephFS hung for 12 minutes.
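>> (For anyone repeating this: the failover can be watched with the standard
>> status commands; the daemon name below is only an illustration.)
>> 
>>   ceph mds stat                     # shows which MDS is active / standby
>>   ceph -s                           # overall cluster health during the failover
>>   systemctl restart ceph-mds@mds1   # restart the active MDS on its node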
>> 
>> P.S. I plan to repeat the tests on 10.2.7 or later.
>> 
>> On 19 July 2017, at 6:47, 许雪寒 <xuxue...@360.cn> wrote:
>> 
>> Is anyone else willing to share their experience running CephFS?
>> Could the developers say whether CephFS is a major focus of overall Ceph
>> development?
>> 
>> From: 许雪寒
>> Sent: 17 July 2017, 11:00
>> To: ceph-users@lists.ceph.com
>> Subject: How's cephfs going?
>> 
>> Hi, everyone.
>> 
>> We intend to use CephFS on the Jewel release, but we don't know its current
>> status. Is it production-ready in Jewel? Does it still have many bugs? Is it
>> a major focus of current Ceph development? And who is using CephFS now?
>> 
>> 
>> --
>> Dmitry Glushenok
>> Jet Infosystems
>> 
>> --
>> Cheers,
>> ~Blairo
> 
> 
> 
> -- 
> Cheers,
> ~Blairo

--
Dmitry Glushenok
Jet Infosystems

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
