Re: [ceph-users] ceph osd crush tunables optimal AND add new OSD at the same time

2014-07-16 Thread Danny Luhde-Thompson
With 34 x 4TB OSDs over 4 hosts, I had 30% objects moved - about half full and took around 12 hours. Except now I can't use the kclient any more - wish I'd read that first. On 16 July 2014 13:36, Andrija Panic wrote: > For me, 3 nodes, 1MON+ 2x2TB OSDs on each node... no mds used... > I went t
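The rebalance described above comes from switching the CRUSH tunables profile. A minimal sketch of the commands involved, assuming admin access to the cluster (these must be run against a live cluster, so they are shown as a fragment only):

```shell
# Switch the CRUSH map to the "optimal" tunables profile.
# WARNING: on an existing cluster this can trigger a large data
# migration (the poster above saw ~30% of objects moved).
ceph osd crush tunables optimal

# Watch recovery progress until the cluster returns to HEALTH_OK.
ceph -w

# Inspect the resulting tunables; older kernel clients may refuse to
# mount afterwards because they do not understand the newer profile,
# so check client kernel versions before changing this.
ceph osd crush show-tunables
```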

[ceph-users] CephFS problems

2014-04-03 Thread Danny Luhde-Thompson
Hi, Our first attempt at using CephFS in earnest in December ran into a known bug with the kclients hanging in ceph_mdsc_do_request, which I suspected was down to the bug in http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/15838. We were on a default Ubuntu 12.04 3.2 kernel, so recent

[ceph-users] CephFS behaviour for missing objects

2014-04-03 Thread Danny Luhde-Thompson
I accidentally removed some MDS objects (a scary typo in a "rados cleanup"), and when trying to read the files via the kclient I got all zeros instead of some IO failure. Is this expected behaviour? I realise it's generally bad behaviour, but I didn't expect silent zeros. Best regards, Danny
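To check whether the backing objects for a file still exist (e.g. with "rados -p <data-pool> stat <object>"), you first need the object names. CephFS names a file's data objects as the inode number in hex, a dot, and the object index as eight hex digits. A small sketch that computes those names, assuming the default 4 MiB object size and no custom striping (the helper name is hypothetical):

```python
# Sketch: compute the RADOS object names that back a CephFS file,
# assuming the default layout (4 MiB objects, no custom striping).
# Objects are named "<inode-hex>.<object-index as 8 hex digits>".
def cephfs_object_names(inode, file_size, object_size=4 * 1024 * 1024):
    # Ceiling division; even an empty file has one (possibly absent) object.
    n_objects = max(1, -(-file_size // object_size))
    return ["%x.%08x" % (inode, i) for i in range(n_objects)]

# A 10 MiB file with inode 0x10000000000 spans 3 objects:
print(cephfs_object_names(0x10000000000, 10 * 1024 * 1024))
# → ['10000000000.00000000', '10000000000.00000001', '10000000000.00000002']
```

Each of those names can then be checked with "rados stat" against the data pool; a missing middle object would explain reads returning zeros, since the client treats the gap like a sparse hole.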

Re: [ceph-users] Red Hat to acquire Inktank

2014-05-01 Thread Danny Luhde-Thompson
Congratulations! From reading the Red Hat announcement, you get the impression they will want to push Gluster for files, and focus on Ceph for block/object. As someone who is very excited about CephFS and keen on it becoming supported this year, I hope it doesn't become de-prioritised for some ot

Re: [ceph-users] ceph-deploy doesn't update ceph.conf

2013-05-09 Thread Danny Luhde-Thompson
Hi everyone, After reading all the research papers and docs over the last few months and waiting for Cuttlefish, I finally deployed a test cluster of 18 OSDs across 6 hosts. It's performing better than I expected so far, all on the default single interface. I was also surprised by the minimal ce

[ceph-users] cephfs command functional on fuse?

2013-05-09 Thread Danny Luhde-Thompson
I don't seem to be able to use the cephfs command via a fuse mount. Is this expected? I saw no mention of it in the doc. This is on the default precise kernel (3.2.0-40-generic #64-Ubuntu SMP Mon Mar 25 21:22:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux). danny@ceph:/ceph$ cephfs . show_layout Er
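This is expected: the cephfs utility works through Ceph-specific ioctls that only the kernel client implements, so it fails on a ceph-fuse mount. On fuse, layout information is instead exposed through CephFS's virtual extended attributes; a sketch, assuming a fuse mount at a hypothetical path /mnt/cephfs:

```shell
# Read a file's layout via the ceph.file.layout virtual xattr,
# which works on both kernel and fuse mounts (paths are examples):
getfattr -n ceph.file.layout /mnt/cephfs/somefile

# Directories expose the same information under ceph.dir.layout:
getfattr -n ceph.dir.layout /mnt/cephfs/somedir
```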