With 34 x 4TB OSDs over 4 hosts, I had 30% of objects moved - the cluster is
about half full and the rebalance took around 12 hours. Except now I can't use the kclient any more -
wish I'd read that first.
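For a rough sense of scale (back-of-envelope only, assuming "half full" refers
to raw usage and that 30% of objects is roughly 30% of the bytes), and the
usual way to watch it while it runs:

    # 34 x 4 TB = 136 TB raw; about half full = ~68 TB on disk
    # 30% of ~68 TB = ~20 TB rewritten over ~12 hours = roughly 0.5 GB/s aggregate
    ceph -s    # one-shot summary with the remaining recovery percentage
    ceph -w    # follow recovery/backfill progress live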
On 16 July 2014 13:36, Andrija Panic wrote:
> For me, 3 nodes, 1 MON + 2x2TB OSDs on each node... no MDS used...
> I went t
Hi,
Our first attempt at using CephFS in earnest in December ran into a known
bug with the kclients hanging in ceph_mdsc_do_request, which I suspected
was down to the bug in
http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/15838. We
were on a default Ubuntu 12.04 3.2 kernel, so recent
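For anyone hitting something similar, a quick way to confirm it is the same
hang (a sketch only; assumes a kernel with /proc/<pid>/stack available and the
hung-task watchdog enabled):

    cat /proc/<pid>/stack          # should show ceph_mdsc_do_request for the stuck task
    dmesg | grep -i "blocked for"  # hung-task warnings from the kernel, if any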
I accidentally removed some MDS objects (a scary typo in a "rados
cleanup"), and when trying the read the files via the kclient I got all
zeros instead of some IO failure. Is this expected behaviour? I realise
it's generally bad behaviour, but I didn't expect silent zeros.
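One rough way to check whether the data objects behind a file are still there
(a sketch only; the pool name "data" and the file path are assumptions, and
listing a big pool is slow):

    ino=$(printf '%x' "$(stat -c %i /ceph/somefile)")   # file's inode number in hex
    rados -p data ls | grep "^${ino}\."                  # CephFS data objects are named <hex inode>.<block>
    rados -p data stat "${ino}.00000000"                 # first object of the file, if it still exists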
Best regards,
Danny
Congratulations! From reading the Red Hat announcement, you get the
impression they will want to push Gluster for files, and focus on Ceph for
block/object. As someone who is very excited about CephFS and keen on it
becoming supported this year, I hope it doesn't become de-prioritised for
some ot
Hi everyone,
After reading all the research papers and docs over the last few months and
waiting for Cuttlefish, I finally deployed a test cluster of 18 OSDs across
6 hosts. It's performing better than I expected so far, all on the default
single interface.
I was also surprised by the minimal ce
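If the single interface ever becomes the bottleneck, the usual next step is to
split client and replication traffic; a minimal ceph.conf sketch (the subnets
below are placeholders):

    [global]
        public network  = 192.168.1.0/24    # client <-> MON/OSD traffic
        cluster network = 192.168.2.0/24    # OSD <-> OSD replication and recovery traffic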
I don't seem to be able to use the cephfs command via a fuse mount. Is
this expected? I saw no mention of it in the doc. This is on the default
precise kernel (3.2.0-40-generic #64-Ubuntu SMP Mon Mar 25 21:22:10 UTC
2013 x86_64 x86_64 x86_64 GNU/Linux).
danny@ceph:/ceph$ cephfs . show_layout
Er
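As far as I know the old cephfs tool talks to the kernel client via ioctls, so
it isn't expected to work on a ceph-fuse mount. On newer releases (an
assumption about the version in use) the layout is exposed as virtual extended
attributes instead, which do work through FUSE:

    getfattr -n ceph.file.layout /ceph/somefile    # per-file layout ("somefile" is a placeholder)
    getfattr -n ceph.dir.layout /ceph              # directory layout, if one has been set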