Hi,
I am experiencing an issue with CephFS on a cache-tiered pool, where the
kernel clients read files back filled entirely with zeros.
The setup:
ceph 0.94.3
create cephfs_metadata replicated pool
create cephfs_data replicated pool
cephfs was created on the above two pools, populated with files, then:
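[The remaining setup steps were cut off above. A minimal sketch of a typical
cache-tier setup on Hammer (0.94.x); the cache pool name and PG counts are
illustrative, not taken from the original report:]

```shell
# Base CephFS pools (PG counts are illustrative)
ceph osd pool create cephfs_metadata 64
ceph osd pool create cephfs_data 64
ceph fs new cephfs cephfs_metadata cephfs_data

# Hypothetical writeback cache tier in front of the data pool
ceph osd pool create cephfs_data_cache 64
ceph osd tier add cephfs_data cephfs_data_cache
ceph osd tier cache-mode cephfs_data_cache writeback
ceph osd tier set-overlay cephfs_data cephfs_data_cache
```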
Hi John and Zheng,
Thanks for the quick replies!
I'm using kernel 4.2. I'll test out that fix.
Arthur
On Wed, Sep 2, 2015 at 10:29 PM, Yan, Zheng wrote:
> probably caused by http://tracker.ceph.com/issues/12551
>
> On Wed, Sep 2, 2015 at 7:57 PM, Arthur Liu wrote:
On Mon, Jan 18, 2016 at 11:34 PM, Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:
> Hi,
>
> On 18.01.2016 10:36, david wrote:
>
>> Hello All.
>> Does anyone provide Ceph rbd/rgw/cephfs through NFS? I have a
>> requirement about a Ceph Cluster which needs to provide
On Tue, Jan 19, 2016 at 12:36 PM, Gregory Farnum wrote:
> > I've found that using knfsd does not preserve cephfs directory and file
> > layouts, but using nfs-ganesha does. I'm currently using nfs-ganesha
> > 2.4dev5 and it seems stable so far.
>
> Can you expand on that? In what manner is it not
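[One way to compare how knfsd and nfs-ganesha handle layouts is to inspect
the CephFS layout virtual xattrs on a native mount versus through the NFS
export. A sketch; the mount point and paths below are placeholders:]

```shell
# Read the directory and file layouts on a native CephFS mount
getfattr -n ceph.dir.layout /mnt/cephfs/mydir
getfattr -n ceph.file.layout /mnt/cephfs/mydir/somefile

# Pin a directory's data to a specific pool, then check whether files
# created through the NFS export inherit it
setfattr -n ceph.dir.layout.pool -v cephfs_data /mnt/cephfs/mydir
```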