Hi,
I am testing Ceph over RDMA. For one of the tests I had to export the Ceph
filesystem as an NFS share over RDMA transport. For TCP transport, I used
Ganesha as the NFS server; it runs in user space and supports the CephFS FSAL
via libcephfs, and it worked perfectly fine. However, my requirement was t
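For reference, a minimal CephFS FSAL export block of the kind I'm describing
(the Export_ID, Path, and Pseudo values here are only illustrative):

    EXPORT {
        Export_ID = 1;
        Path = "/";
        Pseudo = "/cephfs";
        Access_Type = RW;
        Protocols = 4;
        Transports = TCP;
        FSAL {
            Name = CEPH;   # CephFS FSAL backed by libcephfs
        }
    }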
Hi,
I have been running into some connection issues with the latest ceph-14
version, so we thought the feasible solution would be to roll back the cluster
to the previous version (ceph-13.0.1), where things are known to work properly.
I'm wondering if rollback/downgrade is supported at all?
After
Thanks Greg.
I think I have to re-install ceph v13 from scratch then.
-Raju
From: Gregory Farnum
Sent: 09 August 2018 01:54
To: Raju Rangoju
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] permission errors rolling back ceph cluster to v13
On Tue, Aug 7, 2018 at 6:27 PM Raju Rangoju
Hello,
I'm trying to run iSCSI tgtd on a Ceph cluster. When I do 'rbd list' I see the
errors below.
[root@ceph1 ceph]# rbd list
2018-05-30 18:19:02.227 2ae7260a8140 -1 librbd::api::Image: list_images: error
listing image in directory: (5) Input/output error
2018-05-30 18:19:02.227 2ae7260a8140 -1 librb
Hi,
Recently I have upgraded my Ceph cluster to version 14.0.0 - nautilus(dev) from
version 13.0.1. After this, I noticed some weird data usage numbers on the
cluster.
Here are the issues I'm seeing...
1. The data usage reported is much more than what is available
usage: 16 EiB used,
Igor
On 6/20/2018 6:41 PM, Raju Rangoju wrote:
Hi,
Recently I have upgraded my Ceph cluster to version 14.0.0 - nautilus(dev) from
version 13.0.1. After this, I noticed some weird data usage numbers on the
cluster.
Here are the issues I'm seeing...
1. The data usage reported is
This PR will most probably fix that:
https://github.com/ceph/ceph/pull/22610
Also, you may try switching the BlueStore and BlueFS allocators (the
bluestore_allocator and bluefs_allocator parameters, respectively) to stupid
and restarting the OSDs.
This should help.
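For example, something like this in ceph.conf on the OSD nodes, with a restart
afterwards (the systemctl line assumes a systemd-managed deployment; adjust to
however your OSDs are run):

    [osd]
    # use the stupid allocator for both BlueStore and BlueFS
    bluestore_allocator = stupid
    bluefs_allocator = stupid

and then on each OSD node:

    systemctl restart ceph-osd.target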
Thanks,
Igor
On 6/20/2018 6:41 PM, Raju Ra
Hello All,
I have been collecting performance numbers on our Ceph cluster, and I noticed
very poor throughput with Ceph async+rdma when compared with TCP. I was
wondering what tunings/settings I should apply to the cluster to improve the
Ceph RDMA (async+rdma) performance.
Currently,
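For reference, the messenger configuration in question looks roughly like the
following in ceph.conf (the RDMA device name is only a placeholder; use
whatever ibv_devices reports on your nodes):

    [global]
    # use the async messenger over RDMA instead of TCP
    ms_type = async+rdma
    # RDMA-capable NIC to bind to (placeholder device name)
    ms_async_rdma_device_name = mlx5_0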