Re: [ceph-users] Luminous cluster - how to find out which clients are still jewel?

2018-05-28 Thread Massimo Sgaravatto
As far as I know the status wrt this issue is still the one reported in this thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-September/020585.html
See also: http://tracker.ceph.com/issues/21315

Cheers, Massimo

On Tue, May 29, 2018 at 8:39 AM, Linh Vu wrote:
> Hi all,
>
> I

[ceph-users] Luminous cluster - how to find out which clients are still jewel?

2018-05-28 Thread Linh Vu
Hi all,

I have a Luminous 12.2.4 cluster. This is what `ceph features` tells me:

...
    "client": {
        "group": {
            "features": "0x7010fb86aa42ada",
            "release": "jewel",
            "num": 257
        },
        "group": {
            "features": "0x1ffddff8eea4fffb"
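
A minimal sketch, not from the original thread, of how those feature groups can be tied back to individual clients, assuming you can reach a monitor's admin socket (the exact output fields vary by release):

  # summary of feature bits per daemon/client group
  ceph features

  # per-connection detail: each MonSession line shows the client address
  # together with its feature bits, which can be matched against the
  # "jewel" group above
  ceph daemon mon.$(hostname -s) sessions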

Re: [ceph-users] ceph mds memory usage 20GB : is it normal ?

2018-05-28 Thread Yan, Zheng
Could you try the patch at https://github.com/ceph/ceph/pull/22240/files? The leakage of MMDSBeacon messages could explain your issue.

Regards,
Yan, Zheng

On Mon, May 28, 2018 at 12:06 PM, Alexandre DERUMIER wrote:
>>> could you send me full output of dump_mempools
>
> # ceph daemon mds.ceph4-2.odiso
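
For reference, a hedged sketch of the admin-socket commands involved (the daemon name is just an example, substitute your own MDS):

  # per-pool memory accounting inside the MDS
  ceph daemon mds.<name> dump_mempools

  # tcmalloc heap statistics, useful to compare RSS against what the
  # mempools actually account for
  ceph tell mds.<name> heap stats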

Re: [ceph-users] Ceph-fuse getting stuck with "currently failed to authpin local pins"

2018-05-28 Thread Linh Vu
I've seen the exact opposite with the same error message, "currently failed to authpin local pins". We had a few clients on ceph-fuse 12.2.2 and they ran into those issues a lot (evicting works). Upgrading them to ceph-fuse 12.2.5 fixed it. The main cluster is on 12.2.4. The cause is users' HPC jobs or even
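
For anyone hitting the same message, a rough sketch of identifying and evicting a stuck session; the exact syntax differs slightly between releases, so verify against your version (e.g. "ceph tell mds.<name> help"):

  # list client sessions known to the MDS (client id, address, version)
  ceph daemon mds.<name> session ls

  # evict a single stuck session by its client id
  ceph tell mds.<name> session evict id=<client-id>

Note that an evicted client is blacklisted by default and usually needs its mount point remounted afterwards.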

Re: [ceph-users] Ceph-fuse getting stuck with "currently failed to authpin local pins"

2018-05-28 Thread Oliver Freyermuth
Dear Paul,

On 28.05.2018 at 20:16, Paul Emmerich wrote:
> I encountered the exact same issue earlier today immediately after upgrading
> a customer's cluster from 12.2.2 to 12.2.5.
> I've evicted the session and restarted the ganesha client to fix it, as I
> also couldn't find any obvious probl

[ceph-users] ceph , VMWare , NFS-ganesha

2018-05-28 Thread Steven Vacaroaia
Hi,

I need to design and build a storage platform that will be "consumed" mainly by VMware. Ceph is my first choice.

As far as I can see, there are 3 ways Ceph storage can be made available to VMware:
1. iSCSI
2. NFS-Ganesha
3. an rbd mounted on a Linux NFS server

Any suggestions / advice as to whic
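
Purely as an illustration of option 3 (an rbd re-exported by a plain Linux kernel NFS server, which VMware then mounts as an NFS datastore), a minimal sketch with made-up pool, image and path names:

  # create and map an RBD image on the NFS gateway host
  rbd create vmware/datastore01 --size 4T
  rbd map vmware/datastore01

  # put a filesystem on it and mount it
  mkfs.xfs /dev/rbd/vmware/datastore01
  mkdir -p /export/datastore01
  mount /dev/rbd/vmware/datastore01 /export/datastore01

  # export it over NFS
  echo "/export/datastore01 *(rw,no_root_squash,sync)" >> /etc/exports
  exportfs -ra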

Re: [ceph-users] ceph , VMWare , NFS-ganesha

2018-05-28 Thread Brady Deetz
You might look into Open vStorage as a gateway into Ceph.

On Mon, May 28, 2018, 2:42 PM Steven Vacaroaia wrote:
> Hi,
>
> I need to design and build a storage platform that will be "consumed"
> mainly by VMWare
>
> CEPH is my first choice
>
> As far as I can see, there are 3 ways CEPH storage ca

Re: [ceph-users] Ceph-fuse getting stuck with "currently failed to authpin local pins"

2018-05-28 Thread Paul Emmerich
I encountered the exact same issue earlier today, immediately after upgrading a customer's cluster from 12.2.2 to 12.2.5. I've evicted the session and restarted the ganesha client to fix it, as I also couldn't find any obvious problem.

Paul

2018-05-28 16:38 GMT+02:00 Oliver Freyermuth:
> Dear C

Re: [ceph-users] Ceph tech talk on deploy ceph with rook on kubernetes

2018-05-28 Thread Leonardo Vaz
On Fri, May 25, 2018 at 12:14 PM, Brett Niver wrote:
> Is the recording available? I wasn't able to attend.

The video recording has been uploaded to our YouTube channel: https://youtu.be/IdX53Ddcd9E

Kindest regards,
Leo

> Thanks,
> Brett
>
> On Thu, May 24, 2018 at 10:04 AM, Sage Weil w

[ceph-users] Ceph-fuse getting stuck with "currently failed to authpin local pins"

2018-05-28 Thread Oliver Freyermuth
Dear Cephalopodians, we just had a "lockup" of many MDS requests, and also trimming fell behind, for over 2 days. One of the clients (all ceph-fuse 12.2.5 on CentOS 7.5) was in status "currently failed to authpin local pins". Metadata pool usage did grow by 10 GB in those 2 days. Rebooting t
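
A hedged sketch of how the stuck requests can usually be narrowed down from the MDS admin socket (daemon name is illustrative):

  # requests currently in flight; stuck ones show a flag_point such as
  # "failed to authpin local pins" together with the client id
  ceph daemon mds.<name> dump_ops_in_flight

  # recently completed slow requests, for comparison
  ceph daemon mds.<name> dump_historic_ops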

Re: [ceph-users] Radosgw

2018-05-28 Thread Janne Johansson
On Mon, 28 May 2018 at 15:28, Marc-Antoine Desrochers <marc-antoine.desroch...@sogetel.com> wrote:
> Hi,
>
> Im new in a business and I took on the ceph project.
> Im still a newbie on that subject and I try to understand what the
> previous guy was trying to do.
>
> Is there any reason

[ceph-users] Radosgw

2018-05-28 Thread Marc-Antoine Desrochers
Hi,

I'm new at the business and I took on the ceph project. I'm still a newbie on that subject and I'm trying to understand what the previous guy was trying to do.

Is there any reason someone would install radosgw with a cephfs? If not, how can I remove all radosgw configuration without restarti
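
A cautious sketch of what removing an unused radosgw typically involves, assuming the default zone pool names and that nothing else uses them; verify every name before deleting anything:

  # stop and disable the gateway daemon on the host(s) running it
  systemctl stop ceph-radosgw@rgw.<hostname>
  systemctl disable ceph-radosgw@rgw.<hostname>

  # list the pools radosgw created (defaults start with ".rgw" or "default.rgw")
  ceph osd lspools

  # remove a pool only once you are sure it is unused
  # (requires mon_allow_pool_delete = true on the mons)
  ceph osd pool delete default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it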

[ceph-users] About "ceph balancer": typo in doc, restrict by class

2018-05-28 Thread Fulvio Galeazzi
Hello,

I am using 12.2.4 and started using "ceph balancer". Indeed it does a great job, thanks! I have a few comments:

- in the documentation (http://docs.ceph.com/docs/master/mgr/balancer/) I think there is an error, since "ceph config set mgr mgr/balancer/max_misplaced .07" sh
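
For reference: on Luminous the balancer options live in the config-key store rather than the newer `ceph config set` interface, which is presumably the documentation mismatch being referred to. A minimal sketch, to be checked against your release:

  # balancer settings on Luminous are read from config-key
  ceph config-key set mgr/balancer/max_misplaced .07

  # typical usage
  ceph balancer mode crush-compat   # "upmap" needs all clients >= luminous
  ceph balancer eval                # score the current distribution
  ceph balancer on                  # enable automatic optimization
  ceph balancer status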

Re: [ceph-users] Expected performance with Ceph iSCSI gateway

2018-05-28 Thread Jason Dillaman
Since the iSCSI protocol adds an extra hop and an extra layer of complexity, it should be expected to perform slightly worse compared to a direct-path solution like krbd. The RBD iSCSI interface is really a workaround for environments that cannot directly access the Ceph cluster via krbd

[ceph-users] Expected performance with Ceph iSCSI gateway

2018-05-28 Thread Frank (lists)
Hi,

In a test cluster (3 nodes, 24 OSDs) I'm testing the ceph iSCSI gateway (following http://docs.ceph.com/docs/master/rbd/iscsi-targets/). For a client I used a separate server; everything runs CentOS 7.5. The iSCSI gateways are located on 2 of the existing nodes in the cluster. How does iscsi
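
One way to put numbers on this is to run the same fio workload against a kernel-mapped rbd and against the LUN presented by the gateways; a hedged sketch (device names are examples, parameters are just a starting point, and both runs overwrite the target device):

  # 4k random writes against the kernel-mapped rbd device
  fio --name=krbd-test --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based

  # the same workload against the multipath device from the iSCSI gateways
  fio --name=iscsi-test --filename=/dev/mapper/mpatha --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based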

[ceph-users] Cluster network failure, osd declared up

2018-05-28 Thread Lorenzo Garuti
Hi,

consider the following scenario:

- cluster with public and cluster networks
- three-node cluster
- 5 OSDs per node
- 1 mon per node
- two nodes attached to the same 10Gb switch - cluster network (room A)
- one node attached to another 10Gb switch - cluster network (room B)
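
For context, a minimal ceph.conf sketch of the two-network layout described above (subnets are made up; whether an OSD actually gets marked down in a partial failure also depends on heartbeat and reporter settings, only hinted at here):

  [global]
  # client and mon traffic
  public network = 192.168.10.0/24
  # OSD replication and backfill traffic
  cluster network = 192.168.20.0/24

  # how many distinct OSDs must report a peer down before the
  # monitors mark it down
  mon osd min down reporters = 2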