Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-26 Thread Oliver Dzombic
Hi Greg, thank you for your suggestions. Just let me clarify one little thing: it starts working as soon as I load the kernel rbd/ceph module. I do not need to establish any connection to the ceph cluster based on those modules. Just loading the kernel modules with modprobe (before mounting wi
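A minimal sketch of the workaround described above, assuming the stock kernel module names (ceph for the filesystem client, rbd for the block driver); loading them opens no connection to the cluster:

  # load the kernel clients without mounting anything
  modprobe ceph
  modprobe rbd
  # confirm they are loaded
  lsmod | grep -E '^(ceph|rbd)'
  # only then mount via FUSE, as in the original report
  ceph-fuse -m 10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789,10.0.0.4:6789 /ceph-storage/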

Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-25 Thread Gregory Farnum
On Mon, Jan 25, 2016 at 3:58 PM, Oliver Dzombic wrote: > Hi, > > I have now switched debugging to ms = 10 > > when starting the dd I can see in the logs of the osd: > > 2016-01-26 00:47:16.530046 7f086f404700 1 -- 10.0.0.1:6806/49658 >> :/0 > pipe(0x1f83 sd=292 :6806 s=0 pgs=0 cs=0 l=0 c=0x1dc2e9e0).a

Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-25 Thread Oliver Dzombic
Hi, I have now switched debugging to ms = 10. When starting the dd I can see in the logs of the osd: 2016-01-26 00:47:16.530046 7f086f404700 1 -- 10.0.0.1:6806/49658 >> :/0 pipe(0x1f83 sd=292 :6806 s=0 pgs=0 cs=0 l=0 c=0x1dc2e9e0).accept sd=292 10.0.0.91:56814/0 2016-01-26 00:47:16.530591 7f086f40470
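For anyone reproducing this, the messenger debug level can be raised at runtime instead of editing ceph.conf; a sketch using the standard injectargs mechanism (the wildcard targets every OSD):

  # raise messenger debugging on all OSDs
  ceph tell osd.* injectargs '--debug-ms 10'
  # drop it back once the traces are captured
  ceph tell osd.* injectargs '--debug-ms 0'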

Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-22 Thread Oliver Dzombic
Hi Greg, a lot of: 2016-01-22 03:41:14.203838 7f1bffca3700 0 -- 10.0.0.2:6802/16329 >> 10.0.0.91:0/1536 pipe(0x223f2000 sd=243 :6802 s=0 pgs=0 cs=0 l=1 c=0x8a41fa0).accept replacing existing (lossy) channel (new one lossy=1) 2016-01-22 03:56:14.275055 7f1c0a48f700 0 -- 10.0.0.2:6802/16329 >> 10
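Those "accept replacing existing (lossy) channel" lines mean the OSD is tearing down a client connection and accepting a new one in its place; a quick way to gauge how often that happens, assuming the default log location:

  # count channel replacements per OSD log
  grep -c 'accept replacing existing (lossy) channel' /var/log/ceph/ceph-osd.*.log
  # show the most recent ones with timestamps, to correlate with the stuck dd
  grep 'accept replacing existing' /var/log/ceph/ceph-osd.*.log | tail -n 20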

Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-22 Thread Gregory Farnum
On Fri, Jan 22, 2016 at 2:26 PM, Oliver Dzombic wrote: > Hi Greg, > > from the client the list is huge: > > That's the situation while the dd's are stuck. > > [root@cn201 ~]# ceph daemon /var/run/ceph/ceph-client.admin.asok > objecter_requests > { > "ops": [ > { > "tid": 12,

Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-22 Thread Oliver Dzombic
Hi Greg, from the client the list is huge: That's the situation while the dd's are stuck. [root@cn201 ~]# ceph daemon /var/run/ceph/ceph-client.admin.asok objecter_requests { "ops": [ { "tid": 12, "pg": "6.7230bd94", "osd": 1, "object_id
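The dump above comes from the client's admin socket; for reference, the same query plus a rough count of the stuck ops looks like this (jq is an assumption here, any JSON tool works):

  # dump the in-flight RADOS operations of the ceph-fuse client
  ceph daemon /var/run/ceph/ceph-client.admin.asok objecter_requests
  # count how many ops are outstanding
  ceph daemon /var/run/ceph/ceph-client.admin.asok objecter_requests | jq '.ops | length'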

Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-22 Thread Gregory Farnum
What is the output of the objecter_requests command? It really looks to me like the writes aren't going out and you're backing up on memory, but I can't tell without that. Actually, please grab a dump of the perfcounters while you're at it; that will include info on dirty memory and bytes written o
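The perfcounter dump Greg asks for comes out of the same admin socket; a sketch with the socket path used earlier in the thread:

  # full perfcounter dump, including objecter dirty/written byte counters
  ceph daemon /var/run/ceph/ceph-client.admin.asok perf dump
  # list the counter schemas if a name is unclear
  ceph daemon /var/run/ceph/ceph-client.admin.asok perf schema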

Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-21 Thread Oliver Dzombic
Hi Greg, while running the dd: server: [root@ceph2 ~]# ceph daemon /var/run/ceph/ceph-mds.ceph2.asok status { "cluster_fsid": "", "whoami": 0, "state": "up:active", "mdsmap_epoch": 83, "osdmap_epoch": 12592, "osdmap_epoch_barrier": 12592 } [root@ceph2 ~]# ceph daemon /v
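Since the thread is about stale sessions, the MDS admin socket can also list the client sessions themselves; a sketch using the asok path above (session ls is assumed to be available on this ceph version):

  # MDS state, as shown above
  ceph daemon /var/run/ceph/ceph-mds.ceph2.asok status
  # connected client sessions and their state
  ceph daemon /var/run/ceph/ceph-mds.ceph2.asok session ls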

Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-21 Thread Gregory Farnum
On Thu, Jan 21, 2016 at 4:24 AM, Oliver Dzombic wrote: > Hi Greg, > > alright. > > After shutting down the whole cluster and starting it with "none" as > authentication, I reset the auth rights and restarted the whole > cluster again after setting it back to cephx. > > Now it looks like: > > client.a

Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-21 Thread Oliver Dzombic
Hi Greg, alright. After shutting down the whole cluster and starting it with "none" as authentication, I reset the auth rights and restarted the whole cluster again after setting it back to cephx. Now it looks like: client.admin key: mysuperkey caps: [mds] allow * caps: [mo
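After a reset like this, the restored entity can be verified and re-exported so ceph-fuse picks up the key again (the keyring path is the distribution default, an assumption here):

  # show the admin entity and its caps after the reset
  ceph auth get client.admin
  # re-export the keyring for the FUSE client
  ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring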

Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-21 Thread Oliver Dzombic
Hi Greg, ceph auth list showed client.admin key: mysuperkey caps: [mds] allow caps: [mon] allow * caps: [osd] allow * Then I tried to add the capability for the mds: [root@ceph1 ~]# ceph auth caps client.admin mds 'allow' updated caps for client.admin which was
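The likely pitfall here: ceph auth caps does not append a single capability, it replaces the entire cap set, so the command above would leave client.admin with only the mds cap. A sketch of the non-destructive form:

  # wrong: strips the existing mon/osd caps
  #   ceph auth caps client.admin mds 'allow'
  # right: restate every cap in the same call
  ceph auth caps client.admin mds 'allow' mon 'allow *' osd 'allow *'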

Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-20 Thread Gregory Farnum
On Wed, Jan 20, 2016 at 4:03 PM, Oliver Dzombic wrote: > Hi Greg, > > thank you for your time! > > # ceph -s > >cluster > health HEALTH_WARN > 62 requests are blocked > 32 sec > noscrub,nodeep-scrub flag(s) set > monmap e9: 4 mons at > {ceph1=10.0.0.1:6789/0,ce

Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-20 Thread Oliver Dzombic
Hi Greg, thank you for your time! # ceph -s cluster health HEALTH_WARN 62 requests are blocked > 32 sec noscrub,nodeep-scrub flag(s) set monmap e9: 4 mons at {ceph1=10.0.0.1:6789/0,ceph2=10.0.0.2:6789/0,ceph3=10.0.0.3:6789/0,ceph4=10.0.0.4:6789/0}
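The 62 blocked requests can be attributed to specific OSDs before digging into logs; a sketch of the usual triage (osd.1 is an example id, taken from the objecter dump elsewhere in the thread):

  # name the OSDs with requests blocked > 32 sec
  ceph health detail
  # inspect what one affected OSD is actually stuck on
  ceph daemon /var/run/ceph/ceph-osd.1.asok dump_ops_in_flight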

Re: [ceph-users] ceph fuse closing stale session while still operable

2016-01-20 Thread Gregory Farnum
On Wed, Jan 20, 2016 at 6:58 AM, Oliver Dzombic wrote: > Hi, > > I am testing on a CentOS 6 x64 minimal install. > > I am mounting successfully: > > ceph-fuse -m 10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789,10.0.0.4:6789 > /ceph-storage/ > > > [root@cn201 log]# df > Filesystem 1K-blocks U

[ceph-users] ceph fuse closing stale session while still operable

2016-01-20 Thread Oliver Dzombic
Hi, I am testing on a CentOS 6 x64 minimal install. I am mounting successfully: ceph-fuse -m 10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789,10.0.0.4:6789 /ceph-storage/ [root@cn201 log]# df Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda1 74454192 1228644
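To round out the report: unmounting and remounting a ceph-fuse client for retesting goes through the standard FUSE tooling, and client-side debug logging can be turned up on the command line (the option value is an example, assumed to be accepted as a config override):

  # cleanly unmount the FUSE client
  fusermount -u /ceph-storage
  # remount with more verbose client logging
  ceph-fuse -m 10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789,10.0.0.4:6789 /ceph-storage/ --debug-client=10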