[ceph-users] rbd image features supported by which kernel version?

2016-08-15 Thread Chengwei Yang
Hi List, I read in the ceph documentation[1] that there are several rbd image features - layering: layering support - striping: striping v2 support - exclusive-lock: exclusive locking support - object-map: object map support (requires exclusive-lock) - fast-diff: fast diff calculations (requir
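
A minimal sketch of keeping an image mappable by the kernel client, assuming a Jewel-era rbd CLI and a hypothetical image named "test":

  # create an image with only the layering feature, which old krbd understands
  rbd create test --size 1024 --image-feature layering

  # or strip newer features from an existing image; list them in this order,
  # since fast-diff depends on object-map, which depends on exclusive-lock
  rbd feature disable test fast-diff object-map exclusive-lock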

[ceph-users] please help explain about failover

2016-08-15 Thread kpeng
hello, sorry I am new to ceph. A question: we have a cluster of 9 nodes, each with 12 hard disks, one OSD per disk. If one node goes down for, say, 30 minutes, will all the replicas it holds be re-replicated to other OSDs during this period? And when the node comes back up, how does ceph handle

Re: [ceph-users] please help explain about failover

2016-08-15 Thread ceph
Look at http://docs.ceph.com/docs/master/rados/configuration/mon-osd-interaction/, there are a couple of settings about "should I consider that OSD down?" As soon as an OSD is down, the cluster starts rebalancing to heal itself (basically, missing objects are copied to healthy OSDs). Then, maybe,
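
For reference, the knob that controls when that rebalancing starts is sketched below; the value shown is the usual default, but check the documentation for your own version:

  [mon]
  # seconds an OSD may stay "down" before it is marked "out"
  # and its data starts being re-replicated elsewhere
  mon osd down out interval = 600

For planned maintenance you can suppress the rebalance entirely:

  ceph osd set noout     # before taking the node down
  ceph osd unset noout   # once it is back up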

[ceph-users] ceph keystone integration

2016-08-15 Thread Niv Azriel
Hey, I have a few questions regarding ceph integration with openstack components such as keystone. I'm trying to integrate keystone to work with my ceph cluster; I've been using this guide: http://docs.ceph.com/docs/hammer/radosgw/keystone/ Now in my openstack environment we decided to ditch the key
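
For context, the hammer guide linked above amounts to a radosgw section in ceph.conf along these lines (the host, port and token are placeholders):

  [client.radosgw.gateway]
  rgw keystone url = http://keystone-host:35357
  rgw keystone admin token = {admin-token}
  rgw keystone accepted roles = Member, admin
  rgw keystone token cache size = 500
  rgw s3 auth use keystone = true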

[ceph-users] CephFS: cached inodes with active-standby

2016-08-15 Thread David
Hi All, When I compare a 'ceph daemon mds.*id* perf dump mds' on my active MDS with my standby-replay MDS, the inode count on the standby is a lot lower than on the active. I would expect to see a very similar number of inodes, or have I misunderstood this feature? My understanding was the replay daem
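
A quick way to make the comparison described above, run on each MDS host in turn (the daemon id is a placeholder):

  ceph daemon mds.<id> perf dump mds | grep -i inode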

[ceph-users] Testing Ceph cluster for future deployment.

2016-08-15 Thread jan hugo prins
Hello, I'm currently in the phase of testing a Ceph setup to see if it will fit our need for a 3 DC storage solution. I installed CentOS 7 with Ceph version 10.2.2. A few things I have noticed so far: - In S3 radosgw-admin I see an error: [root@blsceph01-1 ~]# radosgw-admin user info --uid

[ceph-users] PG is in 'stuck unclean' state, but all acting OSD are up

2016-08-15 Thread Heller, Chris
I’d like to better understand the current state of my Ceph cluster. I currently have 2 PGs that are in the ‘stuck unclean’ state: # ceph health detail HEALTH_WARN 2 pgs down; 2 pgs peering; 2 pgs stuck inactive; 2 pgs stuck unclean pg 4.2a8 is stuck inactive for 124516.91, current state down+p

[ceph-users] Red Hat Ceph Storage

2016-08-15 Thread Александр Пивушков
Hello, dear community. A few questions have come up as we learn Ceph. - What do you think: is buying Red Hat Ceph Storage necessary if we do not plan to use the technical support? How useful is Red Hat Ceph Storage for beginners? Does it have any hidden optimization settings? https://www.red

Re: [ceph-users] Red Hat Ceph Storage

2016-08-15 Thread Nick Fisk
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Александр Пивушков Sent: 15 August 2016 13:19 To: ceph-users Subject: [ceph-users] Red Hat Ceph Storage Hello, dear community. A few questions have come up as we learn Ceph. - What do you think: is buying Red Hat Ceph

Re: [ceph-users] rbd image features supported by which kernel version?

2016-08-15 Thread Wido den Hollander
> On 15 August 2016 at 9:54, Chengwei Yang wrote: > > > Hi List, > > I read in the ceph documentation[1] that there are several rbd image features > > - layering: layering support > - striping: striping v2 support > - exclusive-lock: exclusive locking support > - object-map: object map

Re: [ceph-users] rbd image features supported by which kernel version?

2016-08-15 Thread Ilya Dryomov
On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang wrote: > Hi List, > > I read in the ceph documentation[1] that there are several rbd image features > > - layering: layering support > - striping: striping v2 support > - exclusive-lock: exclusive locking support > - object-map: object map support (r
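
A sketch of checking what a given image currently has enabled, with a hypothetical image name:

  rbd info test
  # inspect the "features:" line; anything beyond layering is liable to be
  # rejected at map time by the kernels common in 2016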

Re: [ceph-users] ceph keystone integration

2016-08-15 Thread Abhishek Lekshmanan
Niv Azriel writes: > Hey, I have a few questions regarding ceph integration with openstack > components such as keystone. > > I'm trying to integrate keystone to work with my ceph cluster, I've been > using this guide http://docs.ceph.com/docs/hammer/radosgw/keystone/ > > Now in my openstack enviro

Re: [ceph-users] Red Hat Ceph Storage

2016-08-15 Thread Александр Пивушков
Thanks for the answer. >Monday, 15 August 2016, 16:07 +03:00 from Nick Fisk: > >From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >Александр Пивушков >Sent: 15 August 2016 13:19 >To: ceph-users < ceph-users@lists.ceph.com > >Subject: [ceph-users] Red Hat Ceph Storag

[ceph-users] /usr/bin/rbdmap: Bad substitution error

2016-08-15 Thread Leo Hernandez
Hi All, I'm trying to get rbdmap to map my block device, but running /usr/bin/rbdmap results in this error: root@cephcl:~# rbdmap map /usr/bin/rbdmap: 32: /usr/bin/rbdmap: Bad substitution Here's what my /etc/ceph/rbdmap file looks like: root@cephcl:~# cat /etc/ceph/rbdmap # RbdDevice

Re: [ceph-users] /usr/bin/rbdmap: Bad substitution error

2016-08-15 Thread Jason Dillaman
You might be hitting this issue [1] if your /bin/sh isn't symlinked to bash. [1] http://tracker.ceph.com/issues/16608 On Mon, Aug 15, 2016 at 1:49 PM, Leo Hernandez wrote: > Hi All, > I'm trying to get rbdmap to map my block device, but running /usr/bin/rbdmap > results in this error: > > root@c
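
A quick check and workaround along the lines of that tracker issue:

  ls -l /bin/sh             # on Debian/Ubuntu this often points to dash
  bash /usr/bin/rbdmap map  # force the script to run under bash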

Re: [ceph-users] /usr/bin/rbdmap: Bad substitution error

2016-08-15 Thread Leo Hernandez
Wow... you're right. That was it! root@cephcl:~# ls -al /bin/sh lrwxrwxrwx 1 root root 4 Nov 8 2014 /bin/sh -> dash Relinked to Bash and all is dandy. Almost 3 days of trying to sort this out. Thank you! On Mon, Aug 15, 2016 at 12:01 PM, Jason Dillaman wrote: > You might be hitting this

[ceph-users] rbd readahead settings

2016-08-15 Thread EP Komarla
Team, I am trying to configure the rbd readahead values. Before I increase them, I want to find out the values they are currently set to. How do I know the values of these parameters? rbd readahead max bytes rbd readahead trigger requests rbd readahead disable after bytes Thanks, - epk
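
Two ways to read these values, sketched under the assumption of a Jewel-era client (the socket path is illustrative):

  # compiled-in defaults, without contacting a cluster
  ceph --show-config --conf /dev/null | grep rbd_readahead

  # effective values of a running client via its admin socket
  ceph daemon /var/run/ceph/ceph-client.admin.asok config show | grep rbd_readahead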

Re: [ceph-users] MDS crash

2016-08-15 Thread Randy Orr
Hi Patrick, We continue to hit this bug. Just a couple of questions: 1. I see that http://tracker.ceph.com/issues/16983 has been updated and you believe it is related to http://tracker.ceph.com/issues/16013. It looks like this fix is scheduled to be backported to Jewel at some point... is there a

Re: [ceph-users] rbd readahead settings

2016-08-15 Thread Bruce McFarland
You'll need to set it on the monitor too. Sent from my iPhone > On Aug 15, 2016, at 2:24 PM, EP Komarla wrote: > > Team, > > I am trying to configure the rbd readahead values. Before I increase them, I > want to find out the values they are currently set to. How do I > know the valu

Re: [ceph-users] rbd readahead settings

2016-08-15 Thread Christian Balzer
Hello, On Mon, 15 Aug 2016 16:28:55 -0700 Bruce McFarland wrote: > You'll need to set it on the monitor too. > Where are you getting that "need" from? As a [client] setting, it should be sufficient to make the changes in ceph.conf on the client machines, just like the "rbd cache"
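
In other words, a client-side block along these lines (the numbers are illustrative, not recommendations):

  [client]
  rbd readahead trigger requests = 10
  rbd readahead max bytes = 524288
  rbd readahead disable after bytes = 52428800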

Re: [ceph-users] PG is in 'stuck unclean' state, but all acting OSD are up

2016-08-15 Thread Goncalo Borges
Hi Heller... Can you actually post the result of ceph pg dump_stuck ? Cheers G. On 08/15/2016 10:19 PM, Heller, Chris wrote: I’d like to better understand the current state of my CEPH cluster. I currently have 2 PG that are in the ‘stuck unclean’ state: # ceph health detail HEALTH_W

Re: [ceph-users] PG is in 'stuck unclean' state, but all acting OSD are up

2016-08-15 Thread Heller, Chris
Output of `ceph pg dump_stuck`:

  # ceph pg dump_stuck
  ok
  pg_stat  state         up          up_primary  acting      acting_primary
  4.2a8    down+peering  [79,8,74]   79          [79,8,74]   79
  4.c3     down+peering  [56,79,67]  56          [56,79,67]  56

-Chris From: Goncalo Borges Date: Monday, A
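
A sketch of the next diagnostic step for the two pgs named above:

  # dumps peering state, past intervals and any OSDs blocking recovery
  ceph pg 4.2a8 query
  ceph pg 4.c3 query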

Re: [ceph-users] rbd image features supported by which kernel version?

2016-08-15 Thread Chengwei Yang
On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote: > On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang > wrote: > > Hi List, > > > > I read in the ceph documentation[1] that there are several rbd image features > > > > - layering: layering support > > - striping: striping v2 support > > - e

Re: [ceph-users] MDS crash

2016-08-15 Thread Yan, Zheng
On Tue, Aug 16, 2016 at 6:29 AM, Randy Orr wrote: > Hi Patrick, > > We continue to hit this bug. Just a couple of questions: > > 1. I see that http://tracker.ceph.com/issues/16983 has been updated and you > believe it is related to http://tracker.ceph.com/issues/16013. It looks like > this fix is

Re: [ceph-users] rbd image features supported by which kernel version?

2016-08-15 Thread Jack Makenz
Yes, I have this problem too. Actually even the newest kernels don't support these features. It's a strange problem. On Aug 16, 2016 6:36 AM, "Chengwei Yang" wrote: > On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote: > > On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang > > wrote: > > > H

[ceph-users] ceph map error

2016-08-15 Thread Yanjun Shen
hi, when I run rbd map -p rbd test, I get an error: hdu@ceph-mon2:~$ sudo rbd map -p rbd test rbd: sysfs write failed In some cases useful info is found in syslog - try "dmesg | tail" or so. rbd: map failed: (5) Input/output error dmesg | tail [ 4148.672530] libceph: mon1 172.22.111.173:6789 feature

Re: [ceph-users] ceph map error

2016-08-15 Thread kpeng
It seems the rbd client can't communicate with the ceph monitor. Try checking whether iptables on the monitors is blocking the requests. On 2016/8/16 11:18, Yanjun Shen wrote: hi, when I run rbd map -p rbd test, I get an error: hdu@ceph-mon2:~$ sudo rbd map -p rbd test rbd: sysfs write failed In some cases useful i

Re: [ceph-users] PG is in 'stuck unclean' state, but all acting OSD are up

2016-08-15 Thread Goncalo Borges
Hi Chris... The precise osd set you see now, [79,8,74], was obtained at epoch 104536, but this was after a lot of tries, as shown by the recovery section. Actually, in the first try (at epoch 100767) osd 116 was selected somehow (maybe it was up at the time?) and probably the pg got stuck becau

Re: [ceph-users] ceph map error

2016-08-15 Thread Chengwei Yang
On Tue, Aug 16, 2016 at 11:18:06AM +0800, Yanjun Shen wrote: > hi, >    when i run cep map -p pool rbd test, error > hdu@ceph-mon2:~$ sudo rbd map -p rbd test > rbd: sysfs write failed > In some cases useful info is found in syslog - try "dmesg | tail" or so. > rbd: map failed: (5) Input/output err