Hi Karan,

So that means I can't have RBD on 2.6.32. Do you know where I can find the source for rbd.ko for other kernel versions, such as 2.6.34?

Regards,
Pratik Rupala

On 7/28/2014 12:32 PM, Karan Singh wrote:
Yes, you can use other features like CephFS and the object store on the kernel release you are running.
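
For example, CephFS can be reached through the userspace ceph-fuse client and objects can be stored with the rados tool, neither of which needs a kernel module. A minimal sketch, assuming the ceph-fuse and rados packages are installed, an MDS has been deployed for CephFS, that "data" is one of your pools, and using the monitor address shown in your ceph -s output:

[ceph@ceph-client1 ~]$ sudo mkdir -p /mnt/cephfs
[ceph@ceph-client1 ~]$ sudo ceph-fuse -m 172.17.35.17:6789 /mnt/cephfs   # userspace CephFS mount, no rbd/ceph kernel module needed
[ceph@ceph-client1 ~]$ echo hello > /tmp/obj.txt
[ceph@ceph-client1 ~]$ rados -p data put test-object /tmp/obj.txt        # store an object in the (assumed) data pool
[ceph@ceph-client1 ~]$ rados -p data ls                                  # list objects in that pool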

- Karan Singh


On 28 Jul 2014, at 07:45, Pratik Rupala <pratik.rup...@calsoftinc.com> wrote:

Hi Karan,

I have a basic Ceph storage cluster in the active+clean state on Linux kernel 2.6.32. As per your suggestion, RBD support starts from kernel 2.6.34. So, can I use other facilities like the object store and CephFS on this setup with 2.6.32, or are they also unsupported on this kernel version? And is there any way to have Ceph block devices on Linux kernel 2.6.32?

Regards,
Pratik Rupala


On 7/25/2014 5:51 PM, Karan Singh wrote:
Hi Pratik

Ceph RBD support was added to the mainline Linux kernel starting with 2.6.34. The following errors show that the RBD module is not present in your kernel.

It's advisable to run the latest stable kernel release if you need RBD to work.

ERROR: modinfo: could not find module rbd
FATAL: Module rbd not found.
rbd: modprobe rbd failed! (256)
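
As a quick check on the client, the following standard commands show the running kernel version and whether it ships an rbd module at all (a simple sketch; on a 2.6.32 kernel, modinfo should report the same "could not find module rbd" error as above):

[ceph@ceph-client1 ~]$ uname -r       # confirm the running kernel version
[ceph@ceph-client1 ~]$ modinfo rbd    # succeeds only if the kernel provides rbd.ko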


- Karan -

On 25 Jul 2014, at 14:52, Pratik Rupala <pratik.rup...@calsoftinc.com> wrote:

Hi,

I am deploying the Firefly release on CentOS 6.4, following the quick installation instructions available at ceph.com.
I have a customized kernel in CentOS 6.4, version 2.6.32.

I am able to create a basic Ceph storage cluster in the active+clean state. Now I am trying to create a block device image on the Ceph client, but it gives the messages shown below:

[ceph@ceph-client1 ~]$ rbd create foo --size 1024
2014-07-25 22:31:48.519218 7f6721d43700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x6a7c50 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x6a8050).fault
2014-07-25 22:32:18.536771 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718006310 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6718006580).fault
2014-07-25 22:33:09.598763 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f67180063e0 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6718007e70).fault
2014-07-25 22:34:08.621655 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718007e70 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180080e0).fault
2014-07-25 22:35:19.581978 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718007e70 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180080e0).fault
2014-07-25 22:36:23.694665 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718007e70 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180080e0).fault
2014-07-25 22:37:28.868293 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718007e70 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180080e0).fault
2014-07-25 22:38:29.159830 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718007e70 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180080e0).fault
2014-07-25 22:39:28.854441 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718001db0 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6718006990).fault
2014-07-25 22:40:14.581055 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718001ac0 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f671800c950).fault
2014-07-25 22:41:03.794903 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718004d30 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f671800c950).fault
2014-07-25 22:42:12.537442 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x6a4640 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x6a4a00).fault
2014-07-25 22:43:18.912430 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718008300 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180080e0).fault
2014-07-25 22:44:24.129258 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718008300 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6718008f80).fault
2014-07-25 22:45:29.174719 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f671800a150 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f671800a620).fault
2014-07-25 22:46:34.032246 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718008390 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f671800a620).fault
2014-07-25 22:47:39.551973 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718008390 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180077e0).fault
2014-07-25 22:48:39.342226 7f6721b41700 0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718001db0 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6718003040).fault

I am not sure whether the block device image has been created or not. I then tried the command below, which fails:
[ceph@ceph-client1 ~]$ sudo rbd map foo
ERROR: modinfo: could not find module rbd
FATAL: Module rbd not found.
rbd: modprobe rbd failed! (256)
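
(For reference, whether the image actually exists in the default rbd pool can be checked with the standard rbd listing commands, which go through the userspace librbd and so do not need the kernel module:

[ceph@ceph-client1 ~]$ rbd ls        # list images in the default rbd pool
[ceph@ceph-client1 ~]$ rbd info foo  # show details of the image if it was created
)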

If I check the health of the cluster, it looks fine.
[ceph@node1 ~]$ ceph -s
   cluster 98f22f5d-783b-43c2-8ae7-b97a715c9c86
    health HEALTH_OK
     monmap e1: 1 mons at {node1=172.17.35.17:6789/0}, election epoch 1, quorum 0 node1
    osdmap e5972: 3 osds: 3 up, 3 in
     pgmap v20011: 192 pgs, 3 pools, 142 bytes data, 2 objects
           190 MB used, 45856 MB / 46046 MB avail
                192 active+clean

Please let me know if I am doing anything wrong.

Regards,
Pratik Rupala




_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
