Eric,

When creating RBD images with image format 2 in v0.92, can you try:

rbd create SMB01/smb01_d1 --size 1000 --image-format 2 --image-shared

Without the "--image-shared" option, the rbd CLI creates the image with 
RBD_FEATURE_EXCLUSIVE_LOCK, which is not supported by the Linux kernel RBD 
client.
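
If it helps to confirm what was created, "rbd info" prints an image's feature 
set. A check on an image created the default way might look like this (output 
is illustrative, not copied from a real cluster):

# rbd info SMB01/smb01_d1
rbd image 'smb01_d1':
        size 1000 MB in 250 objects
        format: 2
        features: layering, exclusive-lock

An image created with --image-shared should list only layering, which the 3.18 
kernel client can map.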

Thanks,
Raju


From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Eric Eastman
Sent: Sunday, February 08, 2015 8:46 AM
To: Ceph Users
Subject: [ceph-users] Problem mapping RBD images with v0.92

Has anything changed in v0.92 that would keep a 3.18 kernel from mapping an RBD 
image?

I have been using a test script to create and map RBD images since Firefly, and 
the script has worked fine through Ceph v0.91.  It is not working with v0.92, 
so I minimized it to the following three commands, which fail at the rbd map 
step:

# ceph osd pool create SMB01 256 256
pool 'SMB01' created
# rbd create SMB01/smb01_d1 --size 1000 --image-format 2
# rbd map SMB01/smb01_d1
rbd: sysfs write failed
rbd: map failed: (6) No such device or address

The same commands worked fine with 0.80.8, 0.87, and 0.91.

# ceph -v
ceph version 0.92 (00a3ac3b67d93860e7f0b6e07319f11b14d0fec0)

# cat /proc/version
Linux version 3.18.0-031800-generic (apw@gomeisa) (gcc version 4.6.3 
(Ubuntu/Linaro 4.6.3-1ubuntu5) ) #201412071935 SMP Mon Dec 8 00:36:34 UTC 2014

# rbd -p SMB01 ls -l
NAME      SIZE PARENT FMT PROT LOCK
smb01_d1 1000M          2

# ceph -s
    cluster 4488a472-e2f0-11e3-9a32-001e0b4843b4
     health HEALTH_OK
     monmap e1: 1 mons at {t10=172.16.30.10:6789/0}
            election epoch 1, quorum 0 t10
     osdmap e39: 6 osds: 6 up, 6 in
      pgmap v105: 320 pgs, 2 pools, 16 bytes data, 3 objects
            219 MB used, 233 GB / 233 GB avail
                320 active+clean

# lsmod | grep rbd
rbd                    74870  0
libceph               247326  1 rbd

I made sure the permissions were wide open on the /etc/ceph directory:
# ls -la /etc/ceph/
total 20
drwxrwxrwx  2 root root 4096 Feb  7 20:44 .
drwxr-xr-x 90 root root 4096 Feb  7 21:39 ..
-rwxrwxrwx  1 root root   63 Feb  7 20:44 ceph.client.admin.keyring
-rwxrwxrwx  1 root root 1980 Feb  7 20:44 ceph.conf
-rwxrwxrwx  1 root root   92 Feb  2 17:02 rbdmap

Using strace on the rbd map command shows:

# strace -s 60 rbd map SMB01/smb01_d1
execve("/usr/bin/rbd", ["rbd", "map", "SMB01/smb01_d1"], [/* 21 vars */]) = 0
brk(0)                                  = 0x40fe000
...
bind(3, {sa_family=AF_NETLINK, pid=0, groups=00000002}, 12) = 0
getsockname(3, {sa_family=AF_NETLINK, pid=2737, groups=00000002}, [12]) = 0
setsockopt(3, SOL_SOCKET, SO_PASSCRED, [1], 4) = 0
open("/sys/bus/rbd/add_single_major", O_WRONLY) = -1 ENOENT (No such file or 
directory)
open("/sys/bus/rbd/add", O_WRONLY)      = 4
write(4, "172.16.30.10:6789<http://172.16.30.10:6789> 
name=admin,key=client.admin SMB01 smb01_d1"..., 62) = -1 ENXIO (No such device 
or address)
close(4)                                = 0
write(2, "rbd: sysfs write failed", 23rbd: sysfs write failed) = 23
write(2, "\n", 1
)                       = 1
close(3)                                = 0
write(2, "rbd: map failed: ", 17rbd: map failed: )       = 17
write(2, "(6) No such device or address", 29(6) No such device or address) = 29
write(2, "\n", 1
)                       = 1
exit_group(6)                           = ?
+++ exited with 6 +++
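
If the write to /sys/bus/rbd/add is being rejected by the kernel's feature 
check, the reason is normally recorded in the kernel log rather than on 
stderr; a map attempt against an image with exclusive-lock enabled would 
typically leave a line like this in dmesg (illustrative output; 0x4 is the 
exclusive-lock feature bit):

# dmesg | tail -1
rbd: image smb01_d1: image uses unsupported features: 0x4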

Thanks

Eric


