I was building a small test cluster and noticed a difference with trying
to rbd map depending on whether the cluster was built using fedora or
CentOS.
When I used CentOS osds, and tried to rbd map from arch linux or fedora,
I would get "rbd: add failed: (34) Numerical result out of range". It
seemed to happen when the tool was writing to /sys/bus/rbd/add_single_major.
If I rebuild the osds using fedora (20 in this case), everything works.
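A quick way to see which mapping interface the client kernel exposes and where the
error surfaces is roughly the following (assuming the image is named sample in the
default rbd pool):
  modprobe rbd
  cat /sys/module/rbd/parameters/single_major   # absent on kernels without single-major support
  ls /sys/bus/rbd/                              # add, remove, and possibly add_single_major / remove_single_major
  rbd map sample --pool rbd
  dmesg | tail                                  # libceph/rbd errors from the kernel client land here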
Yes Ilya. From that command output alone, I assumed this must be a
rados_classes issue. After that I copied them to the exact location and
restarted all the nodes.
Thanks,
Srinivas.
Hi All,
Thanks a lot to one and all. Thank you so much for your support. I found
the issue with your clues.
*Issue is: the root filesystem does not have /usr/lib64/rados-classes*
After adding the rados-classes directory and restarting all the nodes, I was
able to map the block devices.
Thanks,
Srinivas.
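Concretely, the fix amounts to something like this on each OSD node (a sketch; the
source path for the plugins is a placeholder, and the restart command depends on how
the nodes were provisioned):
  # put the class plugins where osd_class_dir points
  mkdir -p /usr/lib64/rados-classes
  cp /path/to/your/build/rados-classes/*.so /usr/lib64/rados-classes/
  # restart the OSDs so they can load the classes
  /etc/init.d/ceph restart osd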
root@node1:/etc/ceph# ceph daemon osd.0 config get osd_class_dir
{ "osd_class_dir": "\/usr\/lib64\/rados-classes"}
Thanks,
Srinivas.
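The value shown here is the built-in default; if the plugins have to live somewhere
else, the directory can also be pinned in ceph.conf, roughly like this (only needed
when the default does not match your layout):
  [osd]
      osd class dir = /usr/lib64/rados-classes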
On Wed, Apr 16, 2014 at 4:37 PM, Ilya Dryomov wrote:
On Wed, Apr 16, 2014 at 2:45 PM, Srinivasa Rao Ragolu
wrote:
> root@mon:/etc/ceph# find / -name "libcls_rbd.so"
> /usr/lib64/rados-classes/libcls_rbd.so
> root@mon:/etc/ceph# echo $osd_class_dir
>
> root@mon:/etc/ceph#
>
> Please let me know how to find osd_class_dir value
ceph daemon osd.0 config get osd_class_dir
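The same query also works through the admin socket directly on the OSD node (assuming
the default /var/run/ceph socket path and the cluster name "ceph"):
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_class_dir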
root@mon:/etc/ceph# find / -name "libcls_rbd.so"
/usr/lib64/rados-classes/libcls_rbd.so
root@mon:/etc/ceph# echo $osd_class_dir
root@mon:/etc/ceph#
Please let me know how to find osd_class_dir value
Thanks,
Srinivas.
Your ceph.conf please.
Karan Singh
Systems Specialist , Storage Platforms
CSC - IT Center for Science,
Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland
mobile: +358 503 812758
tel. +358 9 4572001
fax +358 9 4572302
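For reference, a ceph.conf for a 1-monitor / 2-OSD cluster like this one is usually
small; a sketch with placeholder values (the fsid, hostname and address below are not
from this thread):
  [global]
      fsid = <cluster uuid>
      mon initial members = mon
      mon host = 192.168.1.10
      auth cluster required = cephx
      auth service required = cephx
      auth client required = cephx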
Thanks. Please see the output of rbd ls -l:
root@mon:/etc/ceph# rbd ls -l
rbd: error opening blk2: (95) Operation not supported
2014-04-16 10:12:13.947625 7f3a2a0c7780 -1 librbd: Error listing snapshots: (95) Operation not supported
rbd: error opening blk3: (95) Operation not supported
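In a setup like this, "(95) Operation not supported" on plain image opens usually
comes back from the OSDs when they cannot load the rbd object class; the OSD log is
the quickest place to confirm that (a sketch, assuming the default log location):
  grep -i class /var/log/ceph/ceph-osd.0.log | tail
  ls /usr/lib64/rados-classes/    # libcls_rbd.so is the plugin these operations need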
Show the output of rbd ls -l.
Hi Wido,
Output of info command is given below:
root@mon:/etc/ceph# rbd info sample
rbd: error opening image sample: (95) Operation not supported
2014-04-16 09:57:24.575279 7f661c6e5780 -1 librbd: Error listing snapshots: (95) Operation not supported
root@mon:/etc/ceph# ceph status
    cluster a
Hi all,
I have created a ceph cluster with 1 monitor node and 2 OSD nodes. Cluster
health is OK and Active.
My deployment is on our private distribution of Linux with kernel 3.10.33, and
the ceph version is 0.72.2.
I was able to create an image with the command "rbd create sample --size 200"
and inserted rbd.ko successfully.
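For reference, the sequence being attempted here is roughly the following (the pool
and device number are the defaults; exact output will differ):
  rbd create sample --size 200   # 200 MB image in the default "rbd" pool
  modprobe rbd                   # or insmod the locally built rbd.ko
  rbd map sample                 # should show up as /dev/rbd0
  rbd showmapped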