Thanks Greg,

Do you mean the ceph osd map command is not displaying accurate information?

I guess one of these things is happening with my cluster:
- ceph osd map is not printing true information
- The object-to-PG mapping is not correct (one object is mapped to multiple
PGs)

This is happening for several objects, but the cluster is healthy.

Expert suggestions would be appreciated.
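For what it's worth, my understanding of Greg's point can be sketched as follows: the PG shown by ceph osd map is computed entirely on the client side from a hash of the object name combined with the pool id, so every pool yields a mapping whether or not the object exists there. This is a simplified illustration only — it uses crc32 as a stand-in for Ceph's actual rjenkins hash, and an arbitrary pg_num of 64 — not Ceph's real placement code:

```python
import zlib

def osd_map_sketch(pool_id, pg_num, obj_name):
    # Simplified stand-in for Ceph's placement calculation: the PG id is
    # derived purely from a hash of the object name, then prefixed with the
    # pool id. (Ceph really uses the rjenkins hash and CRUSH; crc32 and
    # pg_num=64 here are illustrative assumptions.)
    # No OSD is ever contacted, so the object does not need to exist.
    h = zlib.crc32(obj_name.encode())
    return "{}.{:x}".format(pool_id, h % pg_num)

name = "rb.0.10f61.238e1f29.000000002ac5"

# The hash suffix is identical for every pool; only the pool id prefix
# differs -- mirroring the 1.x / 2.x / 0.x PGs in the output below.
for pool_id in (1, 2, 0):
    print(osd_map_sketch(pool_id, 64, name))
```

This would explain why the same object name produces a PG in gold, data, and rbd alike: the lookup never checks for the object's presence.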


On Tue, Feb 23, 2016 at 7:20 PM, Gregory Farnum <gfar...@redhat.com> wrote:

> This is not a bug. The map command just says which PG/OSD an object maps
> to; it does not go out and query the osd to see if there actually is such
> an object.
> -Greg
>
>
> On Tuesday, February 23, 2016, Vickey Singh <vickey.singh22...@gmail.com>
> wrote:
>
>> Hello Guys
>>
>> I am getting weird output from osd map. An object that does not exist in a
>> pool is still shown by osd map with a PG and the OSDs on which it is stored.
>>
>> So I have an RBD device coming from pool 'gold'; this image has an object
>> 'rb.0.10f61.238e1f29.000000002ac5'.
>>
>> The commands below verify this:
>>
>> *[root@ceph-node1 ~]# rados -p gold ls | grep -i
>> rb.0.10f61.238e1f29.000000002ac5*
>> *rb.0.10f61.238e1f29.000000002ac5*
>> *[root@ceph-node1 ~]#*
>>
>> This object lives on pool gold and OSDs 38,0,20, which is correct:
>>
>> *[root@ceph-node1 ~]# ceph osd map gold rb.0.10f61.238e1f29.000000002ac5*
>> *osdmap e1357 pool 'gold' (1) object 'rb.0.10f61.238e1f29.000000002ac5'
>> -> pg 1.11692600 (1.0) -> up ([38,0,20], p38) acting ([38,0,20], p38)*
>> *[root@ceph-node1 ~]#*
>>
>>
>> Since I don't have object 'rb.0.10f61.238e1f29.000000002ac5' in the data
>> and rbd pools, rados ls will not list it there, which is expected:
>>
>> *[root@ceph-node1 ~]# rados -p data ls | grep -i
>> rb.0.10f61.238e1f29.000000002ac5*
>> *[root@ceph-node1 ~]# rados -p rbd ls | grep -i
>> rb.0.10f61.238e1f29.000000002ac5*
>>
>>
>> But how come the object shows up in the osd map of the data and rbd pools?
>>
>> *[root@ceph-node1 ~]# ceph osd map data rb.0.10f61.238e1f29.000000002ac5*
>> *osdmap e1357 pool 'data' (2) object 'rb.0.10f61.238e1f29.000000002ac5'
>> -> pg 2.11692600 (2.0) -> up ([3,51,29], p3) acting ([3,51,29], p3)*
>> *[root@ceph-node1 ~]#*
>>
>> *[root@ceph-node1 ~]# ceph osd map rbd rb.0.10f61.238e1f29.000000002ac5*
>> *osdmap e1357 pool 'rbd' (0) object 'rb.0.10f61.238e1f29.000000002ac5' ->
>> pg 0.11692600 (0.0) -> up ([41,20,3], p41) acting ([41,20,3], p41)*
>> *[root@ceph-node1 ~]#*
>>
>>
>> In Ceph, an object is unique and belongs to only one pool. So why does it
>> show up in every pool's osd map?
>>
>> Is this some kind of bug in Ceph?
>>
>> Ceph Hammer 0.94.5
>> CentOS 7.2
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
