Hi

 

Checking our cluster logs, we found a lot of lines like the following in the OSD logs.

 

One OSD:

<cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.1/rpm/el7/BUILD/ceph-14.2.1/src/cls/rgw/cls_rgw.cc:3461: couldn't find tag in name index tag=48efb8c3-693c-4fe0-bbe4-fdc16f590a82.9710765.5817269

 

Another OSD:

<cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.1/rpm/el7/BUILD/ceph-14.2.1/src/cls/rgw/cls_rgw.cc:979: rgw_bucket_complete_op(): entry.name=_multipart_MBS-25c5afb5-f8f1-43cc-91ee-f49a3258012b/CBB_SRVCLASS2/CBB_DiskImage/Disk_00000000-0000-0000-0000-000000000000/Volume_NTFS_00000000-0000-0000-0000-000000000000$/20190605210028/102.cbrevision.2~65Mi-_pt5OPiV6ULDxpScrmPlrD7yEz.208 entry.instance= entry.meta.category=1

 

All 44 SSD OSDs got lines like those, with different details but all referring to the same cls_rgw.cc.

 

I assume this is related to RGW or the RGW bucket index, but I am not sure what it means.

 

Are these entries OK? If so, how can we disable them?
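In case it helps others reading this: one way the verbosity might be lowered is via the OSD debug subsystem settings. This is only a sketch, assuming these <cls> messages are emitted through the OSD's object-class (cls) logging, which is controlled by the debug_objclass subsystem; whether that silences these particular lines would need to be verified on the cluster.

```shell
# Assumption: the <cls> messages come from the objclass debug subsystem.
# Lower its log level persistently for all OSDs (Nautilus config store):
ceph config set osd debug_objclass 0/0

# Or inject the setting into the running OSDs without a restart:
ceph tell osd.* config set debug_objclass 0/0

# Check the effective value on one OSD afterwards:
ceph config show osd.0 debug_objclass
```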

 

Best Regards

Manuel

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
