Thanks for the reply. Have you migrated all filestore OSDs to the bluestore backend? Or have you upgraded from Luminous 12.2.11 to 14.x?
What helped here?

On Tue, Nov 26, 2019 at 8:03 AM Fyodor Ustinov <u...@ufm.su> wrote:
> Hi!
>
> I had similar errors in pools on SSD until I upgraded to Nautilus (clean
> bluestore installation).
>
> ----- Original Message -----
> > From: "M Ranga Swami Reddy" <swamire...@gmail.com>
> > To: "ceph-users" <ceph-users@lists.ceph.com>, "ceph-devel" <ceph-de...@vger.kernel.org>
> > Sent: Monday, 25 November, 2019 12:04:46
> > Subject: [ceph-users] scrub errors on rgw data pool
> >
> > Hello - We are using Ceph 12.2.11 (upgraded from Jewel 10.2.12 to
> > 12.2.11). In this cluster we have a mix of filestore and bluestore OSD
> > backends.
> > Recently we have been seeing scrub errors on the rgw buckets.data pool every
> > day, after the scrub operation is performed by Ceph. If we run a PG repair, the
> > errors go away.
> >
> > Has anyone seen the above issue?
> > Does the filestore backend have a bug/issue with version 12.2.11 (i.e.
> > Luminous)?
> > Could the mix of filestore and bluestore OSDs cause this type of issue?
> >
> > Thanks
> > Swami
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
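For anyone triaging the same symptom, a rough sketch of the usual sequence follows. This requires a live cluster; the PG id 11.2f below is a placeholder for whatever `ceph health detail` actually reports as inconsistent.

```shell
# Show cluster health, including which PGs are flagged inconsistent
# after a scrub
ceph health detail

# Inspect the inconsistent objects in an affected PG (placeholder id)
# to see whether it is a data, checksum, or omap digest mismatch
rados list-inconsistent-obj 11.2f --format=json-pretty

# Once inspected, ask Ceph to repair that PG
ceph pg repair 11.2f
```

Looking at the inconsistency details before repairing helps distinguish a genuine data error from a digest mismatch, which may matter when deciding whether the mixed filestore/bluestore setup is the cause.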