Go to the zeroth object of the inode and there will be a "backtrace" xattr that contains the path. It will be somewhat mangled, so you may need to hexdump or decode it.
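Something like this is what I mean (a sketch, not verified on your cluster: the pool name is a placeholder, the inode 10002110e12 is taken from the object name in your log, and if I remember right the backtrace is stored on disk under the xattr name "parent" on the <inode>.00000000 object):

    # pull the raw backtrace off the file's first object
    rados -p <your_cephfs_data_pool> getxattr 10002110e12.00000000 parent > /tmp/parent
    # crude, but usually readable enough to pick out the path components
    hexdump -C /tmp/parent
    # or decode it properly
    ceph-dencoder type inode_backtrace_t import /tmp/parent decode dump_json

The dump_json output should show the ancestor dentries, which you can join up into the full path.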
On Tue, Feb 13, 2018 at 3:14 AM Josef Zelenka <josef.zele...@cloudevelops.com> wrote:
> Oh, sorry, forgot to mention - this cluster is running jewel :(
>
> On 13/02/18 12:10, John Spray wrote:
> > On Tue, Feb 13, 2018 at 10:38 AM, Josef Zelenka
> > <josef.zele...@cloudevelops.com> wrote:
> >> Hi everyone, one of the clusters we are running for a client recently
> >> had a power outage. It's currently in a working state, but 3 pgs were
> >> left inconsistent, with this type of error in the log (when I attempt
> >> to ceph pg repair it):
> >>
> >> 2018-02-13 09:47:17.534912 7f3735626700 -1 log_channel(cluster) log [ERR] :
> >> repair 15.1e32 15:4c7eed31:::10002110e12.0000004b:head on disk size (0)
> >> does not match object info size (4194304) adjusted for ondisk to (4194304)
> >>
> >> I know this can be fixed by truncating the on-disk object to the
> >> expected size, but it clearly means we've lost some data. This cluster
> >> is used for cephfs only, so I'd like to find out which files on the
> >> cephfs were affected. I know the OSDs for that pg, and I know which pg
> >> and which object were affected, so I hope it's possible. I found a 2015
> >> entry in the mailing list that does the reverse, i.e. maps a file to a
> >> pg/object
> >> (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-October/005384.html).
> >> I have 230TB of data in that cluster across a lot of files, so mapping
> >> them all would take a long time. I hope there is a way to do this; if
> >> people here have any idea/experience with this, it'd be great.
> >
> > We added a tool in luminous that does this:
> > http://docs.ceph.com/docs/master/cephfs/disaster-recovery/#finding-files-affected-by-lost-data-pgs
> >
> > John
> >
> >> Thanks
> >>
> >> Josef Zelenka
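For reference, on luminous the tool John links to is run roughly like so (a sketch based on the linked docs, with the pg id taken from your repair log; scanning from the filesystem root is an assumption, and you can pass a narrower directory to cut the runtime):

    cephfs-data-scan pg_files / 15.1e32

Since you're on jewel that isn't available, so the backtrace xattr above is the way to go.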