Please send me the crash log.

On Tue, Sep 17, 2019 at 12:56 AM Guilherme Geronimo
<guilherme.geron...@gmail.com> wrote:
>
> Thank you, Yan.
>
> It took about 10 minutes to execute scan_links.
> I believe the number of files in lost+found decreased by roughly 60%, but
> the remaining ones are still causing the MDS to crash.
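>
> (Rough sketch of how I estimated the drop; /mnt/cephfs is just a
> placeholder for wherever the filesystem is mounted:)
>
>     # count the entries left in lost+found after scan_links
>     ls /mnt/cephfs/lost+found | wc -l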
>
> Any other suggestion?
>
> =D
>
> []'s
> Arthur (aKa Guilherme Geronimo)
>
> On 10/09/2019 23:51, Yan, Zheng wrote:
> > On Wed, Sep 4, 2019 at 6:39 AM Guilherme <guilherme.geron...@gmail.com> 
> > wrote:
> >> Dear CEPHers,
> >> Adding some comments to my colleague's post: we are running Mimic 13.2.6
> >> and struggling with two issues (that might be related):
> >> 1) After a "lack of space" event we tried to remove a 40 TB file. The
> >> file is not there anymore, but no space was released. No process is
> >> holding the file open either.
> >> 2) There are many files in /lost+found (~25 TB, about 5% of the
> >> filesystem). Every time we try to remove one of them, the MDS
> >> crashes ([1,2]).
> >>
> >> Dennis Kramer's case [3] led me to believe that I need to recreate the
> >> FS.
> >> But I refuse to believe that Ceph doesn't have a repair tool for this.
> >> I thought "cephfs-table-tool take_inos" could be the answer to my
> >> problem, but the message [4] was not clear enough.
> >> Can I run the command without resetting the inodes?
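> >>
> >> From my reading of the disaster-recovery docs (so this is only my
> >> understanding, not something I have tested; <max_ino> is a
> >> placeholder), the relevant invocations would be something like:
> >>
> >>     # read-only: dump the current inode table, no changes made
> >>     cephfs-table-tool all show inode
> >>
> >>     # claims all inode numbers up to <max_ino> as used, so the MDS
> >>     # will not hand them out again -- it does not zero the table
> >>     cephfs-table-tool all take_inos <max_ino>
> >>
> >> Is that reading correct?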
> >>
> >> [1] Error at ceph -w - https://pastebin.com/imNqBdmH
> >> [2] Error at mds.log - https://pastebin.com/rznkzLHG
> > For the MDS crash issue: 'cephfs-data-scan scan_links' from the Nautilus
> > release (14.2.2) should fix it, by repairing the snaptable. You don't
> > need to upgrade the whole cluster. Just install Nautilus on a temp
> > machine or compile Ceph from source.
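> >
> > Roughly like this (a sketch only -- adjust daemon ids to your
> > deployment, and give the temp machine your cluster's ceph.conf and an
> > admin keyring):
> >
> >     # stop all MDS daemons first; the data-scan tools must run
> >     # while the MDS is offline
> >     systemctl stop ceph-mds@<id>     # on each MDS host
> >
> >     # run from the Nautilus (>= 14.2.2) machine
> >     cephfs-data-scan scan_links
> >
> >     # then bring the MDS back
> >     systemctl start ceph-mds@<id>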
> >
> >> [3] Discussion - 
> >> http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2018-July/027845.html
> >> [4] Discussion - 
> >> http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2018-July/027935.html
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
