1. Compile Ceph from source from the following branch:
   https://github.com/ukernel/ceph/tree/jewel-cephfs-scan-links
2. Run 'ceph daemon mds.xxx flush journal' to flush the MDS journal.
3. Stop all MDS daemons.
4. Run 'cephfs-data-scan scan_links'.
5. Restart the MDS daemons.
6. Run 'ceph daemon mds.xxx scrub_path / recursive repair'.
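
A minimal shell sketch of the same sequence, assuming a single active MDS with
id 'a' on a systemd-managed Jewel host and the patched cephfs-data-scan from
the branch above on PATH (adjust the daemon name and service commands to your
deployment):

  # 1. flush the MDS journal to the metadata pool
  ceph daemon mds.a flush journal

  # 2. stop every MDS daemon (systemd example; init scripts differ per distro)
  systemctl stop ceph-mds@a

  # 3. rebuild the hard-link / stray bookkeeping
  cephfs-data-scan scan_links

  # 4. bring the MDS back up
  systemctl start ceph-mds@a

  # 5. scrub and repair the whole tree from the root
  ceph daemon mds.a scrub_path / recursive repair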


On Wed, Oct 12, 2016 at 9:51 PM, Davie De Smet
<davie.des...@nomadesk.com> wrote:
> Hi,
>
> That sounds great. I'll certainly try it out.
>
> Kind regards,
>
> Davie De Smet
>
> -----Original Message-----
> From: Yan, Zheng [mailto:uker...@gmail.com]
> Sent: Wednesday, October 12, 2016 3:41 PM
> To: Davie De Smet <davie.des...@nomadesk.com>
> Cc: Gregory Farnum <gfar...@redhat.com>; ceph-users <ceph-us...@ceph.com>
> Subject: Re: [ceph-users] CephFS: No space left on device
>
> I have written a tool that fixes this type of error. I'm currently testing
> it and will push it out tomorrow.
>
> Regards
> Yan, Zheng
>
> On Wed, Oct 12, 2016 at 9:18 PM, Davie De Smet <davie.des...@nomadesk.com> 
> wrote:
>> Hi Gregory,
>>
>> Thanks for the help! I've been looping over all trashcan files and the
>> number of strays is dropping. This is going to take quite some time as
>> there are a lot of files, but so far so good. If I encounter any further
>> problems regarding this topic, I'll give this thread a heads-up.
>>
>> Kind regards,
>>
>> Davie De Smet
>> Director Technical Operations and Customer Services, Nomadesk
>> +32 9 240 10 31 (Office)
>>
>> -----Original Message-----
>> From: Gregory Farnum [mailto:gfar...@redhat.com]
>> Sent: Wednesday, October 12, 2016 2:11 AM
>> To: Davie De Smet <davie.des...@nomadesk.com>
>> Cc: Mykola Dvornik <mykola.dvor...@gmail.com>; John Spray
>> <jsp...@redhat.com>; ceph-users <ceph-us...@ceph.com>
>> Subject: Re: [ceph-users] CephFS: No space left on device
>>
>> On Tue, Oct 11, 2016 at 12:20 AM, Davie De Smet <davie.des...@nomadesk.com> 
>> wrote:
>>> Hi,
>>>
>>> We do use hardlinks a lot. The application using the cluster has a built-in
>>> 'trashcan' functionality based on hardlinks. Obviously, all removed files
>>> and hardlinks are no longer visible on the CephFS mount itself. Can I
>>> manually remove the strays on the OSDs themselves?
>>
>> No, definitely not. At least part of the problem is:
>> *) Ceph stores file metadata organized by its *path* location, not in a 
>> separate on-disk inode data structure like local FSes do.
>> *) When you hard link a file in CephFS, its "primary" location increments 
>> the link counter and its "remote" location just records the inode number 
>> (and it has to look up metadata later on-demand).
>> *) When you unlink the primary link, the inode data gets moved into the 
>> stray directory until one of the remote links comes calling.
>>
>>> Or do you mean that I'm required to do a small touch/write on all files that
>>> have not yet been deleted (this would be painful as the cluster is 200TB+)?
>>
>> Luckily, it doesn't take quite that much work. It looks like just doing a 
>> getattr on all the remote links in your system should do it.
>> If it's just your trash can, "ls -l" on that directory will probably
>> pull them in. Or you could delete the whole trashcan folder (set of
>> folders?) and they'll go away as well.
>> -Greg
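
For reference, a rough sketch of the getattr approach described above, assuming
the trashcan lives under /mnt/cephfs/.trash (a placeholder path) and that your
MDS exposes a num_strays counter in its perf dump (counter names can vary by
release):

  # watch the stray count drop while the loop below runs
  ceph daemon mds.a perf dump | grep num_strays

  # stat every entry under the trashcan; each getattr makes the MDS look up
  # the remote link and reintegrate the matching stray inode
  find /mnt/cephfs/.trash -type f -exec stat {} + > /dev/null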