And if you put a big file in CephFS and then delete it, the data is
deleted from the RADOS cluster asynchronously in the background (by
the MDS), so it can take a while to actually be removed. :) If this
weren't the behavior, deleting a file would require you to wait for
each of those (10 GB / 4 MB =) 2560 objects to be deleted in separate
network calls before the unlink returned!
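To make the arithmetic above concrete, here's a quick back-of-the-envelope sketch. It assumes the default CephFS object size of 4 MiB (this is configurable per file or directory via layouts), so the count is illustrative rather than exact for every cluster:

```python
# How many RADOS objects back a 10 GiB CephFS file?
# Assumes the default 4 MiB object size (adjustable via file layouts).

FILE_SIZE = 10 * 1024**3     # 10 GiB test file, in bytes
OBJECT_SIZE = 4 * 1024**2    # 4 MiB default RADOS object size, in bytes

objects = FILE_SIZE // OBJECT_SIZE
print(objects)  # 2560

# A synchronous unlink would have to wait on 2560 separate object
# deletions; instead the MDS removes them in the background and the
# space reported by "ceph -w" shrinks gradually as they go away.
```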
Software Engineer #42 @ http://inktank.com | http://ceph.com

On Wed, Apr 3, 2013 at 9:47 AM, John Wilkins <john.wilk...@inktank.com> wrote:
> Can you elaborate on "manually deleted"? If you used an interface like RBD
> or REST to upload the file, and then just deleted the upload from the file
> system directly, your cluster map wouldn't update. So you'd have a lost
> object.
>
>
> On Tue, Apr 2, 2013 at 2:35 AM, Adam Iwanowski <a.iwanow...@ogicom.pl>
> wrote:
>>
>> Hello.
>>
>> Today I upgraded my cluster to version 0.60 and I noticed a strange thing.
>> I mounted cephfs using the kernel module, uploaded a 10G file to test upload
>> speed, and then manually deleted the file. From the mountpoint I have no data
>> in the cluster, but "ceph -w" still shows 10G of data. Any idea how to clear
>> that non-existing data from the cluster, or is it a bug in the new version?
>>
>> Regards,
>> Adam
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
>
> --
> John Wilkins
> Senior Technical Writer
> Inktank
> john.wilk...@inktank.com
> (415) 425-9599
> http://inktank.com
>
>
