On Wed, Feb 22, 2017 at 1:52 PM, Florent B <flor...@coppint.com> wrote:
> On 02/21/2017 06:43 PM, John Spray wrote:
>> On Tue, Feb 21, 2017 at 5:20 PM, Florent B <flor...@coppint.com> wrote:
>>> Hi everyone,
>>>
>>> I use a Ceph Jewel cluster.
>>>
>>> I have a CephFS with some directories at its root, on which I defined
>>> some layouts:
>>>
>>> # getfattr -n ceph.dir.layout maildata1/
>>> # file: maildata1/
>>> ceph.dir.layout="stripe_unit=1048576 stripe_count=3 object_size=4194304
>>> pool=cephfs.maildata1"
>>>
>>>
>>> My problem is that the default "data" pool contains 44904 EMPTY objects
>>> (the pool's total size is 0), whose names duplicate the objects in my
>>> pool cephfs.maildata1.
>> This is normal: the MDS stores a "backtrace" for each file, that
>> allows it to find the file by inode number when necessary.  Usually,
>> when files are in the first data pool, the backtrace is stored along
>> with the data.  When your files are in a different data pool, the
>> backtrace is stored on an otherwise-empty object in the first data
>> pool.
>>
>
> Ok, and what is the purpose of these empty objects? Is there any
> documentation on this?

As mentioned above, it's for situations where the MDS needs to find a
file by its inode number.
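
If you want to look at one, something along these lines should work (the
object name below is just a placeholder; pick any object from
"rados -p data ls"):

# rados -p data getxattr <object> parent > parent.bin
# ceph-dencoder type inode_backtrace_t import parent.bin decode dump_json

The object's data payload is empty; the backtrace lives in its "parent"
xattr, which the second command decodes into the chain of ancestor
dentries (name plus inode number) leading back up towards the root.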

Backtraces are part of the internal on-disk format, which is not
currently documented.

John
