You've probably got some issues with the exact commands you're running and
how they interact with read-only caching; that's a less-common cache type.
You'll need to get somebody who's experienced with those cache types, or
who has worked with them recently, to help out, though.
-Greg

On Tue, Apr 26, 2016 at 4:38 AM, Benoît LORIOT <benoit.lor...@aevoo.fr>
wrote:

> Hi Greg,
>
> yes, the directory is hashed four levels deep and contains files:
>
>
> # ls -l /var/lib/ceph/osd/ceph-1/current/1.0_head/DIR_0/DIR_0/DIR_0/DIR_0/
> total 908
> -rw-r--r--. 1 root root    601 Mar 15 15:01 10000021bdf.00000000__head_E5BD0000__1
> -rw-r--r--. 1 root root 178571 Mar 15 15:06 10000026de5.00000000__head_7E280000__1
> -rw-r--r--. 1 root root   7853 Mar 15 15:16 10000030a58.00000000__head_0DA40000__1
> -rw-r--r--. 1 root root      0 Mar 17 12:51 100000c1ccc.00000000__head_D0730000__1
> -rw-r--r--. 1 root root      0 Mar 17 17:23 100000fc00c.00000000__head_83020000__1
> -rw-r--r--. 1 root root      0 Mar 17 17:23 100000fcae7.00000000__head_C3A70000__1
> [...]
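>
> For what it's worth, a depth-independent way to double-check might be to
> search the whole PG directory for the object instead of guessing the
> nesting, e.g. on osd.1 for the object from before:
>
> # find /var/lib/ceph/osd/ceph-1/current/1.0_head/ -name '100004d6142*'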
>
>
>
> 2016-04-25 17:27 GMT+02:00 Gregory Farnum <gfar...@redhat.com>:
>
>> On Thursday, April 21, 2016, Benoît LORIOT <benoit.lor...@aevoo.fr>
>> wrote:
>>
>>> Hello,
>>>
>>> we want to disable the readproxy cache tier, but before doing so we would
>>> like to make sure we won't lose data.
>>>
>>> Is there a way to confirm that a flush actually writes objects to disk?
>>>
>>> We're using ceph version 0.94.6.
>>>
>>>
>>> Here is what I tried, with cephfs_data_ro_cache being the hot storage
>>> pool and cephfs_data being the cold storage pool:
>>>
>>> # rados -p cephfs_data_ro_cache ls
>>>
>>> then picked a random object from the list: 100004d6142.00000000
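>>>
>>> To pick one from a script rather than by hand, something like this might
>>> work (assuming GNU coreutils provides shuf):
>>>
>>> # rados -p cephfs_data_ro_cache ls | shuf -n 1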
>>>
>>> Find the object on the cache disk:
>>>
>>> # ceph osd map cephfs_data_ro_cache 100004d6142.00000000
>>> osdmap e301 pool 'cephfs_data_ro_cache' (6) object
>>> '100004d6142.00000000' -> pg 6.d010000 (6.0) -> up ([4,5,8], p4) acting
>>> ([4,5,8], p4)
>>>
>>> The object is in pg 6.0 on OSDs 4, 5, and 8, and I can find the file on
>>> disk:
>>>
>>> # ls -l
>>> /var/lib/ceph/osd/ceph-4/current/6.0_head/DIR_0/DIR_0/DIR_0/DIR_0/100004d6142.00000000__head_0D010000__6
>>> -rw-r--r--. 1 root root 0 Apr  8 19:36
>>> /var/lib/ceph/osd/ceph-4/current/6.0_head/DIR_0/DIR_0/DIR_0/DIR_0/100004d6142.00000000__head_0D010000__6
>>>
>>> Flush the object:
>>>
>>> # rados -p cephfs_data_ro_cache cache-try-flush 100004d6142.00000000
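>>>
>>> As far as I understand, cache-try-flush is the non-blocking variant and
>>> may refuse an object that is busy; the blocking form would be:
>>>
>>> # rados -p cephfs_data_ro_cache cache-flush 100004d6142.00000000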
>>>
>>> Find the object in the base pool on disk:
>>>
>>> # ceph osd map cephfs_data 100004d6142.00000000
>>> osdmap e301 pool 'cephfs_data' (1) object '100004d6142.00000000' -> pg
>>> 1.d010000 (1.0) -> up ([1,7,2], p1) acting ([1,7,2], p1)
>>>
>>> The object maps to pg 1.0 on OSDs 1, 7, and 2, but I can't find the file
>>> on disk on any of the three OSDs:
>>>
>>> # ls -l
>>> /var/lib/ceph/osd/ceph-1/current/1.0_head/DIR_0/DIR_0/DIR_0/DIR_0/100004d6142.*
>>> ls: cannot access
>>> /var/lib/ceph/osd/ceph-1/current/1.0_head/DIR_0/DIR_0/DIR_0/DIR_0/100004d6142.*:
>>> No such file or directory
>>>
>>>
>>>
>>> What am I doing wrong? It seems to me that nothing is actually flushed
>>> to disk.
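>>>
>>> One cross-check that doesn't depend on the filestore layout might be to
>>> stat the object directly in the base pool:
>>>
>>> # rados -p cephfs_data stat 100004d6142.00000000
>>>
>>> though if the cache tier is still set as an overlay, that read may be
>>> proxied through the cache pool, so it only shows the object is readable,
>>> not where the bytes live.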
>>>
>>>
>> Is the directory actually hashed four levels deep? Or is it shallower and
>> you're looking too far down?
>> -Greg
>>
>>
>>> Thank you,
>>> Ben.
>>>
>>
>
>
> --
> ________________________________________________________
>
> Best regards,
>
> Benoit LORIOT
>
> Direct line: 07 85 22 44 57
> ________________________________________________________
>