It isn’t clear to me what could cause a loop there.  Just to be sure you don’t 
have filesystem corruption, please run a “find” or “ls -R” on the filestore 
root directory and confirm that it completes.
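
For example (a minimal sketch, assuming the usual filestore data directory 
under /var/lib/ceph/osd/; substitute your actual osd data path and id):

find /var/lib/ceph/osd/ceph-<id> -type f | wc -l
ls -R /var/lib/ceph/osd/ceph-<id> > /dev/null

Either command should walk the entire tree and return; if it hangs or errors 
out, that points at trouble in the underlying filesystem rather than the tool.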

Can you send the log you generated?  Also, what version of Ceph are you running?
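
For example, running this on one of the nodes will show the exact release:

ceph --version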

David Zafman
Senior Developer
http://www.inktank.com

On May 16, 2014, at 6:20 AM, Jeff Bachtel <jbach...@bericotechnologies.com> 
wrote:

> Overnight, I tried to use ceph_filestore_dump to export, from one osd, a pg 
> that is missing from the other osds, with the intent of manually copying the 
> export to the osds in the pg map and importing it there.
> 
> Unfortunately, what is 59 GB of data on disk had filled 1 TB by the time I 
> got in this morning, and the export still hadn't completed. Is it possible 
> for a loop to develop in a ceph_filestore_dump export?
> 
> My C++ isn't the best, but looking at int export_files() in 
> ceph_filestore_dump.cc, it seems a loop could occur if a broken collection 
> were read. Possibly. Maybe.
> 
> The --debug output seems to confirm this?
> 
> grep '^read' /tmp/ceph_filestore_dump.out | sort | wc -l
> 2714
> grep '^read' /tmp/ceph_filestore_dump.out | sort | uniq | wc -l
> 258
> 
> (only 258 unique reads are reported, but each has been repeated more than 10 
> times so far)
> 
> From the start of the debug output:
> 
> Supported features: compat={},rocompat={},incompat={1=initial feature 
> set(~v.18),2=pginfo object,3=object 
> locator,4=last_epoch_clean,5=categories,6=hobjectpool,7=biginfo,8=leveldbinfo,9=leveldblog,10=snapmapper,11=sharded
>  objects}
> On-disk features: compat={},rocompat={},incompat={1=initial feature 
> set(~v.18),2=pginfo object,3=object 
> locator,4=last_epoch_clean,5=categories,6=hobjectpool,7=biginfo,8=leveldbinfo,9=leveldblog,10=snapmapper}
> Exporting 0.2f
> read 8210002f/1000000d228.00019150/head//0
> size=4194304
> data section offset=1048576 len=1048576
> data section offset=2097152 len=1048576
> data section offset=3145728 len=1048576
> data section offset=4194304 len=1048576
> attrs size 2
> 
> Then at line 1810:
> read 8210002f/1000000d228.00019150/head//0
> size=4194304
> data section offset=1048576 len=1048576
> data section offset=2097152 len=1048576
> data section offset=3145728 len=1048576
> data section offset=4194304 len=1048576
> attrs size 2
> 
> 
> If this is a loop due to a broken filestore, is there any recourse for 
> repairing it? The osd I'm trying to dump from isn't in the pg map for the 
> cluster; I'm trying to save some data by exporting this version of the pg 
> and importing it on an osd that is mapped. If I'm failing at a basic premise 
> even trying to do that, please let me know so I can wave off (in which case, 
> I believe I'd use ceph_filestore_dump to delete all copies of this pg in the 
> cluster so I can force-create it, since force create is failing at this time).
> 
> Thanks,
> 
> Jeff
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com