Hello,
Long story short: last night I did something similar to what Edwin did here:

http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/16314

To begin fixing my mistake, I created another rbd image (in the same pool) and 
mapped and mounted it on the same server:

root@cephmount1:~# rbd showmapped
id pool        image                   snap device
0  npr_archive npr_archive_img         -    /dev/rbd0
1  npr_archive npr_archive_science_img -    /dev/rbd1

root@cephmount1:~# df -h
/dev/rbd0                     80T   76T  4.7T  95% 
/mnt/ceph-block-device-archive
/dev/rbd1                     15T  176M   15T   1% /mnt/science
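
For reference, the second image was created and mounted with roughly these 
commands (the 15T size matches the df output above; the ext4 filesystem is an 
assumption on my part):

```shell
# Create a 15 TB image in the existing pool (size/units syntax may
# vary slightly by rbd version):
rbd create npr_archive/npr_archive_science_img --size 15T

# Map it to a block device; it shows up as /dev/rbd1 per 'rbd showmapped':
rbd map npr_archive/npr_archive_science_img

# Format and mount it (ext4 is illustrative, not confirmed):
mkfs.ext4 /dev/rbd1
mkdir -p /mnt/science
mount /dev/rbd1 /mnt/science
```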

The problem is that whenever I try to copy files (via either cp or rsync) from 
/mnt/ceph-block-device-archive to /mnt/science, after only a very short time 
one of the CPUs becomes pegged at 100% and all copying stops until I kill the 
process.
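
The copy command I have been running looks roughly like this (exact flags may 
have differed between attempts):

```shell
# Copy the archive tree into the new image's mount point.
# -a preserves permissions/ownership/timestamps; --progress shows
# per-file status so I can see where it stalls.
rsync -a --progress /mnt/ceph-block-device-archive/ /mnt/science/
```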

The last time, as best I could tell, a total of 62 MB was copied before the 
CPU got stuck at 100%.

Is this the proper way to copy files from one rbd image to another?

Thanks,

Shain
Shain Miley | Manager of Systems and Infrastructure, Digital Media | 
smi...@npr.org | 202.513.3649
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
