Many others I’m sure will comment on the snapshot specifics.

However, running a cluster with some 8TB drives, I have noticed huge 
differences between 4TB and 8TB drives in their peak latencies when busy. So 
along with the known snapshot performance cost, you may find the higher seek 
times and the higher TB-per-disk ratio aren't helping much either.

Ashley

29 Jun 2017, at 10:43 PM, Stanislav Kopp <stask...@gmail.com> wrote:

Hi,

we're testing a Ceph cluster as a storage backend for our virtualization
platform (Proxmox), using RBD for raw VM images. When I try to restore a
snapshot with "rbd snap rollback", the whole cluster becomes really slow:
the "apply_latency" goes from normally 0-10ms up to 4000-6000ms, and I see
heavy load on the OSDs with many reads/writes and many blocked processes
in "vmstat". After the restore is finished, everything is fine again.
My question is: is it possible to set some "priority" for the snapshot
restore, similar to "nice", so that it doesn't stress the OSDs so much?

BTW, I'm using Ceph 11.2 on Ubuntu 16.04, 4 nodes with 16 OSDs (8TB each),
plus one Intel 3710 SSD per 4 OSDs for journals.

Best,
Stan
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
