On 03/06/2014 08:38 PM, Dan van der Ster wrote:
Hi all,
We're about to go live with some qemu rate limiting to RBD, and I
wanted to crosscheck our values with this list, in case someone can
chime in with their experience or known best practices.
The only reasonable, non-test-suite values I found on the web are:
iops_wr 200
iops_rd 400
bps_wr 40000000
bps_rd 80000000
and those seem (to me) to offer a "pretty good" service level, with more
IOPS than a typical disk yet lower throughput (which is good, considering
our single gigabit NICs on the hypervisors).
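In case it helps to see them in context, those values applied through
libvirt would correspond roughly to an <iotune> block like the one below
(a sketch only, assuming libvirt-managed guests, which may not match how
the limits are actually applied here):

    <disk type='network' device='disk'>
      <!-- rbd source and target definitions elided -->
      <iotune>
        <read_bytes_sec>80000000</read_bytes_sec>   <!-- bps_rd -->
        <write_bytes_sec>40000000</write_bytes_sec> <!-- bps_wr -->
        <read_iops_sec>400</read_iops_sec>          <!-- iops_rd -->
        <write_iops_sec>200</write_iops_sec>        <!-- iops_wr -->
      </iotune>
    </disk>

(The same numbers can also be passed directly to qemu as -drive
suboptions: iops_rd=400,iops_wr=200,bps_rd=80000000,bps_wr=40000000.)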
Our main goal for the rate limiting is to protect the cluster from
abusive users running fio, etc., while not overly restricting our varied
legitimate applications.
Any opinions here?
I normally limit only the writes, since those are the most expensive in a
Ceph cluster due to replication. With reads you can't really overload the
disks, since at some point most of the objects will probably be in the
page cache of the OSDs.
I don't see any good reason to limit reads, but if you do, I'd set it to
something like 2.5k read IOPS and 200 MB/sec or so, just to give the VM
room to burst with reads when needed.
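As a rough sketch, a write-only policy along those lines would look like
this in a libvirt <iotune> block (the read caps below are just the
generous values mentioned above; drop them entirely to leave reads
unlimited):

    <iotune>
      <write_iops_sec>200</write_iops_sec>
      <write_bytes_sec>40000000</write_bytes_sec>
      <!-- optional generous read caps; omit to leave reads unlimited -->
      <read_iops_sec>2500</read_iops_sec>
      <read_bytes_sec>200000000</read_bytes_sec>
    </iotune>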
You'll probably see that your cluster does a lot of writes and not so
many reads.
Wido
Cheers, Dan
--
Wido den Hollander
42on B.V.
Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com