Hi all,
One goal of a storage system is to meet a certain durability SLA. To that end
we replicate data across multiple copies and check consistency on a regular
basis (e.g. scrubbing). However, a higher replication factor increases cost (a
cost vs. durability tradeoff), and cluster-wide consistency checking can hurt
performance (a performance vs. durability tradeoff).

Recently I have been trying to figure out the best configuration for this
tradeoff, specifically (the concrete knobs I mean are sketched below):
  1) How many copies do I need? (pool size and min_size)
  2) How frequently should I run scrubbing and deep scrubbing?
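
To make the question concrete, here is roughly what I am looking at right now.
"mypool" is just a placeholder pool name, and the interval values are the
documented defaults rather than a recommendation:

   # Replication: size = total copies, min_size = copies needed to serve I/O
   ceph osd pool set mypool size 3
   ceph osd pool set mypool min_size 2

   # Scrub scheduling (ceph.conf, [osd] section, values in seconds;
   # these are the documented defaults):
   [osd]
   # light scrub at most once a day, forced at least once a week
   osd scrub min interval = 86400
   osd scrub max interval = 604800
   # full-read deep scrub once a week
   osd deep scrub interval = 604800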

Could anyone share their experience tuning these numbers, and what durability
you have been able to achieve as a result?

BTW, S3 claims 99.999999999% durability of objects over a given year, which
seems remarkably high for commodity hardware.
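If I am doing the math right, eleven nines means an expected loss of about 1
object in 10^11 per year, so even storing 10 million objects you would expect
to lose a single object only about once every 10,000 years on average.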

Thanks,
Guang
