That is correct; you make a tradeoff between space, performance, and
resiliency. By reducing replication from 3 to 2 you gain space and likely
some performance (no overhead from writing a third copy), but you give up
the ability to recover your data when multiple failures occur at once.
Ok, now if I run a lab and the data is somewhat important, but I can bear
losing it: couldn't I shrink the pool replica count, and wouldn't that
increase the amount of storage I can use without using erasure coding?
So for 145TB with a replica of 3 = ~41TB total in the cluster.
But if that same cluster had a replica of 2, it would be ~61TB?
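To make the arithmetic concrete, here is a quick sketch of the usable-capacity estimate, assuming the default 85% nearfull ratio mentioned elsewhere in the thread (the 145TB raw figure is from the thread; the helper name is mine):

```python
# Rough usable-capacity estimate for a replicated Ceph pool.
# Usable space ~= raw capacity * nearfull ratio / replica count.

def usable_tb(raw_tb, replicas, nearfull_ratio=0.85):
    """Approximate usable capacity in TB before the nearfull warning."""
    return raw_tb * nearfull_ratio / replicas

raw = 145  # TB of raw capacity, from the thread

print(f"replica 3: ~{usable_tb(raw, 3):.0f} TB")  # ~41 TB
print(f"replica 2: ~{usable_tb(raw, 2):.0f} TB")  # ~62 TB
```

Dropping from 3 replicas to 2 buys roughly 20TB of usable space here, at the cost of tolerating only a single failure per placement group.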
For example, here is my configuration:
superuser@admin:~$ ceph df
GLOBAL:
    SIZE     AVAIL    RAW USED    %RAW USED
    242T     209T     20783G      8.38
POOLS:
    NAME                 ID    USED     %USED    MAX AVAIL    OBJECTS
    ec_backup-storage    4     9629G    3.88
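As a sanity check, the %RAW USED column above can be recomputed from SIZE and RAW USED, assuming Ceph's binary units (1T = 1024G); the result differs slightly from the reported figure because SIZE is itself rounded:

```python
# Sanity-check the %RAW USED column from the ceph df output above,
# assuming Ceph's binary units (1T = 1024G).

size_g = 242 * 1024   # SIZE: 242T raw capacity, in G
raw_used_g = 20783    # RAW USED from ceph df

pct_used = 100 * raw_used_g / size_g
print(f"%RAW USED ~= {pct_used:.2f}")  # close to the reported 8.38
```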
Thank you! That helps a lot.
On Mar 12, 2015 10:40 AM, "Steve Anthony" wrote:
> Actually, it's more like 41TB. It's a bad idea to run at near full
> capacity (by default past 85%) because you need some space where Ceph can
> replicate data as part of its healing process in the event of disk or
> node failure.
Actually, it's more like 41TB. It's a bad idea to run at near full
capacity (by default past 85%) because you need some space where Ceph
can replicate data as part of its healing process in the event of disk
or node failure. You'll get a health warning when you exceed this ratio.
You can use erasure coding to get more usable space out of the same raw
capacity.
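The erasure-coding tradeoff can be sketched the same way as the replication math; the k=4, m=2 profile below is an illustrative assumption, not a value from this thread:

```python
# Compare usable capacity: 3x replication vs an erasure-coded pool.
# An EC pool stores k data chunks + m coding chunks, so its space
# overhead is (k+m)/k instead of the replica count.
# Profile k=4, m=2 is an assumed example.

def usable_tb_ec(raw_tb, k, m, nearfull_ratio=0.85):
    """Approximate usable TB for an EC pool with k data / m coding chunks."""
    return raw_tb * nearfull_ratio * k / (k + m)

raw = 145  # TB raw, from the thread
print(f"replica 3:   ~{raw * 0.85 / 3:.0f} TB")            # ~41 TB
print(f"EC k=4,m=2:  ~{usable_tb_ec(raw, 4, 2):.0f} TB")   # ~82 TB
```

With m=2 the pool still survives two simultaneous chunk losses, roughly matching the failure tolerance of 3x replication, which is why erasure coding is attractive for pools like ec_backup-storage.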
Hello,
On Thu, Mar 12, 2015 at 3:07 PM, Thomas Foster wrote:
> I am looking into how I can maximize my space with replication, and I am
> trying to understand how I can do that.
>
> I have 145TB of space and a replication of 3 for the pool and was thinking
> that the max data I can have in the cluster would be ~48TB (145/3).