On 12/22/2013 1:57 AM, shacky wrote:
>
>     Replication is set on a per pool basis. You can set some, or all,
>     pools to replica size of 2 instead of 3.
>
>
> Thank you very much. I saw this is to be set in the global
> configuration (osd pool default size).
> So it's up to me to configure Ceph to be redundant and fault tolerant?

The default size is already 2, so it will be redundant and somewhat
fault tolerant by default. You can learn quite a bit more about pools by
reviewing the information in the docs here:

http://ceph.com/docs/master/rados/operations/pools/

Just know that replication does not have to be the same across all pools.
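
For example (the pool name "rbd" below is just a placeholder, so
substitute one of your own pools), you could keep the global default at
2 in ceph.conf but raise a single pool to 3:

    # in ceph.conf, [global] section -- affects only pools created later
    osd pool default size = 2

    # check and change the replica count of an existing pool
    ceph osd pool get rbd size
    ceph osd pool set rbd size 3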

> If I set "osd pool default size" to 2, can I be sure that if a
> cluster node goes down my data will be safe?

That is just the default value. Another default is that replication
never places more than one copy of the data on the same host. Depending
on your infrastructure you may want to widen that failure domain to
racks or rows. In any case, losing a single node generally means your
data will be safe (with the defaults). Data loss only becomes possible
if you lose two drives in different hosts BEFORE the cluster finishes
recovering from the first failure. The documentation on the CRUSH map is
your friend when it comes to understanding all of this:

http://ceph.com/docs/master/rados/operations/crush-map/
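
As a rough sketch of what to look for: you can decompile the CRUSH map
and find the replication rule, which by default contains a
"chooseleaf ... type host" step. Changing "host" to "rack" (assuming
your hosts are actually grouped under rack buckets in the map) widens
the failure domain. The rule below is only an illustrative excerpt, so
the names on your cluster will differ:

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # excerpt of a default replicated rule in the decompiled map
    rule data {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type host  # "type rack" spreads copies across racks
            step emit
    }

    # recompile and inject the edited map
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new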

>  
>
>     Ceph uses replication not erasure coding (unlike RAID). So data is
>     completely duplicated in multiple copies. Erasure coding is
>     scheduled for the Firefly release, according to the roadmap.
>
>
> As I said, I expect to have something similar to RAID5: if one hard
> drive per cluster node fails, my data will be safe. If an entire
> cluster node fails, my data will be safe. Could you help me
> understand the correct configuration for this situation?

If you lose an entire cluster node, by default your data would be safe.
However, if you were to lose one drive on each of two nodes quickly
enough (before recovery completes), it would be possible to lose some
data. To avoid that you could set up RAID sets behind each OSD, but that
will drive up your cost per gigabyte and, depending on the RAID
configuration, could mean replicating larger amounts of data when you do
lose an OSD. That sort of setup can also have performance implications.
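
Whichever way you go, it is worth knowing how to watch recovery while it
happens, since the risky window is exactly the time between a drive
failing and recovery completing. A few standard commands (nothing here
is specific to your setup):

    ceph -s              # overall health, including degraded/recovering PGs
    ceph -w              # follow cluster events live as recovery proceeds
    ceph health detail   # list the specific PGs that are degraded
    ceph osd tree        # show which OSDs are down and on which host they live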

I prefer to see drives set up as JBOD, pools with a replication size of
3, the physical infrastructure properly described in the CRUSH map, and
all of it sitting on (at least) a 10Gbps cluster network.
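
In ceph.conf terms that kind of layout looks roughly like the snippet
below; the subnets are made-up placeholders, so substitute your own:

    [global]
    osd pool default size = 3
    public network  = 192.168.1.0/24   # client-facing traffic
    cluster network = 10.10.1.0/24     # replication/recovery over the 10Gbps links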

It might be a bit daunting at first, as there is a lot to learn when it
comes to Ceph, but the documentation really is going to be worth the
read. From your questions, I would suggest going through the
Architecture documentation, which explains RADOS and how data is stored.
Understanding
how Ceph stores data will give you a better idea of how replication and
failures are handled.

http://ceph.com/docs/master/architecture/

>
> Thank you very much for your help!
> Bye.

Good luck!

-- 
JuanJose "JJ" Galvez
Professional Services
Inktank Storage, Inc.
LinkedIn: http://www.linkedin.com/in/jjgalvez

