Hi,

This doesn't sound like a good idea: two hosts is usually a poor
configuration for Ceph.
Also, fewer disks spread across more servers is typically better than lots
of disks in only a few servers.

But to answer your question: you could use a CRUSH rule like this (the
rule name and ruleset id below are just examples):

rule host_2x2 {
    ruleset 1
    type replicated
    min_size 4
    max_size 4
    step take default
    step choose firstn 2 type host
    step choose firstn 2 type osd
    step emit
}
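
To get the rule into the cluster you can edit the decompiled CRUSH map and
inject it back, roughly like this (file names are just placeholders):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# add the rule above to crushmap.txt, then recompile and inject it
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new

You can also sanity-check the placement before injecting with something like
crushtool -i crushmap.new --test --rule 1 --num-rep 4 --show-statistics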

Then create a pool with size 4 and min_size 2 and assign this CRUSH rule to it.
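
For example (pool name and PG count are just placeholders):

ceph osd pool create testpool 128 128 replicated
ceph osd pool set testpool size 4
ceph osd pool set testpool min_size 2
ceph osd pool set testpool crush_ruleset 1

(On Luminous and newer the last setting is called crush_rule and takes the
rule name instead of the ruleset id.)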


See http://docs.ceph.com/docs/jewel/rados/operations/crush-map/


Paul


2018-04-23 17:17 GMT+02:00 Christopher Meadors <
christopher.mead...@twrcommunications.com>:

> I'm starting to get a small Ceph cluster running. I'm to the point where
> I've created a pool, and stored some test data in it, but I'm having
> trouble configuring the level of replication that I want.
>
> The goal is to have two OSD host nodes, each with 20 OSDs. The target
> replication will be:
>
> osd_pool_default_size = 4
> osd_pool_default_min_size = 2
>
> That is, I want two copies on each host, allowing for OSD failures or host
> failures without data loss.
>
> What is the best way to achieve this replication? Is this strictly a CRUSH map rule,
> or can it be done with the cluster conf? Pointers or examples would be
> greatly appreciated.
>
> Thanks!
>
> --
> Chris



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
