On Wed, Jun 11, 2014 at 5:18 AM, Davide Fanciola <dfanci...@gmail.com> wrote:
> Hi,
>
> we have a similar setup, with SSDs and HDDs in the same hosts.
> Our very basic crushmap is configured as follows:
>
> # ceph osd tree
> # id    weight  type name                       up/down reweight
> -6      3       root ssd
> 3       1               osd.3                   up      1
> 4       1               osd.4                   up      1
> 5       1               osd.5                   up      1
> -5      3       root platters
> 0       1               osd.0                   up      1
> 1       1               osd.1                   up      1
> 2       1               osd.2                   up      1
> -1      3       root default
> -2      1               host chgva-srv-stor-001
> 0       1                       osd.0           up      1
> 3       1                       osd.3           up      1
> -3      1               host chgva-srv-stor-002
> 1       1                       osd.1           up      1
> 4       1                       osd.4           up      1
> -4      1               host chgva-srv-stor-003
> 2       1                       osd.2           up      1
> 5       1                       osd.5           up      1
>
>
> We don't seem to have any problems with this setup, but I'm not sure whether
> it's good practice to have the same OSDs appear in multiple branches.
> On the other hand, I see no way to follow the physical hierarchy of the
> datacenter for pools, since a pool can be spread across
> servers/racks/rooms...
>
> Can someone confirm this crushmap is any good for our configuration?

If you accidentally use the "default" node anywhere, you'll get data
scattered across both classes of device. If you try to use both the
"platters" and "ssd" nodes within a single CRUSH rule, you might end
up with copies of data on the same host (reducing your data
resiliency), since those roots don't contain host buckets and CRUSH
can't see that e.g. osd.0 and osd.3 both live on chgva-srv-stor-001.
Otherwise this is just fine.
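For reference, the usual way to pin a pool to one class of device is one
rule per root. A rough, untested sketch (the ruleset numbers and pool names
below are placeholders to adapt; note "choose ... type osd" because your
ssd/platters roots contain OSDs directly rather than host buckets):

    rule ssd {
            ruleset 3               # pick an unused ruleset number
            type replicated
            min_size 1
            max_size 10
            step take ssd           # start from the ssd root
            step choose firstn 0 type osd   # this root has no host buckets
            step emit
    }

    rule platters {
            ruleset 4
            type replicated
            min_size 1
            max_size 10
            step take platters
            step choose firstn 0 type osd
            step emit
    }

After editing the decompiled map and injecting it back (ceph osd getcrushmap,
crushtool -d, edit, crushtool -c, ceph osd setcrushmap), you'd point each pool
at its rule, e.g.:

    ceph osd pool set ssd-pool crush_ruleset 3
    ceph osd pool set hdd-pool crush_ruleset 4

where "ssd-pool" and "hdd-pool" stand in for whatever your pools are called.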
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
