I've found the problem.
The command "ceph osd crush rule create-simple ssd_ruleset ssd root" should
be "ceph osd crush rule create-simple ssd_ruleset ssd host"
___
Hi,
I want to test the Ceph cache tier. The test cluster has three machines, each
with an SSD and a SATA disk. I've created a CRUSH rule, ssd_ruleset, to place
the ssdpool on the SSD OSDs, but the PGs cannot be assigned to the SSDs.
root@ceph10:~# ceph osd crush rule list
[
    "replicated_ruleset",
    "ssd_ruleset"]
root@ceph10
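For reference, the cache-tier setup I eventually want to test is roughly the
standard sequence below ("satapool" is just a placeholder name for the backing
pool on the SATA disks):

root@ceph10:~# ceph osd tier add satapool ssdpool           # attach ssdpool as cache of satapool
root@ceph10:~# ceph osd tier cache-mode ssdpool writeback   # let the SSD tier absorb writes
root@ceph10:~# ceph osd tier set-overlay satapool ssdpool   # send client I/O through the cache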
On Tue, 15 Jul 2014 14:58:28 Marc wrote:
> [global]
> osd crush update on start = false
IMHO it may be easier to set the OSD's CRUSH host from the OSD's own
configuration section, as follows:
[osd.5]
host = realhost
osd crush location = host=crushost
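
The same placement can also be done once from the CLI if you prefer (just a
sketch; the weight of 1.0 and the bucket names are examples):

ceph osd crush create-or-move osd.5 1.0 root=default host=crushost  # names/weight are examples

If "osd crush update on start" is left at false, as in the quoted [global]
snippet above, a manual placement like this will also survive restarts.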
--
Best wishes,
Dmitry
Hi,
to avoid confusion I would name the "host" entries in the crush map
differently. Make sure these host names can be resolved to the correct
boxes though (/etc/hosts on all the nodes). You're also missing a new
rule entry (also shown in the link you mentioned).
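For reference, that rule entry in the decompiled crush map typically looks
something like this (a sketch; the ruleset number and the "ssd" root name are
assumptions):

rule ssd_ruleset {
        ruleset 1                  # example id, pick a free one
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}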
Lastly, and this is *extremely* i
Hello to all,
I was wondering: is it possible to place different pools on different
OSDs while using only two physical servers?
I was thinking about this: http://tinypic.com/r/30tgt8l/8
I would like to use osd.0 and osd.1 for the Cinder/RBD pool, and osd.2 and
osd.3 for Nova instances. I was follo
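Roughly, the kind of crush map split I have in mind looks like this (bucket
names, ids, weights and the pool mapping below are all placeholders; with only
two physical servers the failure domain in these rules has to be osd, unless
artificial host entries are added as suggested earlier in the thread):

root cinder {
        id -10                     # any unused negative bucket id
        alg straw
        hash 0
        item osd.0 weight 1.000
        item osd.1 weight 1.000
}
root nova {
        id -11
        alg straw
        hash 0
        item osd.2 weight 1.000
        item osd.3 weight 1.000
}
rule cinder_ruleset {
        ruleset 2                  # example id
        type replicated
        min_size 1
        max_size 10
        step take cinder
        step chooseleaf firstn 0 type osd
        step emit
}
# nova_ruleset would be identical apart from "step take nova" and its own
# ruleset number; then each pool is pointed at its rule, e.g.:
#   ceph osd pool set volumes crush_ruleset 2    # pool name assumed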