Re: [ceph-users] crushmap rule for not using all buckets

2017-09-04 Thread David Turner
I am unaware of any way to have one pool use all 3 racks and another pool use only 2 of them. If you could put the same OSD in 2 different roots, or have a CRUSH rule choose from 2 different roots, then this might work, but to my knowledge neither of those is possible. What is your reason for wanting this?
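For reference, a replicated rule in a decompiled crushmap has roughly this shape (the id, name and size limits below are illustrative, not taken from the poster's map):

  rule replicated_racks {
      id 1
      type replicated
      min_size 1
      max_size 10
      step take default
      step chooseleaf firstn 0 type rack
      step emit
  }

The placement walk starts at the single bucket named in "step take", so restricting one pool to a subset of racks normally means pointing its rule at a different bucket or root, which is where the "same OSD under two roots" problem described above comes in.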

[ceph-users] crushmap rule for not using all buckets

2017-09-04 Thread Andreas Herrmann
Hello, I'm building a 5-server cluster spread over three rooms/racks. Each server has 8 x 960 GB SSDs used as BlueStore OSDs. Ceph version 12.1.2 is used.

rack1: server1 (mon), server2
rack2: server3 (mon), server4
rack3: server5 (mon)

The crushmap was built this way: ceph osd …
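For context, a rack hierarchy like the one described above is typically created with commands along these lines (a sketch using the rack/server names from the layout, not the poster's exact commands):

  # create rack buckets and hang them under the default root
  ceph osd crush add-bucket rack1 rack
  ceph osd crush add-bucket rack2 rack
  ceph osd crush add-bucket rack3 rack
  ceph osd crush move rack1 root=default
  ceph osd crush move rack2 root=default
  ceph osd crush move rack3 root=default

  # place each host bucket under its rack
  ceph osd crush move server1 rack=rack1
  ceph osd crush move server2 rack=rack1
  ceph osd crush move server3 rack=rack2
  ceph osd crush move server4 rack=rack2
  ceph osd crush move server5 rack=rack3

A replicated pool can then use a rule with "step chooseleaf firstn 0 type rack" so that each replica lands in a different rack.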