Thanks Janne for your reply.
Here are the reasons that made me think of "physically" splitting the pools:
1) Different usage of the pools: the first one will be used for user
home directories, with intensive read/write access, and the second
one will be used for data storage/backup, with e
On Tue, 12 Jun 2018 at 15:06, Hervé Ballans <
herve.ball...@ias.u-psud.fr> wrote:
> Hi all,
>
> I have a cluster with 6 OSD nodes, each has 20 disks, all of the 120
> disks are strictly identical (model and size).
> (The cluster is also composed of 3 MON servers on 3 other machines)
>
> For design
Hi all,
I have a cluster with 6 OSD nodes, each has 20 disks, all of the 120
disks are strictly identical (model and size).
(The cluster is also composed of 3 MON servers on 3 other machines)
For design reasons, I would like to separate my cluster storage into 2
pools of 60 disks.
My idea is
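One common way to split identical disks into two pools is to give each pool its own CRUSH root and a rule that selects only from that root. Below is a minimal sketch of the rule side, assuming you edit a decompiled CRUSH map; the bucket names `root-home` and `root-backup` are placeholders, not names from this thread:

```
# Sketch: two replicated rules, each drawing from its own root bucket.
# Bucket names (root-home, root-backup) are illustrative assumptions.
rule home_rule {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take root-home
    step chooseleaf firstn 0 type host
    step emit
}

rule backup_rule {
    id 2
    type replicated
    min_size 1
    max_size 10
    step take root-backup
    step chooseleaf firstn 0 type host
    step emit
}
```

Each pool would then be pointed at its rule with `ceph osd pool set <pool> crush_rule <rule-name>`.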
Hello Bradley, in addition to your question, I'm interested in the following:
5) Can I change all 'type' IDs when adding a new type "host-slow" to
distinguish between OSDs with the journal on the same HDD and those with a separate SSD? E.g.
from
type 0 osd
type 1 host
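For what it's worth, in a decompiled CRUSH map a new type can usually be appended under the next free ID rather than renumbering the existing ones, which leaves existing buckets untouched. A sketch of such a types section; the exact numbering is an assumption for illustration:

```
# Sketch: appending a "host-slow" type at a new ID instead of
# renumbering; the ID values shown here are illustrative only.
type 0 osd
type 1 host
type 2 host-slow
```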
I have a test cluster that is up and running. It consists of three mons, and
three OSD servers, with each OSD server having eight OSDs and two SSDs for
journals. I'd like to move from the flat crushmap to a crushmap with typical
depth using most of the predefined types. I have the current c
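Moving from a flat map to a typed hierarchy is typically done by exporting and decompiling the CRUSH map, editing it, and re-injecting it. A sketch of the round trip with the standard `ceph`/`crushtool` commands; the filenames are placeholders:

```shell
# Export the current CRUSH map and decompile it to text.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# ... edit crushmap.txt: add host/rack/root buckets and place OSDs ...
# Recompile and inject the edited map.
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
```

Note that injecting a new map can trigger data movement, so this is usually tested first with `crushtool --test` against the compiled map.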
[Please keep conversations on the list.]
On Mon, May 13, 2013 at 9:15 AM, Gandalf Corvotempesta wrote:
> 2013/5/13 Gregory Farnum :
>> What's your goal here? If the switches are completely isolated from each
>> other, then Ceph is going to have trouble (it expects a fully connected
>> network), so
On Wednesday, May 8, 2013, Gandalf Corvotempesta wrote:
> Let's assume 20 OSD servers and 4x 12-port switches, 2 for the public
> network and 2 for the cluster network.
>
> No link between public switches and no link between cluster switches.
>
> first 10 OSD servers connected to public switch1 and the
Let's assume 20 OSD servers and 4x 12-port switches, 2 for the public
network and 2 for the cluster network.
No link between public switches and no link between cluster switches.
First 10 OSD servers connected to public switch1 and the other 10 OSDs
connected to public switch2. The same applies for the clust