> On 24 Nov 2018, at 18.09, Anton Aleksandrov wrote
> We plan to have data on a dedicated disk in each node and my question is about
> WAL/DB for BlueStore. How bad would it be to place it on a system consumer SSD?
> How big is the risk that everything will get "slower than using a spinning HDD
> for …
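For what it's worth, a minimal sketch of how a separate DB/WAL device is handed to ceph-volume at OSD creation time; the device names here are placeholders, not taken from the original message:

  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1

If --block.wal is not given separately, the WAL simply lives on the DB device, so one SSD partition per OSD is enough to try this out.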
On Sun, Nov 25, 2018 at 07:43:30AM +0700, Lazuardi Nasution wrote:
> Hi Robin,
>
> Do you mean that the Cumulus quagga fork is FRRouting (https://frrouting.org/)?
> As far as I know, Cumulus is using it now.
I started this before Cumulus was fully shipping FRRouting, and used
their binaries.
Earlier versions …
As it’s old consumer hardware, I am guessing you’ll only be using 1Gbps for
the network.
If so, that will definitely be your bottleneck across the whole environment,
with both client and replication data sharing a single 1Gbps link.
Your SSDs will sit mostly idle; if you have 10Gbps then it’s a different story …
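As a rough back-of-the-envelope sketch (assuming size = 3 and everything sharing one link): 1Gbps is roughly 125 MB/s, and each client write also has to be forwarded to two replicas over the same link, so sustained client write throughput lands somewhere around 125 / 3 ≈ 40 MB/s per node before any other overhead, far below what even a single SATA SSD can deliver.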
Hi Robin,
Do you mean that the Cumulus quagga fork is FRRouting (https://frrouting.org/)?
As far as I know, Cumulus is using it now. What dummy interfaces do you mean?
Why did you use them instead of a loopback address? Anyway, how can you isolate
certain kinds of traffic so that they are not routable? On L2 implem …
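In case it helps make the dummy-interface idea concrete, a minimal iproute2 sketch (the interface name and address are just examples, not from Robin's setup):

  # create a dummy interface and put the node's /32 address on it
  ip link add dummy0 type dummy
  ip link set dummy0 up
  ip addr add 192.0.2.10/32 dev dummy0

  # the loopback equivalent would simply be:
  # ip addr add 192.0.2.10/32 dev lo

Functionally both give you a stable /32 to announce; a dummy interface just keeps that address separate from everything else configured on lo.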
Hello community,
We are building a Ceph cluster on pretty old (but free) hardware. We will
have 12 nodes with 1 OSD per node and will migrate data from a single RAID5
setup, so our traffic is not very intense; we basically need more space
and the possibility to expand it.
We plan to have data on a dedicated …
On 23/11/18 18:00, ST Wong (ITSC) wrote:
Hi all,
We've 8 OSD hosts, 4 in room 1 and 4 in room 2.
A pool with size = 3 using the following CRUSH map is created, to cater
for room failure.
rule multiroom {
        id 0
        type replicated
        min_size 2
        max_size 4
        step …
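For reference only, and purely a guess at the truncated part rather than the poster's actual rule, a two-room replicated rule typically finishes along these lines:

rule multiroom {
        id 0
        type replicated
        min_size 2
        max_size 4
        step take default
        step choose firstn 2 type room
        step chooseleaf firstn 2 type host
        step emit
}

With size = 3 this picks two rooms and up to two hosts in each, so the pool ends up with two copies in one room and one in the other, which survives the loss of either room.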