The use case is for KVM RBD volumes.
Our environment will be about 80% random reads/writes; a 40/60 or 30/70 mix is
probably a good estimate, all at 4k-8k IO sizes. We currently run on a Nimble
Hybrid array, which sits in the 5k-15k IOPS range with spikes up to 20-25k IOPS
(capable of 100k IOPS per Nimble array).
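For anyone trying to approximate that workload on test hardware, here is a small sketch (not from the original thread) that assembles an fio command for a random read/write mix at 4k-8k block sizes. It assumes the 30/70 figure means 30% reads, and the device path, queue depth and runtime are placeholder assumptions.

#!/usr/bin/env python3
# Minimal sketch: build an fio command approximating the workload described
# above (random IO, ~30/70 read/write mix, 4k-8k block sizes).
# The device path, iodepth and runtime are assumptions, not values from the thread.
import shlex

fio_cmd = [
    "fio",
    "--name=rbd-sim",
    "--filename=/dev/rbd0",   # assumed test RBD device
    "--ioengine=libaio",
    "--direct=1",
    "--rw=randrw",
    "--rwmixread=30",         # ~30% reads / 70% writes (assumed interpretation)
    "--bsrange=4k-8k",        # 4k-8k IO sizes
    "--iodepth=32",
    "--numjobs=4",
    "--time_based",
    "--runtime=300",
    "--group_reporting",
]

# Print the command so it can be copy-pasted; run it in a shell (or via
# subprocess.run(fio_cmd)) once the target device is correct.
print(" ".join(shlex.quote(arg) for arg in fio_cmd))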
The RocksDB levels are 256MB, 2.5GB, 25GB, and 250GB. Unless you have a
workload that uses a lot of metadata, covering the first three levels and
providing room for compaction should be fine. To allow for compaction headroom,
60GB should be sufficient. Add 4GB to accommodate the WAL and you're at a
comfortable 64GB.
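As a quick sanity check on that arithmetic, here is a small sketch (mine, not part of the original post) that sums the first three RocksDB levels and adds the compaction headroom and WAL allowance mentioned above.

# Sketch of the block.db sizing arithmetic described above. The level sizes
# (256MB, 2.5GB, 25GB) and the 60GB / 4GB figures come from the post; the
# rounding is my own.
levels_gb = [0.256, 2.5, 25]      # first three RocksDB levels
levels_total = sum(levels_gb)     # ~27.8 GB of live DB data
with_compaction_gb = 60           # headroom for compaction, per the post
wal_gb = 4                        # WAL allowance

print(f"First three levels:       {levels_total:.1f} GB")
print(f"With compaction headroom: {with_compaction_gb} GB")
print(f"Suggested DB + WAL size:  {with_compaction_gb + wal_gb} GB")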
vitalif@yourcmc.ru wrote:
> I think 800 GB NVMe per 2 SSDs is overkill. 1 OSD usually only
> requires 30 GB of block.db, so 400 GB per OSD is a lot. On the other
> hand, does the 7300 have twice the IOPS of the 5300? In fact, I'm not
> sure a 7300 + 5300 OSD will perform better than just a 5300 OSD at all.
January 31, 2020 8:49:29 AM
Subject: Re: [ceph-users] Re: Micron SSD/Basic Config
On Fri, Jan 31, 2020 at 2:06 PM EDH - Manuel Rios
wrote:
>
> Hmm, change 40Gbps to 100Gbps networking.
>
> 40Gbps technology is just a bond of 4x10G links, with some latency due to
> link aggregation.
Hello Adam,
Can you describe what performance you want to get out of your
cluster?
What's the use case?
EC or replica?
In general, more disks are preferred over bigger ones.
As Micron has not provided us with demo hardware, we can't say how fast
these disks are in reality. Before I think ...
I think 800 GB NVMe per 2 SSDs is overkill. 1 OSD usually only
requires 30 GB of block.db, so 400 GB per OSD is a lot. On the other
hand, does the 7300 have twice the IOPS of the 5300? In fact, I'm not
sure a 7300 + 5300 OSD will perform better than just a 5300 OSD at all.
It would be interesting ...
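To make that point concrete, here is a tiny sketch (my own, using the figures quoted in this thread) comparing the NVMe capacity available per OSD in an 800GB-NVMe-per-2-SSDs layout against the block.db sizes discussed above.

# Sketch: NVMe space available per OSD vs. the block.db sizes discussed in
# this thread (800 GB NVMe shared by 2 OSDs, ~30 GB typical block.db,
# ~64 GB with compaction headroom + WAL).
nvme_capacity_gb = 800
osds_per_nvme = 2
typical_db_gb = 30
generous_db_gb = 64   # 60 GB compaction headroom + 4 GB WAL, from the earlier post

per_osd_gb = nvme_capacity_gb / osds_per_nvme
print(f"NVMe space per OSD:           {per_osd_gb:.0f} GB")
print(f"Typical block.db need:        {typical_db_gb} GB")
print(f"Left over even with 64 GB DB: {per_osd_gb - generous_db_gb:.0f} GB")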
Please check whether your hardware supports RDMA, to improve access.
A 40Gbps transceiver is internally 4x10G ports. That's why you can split a
40Gbps switch port into 4x10G ports over the same link.
25Gbps is a newer base technology with latency improvements over 10Gbps.
Regards
Manuel
From: Adam Boyhan
Appreciate the input.
Looking at those articles, it feels like the 40G they are talking about is
4x bonded 10G connections.
I'm looking at 40Gbps without bonding, for throughput. Is that still the same?
https://www.fs.com/products/29126.html
Hmm, change 40Gbps to 100Gbps networking.
40Gbps technology is just a bond of 4x10G links, with some latency due to link
aggregation.
100Gbps and 25Gbps have less latency and good performance. In Ceph, about 50%
of the latency comes from network commits and the other 50% from disk commits.
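Taking that 50/50 split at face value, the sketch below (my own; the microsecond figures are assumed examples, not measurements from this thread) shows why reducing only the network half of the budget has a limited effect on total write latency.

# Rough latency-budget sketch for a single replicated write, based on the
# "~50% network / ~50% disk commit" split claimed above. The microsecond
# values are assumed examples, not measurements.
network_us = 100   # assumed network round trips (client -> primary -> replicas)
disk_us = 100      # assumed flash commit (DB/WAL fsync) time

total_us = network_us + disk_us
faster_net_total_us = network_us / 2 + disk_us   # e.g. lower-latency 25/100G NICs

print(f"Baseline write latency:      ~{total_us} us")
print(f"With network latency halved: ~{faster_net_total_us:.0f} us "
      f"({faster_net_total_us / total_us:.0%} of baseline)")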