he whole system.
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Bailey Allison
Sent: Thursday, January 18, 2024 12:36 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Performance impact of Heterogeneous environment
+1 to this, gre

Sent: Wednesday, January 17, 2024 4:59 PM
To: Mark Nelson; ceph-users@ceph.io
Subject: [ceph-users] Re: Performance impact of Heterogeneous environment
Very informative article you did, Mark.

IMHO, if you find yourself with a very high per-OSD core count, it may be
logical to just pack/add more NVMes per host; you'd be getting the best
price per performance and capacity.
/Maged
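
A quick way to see where a given host sits on that per-OSD core ratio (a rough
sketch; the host bucket name below is a placeholder and assumes the CRUSH host
bucket matches the hostname):

    # logical cores available on this host
    nproc
    # number of OSDs placed under this host's CRUSH bucket
    ceph osd ls-tree ceph-node-01 | wc -l

Dividing the first number by the second gives the cores-per-OSD figure being
discussed; if that ratio is far above what the OSDs can actually make use of,
adding more NVMes (and OSDs) to the host is the way to soak up the spare CPU
that Maged is describing.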
On 17/01/2024 22:00, Mark Nelson wrote:
It's a little tricky. In the upstream lab we don't strictly see an IOPS
or average latency advantage with heavy parallelism by running multiple
OSDs per NVMe drive until per-OSD core counts get very high. There does
seem to be a fairly consistent tail latency advantage even at moderately
low core counts.
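
For anyone who wants to try the multi-OSD-per-NVMe layout being discussed, the
usual mechanism (a sketch, assuming a ceph-volume based deployment; the device
path is just an example) is the batch subcommand:

    # carve two OSDs out of a single NVMe device
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1

cephadm-managed clusters expose the same knob as osds_per_device in the OSD
service spec. Whether it buys anything is exactly the trade-off Mark describes
above.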
Conventional wisdom is that with recent Ceph releases there is no longer a
clear advantage to this.
> On Jan 17, 2024, at 11:56, Peter Sabaini wrote:
>
> One thing that I've heard people do but haven't done personally with fast
> NVMes (not familiar with the IronWolf so not sure if they qualif
On 17.01.24 11:13, Tino Todino wrote:
> Hi folks.
>
> I had a quick search but found nothing concrete on this so thought I would
> ask.
>
> We currently have a 4 host CEPH cluster with an NVMe pool (1 OSD per host)
> and an HDD Pool (1 OSD per host). Both OSDs use a separate NVMe for DB/WAL.
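
For readers unfamiliar with that layout, an HDD OSD with its RocksDB/WAL on a
shared NVMe is typically created along these lines (device paths are
placeholders, not Tino's actual devices):

    # HDD as the data device, DB/WAL on a partition of the shared NVMe
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1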