Hi Felix,

Better use fio.

Like: fio -ioengine=rbd -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=128
-rw=randwrite -pool=rpool_hdd -runtime=60 -rbdname=testimg (for peak parallel
random write iops).
Or the same with -iodepth=1 for the latency test.
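To make that concrete, here is one possible test sequence. This is only a
sketch: the pool name (rpool_hdd), image name (testimg) and image size are
placeholders taken from the example above, and your fio build needs rbd
support.

  # create a throwaway test image
  rbd create testimg --pool rpool_hdd --size 10G

  # peak parallel random write iops
  fio -ioengine=rbd -direct=1 -invalidate=1 -name=peak -bs=4k -iodepth=128 \
      -rw=randwrite -pool=rpool_hdd -runtime=60 -rbdname=testimg

  # single queue depth, to look at per-operation write latency
  fio -ioengine=rbd -direct=1 -invalidate=1 -name=lat -bs=4k -iodepth=1 \
      -rw=randwrite -pool=rpool_hdd -runtime=60 -rbdname=testimg

  # remove the test image afterwards
  rbd rm rpool_hdd/testimg

For the iodepth=1 run, look at the completion latency (clat) percentiles in
the fio output rather than at the iops number alone.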

> Did you do tests before and after this change, and do you know what the
> difference in iops is? And is the advantage more or less when your SATA
> HDDs are slower?
>
> -----Original Message-----
> From: Stolte, Felix [mailto:f.sto...@fz-juelich.de]
> Sent: Thursday, 6 June 2019 10:47
> To: ceph-users
> Subject: [ceph-users] Expected IO in luminous Ceph Cluster
>
> Hello folks,
>
> we are running a ceph cluster on Luminous consisting of 21 OSD nodes with
> 9 8TB SATA drives and 3 Intel 3700 SSDs for Bluestore WAL and DB (1:3
> ratio). OSDs have 10Gb for Public and Cluster Network. The cluster has
> been running stable for over a year. We didn't have a closer look at IO un
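By the way, a quick sanity check for a layout like this (just a suggestion,
not something from the original mail): the OSD metadata records whether
BlueFS sees the DB device as rotational, so you can confirm the WAL/DB
really ended up on the SSDs. The exact field names below are from memory,
so check them against your own output:

  # per OSD: "0" = non-rotational (SSD), "1" = rotational (HDD)
  ceph osd metadata 0 | grep -E 'bluefs_db_rotational|bluestore_bdev_rotational'

  # or count across all OSDs at once
  ceph osd metadata | grep -c '"bluefs_db_rotational": "0"'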