----- Original Message -----
From: "Christian Balzer"
To: "Gregory Farnum", ceph-users@lists.ceph.com
Sent: Thursday, 8 May 2014 04:49:16
Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices
On Wed, 7 May 2014 18:37:48 -0700 Gregory Farnum wrote:
What the numbers below show is that the latency happens within the OSD processes.
> > >> Regards,
> > >>
> > >> Christian
> > >>
> > >>> When I suggested other tests, I meant with and without Ceph. One
> > >>> particular one is OSD bench. That should be interesting to try at a
> > >>> variety of block sizes. You could also try running RADOS bench and
> > >>> smalliobench at a few different sizes.
> > >>> -Greg
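(For reference, those benchmarks are typically invoked along the following
lines; the OSD id, the pool name "rbd", and the sizes are illustrative
assumptions, not values from this thread:

  # per-OSD write throughput, bypassing the client path:
  # write 1 GB in 4 KB chunks on osd.0
  ceph tell osd.0 bench 1073741824 4096

  # cluster-level 4 KB writes for 60 seconds with 16 concurrent ops
  rados bench -p rbd 60 write -b 4096 -t 16

smalliobench, from the ceph-test tools, exercises a similar small-I/O
pattern.)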
>>>>>>> [a preceding row is truncated; only its %util, 29.58, survives]
>>>>>>> Device: rrqm/s wrqm/s    r/s     w/s   rkB/s    wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
>>>>>>> sda       0.00  51.50   0.00 1633.50    0.00  7460.00   [rest of row lost]
>>>>>>> sd?     [lost] [lost]   0.00 2468.50    0.00 13419.00    10.87     0.24  0.10    0.00    0.10  0.09 22.00
>>>>>>> sdd       0.00   6.50   0.00 1913.00    0.00 10313.00    10.78     0.20  0.10    0.00    0.10  0.09 16.60
>>>>> Note the nearly complete absence of iowait.
>>>>>
>>>>> sda and sdb are the OSD RAIDs, sdc and sdd are the journal SSDs.
>>>>> Look at these numbers: the lack of queues, the low wait and service
>>>>> times (this is in ms), plus the overall utilization.
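(The columns above are iostat's extended statistics, i.e. the output of
something like "iostat -x 1"; the exact invocation is not preserved in the
excerpt.)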
On Thu, May 8, 2014 at 9:37 AM, Gregory Farnum wrote:
>
> Hmm, with 128 IOs at a time (I believe I'm reading that correctly?)
> that's about 40ms of latency per op (for userspace RBD), which seems
> awfully long.
Maybe this is off topic, but AFAIK "--iodepth=128" doesn't submit 128
IOs at a time.
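(To reconstruct Greg's arithmetic: for a steadily loaded queue, Little's law
relates latency, queue depth, and throughput as

  mean latency ~= in-flight ops / IOPS
  40 ms        ~= 128 / ~3200 IOPS

The ~3200 IOPS figure is inferred from the 40 ms he quotes; the fio output
itself is not preserved in these excerpts.)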
>>> On Wednesday, May 7, 2014, Alexandre DERUMIER wrote:
>>>
>>>> Hi Christian,
>>>>
>>>> Have you tried without RAID6, to get more OSDs?
>>>> (How many disks do you have behind the RAID6?)
>>>>
>>>> Also, I know that direct I/O can be quite slow with Ceph,
>>>> so maybe you can try without --direct=1
>>>> and also enable rbd_cache in ceph.conf:
>>>>
>>>> [client]
>>>> rbd cache = true
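(A sketch of that re-test, reusing the fio job quoted at the end of this
thread with --direct=1 removed; the target device or file is left out
because the excerpts do not preserve it. Note also that for librbd-backed
VMs of that era, rbd cache = true only takes effect when QEMU is configured
with cache=writeback:

  fio --size=400m --ioengine=libaio --invalidate=1 --numjobs=1 \
      --rw=randwrite --name=fiojob
)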
> (someone else reported on this mailing list, with a full
> SSD cluster, having almost the same results)
>
> ----- Original Message -----
>
> From: "Alexandre DERUMIER"
> To: "Christian Balzer"
> Cc: ceph-users@lists.ceph.com
> Sent: Tuesday, 13 May 2014 17:16:25
> Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices
>> Actually, check your random read output again; you gave it the wrong
>> parameter, it needs to be randread.
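(Presumably the corrected run is the same job with --rw=randread, e.g.:

  fio --size=400m --ioengine=libaio --invalidate=1 --direct=1 --numjobs=1 \
      --rw=randread --name=fiojob

with the other parameters carried over as an assumption.)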
To: "Alexandre DERUMIER"
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, 13 May 2014 16:39:58
Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices
On Tue, 13 May 2014 16:09:28 +0200 (CEST) Alexandre DERUMIER wrote:
> >> For what it's worth, my cluster gives me [...]
> > In short: highest density (hence a replication factor of 2, made safe
> > by the underlying RAID6) and operational maintainability (it is a remote
> > data center, so replacing broken disks is a pain).
> >
> > That cluster is fast enough for my purposes, and that fio test isn't [...]
[...] could comment on the 4000 IOPS per OSD?
>
> ----- Original Message -----
>
> From: "Christian Balzer"
> To: ceph-users@lists.ceph.com
> Cc: "Alexandre DERUMIER"
> Sent: Tuesday, 13 May 2014 11:51:37
> Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices
Hello,
On Tue, 13 May 2014 11:33:27 +0200 (CEST) Alexandre DERUMIER wrote:
> Hi Christian,
>
> I'm going to test a full [...]
[...] 32+ cores.
Christian
----- Original Message -----
From: "Christian Balzer"
To: ceph-users@lists.ceph.com
Sent: Tuesday, 13 May 2014 11:03:47
Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices
I'm clearly talking to myself, but whatever.
For Greg: I've played with all the pertinent journal [...]
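(The excerpt cuts off which settings those were; for a FileStore OSD of
that era the pertinent journal options would be along these lines, with the
values below being purely illustrative:

  [osd]
  journal max write bytes = 1073741824
  journal max write entries = 10000
  journal queue max ops = 50000
  journal queue max bytes = 10485760000
)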
Hello,
On Thu, 08 May 2014 17:20:59 +0200 Udo Lembke wrote:
> Hi,
> I don't think that's related, but how full is your Ceph cluster? Perhaps
> it has something to do with fragmentation on the XFS filesystem
> (xfs_db -c frag -r device)?
>
As I wrote, this cluster will go into production next [...]
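(For anyone wanting to run that check: xfs_db opens the filesystem
read-only with -r and reports a fragmentation factor; the device path and
the numbers here are illustrative:

  xfs_db -c frag -r /dev/sdb1
  actual 176757, ideal 171673, fragmentation factor 2.88%
)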
Hi again,
sorry, too fast - but this can't be a problem given your 4GB cache...
Udo
On 08.05.2014 17:20, Udo Lembke wrote:
> [...]
[...] get busy (CPU, not IOWAIT), but there are still 1-2 cores idle at
that point.
[...] you have)
----- Original Message -----
From: "Christian Balzer"
To: ceph-users@lists.ceph.com
Cc: "Alexandre DERUMIER"
Sent: Thursday, 8 May 2014 08:52:15
Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices
On Thu, 08 May 2014 08:41:54 +0200 [...]
Stupid question: is your Areca 4GB cache shared between the SSD journals and
the OSDs, or only used by the OSDs?
----- Original Message -----
From: "Christian Balzer"
To: ceph-users@lists.ceph.com
Sent: Thursday, 8 May 2014 08:26:33
Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices
On Wed, May 7, 2014 at 5:57 PM, Christian Balzer wrote:
Hello,

ceph 0.72 on Debian Jessie, 2 storage nodes with 2 OSDs each. The journals
are on (separate) DC 3700s; the actual OSDs are RAID6 arrays behind an Areca
1882 with 4GB of cache.

Running this fio:

fio --size=400m --ioengine=libaio --invalidate=1 --direct=1 --numjobs=1
--rw=randwrite --name=fiojob
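(The archive truncates the command here. Given the --iodepth=128 discussion
earlier in the thread, the full job was presumably along the lines of

  fio --size=400m --ioengine=libaio --invalidate=1 --direct=1 --numjobs=1 \
      --rw=randwrite --name=fiojob --bs=4k --iodepth=128

where --bs=4k and --iodepth=128 are inferred rather than quoted.)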