I'd like to know if it can scale beyond 20K IOPS with more nodes, but clearly the CPU is the limit.
----- Original Message -----
From: "Christian Balzer"
To: ceph-users@lists.ceph.com
Sent: Thursday, 25 September 2014 06:50:31
Subject: Re: [ceph-users] [Single OSD performance on SSD]
"Sebastien Han"
> À: "Jian Zhang"
> Cc: "Alexandre DERUMIER" , ceph-users@lists.ceph.com
> Envoyé: Mardi 23 Septembre 2014 17:41:38
> Objet: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K
> IOPS
>
> What about writes
>
>
results with 6 osd : bw=118129KB/s, iops=29532
----- Original Message -----
From: "Jian Zhang"
To: "Alexandre DERUMIER"
Cc: ceph-users@lists.ceph.com
Sent: Friday, 19 September 2014 10:21:38
Subject: RE: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

Thanks for this great...
----- Original Message -----
From: "Alexandre DERUMIER"
To: "Jian Zhang"
Cc: ceph-users@lists.ceph.com
Sent: Thursday, 18 September 2014 15:36:48
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

>> Has anyone ever tested multi-volume performance on a *FULL* SSD setup?
I know that Stefan Priebe runs full...
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark Nelson
Sent: Thursday, September 18, 2014 11:06 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

Couple of questions: Are...
and"
Cc: ceph-users@lists.ceph.com
Envoyé: Vendredi 12 Septembre 2014 08:15:08
Objet: Re: [ceph-users] [Single OSD performance on SSD] Can't go over
3, 2K IOPS
results of fio on rbd with kernel patch
fio rbd crucial m550 1 osd 0.85 (osd_enable_op_tracker true or false, same
result):
-
...Firefly or Giant?
I'll do benchmarks with 6 OSDs (DC S3500) tomorrow to compare Firefly and Giant.
Hi,
Thanks for keeping us updated on this subject.
dsync is definitely killing the ssd.
I don't have much to add; I'm just surprised that you're only getting 5299 with
0.85, since I...
...my SSD model for my production cluster (target 2015); I'll have a look at these Optimus drives.
On 17/09/14 08:39, Alexandre DERUMIER wrote:
Hi,
I'm just surprised that you're only getting 5299 with 0.85, since I've been able
to get 6,4K; well, I was using the 200GB model.
Your model is the DC S3700, mine is the DC S3500, with lower write performance,
so that could explain the difference.
Interesting - I...
> ...KB/s, iops=5299
>
> Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
> sdb 0,00 1563,00 0,00 9880,00 0,00 75223,50 15,23 2,09 0,21 0,00 0,21 0,07 80,00
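The per-device numbers above come from iostat; a minimal way to capture the same view while a bench runs (the device name sdb is an assumption for whichever SSD backs the OSD):

# extended device statistics, refreshed every second during the fio run
iostat -x -d sdb 1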
----- Original Message -----
Sent: Friday, 12 September 2014 08:15:08
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

Results of fio on rbd with the kernel patch.
fio rbd, crucial m550, 1 osd, 0.85 (osd_enable_op_tracker true or false, same result):
---
bw=12327KB/s, iops=3081
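For anyone repeating the op-tracker comparison, the option can be flipped at runtime; a minimal sketch, assuming osd.0 stands in for whichever OSD is being tested:

# disable the op tracker on a running OSD (revert with =true)
ceph tell osd.0 injectargs '--osd_enable_op_tracker=false'

# or set it persistently in ceph.conf under [osd]:
#   osd enable op tracker = false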
UMIER"
À: "Cedric Lemarchand"
Cc: ceph-users@lists.ceph.com
Envoyé: Vendredi 12 Septembre 2014 07:58:05
Objet: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K
IOPS
>>For crucial, I'll try to apply the patch from stefan priebe, to ign
neck seem to be in ceph.
I will send s3500 result today
- Mail original -
De: "Cedric Lemarchand"
À: ceph-users@lists.ceph.com
Envoyé: Jeudi 11 Septembre 2014 21:23:23
Objet: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K
IOPS
Le 11/09/2014 19:3
/2013-November/035707.html
----- Original Message -----
From: "Cedric Lemarchand"
To: ceph-users@lists.ceph.com
Sent: Thursday, 11 September 2014 21:23:23
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

On 11/09/2014 19:33, Cedric Lemarchand wrote:
>> ... 11,03 0,19 99,70
>>
>> (I don't understand what exactly %util is; it is 100% in both cases, even
>> though it is 10x slower with Ceph.)
> It would be interesting if you could catch the size of the writes hitting the
> SSD during the librbd bench (I know nmon can do that).
> # dd if=rand.file of=/dev/sdb bs=4k count=65536 oflag=direct
> 65536+0 records in
> 65536+0 records out
> 268435456 bytes (268 MB) copied, 2,77433 s, 96,8 MB/s
>
>
> # dd if=rand.file of=/dev/sdb bs=4k count=65536 oflag=dsync,direct
> ^C17228+0 records in
> 17228+0 records out...
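For reference, the test above boils down to comparing plain O_DIRECT writes with O_DSYNC+O_DIRECT writes, which is roughly what a Ceph journal does per write; a minimal sketch (the target device and the random source file are placeholders, and writing to a raw device destroys its contents):

# build a 256 MB file of random data to use as the write source
dd if=/dev/urandom of=rand.file bs=4k count=65536

# raw 4k writes, no sync per write
dd if=rand.file of=/dev/sdb bs=4k count=65536 oflag=direct

# same writes, but each one flushed like a journal write
dd if=rand.file of=/dev/sdb bs=4k count=65536 oflag=dsync,direct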
I'll do tests with the Intel S3500 tomorrow to compare.
----- Original Message -----
From: "Sebastien Han"
To: "Warren Wang"
Cc: ceph-users@lists.ceph.com
Sent: Monday, 8 September 2014 22:58:25
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS
> ...the test parameters you gave below, but they're worth keeping in mind.
>
> --
> Warren Wang
> Comcast Cloud (OpenStack)
>
>
> From: Cedric Lemarchand
> Date: Wednesday, September 3, 2014 at 5:14 PM
> To: "ceph-users@lists.ceph.com"
> Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS
On 03/09/2014 22:11, Sebastien Han wrote:
Hi Warren,
What...
>> ...looks pretty good for your test. Something to consider, though.
>>
>> Warren
----- Original Message -----
From: "Sebastien Han"
To: "Alexandre DERUMIER"
Cc: ceph-users@lists.ceph.com, "Cédric Lemarchand"
Sent: Tuesday, 2 September 2014 15:25:05
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

Well, the last time I ran two processes in parallel I got half of the total
amount available, so...
On 02/09/14 19:38, Alexandre DERUMIER wrote:
Hi Sebastien,
I got 6340 IOPS on a single OSD SSD (journal and data on the same partition).
Wouldn't it be better to have two partitions, one for the journal and one for data?
(I'm thinking about filesystem write syncs.)

Oddly enough, it does not seem to...
>>> ...setup:
>>>
>>> ---
>>>
>>> 40-CPU-core server as a cluster node (single-node cluster) with 64 GB RAM.
>>> 8 SSDs -> 8 OSDs. One similar node for the monitor and rgw. Another node for
>>> the client running fio/vdbench. 4 rbds are configured...
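A run like that needs one image per client; a minimal sketch of preparing them (the pool name, image names and 20 GB size are assumptions, not values from this thread):

# create four test images for the four parallel fio/vdbench clients
for i in 1 2 3 4; do
    rbd create --pool test --size 20480 fio-img-$i
done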
an"
À: "Somnath Roy"
Cc: ceph-users@lists.ceph.com
Envoyé: Mardi 2 Septembre 2014 02:19:16
Objet: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K
IOPS
Mark and all, Ceph IOPS performance has definitely improved with Giant.
Wi
> Somnath,
> on the small workload performance, 107K is higher than the theoretical IOPS
> of the 520, any idea why?
>
> >> Single client is ~14K iops
As I said, the 107K is with the IOs being served from memory, not hitting the disk.

From: Jian Zhang [mailto:amberzhan...@gmail.com]
Sent: Sunday, August 31, 2014 8:54 PM
To: Somnath Roy
Cc: Haomai Wang; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS
On 01/09/14 17:10, Alexandre DERUMIER wrote:
Allegedly this model of ssd (128G m550) can do 75K 4k random write IOPS
(running fio on the filesystem I've seen 70K IOPS, so that is reasonably
believable). So anyway we are not getting anywhere near the max IOPS
from our devices.
Hi,
Just check this:
http://...
> ...high-end DC SSDs (SLC) can provide consistent results around 40K-50K.

----- Original Message -----
From: "Mark Kirkwood"
To: "Sebastien Han", "ceph-users"
Sent: Monday, 1 September 2014 02:36:45
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS
On 01/09/14 12:36, Mark Kirkwood wrote:
Allegedly this model of ssd (128G m550) can do 75K 4k random write IOPS
(running fio on the filesystem I've seen 70K IOPS, so that is reasonably
believable). So anyway we are not getting anywhere near the max IOPS
from our devices.
We use the Intel S3700 for production...
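The 70K-IOPS filesystem figure quoted above is the kind of baseline worth collecting before blaming Ceph; a minimal sketch of such a raw-drive check (the mount point, file size and queue depth are assumptions):

# 4k random writes against a file on the SSD's filesystem, bypassing the page cache
fio --name=ssd-baseline --directory=/mnt/ssd --size=4G \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --iodepth=32 --runtime=60 --time_based --group_reporting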
>> Results from Firefly:
>>
>> Aggregated output with 4 rbd clients stressing the cluster in parallel is
>> *~20-25K IOPS*, with ~8-10 cpu cores used (maybe less, I can't remember precisely)...
>
> Aggregated output with 4 rbd clients stressing the cluster in parallel is *~120K
> IOPS*, with the cpu 7% idle, i.e. ~37-38 cpu cores used.
>
> Hope this helps.
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: Haomai...
On 29/08/14 22:17, Sebastien Han wrote:
@Mark thanks for trying this :)
Unfortunately, using nobarrier and another dedicated SSD for the journal (plus
your ceph settings) didn't bring much; now I can reach 3,5K IOPS.
By any chance, would it be possible for you to test with a single OSD SSD?

Funny...
Hi Somnath,
we're in the process of evaluating SanDisk SSDs for Ceph (filesystem and journal on each):
8 OSDs / SSDs per host, Xeon E3 1650.
Which one can you recommend?
Greets,
Stefan
Excuse my typos, sent from my mobile phone.

> On 29.08.2014 at 18:33, Somnath Roy wrote:
>
> Somnath...
Hi Mark,
Yeah. The application defines portals which are actively threaded; the transport
layer then services the portals with EPOLL.
Matt

- "Mark Nelson" wrote:
Excellent, I've been meaning to check into how the TCP transport is
going. Are you using a hybrid threadpool/epoll approach? That I
suspect would be very effective at reducing context switching,
especially compared to what we do now.
Mark
Hi,
@Dan: thanks for sharing your config; with all your flags I don't seem to get
more than 3,4K IOPS, and they even seem to slow me down :( This is really weird.
Yes, I already tried running two simultaneous processes, and each of them only
got half of the 3,4K.
@Kasper: thanks for these results, I believe...
On 08/29/2014 06:10 AM, Dan Van Der Ster wrote:
> Hi Sebastien,
> Here's my recipe for max IOPS on a _testing_ instance with SSDs:
>
> osd op threads = 2

With SSDs, in the past I've seen that increasing the osd op thread count can
help random reads.

> osd disk threads = 2
> journal max write bytes...
Hi Sébastien,
On Thu, Aug 28, 2014 at 06:11:37PM +0200, Sebastien Han wrote:
> Hey all,
(...)
> We have been able to reproduce this on 3 distinct platforms with some
> deviations (because of the hardware) but the behaviour is the same.
> Any thoughts will be highly appreciated, only getting 3,2k
Hi Sebastien,
Here’s my recipe for max IOPS on a _testing_ instance with SSDs:
osd op threads = 2
osd disk threads = 2
journal max write bytes = 1048576
journal queue max bytes = 1048576
journal max write entries = 1
journal queue max ops = 5
filestore op threads = 2
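Pulled together, these flags belong in the [osd] section of ceph.conf; a minimal sketch of applying them (the stock config path is assumed, and the values are Dan's testing recipe, not general recommendations):

# append the testing flags to the OSD section
cat >> /etc/ceph/ceph.conf <<'EOF'
[osd]
osd op threads = 2
osd disk threads = 2
journal max write bytes = 1048576
journal queue max bytes = 1048576
journal max write entries = 1
journal queue max ops = 5
filestore op threads = 2
EOF
# restart the OSDs afterwards so the journal/filestore settings take effect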
Thanks a lot for the answers, even if we drifted from the main subject a little
bit.
Thanks Somnath for sharing this; when can we expect any code that might
improve _write_ performance?
On Fri, Aug 29, 2014 at 10:37 AM, Somnath Roy wrote:
> Thanks Haomai !
>
> Here is some of the data from my setup.
>
>
>
> ---
To: Somnath Roy
Cc: Andrey Korolyov; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

Hi Roy,
I have already scanned your merged code for "fdcache" and "optimizing for
lfn_find/lfn_open"; could you...
Hi,
There's also an early-stage TCP transport implementation for Accelio, also
EPOLL-based. (We haven't attempted to run Ceph protocols over it yet, to my
knowledge, but it should be straightforward.)
Regards,
Matt
- "Haomai Wang" wrote:
> Hi Roy,
>
>
> As for messenger level, I have
On 29/08/14 14:06, Mark Kirkwood wrote:
... mounting (xfs) with nobarrier seems to get
much better results. The run below is for a single osd on an xfs
partition from an Intel 520. I'm using another 520 as a journal:
...and adding
filestore_queue_max_ops = 2
improved IOPS a bit more.
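For anyone reproducing the nobarrier run, a minimal sketch (the mount point is an assumption, and nobarrier is only sane on drives with power-loss protection):

# remount an existing XFS-backed OSD data directory without write barriers
mount -o remount,nobarrier /var/lib/ceph/osd/ceph-0

# or, if the OSDs are mounted by ceph-disk/init scripts, set it in ceph.conf:
#   [osd]
#   osd mount options xfs = rw,noatime,inode64,nobarrier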
On 29/08/14 04:11, Sebastien Han wrote:
Hey all,
See my fio template:
[global]
#logging
#write_iops_log=write_iops_log
#write_bw_log=write_bw_log
#write_lat_log=write_lat_log
time_based
runtime=60
ioengine=rbd
clientname=admin
pool=test
rbdname=fio
invalidate=0    # mandatory
#rw=randwrite
...
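The template is cut off above; a minimal self-contained version of a 4k random-write job against an RBD image, following the options that are visible (the rw, bs and iodepth values are assumptions, not Sebastien's exact settings):

cat > rbd-randwrite.fio <<'EOF'
[global]
ioengine=rbd
clientname=admin
pool=test
rbdname=fio
invalidate=0    # mandatory (per the template above)
time_based
runtime=60
rw=randwrite
bs=4k

[rbd_iodepth32]
iodepth=32
EOF

fio rbd-randwrite.fio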
On Thu, Aug 28, 2014 at 10:48 PM, Somnath Roy wrote:
> Nope, this will not be back ported to Firefly I guess.
>
> Thanks & Regards
> Somnath
>
Thanks for sharing this; the first thing that came to mind when I looked at
this thread was your patches :)
If Giant incorporates them, both the RDMA support...
>> ...of optimizing that. For now, the optracker enable/disable code has been
>> introduced. Also, several bottlenecks at the filestore level have been removed.
>> Unfortunately, we are yet to optimize the write path. All of these
>> should help the write path as well, but write path improvements...
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

On 08/28/2014 12:39 PM, Somnath Roy wrote:
> Hi Sebastian,
> If you are trying with the latest Ceph master, there are some changes we made
> that will be in...
Thanks & Regards
Somnath

-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Sebastien Han
Sent: Thursday, August 28, 2014 9:12 AM
To: ceph-users
Cc: Mark Nelson
Subject: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS
Hey all,
It has been a while since the last performance-related thread on the ML :p
I've been running some experiments to see how much I can get from an SSD on a
Ceph cluster.
To achieve that I did something pretty simple:
* Debian wheezy 7.6
* kernel from debian 3.14-0.bpo.2-amd64
* 1 cluster, 3...