Hi all,
just an update, but an important one, to the previous benchmark, with 2
new "10 DWPD class" contenders:
- Seagate 1200 - ST200FM0053 - SAS 12Gb/s
- Intel DC S3700 - SATA 6Gb/s
The graph:
http://www.4shared.com/download/yaeJgJiFce/Perf-SSDs-Toshiba-Seagate-Inte.png?lgfp=3000
Speaking of SSD IOPS: running the same tests on my SSDs (LiteOn
ECT-480N9S 480GB SSDs):
http://imgur.com/a/fD0Mh
The lines at the bottom are a single 6TB spinning disk for comparison's sake.
Based on these numbers, there is a minimum latency per operation, but
multiple operations can be performed in parallel.
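That latency floor is easy to probe directly. Below is a minimal sketch
(Python; the /tmp/dsync-probe path is my placeholder, not something from this
thread) that times 4 KiB O_DSYNC writes the way an OSD journal issues them.
It illustrates the point, it does not replace fio:

import os
import time

PATH = "/tmp/dsync-probe"  # hypothetical scratch file; safe to overwrite
BLOCK = b"\0" * 4096       # 4 KiB, roughly one small journal write
COUNT = 1000

# O_DSYNC makes every write wait for the device, so each call pays the
# full per-operation latency, like a journal flush does.
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
try:
    start = time.perf_counter()
    for _ in range(COUNT):
        os.pwrite(fd, BLOCK, 0)  # same offset: pure latency, no seeking
    elapsed = time.perf_counter() - start
finally:
    os.close(fd)
    os.unlink(PATH)

lat_ms = elapsed / COUNT * 1000.0
print(f"avg sync write latency: {lat_ms:.3f} ms")
print(f"implied IOPS at queue depth 1: {COUNT / elapsed:.0f}")

At queue depth Q the ceiling is roughly Q / latency until the drive itself
saturates, which is exactly the shape the graphs above show.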
Hi,
in our quest to get the right SSD for OSD journals, I managed to
benchmark two kinds of "10 DWPD" SSDs:
- Toshiba M2 PX02SMF020
- Samsung 845DC PRO
I want to determine whether a disk is appropriate given its absolute
performance, and the optimal number of ceph-osd processes using the S[...]
[...] be 128/4 = 32x write amplification.
> >
> > (of course ssd algorithms and optimisations reduce this write
> > amplification).
> >
> > Now, it would be great to see if it's coming from osd journal or osd
> > data.
> >
> > (not tested, but I think with journal and O_DSYNC writes, it can give us
> > ssd write amplification)
> [...]reduce amplification.
>
> Don't know how ssd internal algorithms work for this.
>
>
> - Original Message -
> From: "aderumier"
> To: "Christian Balzer"
> Cc: "ceph-users"
> Sent: Monday, 23 March 2015 07:36:48
> Subject: Re: [ceph-users] SSD Hardware recommendation
Don't know how ssd internal algorithms work for this.
- Original Message -
From: "aderumier"
To: "Christian Balzer"
Cc: "ceph-users"
Sent: Monday, 23 March 2015 07:36:48
Subject: Re: [ceph-users] SSD Hardware recommendation
Hi,
Isn't it in the nature of ssd to ha[...]
[...] it's coming from osd journal or osd data.
(not tested, but I think with journal and O_DSYNC writes, it can give us ssd
write amplification)
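For reference, the "128/4 = 32x" figure quoted earlier is just the worst-case
ratio of the flash block size to the write size; a quick sketch of that
arithmetic (the 128 KB block and 4 KB write are the thread's example numbers,
not measurements):

# Worst-case write amplification for small sync writes, assuming a
# 128 KB internal flash block and 4 KB journal writes (the thread's
# example figures, not measured values).
flash_block_kb = 128
journal_write_kb = 4

wa = flash_block_kb / journal_write_kb
print(f"worst-case write amplification: {wa:.0f}x")  # 32x

# Real drives coalesce and remap writes in the FTL, so the observed
# factor is lower; this is only the no-optimisation upper bound.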
- Original Message -
From: "Christian Balzer"
To: "ceph-users"
Sent: Monday, 23 March 2015 03:11:39
Subject: Re: [ceph-users] SSD Hardware recommendation
On Mon, 23 Mar 2015 02:33:20 +0100 Francois Lafont wrote:
> Hi,
>
> Sorry Christian for my late answer. I was a little busy.
>
> Christian Balzer wrote:
>
> > You're asking the wrong person, as I'm neither a Ceph or kernel
> > developer. ^o^
>
> No, no, the rest of the message proves to me that I'm talking to the
> right person. ;)
Hi,
Sorry Christian for my late answer. I was a little busy.
Christian Balzer wrote:
> You're asking the wrong person, as I'm neither a Ceph or kernel
> developer. ^o^
No, no, the rest of the message proves to me that I'm talking to the
right person. ;)
> Back then Mark Nelson from the Ceph team
> On 19 Mar 2015, at 08:17, Christian Balzer wrote:
>
> On Wed, 18 Mar 2015 08:59:14 +0100 Josef Johansson wrote:
>
>> Hi,
>>
>>> On 18 Mar 2015, at 05:29, Christian Balzer wrote:
>>>
>>>
>>> Hello,
>>>
>>> On Wed, 18 Mar 2015 03:52:22 +0100 Josef Johansson wrote:
>>
> [snip]
We thought[...]
Hello,
On Wed, 18 Mar 2015 11:41:17 +0100 Francois Lafont wrote:
> Hi,
>
> Christian Balzer wrote:
>
> > Consider what you think your IO load (writes) generated by your
> > client(s) will be, multiply that by your replication factor, divide by
> > the number of OSDs, that will give you the base load per OSD.
On Wed, 18 Mar 2015 08:59:14 +0100 Josef Johansson wrote:
> Hi,
>
> > On 18 Mar 2015, at 05:29, Christian Balzer wrote:
> >
> >
> > Hello,
> >
> > On Wed, 18 Mar 2015 03:52:22 +0100 Josef Johansson wrote:
>
[snip]
> >> We thought of doing a cluster with 3 servers, and any recommendation of
> >
Hi,
Christian Balzer wrote:
> Consider what you think your IO load (writes) generated by your client(s)
> will be, multiply that by your replication factor, divide by the number of
> OSDs, that will give you the base load per OSD.
> Then multiply by 2 (journal on OSD) per OSD.
> Finally based o[...]
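Put into numbers, that rule of thumb looks like the sketch below (Python; the
client IOPS, replication factor and OSD count are made-up placeholders, not
figures from this thread):

# Per-OSD write load, following the rule of thumb quoted above.
client_write_iops = 4000  # expected aggregate client write IOPS (assumed)
replication = 3           # replication factor (assumed)
num_osds = 18             # number of OSDs in the cluster (assumed)

base_per_osd = client_write_iops * replication / num_osds
with_journal = 2 * base_per_osd  # journal on the same device doubles writes

print(f"base load per OSD:   {base_per_osd:.0f} write IOPS")  # ~667
print(f"with journal on OSD: {with_journal:.0f} write IOPS")  # ~1333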
Johansson"
À: "aderumier"
Cc: "ceph-users"
Envoyé: Mercredi 18 Mars 2015 09:04:23
Objet: Re: [ceph-users] SSD Hardware recommendation
Hi Alexandre,
I actually have been searching for this information a couple of times in the ML
now.
Was hoping that you would’ve been done with it before I ordered :)
Hi Alexandre,
I actually have been searching for this information a couple of times in the ML
now.
Was hoping that you would’ve been done with it before I ordered :)
I will most likely order this week so I will see it when the stuff is being
assembled :o
Do you feel that there is something in th[...]
Hi,
> On 18 Mar 2015, at 05:29, Christian Balzer wrote:
>
>
> Hello,
>
> On Wed, 18 Mar 2015 03:52:22 +0100 Josef Johansson wrote:
>
>> Hi,
>>
>> I’m planning a Ceph SSD cluster, I know that we won’t get the full
>> performance from the SSD in this case, but SATA won’t cut it as backend
>> storage and SAS is the same price as SSD now.
Hi Josef,
I'm going to benchmark a 3-node cluster with 6 SSDs per node (2x 10-core CPUs
at 3.1 GHz).
From my previous benchmarks, you need fast CPUs if you need a lot of IOPS, and
writes are a lot more expensive than reads.
Now if you are doing only a small number of iops (big blocks / big throughput),
you don't need too[...]
Hello,
On Wed, 18 Mar 2015 03:52:22 +0100 Josef Johansson wrote:
> Hi,
>
> I’m planning a Ceph SSD cluster, I know that we won’t get the full
> performance from the SSD in this case, but SATA won’t cut it as backend
> storage and SAS is the same price as SSD now.
>
Have you actually tested SATA[...]