Please never use datasheet values to select your SSD. We have never had a
single one that delivers the advertised performance in a Ceph journal use
case.

However, do not use Filestore anymore, especially with newer kernel
versions. Use Bluestore instead.
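
For reference, a minimal sketch for creating a Bluestore OSD with
ceph-volume (device paths are placeholders, adjust to your hardware):

  # data on the HDD, RocksDB/WAL on a partition of the fast device
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1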

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx

On Wed, 14 Nov 2018, 05:46 <dave.c...@dell.com> wrote:

> Thanks Merrick!
>
> I checked the Intel spec [1]; the performance Intel claims is:
>
> - Sequential Read (up to): 500 MB/s
> - Sequential Write (up to): 330 MB/s
> - Random Read (100% Span): 72,000 IOPS
> - Random Write (100% Span): 20,000 IOPS
>
> I think these indicators should be much better than a typical HDD, and
> since I have run read and write commands with “rados bench” respectively,
> there should be some difference.
>
> And is there any kind of configuration that could give us a performance
> gain with this SSD (Intel S4500)?
>
> [1]
> https://ark.intel.com/products/120521/Intel-SSD-DC-S4500-Series-480GB-2-5in-SATA-6Gb-s-3D1-TLC-
>
> Best Regards,
>
> Dave Chen
>
> From: Ashley Merrick <singap...@amerrick.co.uk>
> Sent: Wednesday, November 14, 2018 12:30 PM
> To: Chen2, Dave
> Cc: ceph-users
> Subject: Re: [ceph-users] Benchmark performance when using SSD as the
> journal
>
> Only certain SSDs are good for Ceph journals, as can be seen at
> https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
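>
> The test from that post boils down to a single-job, O_DSYNC 4k write; a
> sketch (destructive to data on the device, /dev/sdX is a placeholder for
> the journal SSD):
>
>   fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
>       --numjobs=1 --iodepth=1 --runtime=60 --time_based \
>       --group_reporting --name=journal-test
>
> Datasheet numbers are typically measured at high queue depth without sync
> flushes, which is why they rarely predict journal performance.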
>
> The SSD you're using isn't listed, but from a quick search online it
> appears to be an SSD designed for read workloads as an "upgrade" from an
> HDD, so it is probably not designed for the high write requirements a
> journal demands.
>
> Therefore, when it's being hit by the workload of 3 OSDs, you're not going
> to get much more performance out of it than you would just using the disk,
> as you're seeing.
>
> On Wed, Nov 14, 2018 at 12:21 PM <dave.c...@dell.com> wrote:
>
> Hi all,
>
> We want to compare the performance of an HDD partition as the journal
> (inline on the OSD disk) against an SSD partition as the journal. Here is
> what we have done: we have 3 nodes used as Ceph OSD hosts, each with 3
> OSDs on it. First, we created the OSDs with the journal on a partition of
> the OSD disk and ran the “rados bench” utility to test the performance;
> then we migrated the journal from HDD to SSD (Intel S4500) and ran “rados
> bench” again. The expected result is that the SSD partition should be much
> better than the HDD, but the results show nearly no change.
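>
> (With Filestore, one way to confirm the journal really moved is the
> per-OSD symlink, e.g.:
>
>   ls -l /var/lib/ceph/osd/ceph-*/journal
>
> which should point at the SSD partition after the migration.)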
>
> The Ceph configuration is as below:
>
> pool size: 3
>
> osd count: 3*3 (9 OSDs total)
>
> pg (pgp) num: 300
>
> osd nodes are separated across three different nodes
>
> rbd image size: 10G (10240M)
>
> The commands I used are:
>
> rados bench -p rbd $duration write
>
> rados bench -p rbd $duration seq
>
> rados bench -p rbd $duration rand
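>
> (Note: the “seq” and “rand” runs read objects left behind by a previous
> write run, so the write needs --no-cleanup first, e.g.:
>
>   rados bench -p rbd $duration write --no-cleanup
>   rados bench -p rbd $duration seq
>   rados bench -p rbd $duration rand
>   rados -p rbd cleanup
>
> otherwise there is nothing for the read benchmarks to fetch.)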
>
> Is there anything wrong with what I did? Could anyone give me some
> suggestions?
>
> Best Regards,
>
> Dave Chen
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
