Dear Marek,

I would expect higher performance, but how did you measure this? With
rados bench? Please note that Ceph is built for parallel access, so the
combined speed increases with more threads. If this is a single-thread
measurement, I wonder how well it reflects the performance of the
platform. With rados bench you can specify how many concurrent
operations you want to use.
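
For example, something along these lines (the pool name and the numbers
are just placeholders, adjust them to your setup):

    # 60 second write test into pool "rbd" with 16 concurrent operations,
    # keeping the objects around so a read test can follow
    rados bench -p rbd 60 write -t 16 --no-cleanup
    # sequential read test with the same concurrency
    rados bench -p rbd 60 seq -t 16
    # remove the benchmark objects afterwards
    rados -p rbd cleanup

Comparing a run with -t 1 against one with -t 16 or more should tell you
whether single-thread latency, rather than the platform itself, is what
limits your 100 MB/s figure.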

Regards,

Mart



On 11/24/2015 04:37 PM, Marek Dohojda wrote:
> Yeah, they are; that is one thing I was planning on changing. What I am
> really interested in at the moment is a rough idea of the expected
> performance. I mean, is 100 MB/s around normal, very low, or "could be
> better"?
>
> On Tue, Nov 24, 2015 at 8:02 AM, Alan Johnson <al...@supermicro.com> wrote:
>
>     Are the journals on the same device? It might be better to use
>     the SSDs for journaling, since you are not getting better
>     performance with the SSDs.
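>
>     Roughly something like this with ceph-disk, assuming /dev/sdb is a
>     SAS data disk and /dev/sdc is an SSD set aside for journals (the
>     device names are only examples):
>
>         # put the OSD data on the SAS disk and its journal on the SSD
>         ceph-disk prepare /dev/sdb /dev/sdc
>         ceph-disk activate /dev/sdb1
>
>     With FileStore every write goes through the journal first, so moving
>     the journals to SSD usually helps the spinning disks noticeably.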
>
>      
>
>     *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf Of* Marek Dohojda
>     *Sent:* Monday, November 23, 2015 10:24 PM
>     *To:* Haomai Wang
>     *Cc:* ceph-users@lists.ceph.com
>     *Subject:* Re: [ceph-users] Performance question
>
>      
>
>     Sorry, I should have specified: the SAS pool is the one doing 100 MB/s :),
>     but to be honest the SSD pool isn't much faster.
>
>      
>
>     On Mon, Nov 23, 2015 at 7:38 PM, Haomai Wang <haomaiw...@gmail.com> wrote:
>
>     On Tue, Nov 24, 2015 at 10:35 AM, Marek Dohojda <mdoho...@altitudedigital.com> wrote:
>     > No, SSD and SAS are in two separate pools.
>     >
>     > On Mon, Nov 23, 2015 at 7:30 PM, Haomai Wang <haomaiw...@gmail.com> wrote:
>     >>
>     >> On Tue, Nov 24, 2015 at 10:23 AM, Marek Dohojda <mdoho...@altitudedigital.com> wrote:
>     >> > I have a Hammer Ceph cluster on 7 nodes with a total of 14 OSDs,
>     >> > 7 of which are SSD and 7 of which are SAS 10K drives. I typically
>     >> > get about 100 MB/s IO rates on this cluster.
>
>     So which pool do you get the 100 MB/s with?
>
>
>     >>
>     >> Did you mix SAS and SSD in one pool?
>     >>
>     >> >
>     >> > I have a simple question. Is 100 MB/s what I should expect with my
>     >> > configuration, or should it be higher? I am not sure if I should be
>     >> > looking for issues, or just accept what I have.
>     >> >
>     >> > _______________________________________________
>     >> > ceph-users mailing list
>     >> > ceph-users@lists.ceph.com
>     >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>     >> >
>     >>
>     >>
>     >>
>     >> --
>     >> Best Regards,
>     >>
>     >> Wheat
>     >
>     >
>
>
>     --
>     Best Regards,
>
>     Wheat
>
>      
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
Mart van Santen
Greenhost
E: m...@greenhost.nl
T: +31 20 4890444
W: https://greenhost.nl

A PGP signature can be attached to this e-mail,
you need PGP software to verify it. 
My public key is available in keyserver(s)
see: http://tinyurl.com/openpgp-manual

PGP Fingerprint: CA85 EB11 2B70 042D AF66  B29A 6437 01A1 10A3 D3A5


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
