My journals are on-disk, each disk being an SSD. The reason I didn't go with dedicated journal drives is that, when designing the setup, I was told that dedicated journal SSDs on an all-SSD cluster would not give me a performance increase.

So the journal disk to data disk ratio is 1:1.
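
(For anyone who wants to verify a setup like this: on an OSD host the journal location shows up as a symlink under each OSD data directory, and ceph-disk can list the partition layout. Paths below assume the default /var/lib/ceph layout.

    ls -l /var/lib/ceph/osd/ceph-*/journal
    ceph-disk list

If the journal symlink points to a partition on the same device as the OSD's data partition, the journal is co-located.)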

The replication size is 3, yes. The pools are replicated.
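
To double-check it per pool, the standard commands are:

    ceph osd pool get <pool-name> size
    ceph osd pool get <pool-name> min_size

(<pool-name> being whatever the RBD pool is called on this cluster.)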

On 4/20/2015 10:43 AM, Barclay Jameson wrote:
Are your journals on separate disks? What is your ratio of journal disks to data disks? Are you doing replication size 3?

On Mon, Apr 20, 2015 at 9:30 AM, J-P Methot <jpmet...@gtcomm.net> wrote:

    Hi,

    This is similar to another thread running right now, but since our
    current setup is completely different from the one described in
    the other thread, I thought it may be better to start a new one.

    We are running Ceph Firefly 0.80.8 (soon to be upgraded to
    0.80.9). We have 6 OSD hosts with 16 OSDs each (96 OSDs in
    total). Each OSD is a Samsung SSD 840 EVO on which I can reach
    write speeds of roughly 400 MB/s, attached in JBOD mode to a
    controller with a theoretical throughput of 6 Gb/s. All of this
    is linked to OpenStack compute nodes over two bonded 10 Gbps
    links (so a maximum transfer rate of 20 Gbps).
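
    (For anyone wanting to reproduce a raw-disk number like that, a
    plain dd run is enough of a sketch. The target path below is only
    an example, and the dsync variant is closer to what journal
    writes actually look like:

        # plain sequential write with direct I/O (example path, adjust as needed)
        dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=4M count=256 oflag=direct
        # synchronous variant, closer to journal write behaviour
        dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=4M count=256 oflag=direct,dsync
        # remove the test file afterwards
        rm /var/lib/ceph/osd/ceph-0/ddtest
    )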

    When I run rados bench from the compute nodes, I reach the network
    cap in read speed. Write speeds, however, are vastly inferior,
    peaking at about 920 MB/s. If 4 compute nodes run the write
    benchmark at the same time, the number plummets to 350 MB/s. For
    our planned usage we find this rather slow, considering we will be
    running a large number of virtual machines on this cluster.
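
    (For reference, a typical rados bench invocation looks like the
    following; the pool name and thread count are placeholders rather
    than my exact command lines:

        # write benchmark for 60 seconds, keeping the objects for the read test
        rados bench -p <pool> 60 write -t 32 --no-cleanup
        # sequential read benchmark against the objects written above
        rados bench -p <pool> 60 seq -t 32
    )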

    Of course, the first thing to do would be to move the journals to
    faster drives. However, these are already SSDs; we don't really
    have access to anything faster. I still need better write speeds,
    so I am looking for suggestions on how to speed things up.

    I have also thought of a few options myself:
    - Upgrading to the latest stable Hammer release (would that really
    give me a big performance increase?)
    - CRUSH map modifications (this is a long shot, but I'm still
    using the default CRUSH map; maybe there's a change I could make
    there to improve performance, see the sketch just below)
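
    For the CRUSH map idea, what I have in mind is only dumping and
    inspecting the decompiled map, roughly like this (file names are
    arbitrary):

        # dump the compiled CRUSH map and decompile it for inspection
        ceph osd getcrushmap -o /tmp/crushmap.bin
        crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt
        # after editing the text version, recompile and inject it back
        crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
        ceph osd setcrushmap -i /tmp/crushmap.new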

    Any suggestions about anything else I could tweak would be greatly
    appreciated.

    For reference, here's part of my ceph.conf:

    [global]
    auth_service_required = cephx
    filestore_xattr_use_omap = true
    auth_client_required = cephx
    auth_cluster_required = cephx
    osd pool default size = 3


    osd pg bits = 12
    osd pgp bits = 12
    osd pool default pg num = 800
    osd pool default pgp num = 800

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true

    [osd]
    filestore_fd_cache_size = 1000000
    filestore_omap_header_cache_size = 1000000
    filestore_fd_cache_random = true
    filestore_queue_max_ops = 5000
    journal_queue_max_ops = 1000000
    max_open_files = 1000000
    osd journal size = 10000
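
    If it helps, the kind of filestore/journal tweak I would try next
    is something like the following, tested at runtime on a single OSD
    before persisting anything to ceph.conf (the value here is only an
    example, not something I have validated on this hardware):

        # runtime change on one OSD only; 10 s is an illustrative value
        ceph tell osd.0 injectargs '--filestore_max_sync_interval 10'
        # on the OSD's host, confirm what the daemon is actually running with
        ceph daemon osd.0 config show | grep filestore_max_sync_interval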

    --
    ======================
    Jean-Philippe Méthot
    Administrateur système / System administrator
    GloboTech Communications
    Phone: 1-514-907-0050
    Toll Free: 1-(888)-GTCOMM1
    Fax: 1-(514)-907-0750
    jpmet...@gtcomm.net
    http://www.gtcomm.net

--
======================
Jean-Philippe Méthot
Administrateur système / System administrator
GloboTech Communications
Phone: 1-514-907-0050
Toll Free: 1-(888)-GTCOMM1
Fax: 1-(514)-907-0750
jpmet...@gtcomm.net
http://www.gtcomm.net

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
