Hi
Only one node, and only one NVMe SSD; the SSD has 12 partitions, three per OSD.
The fio workload is 4k randwrite with an iodepth of 128.
No snapshots.
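For reference, a fio invocation along these lines matches the workload described above; the ioengine, target device and runtime are illustrative assumptions, not part of the original test description:

# 4k random writes, queue depth 128, direct I/O (engine/target/runtime assumed)
fio --name=randwrite --ioengine=libaio --direct=1 \
    --bs=4k --rw=randwrite --iodepth=128 --numjobs=1 \
    --runtime=300 --time_based \
    --filename=/dev/rbd0   # assumed target, e.g. a mapped RBD image

Only the block size and queue depth matter for the comparison; everything else above is illustrative.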

Thanks

From: Jan Schermer [mailto:j...@schermer.cz]
Sent: 23 August 2016 14:52
To: Zhiyuan Wang <zhiyuan.w...@istuary.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] BlueStore write amplification

Is that 400MB on all nodes or on each node? If it's on all nodes then 10:1 is 
not that surprising.
What was the block size in your fio benchmark?
We had much higher amplification on our cluster with snapshots and stuff...

Jan

On 23 Aug 2016, at 08:38, Zhiyuan Wang <zhiyuan.w...@istuary.com> wrote:

Hi
I have been testing BlueStore on an SSD, and I found that the write bandwidth reported by fio is about 40MB/s, while the write bandwidth reported by iostat for the SSD is about 400MB/s, nearly ten times higher.
Could someone help explain this?
Thanks a lot.
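Roughly, the comparison was made like this (the exact iostat invocation below is illustrative; the throughput figures are the ones quoted above):

# while the fio job is running, watch per-device write throughput (MB/s, 1s interval)
iostat -xm 1
# read the wMB/s column for the NVMe device, then:
#   write amplification ~ device write BW (iostat) / client write BW (fio)
#                       ~ 400 MB/s / 40 MB/s = ~10x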

Below is my configuration file:
[global]
        fsid = 31e77e3c-447c-4745-a91a-58bda80a868c
        enable experimental unrecoverable data corrupting features = bluestore rocksdb
        osd objectstore = bluestore

        bluestore default buffered read = true
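        # 4 KB minimum BlueStore allocation unit (matches the 4k write size used in the benchmark)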
        bluestore_min_alloc_size=4096
        osd pool default size = 1

        osd pg bits = 8
        osd pgp bits = 8
        auth supported = none
        log to syslog = false
        filestore xattr use omap = true
        auth cluster required = none
        auth service required = none
        auth client required = none

        public network = 192.168.200.233/24
        cluster network = 192.168.100.233/24

        mon initial members = node3
        mon host = 192.168.200.233
        mon data = /etc/ceph/mon.node3

        filestore merge threshold = 40
        filestore split multiple = 8
        osd op threads = 8

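        # debug logging for all subsystems disabled (0/0)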
        debug_bluefs = "0/0"
        debug_bluestore = "0/0"
        debug_bdev = "0/0"
        debug_lockdep = "0/0"
        debug_context = "0/0"
        debug_crush = "0/0"
        debug_mds = "0/0"
        debug_mds_balancer = "0/0"
        debug_mds_locker = "0/0"
        debug_mds_log = "0/0"
        debug_mds_log_expire = "0/0"
        debug_mds_migrator = "0/0"
        debug_buffer = "0/0"
        debug_timer = "0/0"
        debug_filer = "0/0"
        debug_objecter = "0/0"
        debug_rados = "0/0"
        debug_rbd = "0/0"
        debug_journaler = "0/0"
        debug_objectcacher = "0/0"
        debug_client = "0/0"
        debug_osd = "0/0"
        debug_optracker = "0/0"
        debug_objclass = "0/0"
        debug_filestore = "0/0"
        debug_journal = "0/0"
        debug_ms = "0/0"
        debug_mon = "0/0"
        debug_monc = "0/0"
        debug_paxos = "0/0"
        debug_tp = "0/0"
        debug_auth = "0/0"
        debug_finisher = "0/0"
        debug_heartbeatmap = "0/0"
        debug_perfcounter = "0/0"
        debug_rgw = "0/0"
        debug_hadoop = "0/0"
        debug_asok = "0/0"
        debug_throttle = "0/0"

[osd.0]
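        # each OSD uses three partitions of the single NVMe SSD: block (data), db (RocksDB) and wal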
        host = node3
        osd data = /etc/ceph/osd-device-0-data
        bluestore block path = /dev/disk/by-partlabel/osd-device-0-block
        bluestore block db path = /dev/disk/by-partlabel/osd-device-0-db
        bluestore block wal path = /dev/disk/by-partlabel/osd-device-0-wal

[osd.1]
        host = node3
        osd data = /etc/ceph/osd-device-1-data
        bluestore block path = /dev/disk/by-partlabel/osd-device-1-block
        bluestore block db path = /dev/disk/by-partlabel/osd-device-1-db
        bluestore block wal path = /dev/disk/by-partlabel/osd-device-1-wal

[osd.2]
        host = node3
        osd data = /etc/ceph/osd-device-2-data
        bluestore block path = /dev/disk/by-partlabel/osd-device-2-block
        bluestore block db path = /dev/disk/by-partlabel/osd-device-2-db
        bluestore block wal path = /dev/disk/by-partlabel/osd-device-2-wal

[osd.3]
        host = node3
        osd data = /etc/ceph/osd-device-3-data
        bluestore block path = /dev/disk/by-partlabel/osd-device-3-block
        bluestore block db path = /dev/disk/by-partlabel/osd-device-3-db
        bluestore block wal path = /dev/disk/by-partlabel/osd-device-3-wal