Hi Zhenshi,

I did try a bigger block size. Interestingly, the one whose 4KB osd bench 
result was lower performed slightly better in the 4MB osd bench.


Let me try some other, bigger block sizes, e.g. 16K, 64K, 128K, 1M, etc., to see 
if there is any pattern.
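
Something like the following sweep is what I have in mind (just a rough sketch; 
it assumes osd.0 and the default osd_bench_* limits, under which a total of 
12288000 bytes should be accepted for all of these block sizes):

    # ceph tell osd.N bench <total_bytes> <block_size_bytes>
    for bs in 4096 16384 65536 131072 1048576 4194304; do
        echo "== block size ${bs} =="
        ceph tell osd.0 bench 12288000 "${bs}"
    done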


Moreover, I did compare the two SSDs; they are an INTEL SSDSC2KB480G8 and an 
INTEL SSDSC2KB960G8 respectively. Performance-wise, there is not much difference.


Thanks,
Ning





------------------ Original Message ------------------
From: "Zhenshi Zhou" <deader...@gmail.com>;
Date: Thu, Jul 16, 2020, 9:24
To: "rainning" <tweety...@qq.com>;
Cc: "ceph-users" <ceph-users@ceph.io>;
Subject: [ceph-users] Re: osd bench with or without a separate WAL device deployed



Maybe you can try writing with bigger block sizes and compare the results.
For bluestore, write operations fall into two modes: one is COW, the
other is RMW. AFAIK only RMW uses the wal, in order to protect the data
if a write is interrupted.
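
If you want to check the threshold below which bluestore defers small writes
through the wal on your OSDs, I think something like this should show it
(just a sketch; it assumes osd.0 and access to its admin socket on that host):

    # writes smaller than this size take the deferred path, i.e. they are
    # journaled in the RocksDB WAL before being applied to the data device
    ceph daemon osd.0 config get bluestore_prefer_deferred_size_hdd
    ceph daemon osd.0 config get bluestore_prefer_deferred_size_ssd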

rainning <tweety...@qq.com> wrote on Wed, Jul 15, 2020 at 11:04 PM:

> Hi Zhenshi, thanks very much for the reply.
>
> Yes, I know it is odd that bluestore is deployed with only a separate
> db device but no WAL device. The cluster was deployed in k8s using rook,
> and I was told it was because the rook version we used didn't support that.
>
> Moreover, the comparison was made with osd bench, so the network should not
> be a factor. As for the storage node hardware, although the two clusters are
> indeed different, their CPUs and HDDs have almost the same performance
> numbers. I haven't compared the SSDs that are used as db/WAL devices; that
> might cause a difference, but I am not sure it could account for a two-times
> difference.
>
> ---Original---
> *From:* "Zhenshi Zhou" <deader...@gmail.com>
> *Date:* Wed, Jul 15, 2020 18:39
> *To:* "rainning" <tweety...@qq.com>;
> *Cc:* "ceph-users" <ceph-users@ceph.io>;
> *Subject:* [ceph-users] Re: osd bench with or without a separate WAL device deployed
>
> I deployed the cluster either with a separate db/wal or with db/wal/data
> put together; I never tried having only a separate db.
> AFAIK the wal does have an effect on writes, but I'm not sure it could
> account for a two-times difference in the bench value. Hardware and
> network environment are also important factors.
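>
> For reference, the two layouts I mean would look roughly like this with
> ceph-volume (just a sketch; the device paths below are only placeholders):
>
>     # separate db and wal on fast partitions (placeholder paths)
>     ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2
>
>     # db/wal/data together on a single device
>     ceph-volume lvm create --bluestore --data /dev/sdb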
>
> rainning <tweety...@qq.com> wrote on Wed, Jul 15, 2020, 4:35:
>
> > Hi all,
> >
> >
> > I am wondering if there is any performance comparison done with osd bench
> > with and without a separate WAL device deployed, given that there is always
> > a separate db device deployed on SSD in both cases.
> >
> >
> > The reason I am asking is that we have two clusters: osds in one have
> > separate db and WAL devices deployed on SSD, but osds in the other only
> > have a separate db device deployed. And we found that the 4KB osd bench
> > (i.e. ceph tell osd.X bench 12288000 4096) for the ones having a separate
> > WAL device was two times that of the ones without a separate WAL device.
> > Is the performance difference caused by the separate WAL device?
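> >
> > For what it's worth, I believe the layout can be confirmed per OSD with
> > something like the following (osd.0 is just an example id):
> >
> >     # "bluefs_dedicated_wal": "1" should mean a dedicated WAL device is in use
> >     ceph osd metadata 0 | grep bluefs_dedicated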
> >
> >
> > Thanks,
> > Ning
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
