Hi Craig,

Good day to you, and thank you for your enquiry.

As per your suggestion, I created a third partition on the SSDs and ran
the dd test directly against the device, and the result is very slow.

====
root@ceph-osd-08:/mnt# dd bs=1M count=128 if=/dev/zero of=/dev/sdg3 conv=fdatasync oflag=direct
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 19.5223 s, 6.9 MB/s

root@ceph-osd-08:/mnt# dd bs=1M count=128 if=/dev/zero of=/dev/sdf3 conv=fdatasync oflag=direct
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 5.34405 s, 25.1 MB/s
====
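
To check whether the journals really are the competing load, my next
step is to watch the drive utilisation while the test runs. A minimal
sketch using iostat (from the sysstat package; the device names are
just the ones on my hosts):

====
# Print extended per-device statistics every 2 seconds.
# If %util on sdg stays near 100% even while dd is idle, the journal
# partitions alone are saturating the drive.
iostat -x 2
====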

I ran the same test on another server with identical specifications and
the same SSD model (Seagate 100 GB SSD) that has not yet been added to
the cluster (and thus has no load), and the result is fast:

====
root@ceph-osd-09:/home/indra# dd bs=1M count=128 if=/dev/zero of=/dev/sdf1 conv=fdatasync oflag=direct
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 0.742077 s, 181 MB/s
====
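
To rule out a worn or failing drive on the loaded server, I can also
compare the SMART data of the two SSDs. A rough sketch with
smartmontools (attribute names vary by vendor, so this is only
indicative):

====
# Compare wear/health attributes of the busy SSD and the idle one,
# e.g. reallocated sectors or a media-wearout indicator.
smartctl -A /dev/sdg
smartctl -A /dev/sdf
====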

Does the Ceph journal load really take up that much of the SSD's
resources? I don't understand how the performance can drop so
dramatically, especially since the two Ceph journals only occupy the
first 20 GB of the SSD's 100 GB total capacity.
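
As far as I understand, the Ceph journal writes with O_DIRECT and
D_SYNC, which is a much harder pattern for some SSDs than a plain
direct write. Here is a sketch of a dd run that mimics that pattern on
the spare third partition (double-checking the device name first, per
your warning):

====
# Mimic the journal's synchronous write pattern: O_DIRECT plus O_DSYNC.
# SSDs that look fast under plain direct writes can be far slower here.
# Careful: writing to the wrong partition destroys the journals!
dd if=/dev/zero of=/dev/sdg3 bs=1M count=128 oflag=direct,dsync
====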

Any advice is greatly appreciated.

Looking forward to your reply, thank you.

Cheers.



On Sat, Apr 26, 2014 at 2:52 AM, Craig Lewis <cle...@centraldesktop.com> wrote:

>
>> I am not able to do a dd test on the SSDs since they're not mounted as
>> filesystems, but dd on the OSD (non-SSD) drives gives a normal result.
>>
>
> Since you have free space on the SSDs, you could add a 3rd 10G partition
> to one of the SSDs.  Then you could put a filesystem on that partition, or
> just dd the third partition directly with dd if=/dev/zero of=/dev/sdf3
> bs=... count=...
>
> Be careful not to make a typo. of=/dev/sdf or of=/dev/sdf2 would destroy
> your other journals.
>
>
> What does the manufacturer claim for the SSD performance specs?
>
> --
> Craig