On 2017-10-01 16:47, Alexander Kushnirenko wrote:
> Hi, Gregory!
>
> Thanks for the comment. I compiled a simple program to play with write-speed
> measurements (from the librados examples). The underlying "write" functions are:
> rados_write(io, "hw", read_res, 1048576, i*1048576);
> rados_aio_write(io, "foo", comp, read_res, 1048576, i*1048576);
>
> So I consecutively put 1MB blocks on CEPH. What I measured is that
> rados_aio_write gives me about 5 times the speed of rados_write. I make 128
> consecutive writes in a for loop to create an object of the maximum allowed
> size of 132MB.
>
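A minimal sketch of that aio pattern, for reference (this is not the Bareos
code; the pool name "data" and object name "foo" below are placeholders):

    #include <rados/librados.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK   1048576        /* 1MB per write, as above */
    #define NCHUNKS 128            /* 128 x 1MB object */

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        rados_completion_t comps[NCHUNKS];
        char *buf = calloc(1, CHUNK);   /* dummy payload */
        int i;

        /* Standard connection boilerplate; reads ceph.conf. */
        if (rados_create(&cluster, NULL) < 0 ||
            rados_conf_read_file(cluster, NULL) < 0 ||
            rados_connect(cluster) < 0) {
            fprintf(stderr, "cannot connect to cluster\n");
            return 1;
        }
        if (rados_ioctx_create(cluster, "data", &io) < 0) {
            fprintf(stderr, "cannot open pool\n");
            rados_shutdown(cluster);
            return 1;
        }

        /* Queue every write without waiting in between; this is
         * what lets the OSDs work on many ops in parallel. */
        for (i = 0; i < NCHUNKS; i++) {
            rados_aio_create_completion(NULL, NULL, NULL, &comps[i]);
            rados_aio_write(io, "foo", comps[i], buf, CHUNK,
                            (uint64_t)i * CHUNK);
        }

        /* Only now block until every write is on disk. */
        for (i = 0; i < NCHUNKS; i++) {
            rados_aio_wait_for_safe(comps[i]);
            rados_aio_release(comps[i]);
        }

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        free(buf);
        return 0;
    }

The rados_write version is the same call minus the completion, so every 1MB
block waits out a full round trip before the next one starts; that difference
alone can explain the 5x.
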
> Now if I do consecutive writes from some client into CEPH storage, what is
> the recommended buffer size? (I'm trying to debug a very poor Bareos write
> speed of just 3MB/s to CEPH.)
>
> Thank you,
> Alexander
>
> On Fri, Sep 29, 2017 at 5:18 PM, Gregory Farnum <gfar...@redhat.com> wrote:
> It sounds like you are doing synchronous reads of small objects here. In that
> case you are dominated by the per-op latency rather than the throughput of
> your cluster. Using aio or multiple threads will let you parallelize requests.
> -Greg
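
In code, that suggestion looks roughly like this (a sketch only: the object
names "obj-%d", the 64KB size, and the already-open ioctx 'io' are
assumptions, not anything from the thread):

    #include <rados/librados.h>
    #include <stdio.h>

    #define NOBJS 32        /* how many small objects to fetch */
    #define OBJSZ 65536     /* assumed object size, 64KB */

    static char bufs[NOBJS][OBJSZ];

    /* Keep all the small reads in flight at once; the per-op
     * latencies then overlap instead of adding up. */
    void read_parallel(rados_ioctx_t io)
    {
        rados_completion_t comps[NOBJS];
        char name[32];
        int i;

        for (i = 0; i < NOBJS; i++) {
            snprintf(name, sizeof(name), "obj-%d", i);
            rados_aio_create_completion(NULL, NULL, NULL, &comps[i]);
            rados_aio_read(io, name, comps[i], bufs[i], OBJSZ, 0);
        }
        for (i = 0; i < NOBJS; i++) {
            rados_aio_wait_for_complete(comps[i]);
            rados_aio_release(comps[i]);
        }
    }
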
>
> On Fri, Sep 29, 2017 at 3:33 AM Alexander Kushnirenko <kushnire...@gmail.com>
> wrote:
>
> Hello,
>
> We see very poor performance when reading/writing rados objects. The speed
> is only 3-4MB/sec, compared to 95MB/s in rados benchmarking.
>
> When you look at the underlying code, it uses the librados and
> libradosstriper libraries (both show poor performance), and the code calls
> the rados_read and rados_write functions. If you look at the examples, they
> recommend rados_aio_read/write.
>
> Could this be the reason for poor performance?
>
> Thank you,
> Alexander.
Even the 95MB/s rados benchmark may still be indicative of a problem: it
defaults to creating 16 (or maybe 32) threads, so it can be writing to 16
different OSDs simultaneously. To get a value closer to what you are doing,
try rados bench with 1 thread and a 1M block size (the default is 4M),
such as

rados bench -p testpool -b 1048576 30 write -t 1 --no-cleanup
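
With -t 1 each 1MB write has to be acknowledged before the next one is sent,
so this measures the per-op latency Greg described rather than cluster
throughput; I would expect the single-threaded number to land much closer to
your 3-4MB/s Bareos figure than to 95MB/s.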