Re: [ceph-users] rados_read versus rados_aio_read performance
Hi, Gregory!

Thanks for the comment. I compiled a simple program (from the librados
examples) to play with write-speed measurements. The underlying "write"
calls are:

  rados_write(io, "hw", read_res, 1048576, i*1048576);
  rados_aio_write(io, "foo", comp, read_res, 1048576, i*1048576);

So I consecutively put 1MB blocks on Ceph. What I measured is that
rados_aio_write gives me about 5 times the speed of rados_write. I make 128
consecutive writes in a for loop to create an object of the maximum allowed
size of 132MB.

Now, if I do consecutive writes from some client into Ceph storage, what is
the recommended buffer size? (I'm trying to debug a very poor Bareos write
speed of just 3MB/s to Ceph.)

Thank you,
Alexander

On Fri, Sep 29, 2017 at 5:18 PM, Gregory Farnum wrote:
> It sounds like you are doing synchronous reads of small objects here. In
> that case you are dominated by the per-op latency rather than the
> throughput of your cluster. Using aio or multiple threads will let you
> parallelize requests.
> -Greg
>
> On Fri, Sep 29, 2017 at 3:33 AM Alexander Kushnirenko <
> kushnire...@gmail.com> wrote:
>
>> Hello,
>>
>> We see very poor performance when reading/writing rados objects. The
>> speed is only 3-4MB/sec, compared to 95MB/s in rados benchmarking.
>>
>> When you look at the underlying code, it uses the librados and
>> libradosstriper libraries (both show poor performance), and the code
>> uses the rados_read and rados_write functions. If you look at the
>> examples, they recommend rados_aio_read/write.
>>
>> Could this be the reason for poor performance?
>>
>> Thank you,
>> Alexander.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Re: [ceph-users] nfs-ganesha / cephfs issues
CephFS does have repair tools, but I wouldn't jump the gun; your metadata
pool is probably fine. Unless you're getting health errors or seeing errors
in your MDS log?

Are you exporting a FUSE or kernel mount with Ganesha (i.e. using the VFS
FSAL), or using the Ceph FSAL? Have you tried any tests directly on a
CephFS mount (taking Ganesha out of the equation)?

On Sat, Sep 30, 2017 at 11:09 PM, Marc Roos wrote:
>
> I have nfs-ganesha 2.5.2 (from the Ceph download) running on an OSD node
> on Luminous 12.2.1. And when I rsync on a VM that has the NFS mounted, I
> get stalls.
>
> I thought it was related to the number of files when rsyncing the
> CentOS 7 distro. But when I tried to rsync just one file, it also
> stalled. It looks like it could not create the update of the
> 'CentOS_BuildTag' file.
>
> Could this be a problem in the metadata pool of CephFS? Does this sound
> familiar? Is there something like an fsck for CephFS?
>
> drwxr-xr-x 1 500 500    7 Jan 24  2016 ..
> -rw-r--r-- 1 500 500   14 Dec  5  2016 CentOS_BuildTag
> -rw-r--r-- 1 500 500   29 Dec  5  2016 .discinfo
> -rw-r--r-- 1 500 500  946 Jan 12  2017 .treeinfo
> drwxr-xr-x 1 500 500    1 Sep  5 15:36 LiveOS
> drwxr-xr-x 1 500 500    1 Sep  5 15:36 EFI
> drwxr-xr-x 1 500 500    3 Sep  5 15:36 images
> drwxrwxr-x 1 500 500   10 Sep  5 23:57 repodata
> drwxrwxr-x 1 500 500 9591 Sep 19 20:33 Packages
> drwxr-xr-x 1 500 500    9 Sep 19 20:33 isolinux
> -rw------- 1 500 500    0 Sep 30 23:49 .CentOS_BuildTag.PKZC1W
> -rw------- 1 500 500    0 Sep 30 23:52 .CentOS_BuildTag.gM1C1W
> drwxr-xr-x 1 500 500   15 Sep 30 23:52 .
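[Editor's note: one way to take Ganesha out of the equation, as suggested
above, is sketched below. The monitor address, mount point, and secret file
path are placeholders, not values from this thread.]

```shell
# Kernel-mount CephFS directly (placeholders: mon host, secret file, paths)
sudo mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# Repeat the failing operation against the raw CephFS mount
rsync -av /tmp/CentOS_BuildTag /mnt/cephfs/test/

# If this stalls too, look at CephFS itself; if it is fast, look at the
# Ganesha export/FSAL configuration instead.
```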
Re: [ceph-users] rados_read versus rados_aio_read performance
On 2017-10-01 16:47, Alexander Kushnirenko wrote:
> Hi, Gregory!
>
> Thanks for the comment. I compiled a simple program (from the librados
> examples) to play with write-speed measurements. The underlying "write"
> calls are:
>
>   rados_write(io, "hw", read_res, 1048576, i*1048576);
>   rados_aio_write(io, "foo", comp, read_res, 1048576, i*1048576);
>
> So I consecutively put 1MB blocks on Ceph. What I measured is that
> rados_aio_write gives me about 5 times the speed of rados_write. I make
> 128 consecutive writes in a for loop to create an object of the maximum
> allowed size of 132MB.
>
> Now, if I do consecutive writes from some client into Ceph storage, what
> is the recommended buffer size? (I'm trying to debug a very poor Bareos
> write speed of just 3MB/s to Ceph.)
>
> Thank you,
> Alexander

Even the 95MB/s rados benchmark may still be indicative of a problem: it
defaults to creating 16 (or maybe 32) threads, so it can be writing to 16
different OSDs simultaneously. To get a value closer to what you are doing,
try rados bench with 1 thread and a 1M block size (the default is 4M),
such as:

  rados bench -p testpool -b 1048576 30 write -t 1 --no-cleanup
Re: [ceph-users] Ceph luminous repo not working on Ubuntu xenial
Dear,

Thanks for the help. I am able to install on a single node. Now going to
install on multiple nodes. Just want to clarify one small thing: do the
Ceph key and Ceph repository need to be added on every node, or are they
required only on the admin node where we execute the ceph-deploy command?

On Friday, September 29, 2017 9:57 AM, Stefan Kooman wrote:

Quoting Kashif Mumtaz (kashif.mum...@yahoo.com):
>
> Dear User,
> I am striving hard to install the Ceph Luminous version on Ubuntu
> 16.04.3 (xenial).
> Its repo is available at https://download.ceph.com/debian-luminous/
> I added it like:
>   sudo apt-add-repository 'deb https://download.ceph.com/debian-luminous/ xenial main'
> # more sources.list
> deb https://download.ceph.com/debian-luminous/ xenial main

^^ That looks good.

> It says no package available. Did anybody manage to install Luminous on
> Xenial by using this repo?

Just checking: did you run "apt update" after adding the repo? The repo
works fine for me. Is the Ceph gpg key installed?

  apt-key list | grep Ceph
  uid  Ceph.com (release key)

Make sure you have "apt-transport-https" installed (as the repo uses TLS).

Gr. Stefan

--
| BIT BV  http://www.bit.nl/  Kamer van Koophandel 09090351
| GPG: 0xD14839C6  +31 318 648 688 / i...@bit.nl
Re: [ceph-users] Ceph luminous repo not working on Ubuntu xenial
Quoting Kashif Mumtaz (kashif.mum...@yahoo.com):
> Dear, Thanks for the help. I am able to install on a single node. Now
> going to install on multiple nodes. Just want to clarify one small
> thing: do the Ceph key and Ceph repository need to be added on every
> node, or are they required only on the admin node where we execute the
> ceph-deploy command?

You need to have the key installed on every node. The ceph-deploy command
just remotely executes the commands on the nodes.

Gr. Stefan

--
| BIT BV  http://www.bit.nl/  Kamer van Koophandel 09090351
| GPG: 0xD14839C6  +31 318 648 688 / i...@bit.nl
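[Editor's note: a per-node setup sketch follows, assuming Ubuntu 16.04
"xenial" and the standard Ceph release key; run it on every node before
ceph-deploy installs packages there.]

```shell
# HTTPS transport is needed because download.ceph.com serves over TLS
sudo apt-get install -y apt-transport-https

# Import the Ceph release key and add the Luminous repo
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo 'deb https://download.ceph.com/debian-luminous/ xenial main' | \
    sudo tee /etc/apt/sources.list.d/ceph.list

sudo apt-get update
```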