I had a similar issue when migrating from SSD to NVMe on Ubuntu. Read
performance tanked on NVMe. iostat showed each NVMe device doing 30x more
physical reads than the SSD, but the MB/s was 1/6 the speed of the SSD. I set
"blockdev --setra 128 /dev/nvmeX" and now performance is much better.
I have seen this, and some of our big customers have also seen it. I was using
8TB HDDs, and small tests against a freshly deployed HDD setup showed very good
performance. I then loaded the Ceph cluster so each of the 8TB HDDs held 4TB
and reran the same tests; performance was considerably worse.
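For anyone who wants to reproduce that kind of before/after comparison, a
simple sketch with rados bench (the pool name is a placeholder) would be:

  rados bench -p testpool 60 write --no-cleanup   # 60s write benchmark, keep objects
  rados bench -p testpool 60 seq                  # sequential read benchmark
  rados bench -p testpool 60 rand                 # random read benchmark
  rados -p testpool cleanup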
Mirror the OS disks, and use the remaining 10 disks for 10 OSDs.
> On Aug 12, 2016, at 7:41 AM, Félix Barbeira wrote:
>
> Hi,
>
> I'm planning to build a Ceph cluster but I have a serious question. At this
> moment we have ~10 DELL R730xd servers with 12x4TB SATA disks. The official
> Ceph docs say:
>
> "We recom
In my testing, using RBD-NBD is faster than using RBD or CephFS.
For a MySQL/sysbench OLTP test with 25 threads, over a 40G network
between the client and Ceph, here are some of my results:
using ceph-rbd: transactions per sec: 8620
using ceph rbd-nbd: transactions per sec: 9359
using cephfs: …
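For anyone unfamiliar with the difference, a minimal sketch of how the two
block devices get attached (the pool and image names are made up here):

  rbd map mysqlpool/mysql-vol       # kernel RBD client, appears as /dev/rbdN
  rbd-nbd map mysqlpool/mysql-vol   # librbd via the NBD driver, appears as /dev/nbdN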
Best regards,
>
> On Wed, Aug 31, 2016 at 10:11 PM, RDS <rs3...@me.com>
> wrote:
> In my testing, using RBD-NBD is faster than using RBD or CephFS.
> For a MySQL/sysbench test using 25 threads using OLTP, using a 40G network
> between the client and Ceph, here
If I use slow HDDs, I can reproduce the same outcome. Placing journals on fast
SAS or NVMe SSDs will make a difference; SATA SSDs are much slower. Instead of
guessing why Ceph is lagging, have you looked at ceph -w and at iostat and
vmstat reports during your tests? iostat will show you how busy the devices
really are.
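Something along these lines, run while the benchmark is going, is usually
enough to see where the time goes (the 1-second interval is arbitrary):

  ceph -w        # watch cluster status and slow request warnings
  iostat -x 1    # per-device utilization, await and queue depth
  vmstat 1       # CPU, memory and swap pressure on the node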
Maxime
I forgot to mention a couple more things that you can try when using SMR HDDs.
You could try ext4 with “lazy” initialization. Another option is specifying the
“lazytime” ext4 mount option. Depending on your workload, you could possibly
see some big improvements.
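For example, something like this (the device and OSD mount point are
placeholders):

  mkfs.ext4 -E lazy_itable_init=1,lazy_journal_init=1 /dev/sdX
  mount -o lazytime /dev/sdX /var/lib/ceph/osd/ceph-N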
Rick
> On Feb 18
There is one more thing that I noticed when using CephFS instead of RBD for
MySQL, and that is CPU usage on the client.
When using RBD, I was using 99% of the CPUs. When I switched to CephFS, the
same tests used about 60% of the CPU. Performance was about equal. This test
was an OLTP sysbench run using…
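If anyone wants to run a comparable OLTP test, here is a sketch using the older
sysbench 0.x option style (the host, credentials and table size are
placeholders, not the values I used):

  sysbench --test=oltp --oltp-table-size=10000000 --mysql-host=db-host \
    --mysql-db=sbtest --mysql-user=sbtest --mysql-password=secret prepare
  sysbench --test=oltp --oltp-table-size=10000000 --mysql-host=db-host \
    --mysql-db=sbtest --mysql-user=sbtest --mysql-password=secret \
    --num-threads=25 --max-time=300 --max-requests=0 run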
A couple of suggestions:
1) the number of PGs per OSD should be 100-200
2) when dealing with SSD or flash, performance of these devices hinges on how
you partition them and how you tune Linux:
	a) if using partitions, did you align the partitions on a 4k
boundary? I start at sector 2048 using… (see the example right after this list)
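To illustrate the alignment point, one way of doing it (the device name is a
placeholder; most current partitioning tools default to a 2048-sector / 1 MiB
start anyway):

  sgdisk --new=1:2048:0 /dev/nvme0n1          # partition 1 starting at sector 2048
  parted /dev/nvme0n1 align-check optimal 1   # verify the alignment afterwards

For the PG count in 1), the usual rule of thumb is pg_num ≈ (number of OSDs x
100) / replica count, rounded up to the next power of two.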
Is there documentation covering all the steps needed to upgrade from 0.94 to
10.0.5?
Thanks
Rick
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com