Hello Robert,
OK. I already tried this but, as you said, performance decreases. I just
built the 10.0.0 version and it seems there are some regressions in
there: I now get 3.5 Kiops instead of the 21 Kiops I had with 9.2.0 :-/
Thanks.
Rémi
On 2015-11-25 18:54, Robert LeBlanc wrote:
I'm really surprised that you are getting 100K IOPS from the Intel
S3610s. We are already in the process of ordering some to test
alongside other drives, so I should be able to verify that as well. With
the S3700 and S3500, I was only able to get 20
Hello Hugo,
Yes, you're right. With Sebastien Han's fio command I managed to see
that my disks can in fact handle 100 Kiops, so the theoretical value is
then: 2 x 2 x 100 / 2 = 200k.
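For what it's worth, the arithmetic above can be written out explicitly. A quick sketch; the node and disk counts here are assumptions read off the formula itself, not a measured cluster:

```python
# Back-of-the-envelope ceiling for client write IOPS, mirroring the
# "2 x 2 x 100 / 2 = 200k" figure above. Counts are assumptions taken
# from that formula, not from an actual crush map.
nodes = 2                # OSD hosts
ssds_per_node = 2        # data SSDs per host
kiops_per_ssd = 100      # raw 4K write Kiops of one SSD
replication = 2          # size=2: each client write lands on 2 OSDs

theoretical_kiops = nodes * ssds_per_node * kiops_per_ssd // replication
print(theoretical_kiops)  # 200
```

Real clusters land well below this ceiling, since journal double-writes, network round trips, and CPU overhead all eat into it.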
I put the journal on the SSDSC2BX016T4R, which is supposed to double
my IOPS, but that's not the case.
Rémi
Hello Robert,
Sorry for the late answer.
Thanks for your reply. I updated to Infernalis and applied all your
recommendations, but it doesn't change anything, with or without cache
tiering :-/
I also compared XFS to EXT4 and BTRFS, but it doesn't make a
difference.
The fio command from Seba
On 11/07/2015 09:44 AM, Oliver Dzombic wrote:
> setting inode64 in osd_mount_options_xfs might help a little.
Sorry, inode64 is the default mount option with XFS.
Björn
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
On Sat 2015-Nov-07 09:24:06 +0100, Rémi BUISSON wrote:
Hi guys,
I would need your help to figure out performance issues on my ceph cluster.
I've read pretty much every thread on the net concerning this topic,
but I didn't manage to get acceptable performance.
In my company, we are planning
You most likely did the wrong test to get baseline IOPS for Ceph or for
your SSDs. Ceph is really hard on SSDs: it does direct sync writes,
which drives handle very differently, even between models of the same
brand. Start with
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-s
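The point of that kind of test is that every write must be durable before the next one starts, which is the pattern a Ceph journal imposes. A minimal Python sketch of that idea, assuming a Linux host; it is only an illustration, not the fio command from the post, and the file path, block size, and duration are made up. On a real drive you would run fio against the raw device instead:

```python
import os
import time


def sync_write_iops(path="/tmp/ssd-journal-test", bs=4096, duration=2.0):
    """Measure small O_DSYNC write IOPS: each write must hit stable
    storage before returning, as a Ceph journal requires. The path,
    block size, and duration are illustrative defaults."""
    buf = b"\0" * bs
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
    try:
        writes = 0
        deadline = time.monotonic() + duration
        while time.monotonic() < deadline:
            os.write(fd, buf)
            writes += 1
        return writes / duration
    finally:
        os.close(fd)
        os.unlink(path)


if __name__ == "__main__":
    print(f"~{sync_write_iops():.0f} sync write IOPS")
```

Drives that score well on buffered benchmarks can collapse under this workload, which is why the raw spec-sheet IOPS number is a poor baseline for Ceph.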
Hello,
I just saw the release announcement of Infernalis. I will test it in
the meantime.
Rémi
On 07/11/2015 09:24, Rémi BUISSON wrote:
Hi guys,
I would need your help to figure out performance issues on my ceph
cluster.
I've read pretty much every thread on the net concerning this topic
but
Hi Remi,
setting inode64 in osd_mount_options_xfs might help a little.
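For reference, that setting goes in ceph.conf; a sketch, with the extra mount flags illustrative (and note that, as pointed out elsewhere in the thread, inode64 is already the XFS default on current kernels):

```ini
[osd]
# inode64 lets XFS place inodes anywhere on a large filesystem
# instead of restricting them to the first 1 TB.
osd_mount_options_xfs = rw,noatime,inode64
```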
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
Hi guys,
I would need your help to figure out performance issues on my ceph cluster.
I've read pretty much every thread on the net concerning this topic,
but I didn't manage to get acceptable performance.
In my company, we are planning to replace our existing virtualization
infrastructure NAS b