A silly question for the list, though -

Akin to the problems that Linus Tech Tips experienced with ZFS and a multi-disk
NVMe SSD array: is GlusterFS written with how NVMe SSDs operate in mind?

(i.e., might the code itself wait for synchronous commands to finish before
executing the next command?)

cf. "Fixing Slow NVMe Raid Performance on Epyc":
https://forum.level1techs.com/t/fixing-slow-nvme-raid-performance-on-epyc/151909

Linus had this weird problem where, when we built his array, the NVMe
performance wasn't that great. It was very slow -- trash, basically. This was a
24-drive NVMe array. These error messages aren't too serious, normally, but are
a sign of a missed interrupt. There is some traffic I'm aware of on the LKML
that there are (maybe) some latent bugs around the NVMe driver, so as a
fallback it'll poll the device if something takes unusually long. This many
polling events, though, means the perf is ...

I'm not a programmer or a developer, so I don't really understand the software
internals, but I am wondering whether GlusterFS might have a similar issue with
NVMe storage devices as ZFS did: the underlying code/system having been written
with mechanically rotating disks in mind, or at best SATA 3.0 6 Gbps SSDs, as
opposed to NVMe SSDs.

Could this be a possible reason/cause, by analogy?
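For what it's worth, the distinction being asked about (waiting for each synchronous command to finish versus keeping several commands in flight) can be sketched in a few lines. This is purely illustrative Python, not GlusterFS code; the function names and the 4 KiB block size are my own assumptions, chosen to mirror a 4K-write workload:

```python
import os
import tempfile
import concurrent.futures

BLOCK = b"\0" * 4096  # one 4 KiB block, like a 4K random-write benchmark


def write_sync(path, n):
    """Latency-bound: wait for each write (and its fsync) before the next."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        for i in range(n):
            os.pwrite(fd, BLOCK, i * len(BLOCK))
            os.fsync(fd)  # block until this command is durable
    finally:
        os.close(fd)


def write_batched(path, n, workers=8):
    """Keeps several commands in flight, the way NVMe queues are meant to be used."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        with concurrent.futures.ThreadPoolExecutor(workers) as ex:
            futures = [ex.submit(os.pwrite, fd, BLOCK, i * len(BLOCK))
                       for i in range(n)]
            for f in futures:
                f.result()  # wait once, at the end, for the whole batch
        os.fsync(fd)
    finally:
        os.close(fd)


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        write_sync(os.path.join(d, "a"), 64)
        write_batched(os.path.join(d, "b"), 64)
```

Both variants produce the same file; the difference is only how long the device sits idle between commands, which is exactly where a filesystem designed around spinning disks can leave NVMe performance on the table.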



________________________________
From: [email protected] <[email protected]> on 
behalf of Dmitry Antipov <[email protected]>
Sent: November 26, 2020 8:36 AM
To: [email protected] List <[email protected]>
Subject: Re: [Gluster-users] Poor performance on a server-class system vs. 
desktop

In case it is of interest: these slides report that ~80K IOPS (4K random
writes) is realistic:

https://archive.fosdem.org/2018/schedule/event/optimizing_sds/attachments/slides/2300/export/events/attachments/optimizing_sds/slides/2300/GlusterOnNVMe_FOSDEM2018.pdf

On server hardware of the same class, following their tuning recommendations,
etc., I still run 8 times slower. So it seems that RH insiders are the only
people who know how to set up a real GlusterFS installation properly :(.
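Numbers like these usually come from fio-style 4K random-write runs. For anyone trying to reproduce that kind of comparison, a typical fio job for 4K random writes at a realistic NVMe queue depth might look like this; the mount point, size, and runtime below are illustrative assumptions, not values taken from the slides:

```ini
; Illustrative fio job: 4K random writes with many commands in flight.
; directory, size, and runtime are examples -- adjust for your setup.
[global]
; asynchronous submission, so iodepth actually matters
ioengine=libaio
; bypass the page cache
direct=1
bs=4k
; keep many commands outstanding, as NVMe queues expect
iodepth=32
numjobs=4
runtime=60
time_based=1
group_reporting=1

[gluster-randwrite]
rw=randwrite
; example Gluster mount point (assumption)
directory=/mnt/glusterfs
size=1g
```

Running the same job directly against the brick device and against the Gluster mount shows how much of the gap is the filesystem layer rather than the drives.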

Dmitry
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users