On March 6, 2020 6:02:03 PM GMT+02:00, Jayme <[email protected]> wrote:
>I have 3 server HCI with Gluster replica 3 storage (10GBe and SSD
>disks).
>Small file performance inner-vm is pretty terrible compared to a
>similar
>spec'ed VM using NFS mount (10GBe network, SSD disk)
>
>VM with gluster storage:
>
># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
>1000+0 records in
>1000+0 records out
>512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s
>
>VM with NFS:
>
># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
>1000+0 records in
>1000+0 records out
>512000 bytes (512 kB) copied, 2.20059 s, 233 kB/s
>
>This is a very big difference: 2 seconds to write 1000 synchronous
>512-byte blocks on the NFS VM vs 53 seconds on the gluster-backed one.
>
>Aside from enabling libgfapi is there anything I can tune on the
>gluster or
>VM side to improve small file performance? I have seen some guides by
>Redhat in regards to small file performance but I'm not sure what/if
>any of
>it applies to oVirt's implementation of gluster in HCI.

You can use the rhgs-random-io tuned profile from
ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.4.2.0-1.el7rhgs.src.rpm
and try with that on your hosts.
In my case, I have modified it so it's a mixture between rhgs-random-io and
the profile for Virtualization Host.
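A rough sketch of extracting and activating the profile; the file layout inside the SRPM and the profile directory name are assumptions, so adjust to what the package actually ships:

```shell
# Unpack the SRPM (paths below are assumptions -- check the extracted files).
rpm2cpio redhat-storage-server-3.4.2.0-1.el7rhgs.src.rpm | cpio -idmv
# ...unpack the tarball that contains the tuned profiles, then copy the
# profile directory into place:
cp -r rhgs-random-io /etc/tuned/
# Activate it on each host and verify:
tuned-adm profile rhgs-random-io
tuned-adm active
```

To mix it with the virtualization profile, you can copy its tuned.conf and add an `include=virtual-host` line, then tweak the I/O settings on top.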

Also, ensure that your bricks are using XFS with the relatime/noatime mount
option and that the I/O scheduler for the SSDs is either 'noop' or 'none'. The
default I/O scheduler on RHEL 7 is deadline, which gives preference to reads,
while your workload is definitely write-heavy.
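To check and change both, something like the following (the device name 'sdb' and the brick path are placeholders for your actual brick disk):

```shell
# Current scheduler is shown in brackets:
cat /sys/block/sdb/queue/scheduler
# Switch it (use 'none' instead of 'noop' on blk-mq kernels):
echo noop > /sys/block/sdb/queue/scheduler

# Verify the brick mount options (noatime or relatime should appear):
grep gluster_bricks /proc/mounts
# Example fstab entry for a brick (UUID is a placeholder):
# UUID=xxxx  /gluster_bricks/data  xfs  inode64,noatime  0 0
```

The sysfs change does not survive a reboot; persist it via the tuned profile or a udev rule.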

Ensure that the virt settings are enabled for your gluster volumes:
'gluster volume set <volname> group virt'
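For example, with a hypothetical volume named 'data' (the exact option list the virt group applies depends on your gluster version):

```shell
# Apply the virt option group:
gluster volume set data group virt
# Spot-check a few of the options it is expected to set:
gluster volume info data | grep -E 'quick-read|read-ahead|remote-dio|shard'
```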

Also, are you running on fully allocated disks for the VM, or did you start
thin? I'm asking because creation of new shards at the gluster level is a slow
task, so a thin-provisioned disk pays that cost during the benchmark.
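If you want to rule that out, one option is to create the test disk preallocated so the shards exist before you benchmark (path and size are placeholders):

```shell
# Preallocate a raw image so gluster shard creation happens up front,
# not during the write test:
qemu-img create -f raw -o preallocation=falloc /path/to/disk.img 50G
```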

Have you tried profiling the volume with gluster? It can clarify what is going
on.
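Roughly like this, again with 'data' standing in for your volume name:

```shell
# Start collecting stats, rerun the dd test inside the VM, then inspect:
gluster volume profile data start
# ... run the workload ...
gluster volume profile data info    # per-brick latency and FOP counts
gluster volume profile data stop
```

High FSYNC or WRITE latencies on one brick would point at a slow disk or network path rather than a gluster tuning issue.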


Also, are you comparing apples to apples?
For example, a single SSD mounted and exported as NFS versus a replica 3 volume
on the same type of SSD? If not, the NFS server can deliver more IOPS due to
multiple disks behind it, while Gluster has to write the same thing
synchronously on all three nodes.
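One way to separate raw disk latency from the replication overhead is to rerun the same dsync test directly on a brick's filesystem (path is a placeholder; use a scratch directory, not live brick data):

```shell
# Same test as in the VM, but straight on the brick -- the difference
# between this and the in-VM number is the gluster/replication cost:
dd if=/dev/zero of=/gluster_bricks/data/scratch/test.img \
   bs=512 count=1000 oflag=dsync
```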

Best Regards,
Strahil Nikolov
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/RKTJTCP7CMO2HEC2AWID7OXM4F3IIKU2/