The virt settings group (highly recommended for virtualization usage) enables SHARDING.

ONCE ENABLED, NEVER EVER DISABLE SHARDING!!!
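
If you want to confirm sharding is on for your volume (a quick check; VOLNAME is just a placeholder for your actual volume name):

    gluster volume get VOLNAME features.shard

It should report "on" once the virt group has been applied.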


Best Regards,
Strahil Nikolov

At 16:34 -0800 on 25.11.2020 (Wed), WK wrote:
> 
>     No, that doesn't look right.
> 
>     I have a testbed cluster that has a single 1G network (1500 MTU).
> 
>     It is replica 2 + arbiter on top of 7200 rpm spinning drives
>       formatted with XFS.
> 
>     This cluster runs Gluster 6.10 on Ubuntu 18 on some Dell i5-2xxx
>       boxes that were lying around.
> 
>     It uses the stock 'virt' group tuning, which provides the following:
> 
>     root@onetest2:~/datastores/101# cat /var/lib/glusterd/groups/virt
>       performance.quick-read=off
>       performance.read-ahead=off
>       performance.io-cache=off
>       performance.low-prio-threads=32
>       network.remote-dio=enable
>       cluster.eager-lock=enable
>       cluster.quorum-type=auto
>       cluster.server-quorum-type=server
>       cluster.data-self-heal-algorithm=full
>       cluster.locking-scheme=granular
>       cluster.shd-max-threads=8
>       cluster.shd-wait-qlength=10000
>       features.shard=on
>       user.cifs=off
>       cluster.choose-local=off
>       client.event-threads=4
>       server.event-threads=4
>       performance.client-io-threads=on
> 
>     I show the following results on your test. Note: the cluster is
>       actually doing some work, with 3 VMs running that do monitoring
>       things.
> 
>     The bare metal performance is as follows:
> 
>     root@onetest2:/# dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
>       1+0 records in
>       1+0 records out
>       1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.0783 s, 96.9 MB/s
>       root@onetest2:/# dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
>       1+0 records in
>       1+0 records out
>       1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.5047 s, 93.3 MB/s
> 
>     Moving over to the Gluster mount I show the following:
> 
>     root@onetest2:~/datastores/101# dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
>       1+0 records in
>       1+0 records out
>       1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.4582 s, 93.7 MB/s
>       root@onetest2:~/datastores/101# dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
>       1+0 records in
>       1+0 records out
>       1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.2034 s, 88.0 MB/s
> 
>     So there is a small performance hit with Gluster, but it is almost
>       insignificant given that other things were going on.
> 
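>     (As an aside, with bs=1G count=1 and oflag=dsync, dd issues a single
>       synchronized 1 GiB write, so this mostly measures sequential
>       throughput. If you want to stress per-write sync latency as well,
>       a smaller-block variant of the same test, for example:
> 
>     dd if=/dev/zero of=/test12.img bs=1M count=1024 oflag=dsync
> 
>       would do 1024 synchronized 1 MiB writes instead, and is usually
>       harsher on replicated volumes.)
> 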
>     I don't know if you are in a VM environment, but if so you could
>       try the virt tuning:
> 
>     gluster volume set VOLUME group virt
> 
>     Unfortunately, I know little about ZFS, so I can't comment on its
>       performance, but your Gluster results should be closer to the
>       bare metal performance.
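> 
>     (If you do apply it, you can check that the options took effect
>       with, for example:
> 
>     gluster volume info VOLUME
> 
>       which lists the group's settings, features.shard among them,
>       under "Options Reconfigured". VOLUME is again a placeholder for
>       your actual volume name.)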
> 
>     Also note I am using an Arbiter, so that is less work than
>       Replica 3. With a true Replica 3, I would expect the Gluster
>       results to be lower, maybe in the 60-70 MB/s range.
>     -wk
> 
>     On 11/25/2020 2:29 AM, Harry O wrote:
> 
> > Unfortunately I didn't get any improvement by upgrading the network.
> > 
> > Bare metal (zfs raid1 zvol):
> > dd if=/dev/zero of=/gluster_bricks/test1.img bs=1G count=1 oflag=dsync
> > 1+0 records in
> > 1+0 records out
> > 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.6471 s, 68.6 MB/s
> > 
> > Centos VM on gluster volume:
> > dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
> > 1+0 records in
> > 1+0 records out
> > 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 36.8618 s, 29.1 MB/s
> > 
> > Does this performance look normal?
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5PILBZ66Q67KBZMDUYM2RUVLFIA5HG4V/
