Hi Mathieu,

Using dd for disk performance tests is not recommended. Besides requiring the 
oflag and iflag options, dd performs a purely sequential write, which is not a 
realistic scenario.
To evaluate your servers, use fio or bonnie++ for performance tests.
These tools exercise a more realistic workload to validate your 
environment.
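For example, a minimal fio job file approximating a VM's random-write pattern 
could look like this (the directory, size and runtime are illustrative 
assumptions, adjust them for your environment):

```ini
; illustrative fio job: 4k random writes with direct I/O, as a VM might issue
[global]
ioengine=libaio        ; asynchronous I/O
direct=1               ; bypass the page cache
runtime=60
time_based=1

[vm-randwrite]
rw=randwrite
bs=4k
iodepth=16
size=4g
directory=/mnt/glustertest   ; illustrative path on the gluster mount
```

Run it with `fio vm-randwrite.fio`, once on the gluster mount and once on a 
brick, and compare the reported IOPS and bandwidth.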

Regards,

Marcos

From: Staniforth, Paul <[email protected]>
Sent: Wednesday, 8 September 2021 08:17
To: Mathieu Valois <[email protected]>; users <[email protected]>
Subject: [External] : [ovirt-users] Re: Poor gluster performances over 10Gbps 
network



Hi Mathieu,
                       with a Linux VM, using dd without oflag=sync means it 
is writing to disk buffers, hence the faster throughput at the beginning 
until the buffers are full.


Regards,
               Paul S.



________________________________
From: Mathieu Valois <[email protected]>
Sent: 08 September 2021 11:56
To: Staniforth, Paul <[email protected]>; users <[email protected]>
Subject: Re: [ovirt-users] Poor gluster performances over 10Gbps network


Caution External Mail: Do not click any links or open any attachments unless 
you trust the sender and know that the content is safe.

Hi Paul,

thank you for your answer.

Indeed I did a `dd` inside a VM to measure the Gluster disk performance. I've 
also tried `dd` on a hypervisor, writing into one of the replicated gluster 
bricks, which gives good performance (similar to the logical volume's).


On 08/09/2021 at 12:51, Staniforth, Paul wrote:

Hi Mathieu,
                    How are you measuring the Gluster disk performance?
Also, when using dd you should add

oflag=dsync

to avoid buffer caching.
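As a sketch of the difference (the file paths are illustrative; each command 
writes 64 MB and prints a throughput figure):

```shell
# Buffered write: mostly measures the page cache, not the disk
dd if=/dev/zero of=/var/tmp/dd_buffered.test bs=1M count=64

# Synchronous write: every block is committed to disk before the next one,
# so the reported rate reflects sustained storage throughput
dd if=/dev/zero of=/var/tmp/dd_dsync.test bs=1M count=64 oflag=dsync

rm -f /var/tmp/dd_buffered.test /var/tmp/dd_dsync.test
```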


Regards,

Paul S
________________________________
From: Mathieu Valois <[email protected]>
Sent: 08 September 2021 10:12
To: users <[email protected]>
Subject: [ovirt-users] Poor gluster performances over 10Gbps network



Sorry for the double post, but I don't know whether this mail was received.

Hello everyone,

I know this issue has already been discussed on this mailing list. However, 
none of the proposed solutions satisfies me.

Here is my situation: I've got 3 hyperconverged gluster ovirt nodes, each with 
6 network interfaces bonded in pairs (management, VMs and gluster). The 
gluster network is on a dedicated bond whose 2 interfaces are directly 
connected to the 2 other ovirt nodes. Gluster is apparently using it:

# gluster volume status vmstore
Status of volume: vmstore
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster-ov1:/gluster_bricks
/vmstore/vmstore                            49152     0          Y       3019
Brick gluster-ov2:/gluster_bricks
/vmstore/vmstore                            49152     0          Y       3009
Brick gluster-ov3:/gluster_bricks
/vmstore/vmstore

where 'gluster-ov{1,2,3}' are domain names referencing the nodes on the gluster 
network. This network has 10 Gbps capacity:

# iperf3 -c gluster-ov3
Connecting to host gluster-ov3, port 5201
[  5] local 10.20.0.50 port 46220 connected to 10.20.0.51 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.16 GBytes  9.92 Gbits/sec   17    900 KBytes
[  5]   1.00-2.00   sec  1.15 GBytes  9.90 Gbits/sec    0    900 KBytes
[  5]   2.00-3.00   sec  1.15 GBytes  9.90 Gbits/sec    4    996 KBytes
[  5]   3.00-4.00   sec  1.15 GBytes  9.90 Gbits/sec    1    996 KBytes
[  5]   4.00-5.00   sec  1.15 GBytes  9.89 Gbits/sec    0    996 KBytes
[  5]   5.00-6.00   sec  1.15 GBytes  9.90 Gbits/sec    0    996 KBytes
[  5]   6.00-7.00   sec  1.15 GBytes  9.90 Gbits/sec    0    996 KBytes
[  5]   7.00-8.00   sec  1.15 GBytes  9.91 Gbits/sec    0    996 KBytes
[  5]   8.00-9.00   sec  1.15 GBytes  9.90 Gbits/sec    0    996 KBytes
[  5]   9.00-10.00  sec  1.15 GBytes  9.90 Gbits/sec    0    996 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.5 GBytes  9.90 Gbits/sec   22             sender
[  5]   0.00-10.04  sec  11.5 GBytes  9.86 Gbits/sec                  receiver

iperf Done.

However, VMs stored on the vmstore gluster volume have poor write performance, 
oscillating between 100 KBps and 30 MBps. I almost always observe a write spike 
(180 Mbps) at the beginning, until around 500 MB has been written; then it 
drastically falls to 10 MBps, sometimes even less (100 KBps). Hypervisors have 
32 threads (2 sockets, 8 cores per socket, 2 threads per core).

Here are the volume settings:

Volume Name: vmstore
Type: Replicate
Volume ID: XXX
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster-ov1:/gluster_bricks/vmstore/vmstore
Brick2: gluster-ov2:/gluster_bricks/vmstore/vmstore
Brick3: gluster-ov3:/gluster_bricks/vmstore/vmstore
Options Reconfigured:
performance.io-thread-count: 32 # was 16 by default.
cluster.granular-entry-heal: enable
storage.owner-gid: 36
storage.owner-uid: 36
cluster.lookup-optimize: off
server.keepalive-count: 5
server.keepalive-interval: 2
server.keepalive-time: 10
server.tcp-user-timeout: 20
network.ping-timeout: 30
server.event-threads: 4
client.event-threads: 8 # was 4 by default
cluster.choose-local: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on

When I naively write directly to the logical volume, which sits on a hardware 
RAID5 array of 3 disks, performance is good:

# dd if=/dev/zero of=a bs=4M count=2048
2048+0 records in
2048+0 records out
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 17.2485 s, 498 MB/s  # /dev/urandom 
gives around 200 MBps
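Since the Gluster volume runs with performance.strict-o-direct on, a closer 
comparison is the same write with the page cache bypassed; a sketch, reusing 
the same output file name:

```shell
# Direct I/O write: O_DIRECT bypasses the page cache, so the RAID array
# itself is measured instead of memory (bs must be block-aligned; 4M is fine)
dd if=/dev/zero of=a bs=4M count=2048 oflag=direct
```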

Moreover, the hypervisors have SSDs configured as an LVM cache (lvmcache), but 
I'm unsure how to test it effectively.

I can't find where the problem is, as every piece of the chain is apparently 
performing well...

Thanks to anyone who can help me :)
--
Mathieu Valois

Bureau Caen: Quartier Kœnig - 153, rue Géraldine MOCK - 14760 
Bretteville-sur-Odon
Bureau Vitré: Zone de la baratière - 12, route de Domalain - 35500 Vitré
02 72 34 13 20 | www.teicee.com

To view the terms under which this email is distributed, please go to:-
https://leedsbeckett.ac.uk/disclaimer/email
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/5JR77SQZ43VPLSXA3XOPHDKZ4EFMRXTL/
