Hi,
Sorry, I need to send this mail again. It is the same question, with the URLs of the images appended this time.

I have tested the performance of my storage system using three different methods, all with a 4K block size and 100% sequential read operations:

1. NFS over a 10G network, on VMware ESXi.
   Image: https://s2.loli.net/2024/11/18/WFsek69I3ZBblvQ.png
2. NFS over a 10G network, on CloudStack, mounted with 'vers=4.1,nconnect=16'.
   Image: https://s2.loli.net/2024/11/18/a2WY3gpGVA6XeCO.png
3. FC at 16G, on CloudStack, treating the LUN accessed via FC as a local disk.
   Image: https://s2.loli.net/2024/11/18/WFsek69I3ZBblvQ.png

The results show that, with the same storage system, the average I/O response time is best (0.32) with NFS on VMware ESXi, second best (1.25) with FC-SAN on CloudStack, and worst with NFS on CloudStack. Is NFS on ESXi really faster than FC-SAN?

I believe there may be some configuration changes that could improve storage performance on CloudStack. I would greatly appreciate it if anyone could offer advice or solutions to help me optimize CloudStack storage performance.

Thank you very much for your attention.

Best regards,
Leo
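For anyone who wants to reproduce the workload described above, a benchmark invocation matching it might look like the sketch below. The email does not say which tool was used, so fio, the queue depth, and the mount point `/mnt/primary` are only my assumptions; substitute your own tool and paths.

```shell
# Sketch of a benchmark command matching the test above:
# 4K block size, 100% sequential read.
# Assumptions (not stated in the original mail):
#   - fio as the benchmark tool
#   - /mnt/primary as the CloudStack primary-storage mount point
#   - iodepth=16 as the queue depth
FIO_CMD="fio --name=seqread-4k --rw=read --bs=4k --direct=1 \
--ioengine=libaio --iodepth=16 --size=1G --runtime=60 --time_based \
--filename=/mnt/primary/fio.test"
echo "$FIO_CMD"
```

Running the same command against each of the three setups would make the response-time numbers directly comparable.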