Hello Andras,

I would use an external tool such as diskspd (or any similar benchmark) to
validate the DFS directory read speed, e.g.
<https://www.windowscentral.com/how-test-hard-drive-performance-diskspd-windows-10?amp>.
IMHO, clustered filesystems will rarely match the speed of regular ones;
Hadoop even has a dedicated command (distcp) to copy data from several nodes.

The best way to find a performance bottleneck is the scientific one: test
each of the components involved, including client disk reads, network
capacity, and the actual link negotiation (even a faulty cable can matter).
VSS snapshots can also have an impact, as mentioned before.
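
If you want a quick sanity check of the client's read speed before reaching
for diskspd, a few lines of Python can walk the DFS path and report a rough
MB/s figure. This is only a sketch: the UNC path below is a placeholder for
whatever share you want to test, and OS caching will inflate the numbers on
a second run.

import os
import time

# Placeholder path -- point this at the DFS share (or a local disk for
# comparison).
ROOT = r"\\fileserver\dfs\share"

CHUNK = 1024 * 1024  # read in 1 MiB chunks

total_bytes = 0
start = time.monotonic()

for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            with open(path, "rb") as fh:
                while True:
                    chunk = fh.read(CHUNK)
                    if not chunk:
                        break
                    total_bytes += len(chunk)
        except OSError:
            # Skip files we cannot open (locks, permissions, etc.)
            continue

elapsed = max(time.monotonic() - start, 1e-6)
print(f"Read {total_bytes / 1e6:.1f} MB in {elapsed:.1f} s "
      f"({total_bytes / 1e6 / elapsed:.1f} MB/s)")

Run it once against the DFS share and once against a local disk; a large gap
between the two points at the share or the network rather than the client.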

Regards,
--
MSc Heitor Faria (Miami/USA)
CEO Bacula LatAm
mobile1: + 1 909 655-8971
mobile2: + 55 61 98268-4220

América Latina
http://bacula.lat/