NightBird wrote:
Thanks Ian.
I read the best practices and understand the I/O limitation I have
created for this vdev. My system is built to maximize capacity using
large stripes, not for performance.
All the tools I have used show no I/O problems. I think the problem is
memory, but I am unsure how to troubleshoot it.

Look for latency, not bandwidth.  iostat will show latency at the
device level.
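
A quick check, assuming a Solaris-style iostat (the interval and count
here are arbitrary):

  # extended stats with logical device names, five 1-second samples
  iostat -xn 1 5

Watch the wsvc_t (wait) and asvc_t (active service time) columns;
%b alone can be misleading.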

Other things that affect ls -la are name services and locale.  Name
services matter because the user IDs are numbers that get converted to
user names via the name service (the results are cached by the name
service cache daemon, so you can look at the nscd hit rates with
"nscd -g").  The locale matters because the output is sorted, and
sorting is slower in locales that use Unicode collation.  It follows
that the more entries a directory has, and the longer and more similar
their name prefixes, the longer the sort takes.  I expect
case-insensitive sorts (common in CIFS environments) take longer
still.  You could sort by a number instead; try "ls -ct" (ctime) or
"ls -S" (size).
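
A quick way to separate the two effects, assuming nscd is running (the
directory path is only an example):

  # name service cache hit rates; check the passwd and group caches
  nscd -g

  # time the listing under the current locale, then with collation
  # disabled; a large gap points at locale-sensitive sorting
  time ls -la /tank/bigdir > /dev/null
  time LC_ALL=C ls -la /tank/bigdir > /dev/null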

ls looks at metadata, which is compressed and typically takes little
space.  Metadata is also cached; you can see how well that cache is
doing by looking at the total name lookups in "vmstat -s".
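
For example (the exact wording of the line varies a little between
releases):

  # the name-lookup line reports the DNLC cache hit percentage
  vmstat -s | grep 'name lookups'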

As others have pointed out, I think you will find that a 23-wide raidz,
raidz2, raid-5, or raid-6 configuration is not a recipe for performance.
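
If you can rebuild the pool, here is a sketch of a narrower layout
(the device names are hypothetical; substitute your own):

  # three 8-disk raidz2 vdevs instead of one 23-wide vdev; each
  # top-level vdev adds roughly one disk's worth of random IOPS
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
      raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0
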
-- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
