A customer has a zpool where their spectral analysis applications create a ton of very small files (millions of them, typically 1858 bytes each). They're using ZFS because UFS consistently runs out of inodes. I'm assuming that ZFS aggregates these little files into recordsize (128K?) blobs for writes. Amazingly enough, this seems to go reasonably well. Reads are a disaster, as we might expect.

To complicate things, writes come in over NFS. Reads may be local or via NFS, and may be random. Once written, data is not changed until it is removed. No raidz is used; the storage device is a 3510 FC array with 5+1 RAID-5 in hardware.

I would like to triage this if possible. Would changing the recordsize to something much smaller, like 8K, and tuning the vdev cache down to something like 8K (sketched below) be of initial benefit? This is S10U4. Any other ideas gratefully accepted.
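Concretely, this is roughly what I had in mind, assuming I have the vdev cache tunable name right; tank/spectral is just a placeholder for the real filesystem, and the recordsize change would only apply to files written after it's set:

    # cap the block size for newly written files at 8K
    zfs set recordsize=8k tank/spectral

    # shrink vdev cache read inflation from the 64K default to 8K
    # (2^13 = 8K; goes in /etc/system and needs a reboot to take effect)
    set zfs:zfs_vdev_cache_bshift = 13

If that's the wrong knob, or there's something better to try first, please say so.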
bill