This problem only manifests itself when dealing with many small files over NFS. There is no throughput problem with the network.
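For anyone wanting to reproduce this kind of workload locally, the pattern is many small writes, each forced to stable storage, which is what NFS clients effectively demand with synchronous semantics. Below is a minimal sketch of such a micro-benchmark; the function name `small_file_bench` and the file count/size defaults are my own choices for illustration, not the actual test I ran:

```python
import os
import tempfile
import time

def small_file_bench(path, count=1000, size=4096):
    """Create `count` files of `size` bytes each, calling fsync() on
    every one to force synchronous writes, roughly mimicking the
    stable-storage guarantees an NFS server must provide."""
    payload = b"x" * size
    start = time.time()
    for i in range(count):
        fname = os.path.join(path, "f%05d" % i)
        with open(fname, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # hits the ZIL (or slog device) on ZFS
    return time.time() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        elapsed = small_file_bench(d, count=200)
        print("200 files in %.2fs (%.1f files/s)" % (elapsed, 200 / elapsed))
```

Running this on the local pool versus over an NFS mount of the same pool should make the gap obvious: locally the fsync()s coalesce in the ZIL, while over NFS each small file adds round-trip and COMMIT overhead on top.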
I've run tests with the write cache disabled on all disks and the cache flush disabled, using two Intel SSDs as ZIL devices. This setup is faster than running the two Intel SSDs with write caches enabled on all disks and the cache flush enabled. My test used to take around 3.5 to 4 minutes; now it completes in around 2.5 minutes. I still think this is a bit slow, but I have quite a bit of testing left to perform. I'll keep the list updated with my findings.

I've already established, both via this list and through other research, that ZFS has performance issues over NFS when dealing with many small files. This may be an issue with NFS itself, where NVRAM-backed storage is needed for decent small-file performance. Typically such an NVRAM cache is supplied by a hardware RAID controller in a disk shelf.

I find it very hard to explain to a user why an "upgrade" is a step down in performance. For the users these Thors are going to serve, such a drastic performance hit is a deal breaker...

I've done my homework on this issue. I've ruled out the network as a cause, as well as the NFS clients. I've narrowed my particular performance problem down to the ZIL and how well ZFS plays with NFS.

-Greg

Jim Mauro wrote:
> Multiple Thors (more than 2?), with performance problems.
> Maybe it's the common denominator - the network.
>
> Can you run local ZFS IO loads and determine if performance
> is expected when NFS and the network are out of the picture?
>
> Thanks,
> /jim
>
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss