On Thu, Apr 23, 2009 at 04:29:39PM -0700, Dearl D. Neal wrote:
> I appreciate the response... it's the best I have received so far.
>
> From further testing today, I am thoroughly confused. This morning I
> was seeing par results with ufs and zfs for the fileserver workload.
> The db workloads were about half the performance of the ufs
> filesystems (6000 iops zfs vs. 12000 iops for ufs). Later in the day,
> the zfs iops dropped back down to 1500, which was what I was seeing
> in preliminary testing. I thought maybe it was an issue with the SAN,
> so I reran the test on the ufs filesystem... it was perfectly normal,
> which is why I am so confused!
>
> Our SAN is fronted with IBM SVC (SAN Volume Controller), which allows
> the presentation of luns from any source... currently tier 2 EMC
> storage. Our storage backends comprise IBM tier 1 storage (I know
> them as Sharks and DS8500??) and tier 2 EMC storage. I believe there
> is 64GB of mirrored cache on the SAN. I don't understand the SAN all
> that well, as it is managed by a different group. They are interested
> in this as much as I am, and they are always looking for ways to
> speed things up.
I'm glad that I was able to help. I don't think I can easily explain
the sudden change you saw; however, filebench does tend to re-use its
test files. If you were tuning different per-filesystem parameters
(like compression or recordsize) and forgot to remove the files between
configuration changes, that might explain some of the variability.

> I am not certain that it has to do with the SMI versus EFI label...
> but when I switched it yesterday, my performance went up. I still
> have more investigation to perform. I thought about opening a ticket
> with Sun (aka Oracle), but I am not sure what to tell them... that is
> why I am still investigating. I would like to have something more
> substantial to back up my ticket.

I'm afraid I don't know much about this. I would trust Richard's
advice; he has a lot of ZFS expertise, especially when it comes to
building high-performance configurations.

> I am interested in the per-filesystem zfs configurations you speak
> of... I tried setting recordsize=8k, but saw no difference.

OK, you may have a different problem. If your SAN has an NVRAM-backed
cache and you're performing lots of synchronous operations, you may be
hitting a problem where ZFS requests a cache flush from the SAN and the
SAN is a little overzealous about the flushing. If you're allowed to
change the configuration on the device, you might be able to instruct
it to ignore certain cache flushes when it knows the data is safely
committed. You can also test this hypothesis by tuning a kernel
parameter (a rough sketch follows below), but I wouldn't run with that
in production unless you have to. The Evil Tuning Guide has a more
detailed explanation of the cache flushing and some possible solutions.

http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes

-j
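
A rough sketch of the settings discussed above, assuming a hypothetical
pool/dataset named tank/fbtest mounted at /tank/fbtest (adjust the
names to your layout); the cache-flush tunable is only for verifying
the hypothesis, not something to leave on in production:

    # Per-filesystem tuning for the 8k db workload.
    zfs set recordsize=8k tank/fbtest
    zfs set compression=off tank/fbtest
    zfs get recordsize,compression tank/fbtest

    # recordsize only applies to files written after the change, and
    # filebench re-uses its test files, so clear them out between
    # configuration changes.
    rm -rf /tank/fbtest/*

    # Test only: stop ZFS from issuing cache flushes on the live system...
    echo zfs_nocacheflush/W0t1 | mdb -kw

    # ...or persistently via /etc/system (takes effect after a reboot):
    #   set zfs:zfs_nocacheflush = 1

Disabling cache flushes is only safe when every device behind the pool
has a non-volatile (battery- or NVRAM-backed) cache, which is why the
Evil Tuning Guide treats it as a last resort.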