Steven:
> My objective is simple. Fabric attached high end storages like EMC have
> stripe width around 8 Mbyte. Can we not match the SCSI transfer size to
> be in or around the same size?
I'm not sure what you mean. The most common EMC configuration is RAID-1 pairs, so there is no stripe width to speak of. You may well stripe these together with host-based RAID-0, but then to get an 8 MB stripe width you would need 8 LUNs with a 1 MB stripe unit, or 4 LUNs with a 2 MB stripe unit, or some such combination that yields an 8 MB stripe width, none of which are common. You may find this in HPC applications, but it is far from mainstream.

I have also observed that larger I/O is better for high-bandwidth sequential reads and writes; it is clearly more efficient. But depending on the configuration, most resources can achieve their maximum capability with 1 MB transfers if you have multi-threaded I/O. Also, while specialized HPC cases are an exception, most mainstream applications see a combination of I/O sizes and access types, so it does not make sense to optimize solely for large sequential I/O.

The other thing to watch out for when setting a large default maxphys is that UFS will default to a 1 MB MAXCONTIG, and I have seen cases where 32 KB random reads were causing 1 MB pre-fetches, which is not what you want!

In all cases, empirical data is the best way to know; you have to try it to be sure, as the dd example in this thread shows. It would be interesting to see that same device under saturation with multiple threads of 1 MB transfers compared to 8 MB. I expect you can achieve full saturation and full bandwidth capability using multiple threads of 1 MB... but I could be wrong. If you get a chance, please try it and let us know.
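Something along these lines would drive that comparison. This is just an untested sketch, not a tuned benchmark: the device path is a placeholder, the per-thread read count is arbitrary, and the usual alignment caveats for raw devices apply.

/*
 * Untested sketch of the suggested test: read a device with N threads
 * at a given transfer size and report aggregate bandwidth.  Run it once
 * with a 1 MB transfer size and once with 8 MB and compare.  The device
 * path below is a placeholder; use your own, and note that raw device
 * reads generally want aligned buffers and offsets.
 *
 *   ./iobench /dev/rdsk/cXtYdZs2 8 1048576
 *   ./iobench /dev/rdsk/cXtYdZs2 8 8388608
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/time.h>

#define	READS_PER_THREAD 128	/* arbitrary; enough to reach steady state */

static const char *dev;		/* device or file to read */
static size_t xfer;		/* transfer size in bytes */

static void *
reader(void *arg)
{
	long id = (long)arg;
	void *buf;
	int fd, i;

	/*
	 * Each thread opens its own descriptor and reads its own
	 * disjoint region, so threads do not share a file offset.
	 */
	if ((fd = open(dev, O_RDONLY)) < 0) {
		perror("open");
		return (NULL);
	}
	/* Raw devices generally require an aligned buffer. */
	if (posix_memalign(&buf, 8192, xfer) != 0) {
		close(fd);
		return (NULL);
	}
	for (i = 0; i < READS_PER_THREAD; i++) {
		off_t off = ((off_t)id * READS_PER_THREAD + i) * xfer;
		if (pread(fd, buf, xfer, off) != (ssize_t)xfer)
			break;
	}
	free(buf);
	close(fd);
	return (NULL);
}

int
main(int argc, char **argv)
{
	pthread_t *tids;
	struct timeval t0, t1;
	double secs, mb;
	int nthreads, i;

	if (argc != 4) {
		fprintf(stderr, "usage: %s device nthreads xfersize\n",
		    argv[0]);
		return (1);
	}
	dev = argv[1];
	nthreads = atoi(argv[2]);
	xfer = (size_t)atol(argv[3]);
	tids = malloc(nthreads * sizeof (pthread_t));

	gettimeofday(&t0, NULL);
	for (i = 0; i < nthreads; i++)
		pthread_create(&tids[i], NULL, reader, (void *)(long)i);
	for (i = 0; i < nthreads; i++)
		pthread_join(tids[i], NULL);
	gettimeofday(&t1, NULL);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	mb = (double)nthreads * READS_PER_THREAD * xfer / (1024.0 * 1024.0);
	printf("%d threads x %zu-byte reads: %.1f MB in %.2f s = %.1f MB/s\n",
	    nthreads, xfer, mb, secs, mb / secs);
	free(tids);
	return (0);
}

Compile with something like "cc -o iobench iobench.c -lpthread", then compare a run at 1048576 bytes against one at 8388608 bytes with the same thread count.

Regards,
Dave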