... Writing to a local disk (/opt) and not our SAN disk, we were still seeing 1.5MB/s, so it didn't appear to be a SAN/FC/HBA related issue either. Finally, FTP from client to TSM server was consistently rating at the full 11MB/s over the LAN, which suggested that it was *something* to do with the way that TSM was interacting with the disk layer, rather than general slow disk performance. ...
It may be of value to evaluate native disk speed within a computer system. A simple method is to time how long it takes to write a file of a given size, for example with the command: time dd if=/dev/zero bs=64k count=1000 > /Some/DiskFile where the count value may be increased as needed.
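As an illustrative sketch of that timing approach (the target path /tmp/ddtest and the ~64 MB size are just example values - point it at whichever filesystem you want to measure):

   # Write 1000 blocks of 64 KB (about 64 MB) and time it
   time dd if=/dev/zero of=/tmp/ddtest bs=64k count=1000
   # Approximate throughput = bytes written / elapsed (real) seconds
   rm /tmp/ddtest

Running it a few times against each filesystem of interest (local disk vs. SAN LUN) gives directly comparable numbers.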
There are many affectors of disk performance. One insidious one is Queue Depth, which limits the number of I/O requests that can be outstanding from a host adapter to a device. You may have run into vendors recommending the use of only disk storage which is documented as supported at their hardware and OS level. The compelling reason for this is that the OS drivers have been programmed to understand the device, and thus work with it optimally. Attaching a storage device which is not documented as supported causes the OS to plead ignorance of it and assign minimal default values...values which can drastically impair performance. Typically, if you query Queue Depth on such a device ('lsattr -El <DevName>' in AIX), you will see queue_depth = 1. Compare that with the value on a supported device attached to your system, and you will see quite a difference in values - as well as in performance. This is just one example of an affecting value: there are plenty more affectors in complex systems.
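As a concrete (hypothetical) illustration on AIX, with hdisk4 standing in for whatever device name your system assigned:

   # Show all attributes of the device, including queue_depth
   lsattr -El hdisk4
   # Or report just the attribute of interest
   lsattr -El hdisk4 -a queue_depth
   # Where the storage actually supports a deeper queue, the value
   # can be raised with something like:
   chdev -l hdisk4 -a queue_depth=16

The permissible values vary by device and driver, so check the storage vendor's documentation before changing anything.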
All this is to say that I would not necessarily attribute disk performance issues to an application until such baseline values have been verified.
Richard Sims http://people.bu.edu/rbs