We had 4 TSM servers sharing 4 raid groups in one Clariion. Although we had plenty of LUNs and many spindles, our problem was that we were flooding the 2 internal paths of the Clariion.
It really cut into our I/O rates.

Gerald Michalak
IBM

"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 07/31/2008 10:05:28 AM:

> I went through a number of different configurations on our Clariion. The
> best ended up being to take only 1 LUN from each raid group, from as many
> raid groups as possible. My main TSM server has 19 x 200 GB raw volumes
> for its diskpool. Each volume is from a different raid group.
>
> Just try not to use more than one LUN from the same raid group. That's
> when I ran into performance issues.
>
> Regards,
> Shawn
> ________________________________________________
> Shawn Drew
>
> [EMAIL PROTECTED] (sent by ADSM-L@VM.MARIST.EDU) wrote on 07/31/2008
> 10:15 AM, subject "[ADSM-L] New TSM Layout":
>
> We are in the process of designing our new TSM server. As part of this we
> are also going to give it new SAN drive space.
>
> Currently we have 661 GB in our disk pool and we are upping that to 900 GB.
> Our question is: how should we partition that? Our current pool is in 7
> partitions, but I was thinking more like 3 or 4 partitions. Are there any
> pros or cons to going with fewer disk partitions?
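For anyone trying to picture Shawn's layout against the 900 GB pool being planned, here is a rough sketch in TSM administrative-command form. It is only an illustration under a couple of assumptions: the pool name, file paths, and sizes below are made up, and each volume is assumed to sit on a LUN carved from a different Clariion raid group.

    /* primary random-access disk pool; name and description are examples */
    define stgpool diskpool disk description="backup disk pool"

    /* one volume per LUN, each LUN from a different raid group */
    /* 4 x ~225 GB comes to roughly the 900 GB being planned    */
    define volume diskpool /tsm/dp/diskvol01.dsm formatsize=230400
    define volume diskpool /tsm/dp/diskvol02.dsm formatsize=230400
    define volume diskpool /tsm/dp/diskvol03.dsm formatsize=230400
    define volume diskpool /tsm/dp/diskvol04.dsm formatsize=230400

Whether that ends up as 4 volumes or 7 probably matters less than making sure each one lands on a LUN from a different raid group, so the diskpool I/O is not all funneling through the same spindles and internal paths.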