Follow-up: we upgraded a customer from 6.3.5 to 7.1.1.100 today (Windows TSM server). The disk pool where the TSM for VE 7.1.1 data lands will now indeed run multiple concurrent migration processes out to a sequential pool, even though all the data is owned by one node. (It's migrating at the filespace level now, so you still get only one stream if only one filespace is left to be migrated.)

Big help for our environment.

W
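In case it helps anyone checking the same thing after an upgrade, here is a rough sketch of the admin commands involved, assuming an example disk pool named VE_DISKPOOL (that name and the MIGPROCESS value are just placeholders, not from the original posts):

    /* Allow up to 4 concurrent migration processes on the disk pool */
    /* (MIGPROCESS is a cap; actual streams depend on the data)      */
    update stgpool VE_DISKPOOL migprocess=4

    /* Confirm the setting and the next (sequential) storage pool */
    query stgpool VE_DISKPOOL format=detailed

    /* Drive migration down and watch how many streams start */
    migrate stgpool VE_DISKPOOL lowmig=0 wait=no
    query process

On 7.1, QUERY PROCESS should show more than one Migration process even when a single node owns all the data, as long as that node has more than one filespace left to migrate.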
-----Original Message-----
From: Prather, Wanda
Sent: Tuesday, December 16, 2014 3:33 PM
To: 'ADSM: Dist Stor Manager'
Subject: Filespace level migration? was: breaking up tsm for ve backups into different nodes

Didn't I read that there were supposed to be changes in the 7.1 server to make migration run at the filespace level? That would make this a multi-thread process - anybody with experience? I have a 6.3.5 customer with the same issue, but don't have a 7.1 customer in production to verify.

Wanda Prather
TSM Consultant
ICF International
Enterprise and Cybersecurity Systems Division

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, Gary
Sent: Tuesday, December 16, 2014 3:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] breaking up tsm for ve backups into different nodes

We are backing up our VMware environment (approximately 110 VMs) using TSM for VE 6.4.2 and TSM server 6.2.5 under Red Hat 6.1. The backup destination is disk. The trouble is that all data is owned by the datacenter node; consequently, migration has only one stream. I do not currently have enough tape drives to run directly to tape. Is there a solution I have missed?
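For reference, a quick way to see why migration serializes on the older server, assuming an example node name of DATACENTER_NODE (substitute the actual datacenter node):

    /* List the filespaces owned by the datacenter node */
    query filespace DATACENTER_NODE

    /* Show how much data each filespace has in the disk pool */
    query occupancy DATACENTER_NODE

    /* While migration runs, only one Migration process appears, */
    /* regardless of the pool's MIGPROCESS setting               */
    query process

Since all the TSM for VE data is owned by that single node and the pre-7.1 server parallelizes migration by node, raising MIGPROCESS on the disk pool does not add streams; the filespace-level migration in 7.1 described above is what changes that behavior.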