Hi again, you would have to create a collocated storage pool (on tape) and migrate the data from the disk pool to that new tape pool. Before you start the migration, set the next storage pool of your disk pool to point at the new tape pool, then kick off the migration by lowering the high/low migration thresholds on the disk pool.
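Roughly something like this (just a sketch; tapepool_col, diskpool and ltoclass are placeholder names, so substitute your own and check the exact syntax in the admin reference for your server level):

  define stgpool tapepool_col ltoclass maxscratch=50 collocate=yes
  update stgpool diskpool nextstgpool=tapepool_col
  update stgpool diskpool highmig=0 lowmig=0
  (wait for migration to finish, then set highmig/lowmig back to your normal values)

With collocate=yes on the new tape pool, migration writes each node's data to its own tape(s), provided enough scratch volumes are available.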
I'm sorry to have kept you waiting, but I've been rather busy before Christmas. Since it's a few days since you wrote this, I hope you've already figured it out.

Rgds,
Geirr G. Halvorsen

-----Original Message-----
From: Ron Lochhead [mailto:[EMAIL PROTECTED]]
Sent: 20. december 2002 19:16
To: [EMAIL PROTECTED]
Subject: synthetic fullbackup

Hi Halvorsen:

I am trying to implement this same move nodedata idea, but am running into a problem. My environment is a Win2k server running TSM server 5.1.5.2, and the same on the clients. My goal is to consolidate the tape pool data for each node so that each node has its own tape. We only have 25 nodes. I figured out how to move node data to our disk pool, but how do I put the node data from the disk pool back onto a single tape pool tape? The error I got said that I couldn't move node data from the disk pool back to the tape pool because of sequential access storage. Any ideas?

Thanks,
Ron Lochhead

Halvorsen Geirr Gulbrand <gehal@WMDATA.COM>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]RIST.EDU>
12/18/2002 05:44 AM
Please respond to "ADSM: Dist Stor Manager"

To: [EMAIL PROTECTED]
cc:
Subject: Re: synthetic fullbackup

Hi Werner,

we might need some clarification of your setup. What is your server version? Are you backing up to tape, or to disk? Generally I can say this: if you are running TSM v5.x, you can use MOVE NODEDATA, which moves the data for one node to another storage pool (from tape to disk), and then start your restore from the disk pool. It may sound strange, because you move the data twice, but often there is a delay between the time you decide to restore and the time you actually start the restore (for example in a disaster recovery situation, where you have to get new hardware and install the OS plus the TSM client software before you can start the restore). In this interval you can start moving the data from tape to disk, and the subsequent restore will be a lot faster.

The other possibility is to use collocation by filespace. Different filespaces from the same server will be collocated on different tapes, enabling you to start a restore for each filespace simultaneously. This helps reduce restore times.

The third option is to use backupsets, which can be created just for the active files. Then you will have all active files on one volume. (A rough command sketch for these three options follows below the quoted message.)

Others may also have an opinion on the best approach; I have just pointed out some of TSM's features.

Rgds.
Geirr Halvorsen

-----Original Message-----
From: Schwarz Werner [mailto:[EMAIL PROTECTED]]
Sent: 18. december 2002 14:08
To: [EMAIL PROTECTED]
Subject: synthetic fullbackup

We are looking for a solution to the following problem: during a restore of a whole TSM client we found that the needed ACTIVE backup versions were heavily scattered across our virtual tape volumes. This was the main reason for an unacceptably long restore time. Disk as a primary STGPOOL is too expensive. Now we are looking for a way to cluster together all active backup versions per node without backing up the whole TSM client every night (the way VERITAS NetBackup does). Ideally the full backup should be built inside the TSM server: start with an initial full backup, then combine that full backup with the incrementals from the next run to build the next synthetic full backup, and so on. We have already activated COLLOCATE.

Has anybody got good ideas?

thanks, werner
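For reference, a rough sketch of the three approaches mentioned in the reply above (nodename, tapepool, diskpool and ltoclass are placeholder names; check the admin reference for your server level for the exact syntax):

  * Stage one node's data to disk ahead of a restore:
      move nodedata nodename fromstgpool=tapepool tostgpool=diskpool

  * Switch the tape pool to collocation by filespace (this only affects data written from now on; existing volumes stay as they are until the data is moved or reclaimed):
      update stgpool tapepool collocate=filespace

  * Create a backupset containing only a node's active files on its own volume(s):
      generate backupset nodename activeset devclass=ltoclass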