Re: Re: TSM install on ESX Linux help?
A little late to the party, but this is what we've done here, with no ill effects (it's been working fine since March).

David Bronder wrote:
> An alternative in this particular case would be to install the RPM with
> the "--nodeps" option to ignore dependencies. AFAIK, the /usr/bin/ksh
> dependency is only there for the dsmj Java GUI client wrapper script,
> which isn't usable on ESX anyway since Java isn't installed (on either
> 2.x or 3.x of ESX).
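For anyone searching the archives later, a minimal sketch of what that install looks like (the package filename is illustrative; substitute whatever your client level actually ships):

    # see what the package claims to require first, then skip the ksh dependency check
    rpm -qpR TIVsm-BA.i386.rpm
    rpm -ivh --nodeps TIVsm-BA.i386.rpm

The usual caveat: --nodeps skips ALL dependency checks for that package, not just ksh, so eyeball the rpm -qpR output before trusting it.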
Re: Data Deduplication
On Thu, Aug 30, 2007 at 03:09:09AM -0400, Curtis Preston wrote:
> Unlike a de-dupe VTL that can be used with TSM, de-dupe backup software
> would replace TSM (or NBU, NW, etc) where it's used. De-dupe backup
> software takes TSM's progressive incremental much farther, only backing
> up new blocks/fragments/pieces of data that have never been seen by the
> backup server. This makes de-dupe backup software really great at
> backing up remote offices.

We had Avamar out a few years ago pitching their solution, and we liked everything about it except the price. (And now that they're a part of EMC, I don't expect that price to drop much... *smirk*)

But since we're talking about software, there's an aspect of de-dupe that I don't think has been explicitly mentioned yet. Avamar said their software got 10-20% reduction on a backup of a stock Windows XP installation. A single system, say the first one you added to your backup group. That's not two users with the same email attachments saved, or identical files across two systems - that's hashing files within the OS itself (I presume from repeated headers in DLLs and such). So if you back up two identical stock XP installs, you get 20% reduction on the first one and 100% on the second and beyond. Scale that up to hundreds of systems, and that's an incredible cost savings. Suddenly backing up entire systems doesn't seem so inefficient anymore.

Dave
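To put rough numbers on that (purely illustrative figures, not Avamar's):

    100 clients x 5 GB stock XP image  = 500 GB raw front-end data
    first client, 20% reduction        = ~4 GB actually stored
    clients 2-100, ~100% reduction     = ~0 GB new unique data each
    total stored (before metadata)     = ~4 GB instead of 500 GB

In practice every client still contributes some unique data (logs, registry, user files), so real ratios are lower, but the shape of the math is why whole-system backups stop looking wasteful.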
Re: Data Deduplication
Good point. They mainly get that 10-20% with compression. (They use compression after they've de-duped.) They're at different levels of granularity, so it still works.

---
W. Curtis Preston
Backup Blog @ www.backupcentral.com
VP Data Protection, GlassHouse Technologies

-----Original Message-----
From: Dave Mussulman [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 31, 2007 1:34 PM
To: Curtis Preston
Cc: ADSM-L@VM.MARIST.EDU
Subject: Re: Data Deduplication

> [full quote of Dave Mussulman's message trimmed; see above]
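A quick illustration of why the two stack (numbers invented for the example): if de-dupe removes redundant chunks at, say, 5:1, and the surviving unique chunks still compress at 2:1 with an ordinary LZ-style algorithm, the effective reduction is the product:

    5:1 de-dupe  x  2:1 compression  =  10:1 overall

They don't fight each other because de-dupe works on chunk identity across the whole store, while compression works on byte patterns inside each remaining chunk.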
Re: Data Deduplication
On Aug 31, 2007, at 4:33 PM, Dave Mussulman wrote:
> ... Avamar said their software got 10-20% reduction on a backup of a
> stock Windows XP installation. A single system, say it's the first one
> you added to your backup group. That's not two users with the same
> email attachments saved, or identical files across two systems - that's
> hashing files in the OS (I presume from headers in DLLs and such.) ...

I'm mildly amused that in all these postings on the subject, none has addressed the corollary of the backups: restoral. There are likely some implications in restoring files backed up this way, perhaps most particularly system files; and restoral performance is also something one would wonder about. And there may be situations where such a backup/restore regimen is to be avoided because of such issues. Perhaps those with experience in this area would post what they've found.

Richard Sims, at Boston University
create 2 sessions for backups
Hello,

I have increased 'Maximum Mount Points Allowed: 2' on the Mail node and added these two lines to the dsm.opt of the Domino TDP client (see below), then restarted the TDP scheduler. When the backup schedule kicks off, I am not seeing the extra session show up. What else should I check?

    Include *.* TDPClass
    Resourceutilization 5

Thanks,
Avy Wong
Business Continuity Administrator
Mohegan Sun
1 Mohegan Sun Blvd
Uncasville, CT 06382
(860)862-8164 (cell)
(860)961-6976
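One thing worth checking, offered as an assumption rather than a confirmed diagnosis: RESOURCEUTILIZATION is a backup-archive client option, and the B/A client only opens extra producer sessions when it has multiple streams to feed them; an API-based TDP agent may not honor it the same way, so it's worth confirming in the Data Protection for Domino manual whether parallel sessions are driven from the TDP side instead. A sketch of the client options file as described (node name is illustrative):

    * dsm.opt for the Domino TDP node
    NODENAME            MAIL_DOM
    Include *.* TDPClass
    Resourceutilization 5

Also confirm the server-side change took effect: QUERY NODE <nodename> F=D on the TSM server should show "Maximum Mount Points Allowed: 2".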
Re: Data Deduplication
I thought we DID address that in one of the posts. (Maybe I'm getting things confused with another thread I'm having on the same topic.)

A properly designed de-duplication backup system should restore the data at the same speed as, if not faster than, the backup, and the tests that I've done with a few of them have all worked this way. I believe it's something you should test, but it appears that the designers thought of this natural objection and designed around it. I believe it has to do with the fact that restoring 100 random pieces to create a single file means you get to read off of a bunch of spindles.

I will say that there are speed differences between the de-dupe appliances (VTLs) and de-dupe backup software. De-dupe backup software still restores fast enough for what it was designed for. (You should be able to fill a GbE pipe with such a restore.) But they're not going to restore at the 100s of MB/s that you can get out of one of the appliances.

---
W. Curtis Preston
Backup Blog @ www.backupcentral.com
VP Data Protection, GlassHouse Technologies

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of Richard Sims
Sent: Friday, August 31, 2007 3:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Data Deduplication

> [full quote of Richard Sims' message trimmed; see above]
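For scale, some back-of-the-envelope arithmetic on that GbE claim (my numbers, not Curtis's):

    1 Gb/s  /  8 bits per byte  =  ~125 MB/s theoretical ceiling
    minus protocol overhead     =  ~100-115 MB/s usable

So "fill a GbE pipe" puts software-based de-dupe restores at roughly 100 MB/s, which is why the appliances' "100s of MB/s" is a genuinely different class of restore speed.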
SQL TDP in MSCluster
It took much work, and we still have an open PMR on the subject, but we have our first MSCluster up and going. The one problem is that the TDP agent on the cluster has ceased working. Master, model, and msdb back up with no trouble, but the big production DB dies after about 12-13 minutes. Large, unclustered DBs back up on the machine regularly, and we backed up this one before defining the cluster.

The DBA is sure he has found the solution. When he starts the backup, this is the generated command:

    BACKUP failed to complete the command
    USE master
    BACKUP DATABASE [ProdDB]
      TO VIRTUAL_DEVICE=N'TDPSQL-17B8-'
      WITH BLOCKSIZE=512, MAXTRANSFERSIZE=1048576,
           NAME=N'full', DESCRIPTION=N'TDPSQL-17B8- (TDP MS SQL V2)'

He's convinced the problem is with the MAXTRANSFERSIZE, and that if I would change that, everything would work. But it's not a TSM parameter, and he says it doesn't appear in MSSQL either. So where does this come from? Is it really the culprit? Support hasn't been very supportive so far, preferring to concentrate on the cluster.
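On the "where does this come from" question, a hedged sketch of where I'd look (assumptions flagged; I don't have a V2 manual in front of me): MAXTRANSFERSIZE is a parameter of SQL Server's own BACKUP statement - the largest unit of transfer between SQL Server and the backup device - which is why the DBA can't find it as a standalone MSSQL setting. The TDP agent composes that BACKUP DATABASE ... TO VIRTUAL_DEVICE command itself, so if the value is tunable at all, it would be from the TDP side. My assumption is that it tracks one of the buffer-size parameters in tdpsql.cfg or on the tdpsqlc command line, e.g. (parameter names from memory - verify against your level's manual):

    tdpsqlc backup ProdDB full /buffersize=512 /sqlbuffersize=512

Whether either actually changes the generated MAXTRANSFERSIZE is exactly what I'd test, or ask support to confirm. That said, 1048576 is a stock value, so a 12-13 minute die-off on only the clustered DB smells more like a timeout or cluster networking issue than a transfer-size problem.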
Messages ANS1228E and ANS4005E combination during backup
Windows 2003, TSM client 5.4.1.0; TSM server 5.3.4.0 on Windows 2003. This one client continually gets combinations of:

    08/12/2007 03:00:38 ANS1228E Sending of object '\\vmtntansa\c$\WINDOWS\system32\CatRoot\{127D0A1D-4EF2-11D1-8608-00C04FC295EE}' failed
    08/12/2007 03:00:38 ANS4005E Error processing '\\vmtntansa\c$\WINDOWS\system32\CatRoot\{127D0A1D-4EF2-11D1-8608-00C04FC295EE}': file not found
    08/12/2007 03:00:38 ANS1228E Sending of object '\\vmtntansa\c$\WINDOWS\system32\CatRoot\{127D0A1D-4EF2-11D1-8608-00C04FC295EE}\TimeStamp' failed
    08/12/2007 03:00:38 ANS4005E Error processing '\\vmtntansa\c$\WINDOWS\system32\CatRoot\{127D0A1D-4EF2-11D1-8608-00C04FC295EE}\TimeStamp': file not found

Including:

    08/12/2007 03:00:38 ANS4005E Error processing 'SYSTEM STATE': file not found

Some files on the C: drive back up, but probably most of them fail with these two messages. This fails both from the TSM scheduler and with a domain admin logged on running the GUI. The backup completes with RC=0, but with skipped files. Any ideas on where to go next?

Bill Boyer
>Select * from USERS where CLUE>0
0 rows returned
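A guess at a next step, offered as an assumption rather than a known fix: ANS4005E "file not found" on send usually means the file existed when the client built its inventory but was gone by the time it tried to read it, and those CatRoot {GUID}\TimeStamp entries are exactly the kind of transient OS bookkeeping that comes and goes mid-backup. If that's what's happening, one workaround is to exclude them from the filesystem backup (they should still be captured on the SYSTEM STATE/VSS side - verify that before relying on it):

    * dsm.opt - illustrative exclude, test before deploying
    EXCLUDE.DIR "C:\WINDOWS\system32\CatRoot"

The ANS4005E against 'SYSTEM STATE' itself is a different animal, though - that one is worth a read of dsmerror.log and possibly a PMR, since filesystem excludes won't touch it.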