Hi Eric,
These should give you what you are looking for:
echo ""
echo "* Node Activities - How many volumes each client has data on *"
select count(DISTINCT volume_name) as Num_volumes, node_name, stgpool_name
from
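The select above is cut off at the FROM clause. One plausible shape for the full query (a sketch, not guaranteed to match the original: the VOLUMEUSAGE table name and its node_name/stgpool_name/volume_name columns are assumptions here) can be tried out with sqlite3 standing in for the TSM SQL engine:

```python
# sqlite3 stands in for TSM's SQL engine; the VOLUMEUSAGE-style table and
# its columns are assumed for illustration, not a guaranteed TSM schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE volumeusage (node_name TEXT, stgpool_name TEXT, volume_name TEXT)"
)
rows = [
    ("NODE_A", "TAPEPOOL", "A00001"),
    ("NODE_A", "TAPEPOOL", "A00002"),
    ("NODE_A", "TAPEPOOL", "A00002"),  # same volume listed twice: DISTINCT collapses it
    ("NODE_B", "TAPEPOOL", "A00003"),
]
con.executemany("INSERT INTO volumeusage VALUES (?, ?, ?)", rows)

result = con.execute(
    "SELECT count(DISTINCT volume_name) AS num_volumes, node_name, stgpool_name "
    "FROM volumeusage GROUP BY node_name, stgpool_name"
).fetchall()
for num, node, pool in result:
    print(node, pool, num)
```

The GROUP BY is what turns the count into a per-node, per-pool figure; without it the count would collapse to a single row.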
Robert,
You have many options; what is it that you are trying to accomplish? Move
the data from LTO5 tape to LTO7 tape? Utilize a single library for all
your tapes? (This option has limitations.) If you could be a bit more
specific about the goal, we could provide you with better advice.
Best R
Robert,
As you can see, there are more than a few ways to get the info; I will add
a couple more for your toolbox. You can create a macro or a script to do
the same thing. You would have to translate the VM verbiage into TSM and
use a select statement like Neil did. The translation would be as such
If you set up the web client to perform the restores, you can restore them
to someplace other than the original location, provided that your NDMP
backups are using what is called a 3-way setup of the NDMP environment.
This setup has the NDMP backups being sent to a TSM/Spectrum Protect
controlle
To piggyback on Michael's statement, it has been IBM's position for over a
decade to use Cristie as its bare metal restore solution and to put
IBM's programming efforts towards other areas of the Tivoli Storage
Manager (TSM), now IBM Spectrum Protect, product. Having worked with this product
for
Ricky,
Could you please send the output of the following commands:
1. Q MOUNT
2. q act begint=-12 se=Migration
Also, the only way that the stgpool would migrate back to itself would be
if there was a loop, meaning your disk pool points to the tape pool as the
next stgpool, and your tape pool
Fabio,
Your idea is not as crazy as you think. TSM and Spectrum Protect have an
option available that allows you to use disk as a reclamation area. This
is from the manual:
RECLAIMSTGpool
Specifies another primary storage pool as a target for reclaimed data from
this storage pool. This param
Robert,
This is a housekeeping-type task; think of it like running a disk defrag.
It takes place so that the admin task of reclamation is not needed. All of
that is performed by the code without configuration, and the same goes for
your deduplication. Containers are deduplicated
because the container (cloud or directory) manages deduplication.
As the data is ingested, Spectrum Protect determines whether the data is
to be deduplicated. Inside the storage pool, you will see two types of
containers: deduplicated and non-deduplicated. To
answer you
the source pool); also, PROTECT STGPOOL gives you
chunk-level recovery that hooks into REPAIR STGPOOL.
Best Regards.
Ron Delaware
From: Robert Ouzen
To: ADSM-L@VM.MARIST.EDU
Date: 03/20/16 22:52
Subject: [ADSM-L] Help on Directory Container
Sent by:"ADSM: Dist
EJ,
Just so we are on the same page about this, my understanding is that you
want only the nodes that have ZERO data (backup, archive, or copy). Under
normal circumstances, there would be two ways that would happen within
TSM, if you are running Spectrum Protect (SP) (version 7.1.3 and above)
t
John,
Do you have enough scratch volumes available in the dedup pool? Could you
please provide output from these commands: q stg deduppool f=d and q devc
f=d
Best Regards,
_
email: ron.delaw...@us.ibm.com
From: "Dury, John
Robert,
I am still pretty much old school when it comes to managing structured vs
unstructured data, in that you keep them separate. But with Spectrum
Protect (SP) 7.1.3 we need to start thinking differently. In my
experience with the product so far, it doesn't seem to make a difference
since
To All,
Yesterday I replied to a question concerning upgrading to the latest
version of Spectrum Protect v7.1.3 (formerly known as Tivoli Storage
Manager). In my attempt to be brief, my comment may have caused more
confusion.
The patch that was applied was an efix for APAR IT11581; it is not a
Stefan,
If you do not have the need to move non-container data into a container,
then I would recommend performing the upgrade to SP v7.1.3.2. We found a
small bug in 7.1.3.0 and have patched the code. It only affected the
movement of non-container stgpool data into a container stgpool.
Th
You treat the filepool as if it were tape. You don't want hundreds of
nodes' data on a tape cartridge because it causes contention, and it
causes massive amounts of tape mounts. You can get the same types of problems
with filepool volumes even though it is disk. When a filepool volume is
being re
EJ,
You haven't stated what you are attempting to accomplish.
"He now asks me 'What if I run daily incrementals instead of the
selectives?' I don't know if that will work, nor can I find the answer in
the manuals..."
If you run daily incrementals, you need to ensure that the data retention
is
Rick,
What type of storage system are you using? Does it have the necessary I/O
capability to allow the throughput you are going to require?
Best Regards,
_
email: ron.delaw...@us.ibm.com
From: "Rhodes, Richard L."
To: ADSM-L@
Robert,
In order to move your TSM database and logs, you will have to perform a
restore db and input the new directories/filespaces where you want the
database and logs to reside.
Moving both the database and recovery log
You can move the database, active log, and archive logs that are on the
sam
Grant,
To tackle your problem of multiple backups at the same time, you can carve
up your NAS server (as long as you are using volumes and directories and
NOT volumes and Trees) using virtualmountpoint
Using the virtualmountpoint option to identify a directory within a file
system provides a direc
Robert,
You cannot perform a normal incremental once you have performed a full
image, unless you have two separate nodes, one doing image backups and one
doing file-by-file backups. You can do incremental image backups but not
a file-by-file incremental.
Best Regards,
_
Paul,
Not sure what your retention looks like, but if you move the nodes to a
different domain with a different retention (smaller), then the data will
get bound to the new policy after a backup is performed and the new data
retention will kick in at that time. I believe that your understanding of
Gary,
Not sure what you are trying to do; this is the command I ran and the
results:
dsmadmc -id=ron -passw=ron -comma -datao=yes "SELECT rtrim(node_name),
rtrim(filespace_name), filespace_id, rtrim(filespace_type),
DATE(backup_end) as backup_date FROM filespaces WHERE
DAYS(current_date)-DAYS(backu
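The WHERE clause above is cut off, but it filters filespaces by backup age using DB2's DAYS() function. A minimal stand-in for that filter can be run with sqlite3, where julianday() plays the role of DAYS(); the table and data below are made up for illustration:

```python
# sqlite3 stand-in for the age filter: TSM's DB2 expression
# DAYS(current_date) - DAYS(backup_end) computes the filespace age in days.
# sqlite3 has no DAYS(), so julianday() plays its role here.
import sqlite3
from datetime import date, timedelta

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE filespaces (node_name TEXT, filespace_name TEXT, backup_end TEXT)"
)
today = date.today()
con.executemany(
    "INSERT INTO filespaces VALUES (?, ?, ?)",
    [
        ("NODE_A", "/home", (today - timedelta(days=2)).isoformat()),   # recent
        ("NODE_B", "/data", (today - timedelta(days=45)).isoformat()),  # stale
    ],
)

# Keep only filespaces whose last backup finished more than 30 days ago.
stale = con.execute(
    "SELECT node_name, filespace_name FROM filespaces "
    "WHERE julianday('now') - julianday(backup_end) > 30"
).fetchall()
print(stale)  # [('NODE_B', '/data')]
```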
Eric,
When a NAS/NDMP backup starts, there is a query from the datamover to the
TSM Server requesting space for the backup. It doesn't matter if you are
doing fulls or incrementals; the datamover uses the same storage
requirement for both.
example:
you have 100TB of NAS used space total. To do a
Just changing the share permissions would not cause the symptom that he is
experiencing. I agree with Steven; filesystem permissions must have been
changed, as those are propagated from the parent dir, so they may have
been changed and he wasn't aware.
Best Regards,
__
I ran into a similar problem, there were special characters hidden in the
dsm.sys file but we were not able to determine what they were. I would
recommend:
1. Rename the current dsm.sys to dsm.sys.old
2. Create a new dsm.sys (DO NOT cut and paste) then save it
3. Try the backup again.
In my case
Jeanne,
If you were to run those commands at the DB2 level, they would work fine,
or possibly as a shell script run from a TSM macro. There are limitations, as
you found out, when trying to run select statements from within TSM.
Best Regards,
___
Bruce,
You could do a group by node_name at the end of your select statement.
Best Regards,
_
email: ron.delaw...@us.ibm.com
Storage Services Offerings
From: "Kamp, Bruce (Ext)"
To: ADSM-L@VM.MARIST.EDU
Date: 03/09/15 12:46
Saravanan,
You could set up your tape library so that it is partitioned to use one
half for normal backup clients and one half for your storage agent(s), or
you could set up your TSM server as a Library Manager and the storage
agent(s) as Library Clients. Since you are using a VTL, the library
Man
Ricky,
Did you change the asnode= option for the proxy datamover? If you are
using the same asnode= option, there should be no problem, but if you have
changed it or omitted it, then the TSM Server assumes it's a new node and
wants a full backup performed.
Best Regards,
They hold tapes until the last data has expired. TSM format storage pools
can be reclaimed and, if they are file storage pools, can be deduped as
well.
Regards
Steve
Steven Harris
TSM Admin
Canberra Australia
On 16 July 2014 05:38, Ron Delaware wrote:
> Ricky,
>
> The configuration
Steven,
The logical volumes are not dedicated disks in most cases, which means
that other applications may be using the same disks at the same time. With
our new "TSM Server Blueprint" standards, TSM databases over 1 TB require
16 LUNs.
You can go to this link to find out more
https://www.ibm
Ricky,
The configuration that you are referring to is what could be considered
the 'Traditional' implementation of NDMP. As you have found for yourself,
there are a number of restrictions on how the data can be managed though.
If you configure the NDMP environment so that a Tivoli Storage Mana
From the manual for AIX
The user data limit that is displayed when you issue the ulimit -d command
is the soft user data limit. It is not necessary to set the hard user data
limit for DB2. The default soft user data limit is 128 MB. This is
equivalent to the value of 262,144 512-byte units as s
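The figures quoted from the AIX manual can be checked with a line of arithmetic: 262,144 units of 512 bytes each is exactly 128 MiB.

```python
# Sanity check on the figures quoted from the AIX manual: ulimit -d on AIX
# reports the soft data limit in 512-byte units, and 262,144 of them is 128 MiB.
units = 262_144
unit_size = 512            # bytes per 512-byte unit
total_bytes = units * unit_size
print(total_bytes)                    # 134217728
print(total_bytes // (1024 * 1024))   # 128 (MiB)
```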
This is the public access site
http://www-01.ibm.com/software/tivoli/services/consulting/it-service-mgmt/
Best Regards,
_
Ronald C. Delaware
IBM Level 2 - IT Plus Certified Specialist – Expert
IBM Corporation | Tivoli Software
IBM Certifie
Saravanan,
There are two available options that you might not be aware of.
IBM has two workshop offerings that focus on migrating or upgrading from
TSM 5.5.x to TSM 7.1.
1. Butterfly Migration Workshop - This consulting engagement focuses on
leading the migration planning and data discovery, re
It is always a good practice to put your include statements at the bottom
of your list and the excludes at the top of the file. Remember that
exclude.dir is read first, regardless of where it is located in the file
and trumps any include statement.
To answer your question: yes, by having the Exc
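The precedence rule described above can be sketched as a toy model (an illustration only, not the real TSM client's matcher: the function names and the fnmatch-style patterns are assumptions): exclude.dir entries win regardless of position, and the remaining statements are read bottom-up with the first match deciding.

```python
# Toy model of TSM include-exclude evaluation: exclude.dir is honored first,
# wherever it sits in the file; remaining statements are read bottom-up and
# the first matching one decides. Not the real client grammar.
import fnmatch

def _parents(path):
    """All ancestor directories of a slash-separated path."""
    parts = path.split("/")
    return ["/".join(parts[:i]) for i in range(1, len(parts))]

def decide(path, rules):
    """rules: list of (kind, pattern) in file order; kind is one of
    'exclude.dir', 'include', 'exclude'. Returns True if backed up."""
    # exclude.dir trumps everything, regardless of its position in the file
    for kind, pat in rules:
        if kind == "exclude.dir" and any(
            fnmatch.fnmatch(parent, pat) for parent in _parents(path)
        ):
            return False
    # remaining statements are evaluated bottom-up; first match decides
    for kind, pat in reversed(rules):
        if kind == "exclude.dir":
            continue
        if fnmatch.fnmatch(path, pat):
            return kind == "include"
    return True  # no rule matched: back it up

rules = [
    ("exclude", "/home/*/tmp/*"),
    ("exclude.dir", "/home/scratch"),
    ("include", "/home/*/tmp/keep.txt"),  # at the bottom, so it is read first
]
print(decide("/home/scratch/data.txt", rules))   # False: exclude.dir wins
print(decide("/home/alice/tmp/keep.txt", rules)) # True: include matches first
```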
Tom,
As with everything, there is a cost; sometimes it's money, sometimes it's
time, and most of the time it is a combination of both. Here is my 3 cents:
You can create backup sets for the clients that are listed
to be deleted or no longer needed. There are two bonus items of a bac
Keith,
You can find the procedure for changing the host name for your Linux TSM
servers here:
http://www-01.ibm.com/support/docview.wss?uid=swg21448218
Changing names is not a task to be taken lightly. Changes that are made
can affect future upgrades, see this link for more info:
http://www-01.
Nora,
If you are able to bring up the old TSM server, you could do an export
node directly to the new TSM server, provided that the two can communicate
via TCP/IP with each other. I believe that TSM 7.1 allows you to restore
UNIX operating system TSM database backups into the new server, thoug
Jeanne,
You are using a device class of FILE from what I see. TSM treats the file
like a tape cartridge. This shows that a scratch volume was created, TSM
started writing to it, the FILE reached the max size specified for that
device class, and then closed the file.
05/23/2013 09:23:23 AN
Zoltan,
Using your example of \\server\folder\*.* I make the assumption that you
want to backup the folder directory and its subdirectories. The correct
way to do that is:
include "\\server\c$\folder\...\*"
if the directory you are wanting to backup is on a different drive, change
the c$ for yo
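The `\...\` in the include statement above matches the named directory and any depth of subdirectories, while `*` stays within one name. A small sketch of those semantics (a simplification under stated assumptions, not the full TSM pattern grammar) translates such a pattern into a Python regex:

```python
# Simplified sketch of TSM include-pattern matching, not the full client
# grammar: "*" matches any characters within one path component, "?" one
# character, and "\...\" matches that directory plus any subdirectory depth.
import re

def tsm_to_regex(pattern):
    """Translate a simplified TSM include pattern into a compiled regex."""
    # Alphanumeric placeholders survive re.escape untouched.
    ELL, STAR, Q = "EeLlPp", "SsTtRr", "QqMmKk"
    p = pattern.replace("\\...\\", ELL).replace("*", STAR).replace("?", Q)
    p = re.escape(p)
    p = p.replace(ELL, r"\\(?:[^\\]+\\)*")  # this dir or any subdirectory depth
    p = p.replace(STAR, r"[^\\]*")          # any characters within one name
    p = p.replace(Q, r"[^\\]")              # exactly one character in a name
    return re.compile(p + r"\Z")

rx = tsm_to_regex(r"\\server\c$\folder\...\*")
print(bool(rx.match(r"\\server\c$\folder\sub\file.txt")))  # True
print(bool(rx.match(r"\\server\c$\other\file.txt")))       # False
```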
Jeff,
You are really leaving yourself exposed (data-wise). Doing Node
Replication is NOT the same as performing a backup of your storage pools.
With Node Replication, you cannot replace a damaged volume or volumes.
Your setup on the Target server "should be" identical to the source server
to ensur
Gary,
You really don't want to use backups for long-term retention; that is an
archive function. But if you must, you can set everything to NOLIMIT
NOLIMIT NOLIMIT NOLIMIT; that way the data will hang around forever, but
as I stated, archives are the way to go.
Will this do the job and not retain too much data?
Thanks for the help.
Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Ron Delaware
Sent: Wednesday, December 12, 2012 12:0
The HADR function is part of DB2 and can be used without further cost for
additional software.
_
Ronald C Delaware
TSM Storage Services Team Lead
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified D