The problem was corrected, but I have no idea how.
I stopped the migration and replication kicked off. I let the replication run
for an hour or so while I was at lunch. When I got back I stopped replication
and restarted the migration. I have no idea how, but it used the correct
volumes this time.
Ricky,
could you please send the output of the following commands:
1. Q MOUNT
2. q act begint=-12 se=Migration
Also, the only way that the stgpool would migrate back to itself would be
if there was a loop, meaning your disk pool points to the tape pool as the
next stgpool, and your tape pool points back to the disk pool.
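If you suspect that kind of loop, the NEXTSTGPOOL chain can be checked from the admin command line. A minimal sketch; the pool name TAPEPOOL is taken from this thread, and the fix shown (clearing the next pool) is only one option:

```
/* Show the next-pool setting for every storage pool */
SELECT STGPOOL_NAME, NEXTSTGPOOL FROM STGPOOLS

/* Break a loop by clearing (or correcting) the next pool */
UPDATE STGPOOL TAPEPOOL NEXTSTGPOOL=""
```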
Fabio,
your idea is not as crazy as you think. TSM and Spectrum Protect have an
option available that allows you to use disk as a reclamation area. This
is from the manual:
RECLAIMSTGpool
Specifies another primary storage pool as a target for reclaimed data from
this storage pool. This param
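For reference, that parameter is set on the sequential pool being reclaimed. A hedged sketch, assuming a FILE-device-class disk pool named RECLAIMDISK already exists (the name is illustrative, not from the thread):

```
/* Point tape reclamation at a disk pool - useful with a single drive */
UPDATE STGPOOL TAPEPOOL RECLAIMSTGPOOL=RECLAIMDISK
```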
Is it migrating, or reclaiming at 70% as defined?
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Plair,
Ricky
Sent: Wednesday, September 21, 2016 8:50 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Migration Question
Nothing out of the normal. The output below is kind of odd, but I'm not sure
it has anything to do with my problem.
Now here is something, I have another disk pool that is supposed to migrate to
the tapepool and now it's doing the same thing. Migrating to itself.
What the heck.
09/21/2016 10:58:21
The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now, since utilization is 0%, but it would get
triggered once you hit the HIGHMIG threshold.
Is there anything in the activity log for the errant migration processes?
On Wed, Sep 21, 2016 at 03:28:5
OLD STORAGE POOL
tsm: PROD-TSM01-VM>q stg ddstgpool f=d
Storage Pool Name: DDSTGPOOL
Storage Pool Type: Primary
Device Class Name: DDFILE
Estimated Capacity: 402,224 G
Space Trigger Util: 69.4
Can you post the output of "Q STG F=D" for each of those pools?
On Wed, Sep 21, 2016 at 02:33:42PM, Plair, Ricky wrote:
> Within TSM I am migrating an old storage pool on a DD4200 to a new storage
> pool on a DD4500.
>
> First of all, it worked fine yesterday.
>
> The nextpool is correct an
Hello,
memories of the times when tape drives were horribly expensive and one could
only afford 1 drive. Create a diskpool, add that pool as reclamationtarget in
the tapepool and let the reclamation work. Size the diskpool accordingly to
hold all data.
After the tape is empty, start a migration of
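The workflow above can be sketched as a sequence of admin commands. A hedged outline, assuming a FILE device class named DISKCLASS and illustrative pool names (not from the thread):

```
/* 1. Create a disk pool big enough to hold the data from one tape */
DEFINE STGPOOL RECLAIMDISK DISKCLASS MAXSCRATCH=50

/* 2. Make it the reclamation target of the tape pool */
UPDATE STGPOOL TAPEPOOL RECLAIMSTGPOOL=RECLAIMDISK

/* 3. Reclaim; with one drive, reclaimed data lands on disk */
RECLAIM STGPOOL TAPEPOOL THRESHOLD=70

/* 4. Once the tapes are empty, migrate the disk pool back to tape */
UPDATE STGPOOL RECLAIMDISK NEXTSTGPOOL=TAPEPOOL
MIGRATE STGPOOL RECLAIMDISK LOWMIG=0
```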
Hi, there!
I've got a broken drive here, and my tape library is working with only 1 drive
(an IBM TS3200).
Since it has only 1 drive, all reclamation is failing.
My team and I had a (very) crazy idea of using a VTL as a second drive to do
(occasional, not scheduled) reclamation while we're work
Within TSM I am migrating an old storage pool on a DD4200 to a new storage pool
on a DD4500.
First of all, it worked fine yesterday.
The nextpool is correct and migration is hi=0 lo=0 and using 25 migration
processes, but I had to stop it.
Now when I restart the migration process, it is migrating to itself.
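For context, settings like those can be driven either through the pool thresholds or a one-off command. A hedged sketch using the values Ricky mentions (hi=0, lo=0, 25 processes) and the DDSTGPOOL name from later in the thread:

```
/* Threshold-driven: setting both thresholds to 0 drains the pool */
UPDATE STGPOOL DDSTGPOOL HIGHMIG=0 LOWMIG=0 MIGPROCESS=25

/* Or as a one-off, without changing the pool definition */
MIGRATE STGPOOL DDSTGPOOL LOWMIG=0
```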
We use a script that takes in a set of filesystem paths, gets a list of
files in those paths, queries a given TSM node for already-archived files,
and then generates a list of files that are in the former list but not the
latter. It then passes that final list to "dsmc archive" via -filelist. In
th
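A minimal sketch of that kind of script, assuming a POSIX userland. The `dsmc` invocations are commented out because the exact query-output parsing depends on your client version; all paths and the node name are illustrative, not from the thread:

```shell
#!/bin/sh
# Build the sorted list of local files under the given paths.
find "$@" -type f | sort > /tmp/local.sorted

# Ask TSM which files are already archived for this node, e.g.:
#   dsmc query archive -virtualnodename=MYNODE "/path/*" -subdir=yes \
#     | awk '...extract the path column...' | sort > /tmp/archived.sorted
: > /tmp/archived.sorted   # placeholder so the sketch runs standalone

# Files present locally but absent from the archive list.
comm -23 /tmp/local.sorted /tmp/archived.sorted > /tmp/to_archive.list

# Hand the final list to the client:
#   dsmc archive -filelist=/tmp/to_archive.list
```

The set difference is the whole trick: `comm -23` on two sorted lists prints only the lines unique to the first, i.e. local files not yet archived.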
A user was running a large archive and the server was accidentally rebooted.
Am I correct that he must start all over again - there is no appending to
an existing archive? I assume the archive that was running is still
good/viable.
--
*Zoltan Forray*
TSM Software & Hardware Administrator
Xymon
Yes they are clients to each other. My MOON server is the LM for all
onsite/primary tape storage for all TSM servers, including SUN, which is
the LM for ALL offsite tape storage for all TSM servers, including MOON.
On Wed, Sep 21, 2016 at 1:42 AM, Steven Harris wrote:
> Not necessarily Zoltan
>