2008 3:59 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Migration Process Question
We noticed today that a migration for a virtual tape pool was migrating
to itself rather than to its "Next Stgpool". Has anyone seen such a thing?
Thanks,
Charles
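When a pool seems to migrate "to itself", the pool's own Next Storage Pool setting is the first thing worth checking. A minimal sketch, with hypothetical pool names (VTLPOOL, TAPEPOOL) that are not from this thread:

    tsm: SERVER1> query stgpool VTLPOOL format=detailed
    (check the "Next Storage Pool" field in the output)
    tsm: SERVER1> update stgpool VTLPOOL nextstgpool=TAPEPOOL

If NEXTSTGPOOL already points elsewhere, the activity log entries for the migration process should show which output volumes were actually selected.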
04/18/2007 12:28
Subject: Re: [ADSM-L] ANR1025W Migration process 3433 terminated for storage pool
BACKUPPOOL - insufficient space in subordinate storage pool.(PROCESS: 3433)
No scratch tapes...
Kelly J. Lipp
VP Manufacturing & CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777
[E
07 7:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] ANR1025W Migration process 3433 terminated for storage
pool BACKUPPOOL - insufficient space in subordinate storage
pool.(PROCESS: 3433)
Any suggestions to what my problem could be?
ANR1025W Migration process 3433 terminated for storage
On Apr 18, 2007, at 10:26 AM, David Browne wrote:
Maximum Scratch Volumes Allowed: 250
Number of Scratch Volumes Used: 182
David -
We lack the context of what was going on in the server when the error
appeared, or whether the operation was reattempted and consistently
failed each time. It
how fast you run, or how high you climb but how well you
bounce" - ??
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of David
Browne
Sent: Wednesday, April 18, 2007 9:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: ANR1025W Migration process 3433 terminat
Any suggestions to what my problem could be?
ANR1025W Migration process 3433 terminated for storage pool BACKUPPOOL -
insufficient space in subordinate storage pool.(PROCESS: 3433)
I checked my tape pool and it appears I have scratch volumes and they are
marked read/write. See below:
tsm
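When ANR1025W reports insufficient space even though scratch volumes appear to exist, it can help to compare the pool's scratch limit against what it has already consumed, and to confirm the library really holds scratch media. A sketch with hypothetical names (TAPEPOOL, LIB1); raising MAXSCRATCH, as shown last, only helps if the library actually contains scratch volumes:

    tsm: SERVER1> query stgpool TAPEPOOL format=detailed
    (compare "Maximum Scratch Volumes Allowed" with "Number of Scratch Volumes Used")
    tsm: SERVER1> query libvolume LIB1
    tsm: SERVER1> update stgpool TAPEPOOL maxscratch=300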
Hi Gregory,
I use the migrate command, but I try to keep as much data on disk as
possible. Just have a look at the data going to disk per night and drain
your disks enough to hold the next night's backups, adding a little extra
space to make sure migration doesn't kick in in the middle of the night. Keep
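The "drain during the day" approach described above can be put on an administrative schedule. This is only a sketch: the schedule name, pool name, times, and thresholds are placeholders, not values from this thread:

    tsm: SERVER1> define schedule DRAIN_DISK type=administrative cmd="migrate stgpool DISKPOOL lowmig=40 duration=120" active=yes starttime=10:00 period=1 perunits=days

Choosing LOWMIG here is the tuning knob: low enough that the pool can absorb a full night's backups, but high enough to keep recent data on disk.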
Richard Sims wrote:
>
> On Apr 10, 2007, at 7:57 PM, David Bronder wrote:
>
> > ... However, the
> > automated migrations seem to not be very sensitive to the LOWMIG value
> > (I've been moving it closer to HIGHMIG but the migrations still keep
> > on running). ...
>
> David -
>
> See "Migration" i
Well David,
TSM will always choose the node with the most data to migrate first, and
won't check again till that is done. If you have one node with much
more data than any other, that could explain why the lowmig value
appears to have little effect.
Regards
Steve
Steven Harris
AIX and TSM adm
On Apr 10, 2007, at 7:57 PM, David Bronder wrote:
... However, the
automated migrations seem to not be very sensitive to the LOWMIG value
(I've been moving it closer to HIGHMIG but the migrations still keep
on running). ...
David -
See "Migration" in the TSM Concepts redbook, and "LOwmig"
in
Allen S. Rout wrote:
>
> This used to be the only way to accomplish this effect, and it's still
> what I do now. But When I Get Around To It, I'm going to change to the
> somewhat new
>
> migrate stgpool [yadda] lowmig=0 duration=[minutes]
>
> which has the advantage that you don't have to actually
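Filled in, the MIGRATE STGPOOL form quoted above reads roughly like this (DISKPOOL and the duration are placeholders); LOWMIG and DURATION given on the command apply only to this run and leave the thresholds stored in the pool definition untouched:

    tsm: SERVER1> migrate stgpool DISKPOOL lowmig=0 duration=60

When the duration expires the process ends on its own, so there is no need to flip HIGHMIG/LOWMIG back afterwards.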
@VM.MARIST.EDU
Subject: Re: [ADSM-L] Migration process
No, caching is not enabled
***
Gregory Lynch
Lead Programmer Analyst
IT Infrastructure/Systems Administration
Stony Brook University Medical Center
HSC Level 3, Room 121 ZIP 8037
Phone: 631
07 4:26 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Migration process
>> On Tue, 10 Apr 2007 14:06:46 -0600, Kelly Lipp <[EMAIL PROTECTED]> said:
> An even better way is to lower to 0 0, then immediately update again to
> 90 0 (migration will continue since it started) and then sometime later
> set low back to 70.
This used to be the only way to accomplish this e
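As commands, the threshold-flipping trick described above looks roughly like this (DISKPOOL is a placeholder; the threshold values are the ones from the description):

    tsm: SERVER1> update stgpool DISKPOOL highmig=0 lowmig=0
    (migration starts immediately)
    tsm: SERVER1> update stgpool DISKPOOL highmig=90 lowmig=0
    (the already-running migration continues down toward lowmig=0)
    ... sometime later ...
    tsm: SERVER1> update stgpool DISKPOOL lowmig=70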
Gregory Lynch wrote:
> No, caching is not enabled
>
How are you running the migrate? If you just give a storage pool name,
then it will still use the thresholds in the storage pool configuration.
That would keep it from getting below 70% if that's what it's set for. You
can override that by setting th
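The distinction above can be sketched as two invocations (DISKPOOL is a hypothetical name):

    tsm: SERVER1> migrate stgpool DISKPOOL
    (uses the pool's configured LOWMIG, e.g. stops near 70%)
    tsm: SERVER1> migrate stgpool DISKPOOL lowmig=0
    (overrides the pool setting for this run and drains the pool)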
.EDU
Subject: [ADSM-L] Migration process
Hello All,
I have noticed that migration has been kicking off during the backup
window and is slowing down the nightly backups. We have migration
running during the day via an admin schedule for an hour, but the
diskpool never seems to get below 70%. I checked
Skylar Thompson <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager"
04/10/2007 03:46 PM
Please respond to
"ADSM: Dist Stor Manager"
To
ADSM-L@VM.MARIST.EDU
cc
Subject
Re: [ADSM-L] Migration process
Gregory Lynch wrote:
> Hello All,
>
> I have noticed that migration has been kicking off during the backup
> window and is slowing down the nightly backups. We have migration running
> during the day via an admin schedule for an hour, but the diskpool never
> seems to get below 70%. I checked the p
Hello All,
I have noticed that migration has been kicking off during the backup
window and is slowing down the nightly backups. We have migration running
during the day via an admin schedule for an hour, but the diskpool never
seems to get below 70%. I checked the properties of the diskpool and th
s. I reclaim onsite at 50% just to keep it
barely working.
-Original Message-
From: Giedrius Jankauskas [mailto:[EMAIL PROTECTED]
Sent: Friday, May 14, 2004 5:14 AM
To: [EMAIL PROTECTED]
Subject: Help needed ! Migration process fails :(
Hi there,
I have TSM (Version 5, Release 1, Leve
torage transaction 0:21378 was aborted.
05/14/2004 12:54:07 ANR2183W dfmigr.c1965: Transaction 0:21378 was aborted.
05/14/2004 12:54:07 ANR1033W Migration process 21 terminated for storage pool
ML_DISKDAILYPOOL - transaction aborted.
05/14/2004
Hi list,
Using a TSM server v5.1.6.2 on AIX 4.3.3, I'm sometimes getting the following
entry in the activity log:
30.05.2003 22:36:02 ANRD dfmigr.c(3224): ThreadId<56> Migration process 3464
unable to locate cluster element srvId=0, ck1=603
Should this error be considered as
-
From: Claudio Cofre Caro [mailto:[EMAIL PROTECTED]]
Sent: 13 June 2001 00:18
To: [EMAIL PROTECTED]
Subject: only one migration process :-(
Hi, does anybody know how to resolve this?
I have TSM 4.1.3 on WinNT4.0 in Server to Server configuration with only
one node in the target server (the node is the
Claudio,
Since you only have one node, the only way to have more than one migration
process is to turn on collocation for your tape pool at the filespace level.
This will cause each filespace on the client to migrate to a separate tape.
If you only have one filespace on the client, you're out of
You only get one migration process for every client's data in the source
pool. Since you only have one client on this server, that's all you'll
see. There is no way that I've figured out to cause multiple migration
processes for one client.
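The filespace-collocation suggestion from the earlier reply would look roughly like this (TAPEPOOL is a placeholder for the sequential pool that migration writes to):

    tsm: SERVER1> update stgpool TAPEPOOL collocate=filespace

With collocation by filespace, each of the node's filespaces becomes a separate unit of migration work, which is what allows more than one process (and drive) to run for a single node.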
One suggestion may be to set a
on WinNT4.0 in Server to Server configuration with only
> one node in the target server (the node is the source server).
>
> In the Target Server, when the disk migration begins, I have only one
> migration process that uses only one drive, and the other drives remain
> empty (I hav
Hi, does anybody know how to resolve this?
I have TSM 4.1.3 on WinNT4.0 in a Server to Server configuration with only
one node in the target server (the node is the source server).
In the Target Server, when the disk migration begins, I have only one
migration process that uses only one drive
I've been off the list for a while, so this might have already been asked...
Has there been any improvement in migration process limits in versions past
3.7.2.0?
In other words, if there is only one client's data in a pool, will there
still only be a single migration process?
cheers ,
(Is it a feature, to stop backup failure?)
Regards Ped
( Pietro M D Brenni )
IBM Global Services Australia
ZE06 (Zenith Centre - Tower A)
Level 6,
821 Pacific Highway
Chatswood NSW 2067
Sydney Australia
Ph: +61-2-8448 4788
Fax: +61-2-8448 4006
ael (74) Ltd.
[EMAIL PROTECTED]
Phone: +972-4-865-6588, Fax: +972-4-865-5999
> -Original Message-
> From: Pietro Brenni [mailto:[EMAIL PROTECTED]]
> Sent: Monday, November 06, 2000 6:20 AM
> To: [EMAIL PROTECTED]
> Subject: Migration process using more than 1 tape drive
> ev
This problem is very particular.
I have occurrences where a diskpool becomes full, or exceeds the Hi
threshold, and a migration process starts, issuing a tape mount.
About 3 mins later in the activity log another tape is mounted for the same
tapepool.
q proc shows only 1 tape mounted for the
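For the multi-drive questions in these threads: the number of parallel migration processes for a random-access disk pool is governed by its MIGPROCESS setting, and each process mounts its own output volume. A sketch with a hypothetical pool name:

    tsm: SERVER1> update stgpool DISKPOOL migprocess=2

A second mount minutes after the first can also simply mean the first output volume filled up; and, per the replies above, migration parallelizes by node, so with only one node's data in the pool extra processes may not help.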