If you have a 'filling' volume, you can move data onto it from a tape that
needs to be reclaimed (one with very little data on it), thereby freeing up a
volume. Of course, you may need to adjust the reuse delay to get that volume
back as scratch immediately.
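A minimal sketch of that flow from the admin command line (the volume and
pool names here are hypothetical). MOVE DATA drains the nearly-empty tape
onto filling volumes in the same pool, and REUSEDELAY=0 returns emptied
volumes to scratch immediately:

tsm: SERVER1> move data A00017
tsm: SERVER1> update stgpool TAPEPOOL reusedelay=0
tsm: SERVER1> query volume A00017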
---
David Nixon
Please forgive the perhaps dumb question. How does everyone manage accounts
across multiple TSM servers?
We are looking to upgrade from 6.4.x to 7.1 and to set up OC. We will have a
new TSM instance for OC, along with the existing two production instances and
our single instance at our DR site.
We have been using node replication for a couple of years now without a
problem on 6.3.x. We only replicate active data. Two weeks ago we upgraded
our destination system to 7.1.1.100, and last week one of our source systems.
That was Tuesday. Thursday/Friday all of the TDP for SQL nodes started
I believe that OC will want to create a named account (with a default name) on
each TSM server it monitors (it's OK to say TSM, I'm not IBM). This password
would need to be the same across all the TSM servers. I think that's going to
be the problem for you, but I could be wrong.
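As a rough illustration only (the admin ID here is hypothetical; OC picks its
own default name), keeping the account in sync would amount to registering the
same administrator, with the same password, on every monitored server:

tsm: TSMPRD02> register admin OC_MONITOR <password>
tsm: TSMPRD02> grant authority OC_MONITOR classes=system

Then repeat the same commands, with the same password, on each instance.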
-
Is it possible to organize the home directories into something like A-F,
G-L, ... and then use virtual mount points so that the TSM client views each
of these groupings as a separate volume, and thus the worker threads kick in
as expected?
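A minimal dsm.sys sketch of that idea (the stanza values and paths are
hypothetical). Each VIRTUALMOUNTPOINT becomes its own filespace, and
RESOURCEUTILIZATION lets the client spread its worker threads across them:

SErvername TSMPRD02
   COMMMethod          TCPip
   TCPServeraddress    tsm.example.org
   RESOURCEUTILIZATION 10
   VIRTUALMountpoint   /home/a-f
   VIRTUALMountpoint   /home/g-l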
---
David Nixon
According to the 7.1 blueprint you are a large deployment (for server-side
dedupe):
CPU: 16-core POWER8
Memory: 192 GB
Directory for the active log: 300 GB
Directory for the archive log: 4 TB
It was suggested that we make sure that these logs are all on SSD/flash for
ideal performance. We
k for that requirement? We have asked IBM a
number of times, and they are fairly nebulous on what the actual
requirements are. That is a very interesting list you have there, thanks.
On Thu, Dec 10, 2015 at 8:19 AM, Nixon, Charles D. (David) <
cdni...@carilionclinic.org> wrote:
> Accord
We stagger our replication during the morning to avoid locking any one node
for too long (a schedule sketch follows below). Some things that we have seen:
1. If the node has a backup session currently running and you attempt to
replicate it, the backup session gets killed, since replication takes priority.
This has been a problem with
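For what it's worth, a sketch of how that staggering might look as
administrative schedules (the schedule and node-group names are hypothetical):

tsm: SERVER1> define schedule REPL_GRP1 type=administrative active=yes cmd="replicate node GROUP1" starttime=06:00 period=1 perunits=days
tsm: SERVER1> define schedule REPL_GRP2 type=administrative active=yes cmd="replicate node GROUP2" starttime=08:00 period=1 perunits=days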
This is what we had to do too (shrink the logs to 50% before the upgrade and
then grow them back after the procedure). The upgrade tried to double the
logs and then, I believe, forgot about the mirrored location because we were
out of space.
---
David Nixon
I suggest opening a ticket. We have seen something similar.
---
David Nixon
System Programmer II
Technology Services Group
Carilion Clinic
451 Kimball Ave.
Roanoke, VA 24015
Phone: 540-224-3903
cdni...@carilionclinic.org
We had to do it. I suggest doing the rebalances one at a time or, if you
aren't in a hurry, one per day, as you will see quite a bit of I/O. The
REDUCE MAX operations are instantaneous, and I wouldn't feel bad running them
one right after the other. Other than that, we did not have any problems and
the DB h
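For context, a sketch of the DB2 side of that procedure, run as the instance
owner (the tablespace name is hypothetical; follow IBM's documented steps for
your level before running anything):

$ db2 connect to TSMDB1
$ db2 "alter tablespace USERSPACE1 rebalance"
# once the rebalance finishes, reclaim the freed space:
$ db2 "alter tablespace USERSPACE1 reduce max"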
I'll third the odd percentages... using 7.1.3.100.
tsm: TSMPRD02>select sum(reporting_mb) from OCCUPANCY where
stgpool_name='SASCONT0'
     Unnamed[1]
---------------
   182520798.90
tsm: TSMPRD02>q stg sascont0
Storage Device
Since you are trying to "export node from version 6.3.5.100 and import into
version 7.1.5.0," you should be able to replicate the node instead of exporting
it to get your data migrated.
Copy the policy domain from source to destination, and then in the destination,
point the target to the conta
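A rough sketch of that route (the server and node names are hypothetical, and
the target server definition must already exist on the source):

tsm: SOURCE> set replserver TARGETSRV
tsm: SOURCE> update node MYNODE replstate=enabled
tsm: SOURCE> replicate node MYNODE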
I second this question. So the answer we got from our storage software sales
team is to download 7.1.3 and extract the license file. Then, download the
version you want to use, extract and install that, then copy the license file
over. Seems like way more work than it should be. Do we only get
I'm super interested in seeing how this turns out. Obviously, it's a pain
point for many customers, and IBM has seen this to be true based on the size
of this thread. Furthermore, it's a relatively simple fix: either update the
new code to include the license or provide detailed directions in the
We run OC on a separate LPAR and mirror the LUNs for the LPAR to the DR site.
This gives us two things:
1. In a DR scenario, you can quickly bring up the whole OC machine.
2. When doing code upgrades, you can upgrade OC separate from the TSM servers.
---
Since PROTECT STGPOOL doesn't lock out nodes like REPLICATE NODE does, we have
started running it every couple of hours so that we don't saturate the WAN
after the backups finish. Then, twice a day, we run REPLICATE NODE: once when
the backups are complete and once in the middle of the backup cycle. Of co
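A sketch of that cadence as administrative schedules (the names, pool, and
times are hypothetical):

tsm: SERVER1> define schedule PROTECT_CONT type=administrative active=yes cmd="protect stgpool CONTPOOL" starttime=00:00 period=2 perunits=hours
tsm: SERVER1> define schedule REPL_ALL type=administrative active=yes cmd="replicate node *" starttime=06:00 period=12 perunits=hours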
We have an HDS G400 behind our new TSM servers: FMD/flash behind one container
pool for our primary production database (11.5 TB with 16 versions fits in
16.6 TB), and the rest is in a giant nearline SAS pool. The flash also backs
the logs/DB, and the SAS serves as the DBB/log-archive location.
We opened a ticket related to long replication times in a container pool after
replication takes place, and got an answer that 'we can recreate your problem
but it is likely working as designed' even though it's contrary to
documentation. Any ideas would be appreciated.
-Two TSM servers at 7.1
On Thu, Sep 15, 2016 at 4:29 PM, Nixon, Charles D. (David) <
cdni...@carilionclinic.org> wrote:
> We opened a ticket related to long replication times in a container pool
> after replication takes place, and got an answer that 'we can recreate your
> problem but it is likely working as d
But that system has SSDs and runs in excess of 140,000 IOPS in Spectrum
Protect database benchmarks.
I would think this is very much database (and active log) performance bound
(on both sides).
On Thu, Sep 15, 2016 at 5:50 PM, Nixon, Charles D. (David) <
cdni...@carilionclinic.org>
> > > > > Don't you still need to copy the data at least once from server A to
> > > > > server B? Isn't this normal?
> > > > >
> > > > > Best regards,
I'd say yes, hypervisors (with the right storage/procedures) mitigate the need
for this. We don't back up system state for anything. But using VMware
snapshots for temporary backups, and array-based snapshots for longer
restores, minimizes the need we could have for system restores.
Th