TSM STUDIO for monitoring and reports

2010-10-21 Thread Hana Darzi
Hello,

I am evaluating TSM STUDIO for monitoring and reports.
Does anybody use it? Is it a good tool?

Thank you, Hana


_

Hana Shparber (Darzi)
Computation Center, Ben-Gurion University
Email: ha...@bgu.ac.il
Phone: 972-8-6461160, Mobile: 972-52-6839378
_


Re: TSM STUDIO for monitoring and reports

2010-10-21 Thread Adrian Compton
I would rather check out www.tsmmanager.com
I have used this product for the last 3 years and it is really good.
You can download a trial for a month.
I looked at TSM Studio. Just as bulky and cumbersome as IBM's attempts at
reporting. I don't know when IBM will ever get the reporting side of
TSM sorted out.

But this is just my humble opinion...

Regards

Adrian Compton 
Group IT Infrastructure
Aspen Pharmacare
8 Gibaud Street, Korsten, Port Elizabeth, 6014
P O Box 4002
Korsten
Port Elizabeth
6014
Switchboard Tel No: +27 (0) 41 407 2111   Direct Tel No: +27 (0) 41 407 2855
Fax:+27 (0) 41 453 7452    Cell:+27 (0) 82 861 7745



Re: TSM STUDIO for monitoring and reports

2010-10-21 Thread Daniel Sparrman
Or, if you want to be able to access your TSM monitoring from anywhere in the
world, while still allowing the collection of historical data, granular
analysis and a nice interface for both reporting and operation:

http://www.debriefingsoftware.com/

I've worked with TSM Manager, ServerGraph and IBM/Tivoli's own reporting tools,
but I still find this one to be both easy to use and powerful.

Best Regards

Daniel Sparrman

-"ADSM: Dist Stor Manager"  skrev: -


Till: ADSM-L@VM.MARIST.EDU
Från: Adrian Compton 
Sänt av: "ADSM: Dist Stor Manager" 
Datum: 10/21/2010 13:11
Ärende: Re: TSM STUDIO for monitoring and reports

I would rather check out www.tsmmanager.com
I use this product for the last 3 years and it is really good.
You can down load trial for a month..
Looked at TSM studio. Just as bulky and cumbersome as IBM's attempts at 
reporting.. Don't know when IBM will ever attempt to get the reporting side of 
TSM sorted out.

But this is just my humble opinion...

Regards

Adrian Compton 
Group IT Infrastructure
Aspen Pharmacare
8 Gibaud Street, Korsten, Port Elizabeth, 6014
P O Box 4002
Korsten
Port Elizabeth
6014
Switchboard Tel No: +27 (0) 41 407 2111   Direct Tel No: +27 (0) 41 407 2855
Fax:+27 (0) 41 453 7452Cell:+27 (0) 82 861 7745

This email is solely for the named addressee.  Any unauthorized use or 
interception of this email, or the review, retransmission, dissemination or 
other use of, or taking of any action in reliance upon the contents of this 
email, by persons or entities other than the intended recipient, is prohibited. 
If you are not the named addressee please notify us immediately by way of a 
reply e-mail, and also delete this email and any attached files.

Disclaimer:  You must scan this email and any attached files for viruses and/or 
any other defects.  Aspen accepts no liability for any loss, damages or 
consequence, whether direct, indirect, consequential or economic, however 
caused, and whether by negligence or otherwise, which may result directly or 
indirectly from this communication or of any attached files.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Hana 
Darzi
Sent: Thursday, October 21, 2010 13:03
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM STUDIO for monitoring and reports

Hello,

I check TSM STUDIO for monitoring and reports .
Do anybody use it? Is it good tool?

Thank You hana


_

Hana Shparber (Darzi)
Computation Center, Ben-Gurion University
Email: ha...@bgu.ac.il
Phone: 972-8-6461160, Mobile: 972-52-6839378
_

Re: Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage

2010-10-21 Thread Zoltan Forray/AC/VCU
Speaking of this book, I found the paragraph titled "Mitigating
performance degradation when backing up or archiving to FILE volumes".

Yes, I did follow their recommendations, plus other recommendations for
transferring data to high-performance (TS1130) tape drives.  I didn't see
much, if any, difference.

I am definitely going to see if regular pre-formatted volumes on SAN
filesystems are any better or worse.

FWIW, I have been trying to empty the existing filedevclass stgpool.
Migrating 4TB has been running for over 24 hours - I still have 33% to
migrate with no user activity (this is still considered a somewhat test server),
using two TS1130 drives at the same time.  The backups in this stgpool are
for four nodes. Not doing collocation.
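
For what it's worth, an easy way to see how fast a migration is actually
moving data is to poll the server from an administrative client and compare
successive samples.  Just a sketch - the admin credentials and the pool name
FILEPOOL are placeholders for your own:

   dsmadmc -id=admin -password=xxxxx -dataonly=yes "query process"
   dsmadmc -id=admin -password=xxxxx -dataonly=yes "query stgpool FILEPOOL format=detailed"

The migration process line shows the files and bytes moved so far; the
detailed stgpool query shows Pct Migr and whether migration is still running.
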
Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will 
never use email to request that you reply with your password, social 
security number or confidential personal information. For more details 
visit http://infosecurity.vcu.edu/phishing.html


From: Paul Zarnowski
To: ADSM-L@VM.MARIST.EDU
Date: 10/20/2010 04:04 PM
Subject: Re: [ADSM-L] Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage
Sent by: "ADSM: Dist Stor Manager"



Hmm...

I thought perhaps the Performance Tuning Guide would help clarify, which 
is where I thought I read this.  But it seems somewhat ambiguous.  Here 
are some snippets (for AIX):

>When AIX detects sequential file reading is occurring, it can read ahead even
>though the application has not yet requested the data.
>* Read ahead improves sequential read performance on JFS and JFS2 file systems.
>* The recommended setting of maxpgahead is 256 for both JFS and JFS2:
>ioo -p -o maxpgahead=256 -o j2_maxPageReadAhead=256

then later on the same page:

>Tivoli Storage Manager server - Improves storage pool migration throughput on
>JFS volumes only (does not apply to JFS2 or raw logical volumes).

and still later:

>This does not improve read performance on raw logical volumes or JFS2
>volumes on the Tivoli Storage Manager server. The server uses direct I/O on
>JFS2 file systems.

So which is it?  Does it read ahead on jfs2 or not?  One vote for and 2 
against.

Later on, there are a couple of snippets related to using raw LVs which
mention array-based read-ahead:

>Using raw logical volumes on UNIX systems can cut CPU consumption but
>might be slower during storage pool migrations due to lack of read-ahead.
>However, many disk subsystems have read-ahead built in, which negates this
>concern.

Clear?  Eh.  What I take away from this is: if your array supports
read-ahead, make sure you've got it enabled - at least for storage pool
LUNs.  It probably doesn't make sense for DB LUNs, as it will just waste your
precious cache.
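
If anyone wants to check what their AIX server is currently running with
before touching anything, a quick sketch (the egrep pattern is only for
readability):

   ioo -a | egrep 'maxpgahead|j2_maxPageReadAhead'    # show current read-ahead tunables
   ioo -p -o maxpgahead=256 -o j2_maxPageReadAhead=256    # the guide's values, permanent across reboots

Given the ambiguity above about direct I/O on jfs2, I'd measure migration
throughput before and after rather than trust the guide.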

..Paul

.. thinking I might need to spend a few more nights at Holiday Inn Express 
..


At 03:43 PM 10/20/2010, Remco Post wrote:
>Hmmm, that's interesting, jfs2 read-ahead. I know it exists, but recent TSM servers by default use direct I/O on jfs2, bypassing the buffer cache, and I assume the read-ahead as well... Or am I wrong?
>
>I noticed that on an XIV, dd can read a TSM diskpool volume at say 100 MB/s, and yes, two dd processes, reading two diskpool volumes, get about 185 MB/s - not exactly twice as much, but much more than one process. The same is true for TSM migrating to tape. So, even though you'd think that two processes would appear more random than one, the XIV is still able to handle them quite efficiently. Yes, this is two processes working on a single filesystem from a single host. Now, of course, dd doesn't use direct i/o, and TSM does, but still, there is a noticeable benefit to running two migrations in parallel, even if both are on the same lun, filesystem, etc. (Yes, on jfs2).
>
>On 20 okt 2010, at 21:28, Paul Zarnowski wrote:
>
>> yes, this can get complicated...  Yes, multiple threads accessing different volumes on the same spindles can create head contention, even with volumes formatted serially.  But I think you can still reap benefits from laying down blocks sequentially on the filesystem.  Remco points out read-ahead benefits, and he is (IMHO) referring to disk array-based read-ahead.  Keep in mind that jfs[2] also has read-ahead, and it will still try to do this regardless of whether the physical blocks are laid down sequentially - it will just result in more head movement, more latency, and less efficiency.  I do not believe that jfs2 read-ahead uses array-based read-ahead.  The array-based read-ahead will pre-stage blocks in array cache, whereas jfs2-based read-ahead will pre-stage them in jfs mbufs.
>>
>> When the array is doing read-ahead, it will turn a single-block read into a multi-block read.  Since the blocks are laid down in sequence, there will be (I think) less head contention during this array-based

Re: Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage

2010-10-21 Thread Strand, Neil B.
Zoltan,
Are your database/logs on separate disks and separate HBAs from your
filedevclass disks, and are the disk HBAs separate from the tape HBAs?


Neil Strand
Storage Engineer - Legg Mason
Baltimore, MD.
(410) 580-7491
Whatever you can do or believe you can, begin it.
Boldness has genius, power and magic.



Re: Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage

2010-10-21 Thread Zoltan Forray/AC/VCU
Correct.  This machine has 8 internal 600GB 15K drives.  The OS and DB are
on one pair of mirrored drives.  The log and archlog share the rest of
the internal drives in a RAID-10 (I think) array, plus leaving extra space
for DB expansion (one server I plan to migrate to these new 6.2 servers has
a DB size of 190GB, which comes to a minimum of ~600GB when converted from
5.5).  The file devclass is SAN storage in a CLARiiON box.  There are
3 QLogic HBA cards: 1 for disk and the other 2 for tape, but only 1 is in
use due to lack of switch ports.

We have tried to max this box out, performance-wise: 48GB RAM, dual X5560
Xeon 2.8GHz processors, RedHat 5.5.
Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will 
never use email to request that you reply with your password, social 
security number or confidential personal information. For more details 
visit http://infosecurity.vcu.edu/phishing.html




Re: Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage

2010-10-21 Thread Zoltan Forray/AC/VCU
In reference to these recommendations, this is what one of my SAN folks
said:

If "increasing the queue depth for the individual disks" is something you
can do on a CLARiiON, it's not something I'm familiar with.  On the HBA
(and if you can), you would do that from the host side (like with
SanSurfer for Qlogic HBAs).

I have no idea what he might be referring to with "EMC voodoo
application".

"iostat/vmstat" are unix host utilities.

Each of the two LUNs is spread out over 7 disks.  The 2 RAID Groups and
the enclosure they are in are dedicated to Tivoli.

I've seen some references to using lots of smaller LUNs rather than a few
big ones.  You have 2 5.5TB LUNs now.  We can try splitting each of those
into 10-12 smaller LUNs.
Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html



From: "Strand, Neil B."
To: ADSM-L@VM.MARIST.EDU
Date: 10/19/2010 01:50 PM
Subject: Re: [ADSM-L] Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage
Sent by: "ADSM: Dist Stor Manager"



Zoltan,
   You may need to increase the queue depth for the individual disks
and/or the HBA attached to the disks.
   Monitor both the server (iostat/vmstat) and the storage (EMC voodoo
application) for latency and compare the results for consistency.  You
may need to adjust the striping of your logical LUNs on the storage.  I
have observed serious performance degradation on an older IBM ESS simply
because the logical volumes were created on a single SSA rather than
spread across the entire set of disks.

Cheers,
Neil Strand
Storage Engineer - Legg Mason
Baltimore, MD.
(410) 580-7491
Whatever you can do or believe you can, begin it.
Boldness has genius, power and magic.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Zoltan Forray/AC/VCU
Sent: Tuesday, October 19, 2010 9:15 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Lousy performance on new 6.2.1.1 server with
SAN/FILEDEVCLASS storage

Now that I have ventured into new territory with this new server (Linux
6.2.1.1), I am experiencing terrible performance when it comes to moving
data from disk (FILEDEVCLASS on EMC/SAN storage) vs my other 6.1 and 5.5
servers.

With the server doing nothing but migrating data from this SAN based
stgpool to TS1130 tape, I am seeing roughly 700GB being moved in a 12-hour
period.  On my other, internal disk based TSM servers, I move multiple
terabytes per day/24 hours.

So, where should I focus on why this is so slow?  Is it because I am using
SAN storage?  How about the FILEDEVCLASS vs fixed, pre-formatted volumes
(like every other server is using)?

Or is this normal?  If it is, I am in for some serious problems.  One of
these servers is expected to replace an existing 5.5 server that processes
20TB+ of backups per week (no, I can not go straight to tape due to the
type of backups being performed).

Suggestions?  Thoughts?
Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html



Re: Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage

2010-10-21 Thread Remco Post
For AIX:

lsattr -El hdiskX   (look for the queue_depth field; values between 20 and 64 make sense, 128 in some extreme cases)
chdev -l hdiskX -a queue_depth=Y   (but only if the vg is off-line)

I found that some drives only support a queue_depth of 1... in that case, you
have found your bottleneck.
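
A minimal sketch of that sequence end to end, assuming a volume group named
tsmvg and a disk hdisk2 (both placeholders):

   varyoffvg tsmvg                       # the vg must be offline for the change
   chdev -l hdisk2 -a queue_depth=32     # pick a value your disks/array actually support
   varyonvg tsmvg
   lsattr -El hdisk2 -a queue_depth      # confirm the new setting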


Re: Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage

2010-10-21 Thread Zoltan Forray/AC/VCU
This is RedHat Linux 5.5




Re: Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage

2010-10-21 Thread Strand, Neil B.
There should be queue depth settings for the individual disks and for the
HBA on the server - not on the array.  The sum of the disk settings
should not exceed the setting for the HBA.

By EMC voodoo I meant the EMC management application that allows you to
monitor the performance of the array - I'm not sure what its proper
name is.

As Remco pointed out, check with the EMC folks, your HBA vendor and OS
support to determine queue depth limitations.  Seems like I ran across a
combination of AIX or Solaris attached to either an older FAStT or
NetApp that defaulted to a depth of 1 and required a firmware update to
fix.


Neil Strand
Storage Engineer - Legg Mason
Baltimore, MD.
(410) 580-7491
Whatever you can do or believe you can, begin it.
Boldness has genius, power and magic.



Re: Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage

2010-10-21 Thread Remco Post
Hi all,

YMMV, I've never really played with these, but check
/sys/block/sdX/queue/nr_requests and some of the other files in that directory.

Oh, and BTW, Google is your friend ;-)
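
For the Linux side, a rough sketch of where to look (device names are
placeholders; writes need root and don't survive a reboot, so anything that
helps would have to go into a udev rule or rc script):

   cat /sys/block/sda/queue/nr_requests          # block-layer request queue size
   cat /sys/block/sda/device/queue_depth         # SCSI device queue depth, where present
   echo 64 > /sys/block/sda/device/queue_depth   # bump it on the fly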


Re: Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage

2010-10-21 Thread Zoltan Forray/AC/VCU
Thanks for the hint... using Google is great (which I do), but if you
don't have a clue what to look for... like a dictionary - you have to
have some kind of clue/general idea of how a word is spelled to be able to
look it up  ;)

I did find a /sys/block/emcpowerb/queue/nr_requests file with 128 in it.




Re: Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage

2010-10-21 Thread Remco Post
On 21 okt 2010, at 20:58, Zoltan Forray/AC/VCU wrote:

> Thanks for the hint... using Google is great (which I do), but if you
> don't have a clue what to look for... like a dictionary - you have to
> have some kind of clue/general idea of how a word is spelled to be able to
> look it up  ;)
> 

you're so right.

> I did find a /sys/block/emcpowerb/queue/nr_requests file with 128 in it.
> 

which sounds OK... if that's the queue depth, which I'm guessing it is, and with
128 outstanding requests you'll never need more. I recently increased the
queue depth on the HBA while doing some performance testing and did not notice
any major impact, with the server and the box on the same switch. I wouldn't
bother with that unless some support guy explicitly tells you to do so.


Re: Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage

2010-10-21 Thread Richard Rhodes

>Correct.  This machine has 8-internal 600GB 15K drives.  The OS and DB are
>on one pair of mirrored drives.  The log  and archlog share the rest of
>the internal drives in a raid-10 (I think) array plus leaving extra space
>to DB expansion (one server I plan to migrate to these new 6.2 servers has
>a DB size of 190GB which comes to a minimum of ~600GB when converted from
>5.5).  The file devclass is SAN storage in a CLARiiON box.  There are
>3-Qlogic HBA cards.  1-for disk and the other 2-for tape but only 1-in use
>due to lack of switch-ports.

What is the performance of the mirrored drives during migration?
Two mirrored drives (the I/O performance of one drive, unless
both can be used for reads) may not be enough IOPS for the
updates needed during migration.  In other words, you may be
bottlenecked on the DB drives.

What kind of storage does the CLARiiON have?  Are there enough
spindles to put the file pool on a different raidset than the DB/log?
All our TSM instances run to CLARiiON or DMX (mostly CLARiiON), where
disk pools and DB/logs use the same HBAs (two of them).  We get
excellent performance.  If possible, I would put the DB/log
on the CLARiiON - it will be faster than the internal drives.
You get write-back cache, and some read caching.  In fact,
I'm at the point where I don't care where the DB/log/pools
are in the CLARiiON - we just spread everything widely across
all raidsets and get excellent performance.

Also, what is the utilization of the log drives?  Are they
sitting idle most of the time?  If so, those are wasted
IOPS.  You could use more for the DB and less for the logs.
Logs are mostly sequential and don't need many IOPS, which
are much better used for the DB (random access).
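
A quick way to answer the utilization question on Linux is to sample the
disks while a migration is running (a sketch; iostat comes from the sysstat
package):

   iostat -dxk 5      # per-device r/s, w/s, await and %util every 5 seconds
   vmstat 5           # overall run queue, memory and I/O wait

If the DB LUNs sit near 100% utilization with high await while the file pool
and tape devices are mostly idle, the database disks are the bottleneck
described above.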

Rick





AUTO: Alexander Heindl/IT/IM/EAG/AT is out of the office. (Returning 27.10.2010)

2010-10-21 Thread Alexander Heindl

I am away until 27.10.2010.

I will gladly answer your message after my return.

Alexander Heindl


Note: This is an automatic reply to your message "Re: [ADSM-L]
Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage" sent
on 22.10.2010 03:35:20.

This is the only notification you will receive while this person is away.

Re: Copypool storage pool questions

2010-10-21 Thread Paul_Dudley
Thanks for the reply, but one thing I am still confused about is these copypool
tapes sitting in my onsite tape library. Given that the primary reason for this
copypool is for offsite tapes, should these copypool tapes currently residing
in my tape library (created by reclamation processes) be checked out and sent
offsite as well?

Thanks & Regards
Paul


> -Original Message-
> > A lot of these have a status of "Full" however they are only 50/60/70% used.
>
> ...a reflection of files on the filled tape expiring over time, which
> reclamation will combine with other data to fill a new tape.  Surrogate
> reclamation tends to be quite slow compared to direct reclamation, where it
> may not be able to keep up with the need for additional offsite tapes for
> new data, the result being a considerable demand for the scratch tapes you
> have in the library, which could account for your issue of running low on
> scratch tapes.
>
> The only other suggestion that may apply is to verify the devclass Format
> being used, that it uses compression so as to get the most onto the tapes.
>
> Richard Sims








Re: Copypool storage pool questions

2010-10-21 Thread Steven Harris

Think of it this way, Paul (and please excuse me if I pitch this at too
low a level).

You are trying to create an offsite point of consistency every day.  So,
you run your backups overnight, then copy the data to your copypool.  At
this point you should create a DB backup - I prefer to use the classic
DB backup for onsite recovery and a DB Snapshot for offsite - then eject
the current copypool tapes and the DB snapshot from the library. Finally
create a plan file and get it offsite by your preferred method.
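
In command terms, the daily cycle looks roughly like this - just a sketch,
with placeholder pool and device class names, and your site's DRM settings
will differ:

   backup stgpool tapepool offsite_copy wait=yes
   backup db devclass=lto type=full          (classic DB backup, kept onsite)
   backup db devclass=lto type=dbsnapshot    (snapshot that travels offsite)
   move drmedia * wherestate=mountable tostate=vault
   prepare

The move drmedia step checks the copypool tapes and the DB snapshot out of
the library and flags them for the vault; prepare writes the recovery plan file.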

When the meteor hits your primary data centre, you take the offsite
tapes and the plan file to your new hardware.

Unpack the plan file and it will tell you which database backup tape to
use.  Restore the database and you are right where you were at your
point of consistency, so all the data from that night's backup is on the
offsite tapes that you have.  Then you can start your client restores
and stgpool restores.

If you don't have some of the copypool tapes then you will not be able
to restore some part of the data. So unless you maintain both an onsite
copypool and an offsite copypool for the same data, you need to eject
all of the copypool tapes every day.

This means that 1 tape per copypool per day will go offsite
part-filled.  When reclamation starts you may see the tape you just sent
offsite get reclaimed.  No matter, the database snapshot that is with
that tape is consistent.

Two other things to watch with reclamation.  If you run offsite
reclamation the "new" way with the reclaim command it will finish after
examining the number of tapes specified in the offsitereclaimlimit
operand of the command or the same property of the storage pool.  There
is also a bug in 5.5 that affects the selection of offsite volumes for
reclamation.  For these two reasons, I have reverted to the traditional
method of setting and resetting the reclaim property for the storage
pool, as this will restart itself as necessary until all tapes over the
threshold have been reclaimed.
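
For reference, the two approaches look something like this (the pool name is
a placeholder):

   reclaim stgpool offsite_copy threshold=60 offsitereclaimlimit=20

versus the traditional toggle:

   update stgpool offsite_copy reclaim=60     (lower the threshold...)
   update stgpool offsite_copy reclaim=100    (...and raise it again once reclamation has finished)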

I hope this helps

Steve

Steven Harris
TSM Admin
Paraparaumu, New Zealand




