> I agree. In my last job, it took my being able to diagnose a bad
> problem from an sadump to get the manager to allow me to take an
> sadump on a more important system. I suggest another SHARE session
> showing how fast an sadump can be taken these days, if it is set up
> right. Throw in pointers on how to convince management to allow an
> sadump. But also think of the smaller installations that cannot use
> HyperPAV.
>
> I had a very bad case of envy when I saw that one customer routinely
> takes sadumps of a 15GB-real LPAR, each taking only 4 minutes (and
> they had pretty much everything included except storage above the bar).
> That sadump resulted in a 'meager' 63,000-cylinder sadump data set. And
> they weren't even using autoipl and/or a fully automated sadump (as
> in: the operator was still confirming all the options and typing in
> the title, which contributed to the 4 minutes). I advised them to make
> their procedure fully automated and to use autoipl to speed things
> up.
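
For reference, the fully automated setup mentioned above is requested
with the AUTOIPL statement in a DIAGxx parmlib member. A minimal
sketch, where the device number and load parameter are hypothetical
placeholders:

   AUTOIPL SADMP(0E30,SADMPLP) MVS(LAST)

This IPLs stand-alone dump from device 0E30 when the system enters an
eligible disabled wait state, then re-IPLs MVS with the same
parameters as the last IPL. Activate the member with SET DIAG=xx,
and check the Initialization and Tuning Reference for the full
operand list.
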
The last "how fast can SADMP go when optimally configured"
measurement that I did was in 2009, on a z10 processor, using whatever
the current DS8xxx DASD model was at that time. A 16-volume
data set was used, spread over multiple LCUs (and maybe over
several physical boxes).
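
For the allocation itself, a multivolume SADMP dump data set can be
set up with the AMDSADDD REXX utility in SYS1.SBLSCLI0. From memory
(so treat the argument order as an assumption and verify it against
the AMDSADDD prompts or MVS Diagnosis: Tools and Service Aids), the
invocation looks roughly like:

   EX 'SYS1.SBLSCLI0(AMDSADDD)' 'DEFINE (SADF00,SADF01,SADF02,SADF03)(PDBANN.SADMP16) 3390 4000 Y'

with one volser per volume (all 16 in this case; only 4 are shown
here, and the 4000-cylinder size is a made-up placeholder), spread
across LCUs as in the allocation below: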
AMD104I   DEVICE  VOLUME  USED  DATA SET NAME
       1   1AC0   SADF01    5%  PDBANN.SADMP16
       2   22C0   SADF02    5%  PDBANN.SADMP16
       3   2AC0   SADF03    5%  PDBANN.SADMP16
       4   12C1   SADF04    4%  PDBANN.SADMP16
       5   1AC1   SADF05    5%  PDBANN.SADMP16
       6   22C1   SADF06    4%  PDBANN.SADMP16
       7   2AC1   SADF07    4%  PDBANN.SADMP16
       8   1340   SADF08    7%  PDBANN.SADMP16
       9   1B40   SADF09    5%  PDBANN.SADMP16
      10   2340   SADF0A    5%  PDBANN.SADMP16
      11   2B40   SADF0B    6%  PDBANN.SADMP16
      12   1341   SADF0C    4%  PDBANN.SADMP16
      13   1B41   SADF0D    4%  PDBANN.SADMP16
      14   2341   SADF0E    4%  PDBANN.SADMP16
      15   2B41   SADF0F    4%  PDBANN.SADMP16
      16   12C0   SADF00    5%  PDBANN.SADMP16
The result was:
Total Dump Statistics

  Start time                          05/14/2009 14:09:58.560371
  Stop time                           05/14/2009 14:11:20.898789
  Elapsed time                        00:01:22.33
  Elapsed dumping time                00:00:30.77
  Console reply wait time             00:00:51.55
  Console I/O wait time               00:00:01.13
  Output I/O short wait time          00:00:05.26
  Output I/O long wait time           00:00:00.00
  Work file I/O time                  00:00:00.86
  DASD error delay time               00:00:00.00
  Nonwait elapsed time                00:00:23.02
  Cpu Timer                           00:01:22.24
  Page buffer steal time              00:00:00.31
  Paging I/O wait time (Single)       00:00:00.00
  Paging I/O wait time (Batch)        00:00:00.49
  CPU busy percentage                 74
  Zero pages suppressed               737,419
  Logical records dumped              9,751,498
  Modified LR output tack ons         1,218,103
  Normal output tack ons              406,648
  Modified LR unit checks             5
  Modified LR unit checks (TIC)       0
  Idle output SSCHs                   519
  I/O interrupt output SSCHs          14
  Entries to DASD error recovery      5
  Short output waits                  101,222
  Long output waits                   0
  Branch Entries to AMDSAGTM          210
  BCTRs Created                       0
  Page buffers from available         1,541,334
  Page buffers stolen                 0
  UseData(No) pages                   3,075
  Average output data rate            1,237.95 megabytes per second
  Address space real pages            82,614
  Data space real pages               22,312
  High virtual real pages             9,447,933
  Address space immediate aux pages   0
  Data space immediate aux pages      0
  High virtual immediate aux pages    0
  Address space deferred aux pages    2,528
  Data space deferred aux pages       552
  High virtual deferred aux pages     1
  Unresolved page faults              0
  Single page reads                   0
  Single page read rate               0.00 megabytes per second
  Successful batch reads              115
  Successful batch pages              3,081
  Failed batch reads                  0
  Failed batch pages                  0
  Batch Buffer Shortages              12
  Extra Buffers Batch Could Use       1,437
  No Wait SIO Batches                 40
  Batch read rate                     24.56 megabytes per second
Note that we did very little reading from AUX during this dump
(which can be very slow, due to the 4K blocksize for page data sets),
so the overall dumping rate was around 1.2GB per second. The rate
while dumping real storage was around 1.5GB per second. Taking the
same dump to a single-volume data set had a real storage dumping rate
of around 118MB per second.
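
As an arithmetic cross-check on the report above: 9,751,498 logical
records x 4KB is about 38,092MB (in 2**20-byte megabytes), and
38,092MB / 30.77 seconds of elapsed dumping time gives the reported
1,237.95MB per second. Note also that 16 volumes at ~1.5GB per second
versus one volume at ~118MB per second is roughly a 13x speedup,
not far from linear in the number of volumes.
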
On 2013-era processors and DASD, I typically see a real storage
dumping rate to a single volume of around 160MB per second
(about 35% faster than 2009). I haven't had an opportunity
to measure an optimally spread 16-volume configuration on
current hardware.
We have measured SADMP reading from Flash memory on a zEC12
machine at around 1GB per second. For that measurement, we did
not have an optimal output DASD configuration, so the dumping
rate for the Flash data was limited to around 500MB per second.
Whether the reading from Flash and the dumping to DASD would
have overlapped well enough to sustain 1GB per second with a
better DASD configuration, I don't know.
Jim Mulder
z/OS System Test
IBM Corp.
Poughkeepsie, NY