Interesting! The Java program is probably much faster because it runs
on a full-capacity zIIP. At my shop we run an enterprise class machine
and I don't see the same results. It's very difficult to compare Java
vs. native when the GCPs also run at full capacity.
Can you share some of your SMF reports?
On 2020-06-18 7:42 AM, Andrew Rowley wrote:
On 18/06/2020 12:24 am, Kirk Wolf wrote:
Lionel,
I wasn't thinking of using the "all members" form of cp - that seems
like it should be *much* better, although it would depend on how cp
works under the covers - evidence indicates that it just loops and does
alloc/open/close/free on each member. If only the cp authors had better
C library support for PDSs ;-)
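Roughly, the two styles look like this (a sketch; the member names are
placeholders, and the flags follow Andrew's example further down):

#!/bin/sh
# Per-member form: every cp invocation pays its own dynamic
# allocation/open/close/free of the PDS.
for m in MEMBER1 MEMBER2; do
    /bin/cp -T "//'SYS1.MACLIB($m)'" /home/andrewr/temp/$m.txt
done
# "All members" form: a single invocation for the whole PDS.
/bin/cp -T -U -S a=.txt "//'SYS1.MACLIB'" /home/andrewr/temp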
I'm curious - how much time did you save by preallocating the PDS?
Kirk Wolf
http://dovetail.com
Preallocating the PDS gave me about a 12x speed improvement. Here is a
Rexx shell script rxalloc to do the allocation (I couldn't figure out
a way to do bpxwdyn from the shell):
/* rexx */
/* rxalloc: allocate a dataset DISP=OLD and print the ddname returned */
parse arg dataset
call bpxwdyn "alloc da(" || dataset || ") old msg(2) rtddn(ddname)"
say ddname
and a shell script to test:
#!/bin/sh
# Run rxalloc in this shell's address space so the allocation
# persists for the cp that follows.
export _BPX_SHAREAS=YES
./rxalloc SYS1.MACLIB
/bin/cp -T -U -S a=.txt "//'SYS1.MACLIB'" /home/andrewr/temp
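For completeness, a matching cleanup script (my sketch, not from the
original post) could free the allocation again using the ddname that
rxalloc prints:

/* rexx */
/* rxfree: free the allocation made by rxalloc, given its ddname */
parse arg ddname
call bpxwdyn "free fi(" || ddname || ") msg(2)"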
Copying individual members with a progress indicator, as Zigi does,
also seems to add significant overhead. Approximate copy speeds on my
system were:
Zigi: 1 member/second
cp, without preallocation: 8 members/second
cp, with preallocation: 100 members/second
SMF also suggests that cp, for some reason, opens and closes the PDS
twice for each member. I wrote a small Java program to perform the
copy using the JZOS classes; it copied about 200 members/second.
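A minimal sketch of that kind of JZOS copy loop (an illustration, not
Andrew's actual program; the class name PdsCopy, the record-mode fopen
options, and the text conversion are my assumptions):

import com.ibm.jzos.PdsDirectory;
import com.ibm.jzos.ZFile;
import com.ibm.jzos.ZUtil;
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class PdsCopy {
    public static void main(String[] args) throws Exception {
        String pds = args[0];   // e.g. SYS1.MACLIB
        String dir = args[1];   // e.g. /home/andrewr/temp
        String enc = ZUtil.getDefaultPlatformEncoding();
        // Open the PDS directory once and iterate the member entries.
        PdsDirectory members = new PdsDirectory("//'" + pds + "'");
        try {
            while (members.hasNext()) {
                String name =
                    ((PdsDirectory.MemberInfo) members.next()).getName();
                // Record-mode read of one member.
                ZFile in = new ZFile("//'" + pds + "(" + name + ")'",
                                     "rb,type=record,noseek");
                try (Writer out = new OutputStreamWriter(
                        new FileOutputStream(dir + "/" + name + ".txt"))) {
                    byte[] rec = new byte[in.getLrecl()];
                    int len;
                    while ((len = in.read(rec)) >= 0) {
                        // One line per record; trailing blanks are kept.
                        out.write(new String(rec, 0, len, enc));
                        out.write('\n');
                    }
                } finally {
                    in.close();
                }
            }
        } finally {
            members.close();
        }
    }
}

Keeping the directory open for the whole loop means the dataset is
allocated once, which is presumably the same effect the rxalloc trick
gives cp.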