<facetiously>
Perhaps they should also look at moving all embedded SQL calls into a
pre-step to unload the required data to a temporary file, read it, write a
new temporary output file, then have a post-step which loads the changed
DB2 data back into the appropriate tables. That would also reduce the CPU
usage (at least of the step in question), right?
</facetiously>

You've had a cattle stampede. I know because there is a lot of B.S. left
behind. Or perhaps the idea came from a C programmer who thinks that
COBOL is using something like C's built-in qsort() function.
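For the record, the restructuring being proposed amounts to something like
this three-step job (a rough sketch only; all program and dataset names
here are invented):

```jcl
//* Step 1: COBOL pre-process massages the data and writes the sort input
//PRESTEP  EXEC PGM=PREPGM
//OUTREC   DD DSN=&&TOSORT,DISP=(NEW,PASS),UNIT=SYSDA,
//            SPACE=(CYL,(100,50),RLSE)
//* Step 2: external sort (DFSORT or Syncsort)
//SORTSTEP EXEC PGM=SORT
//SORTIN   DD DSN=&&TOSORT,DISP=(OLD,DELETE)
//SORTOUT  DD DSN=&&SORTED,DISP=(NEW,PASS),UNIT=SYSDA,
//            SPACE=(CYL,(100,50),RLSE)
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
/*
//* Step 3: COBOL post-process reads the sorted data
//POSTSTEP EXEC PGM=POSTPGM
//INREC    DD DSN=&&SORTED,DISP=(OLD,DELETE)
```

Whether the extra step overhead and temporary dataset I/O actually saves
CPU over a single program with an internal SORT is exactly the question
Peter is asking.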


On Mon, Nov 25, 2013 at 10:41 AM, Alan Field
<[email protected]> wrote:

> Peter,
>
> Review/research the COBOL compiler FASTSRT option. If you are using it,
> what you suggest will possibly make things worse.
>
> If you aren't using it, it may be a cleaner solution than recoding JCL to
> achieve the desired savings.
>
> Alan Field
> Technical Engineer Principal
> BCBS Minnesota
>
> Phone: 651.662.3546  Mobile:  651.428.8826
>
>
>
>
>
> From:   "Farley, Peter x23353" <[email protected]>
> To:     [email protected],
> Date:   11/25/2013 09:43
> Subject:        Has anyone measured CPU savings using external SORT's vs
> internal (COBOL) SORT's?
> Sent by:        IBM Mainframe Discussion List <[email protected]>
>
>
>
> It has been suggested to management here that there could be potentially
> significant CPU savings from re-engineering application programs such that
> any SORT's are done in a separate step, so that a program with a single
> internal SORT would be broken up into a pre-SORT process followed by an
> external SORT of the massaged data followed by a post-process of the
> SORTed data.
>
> The first obvious factor is that the SORT products (at least Syncsort and
> DFSORT) are *far* more efficient at I/O than any COBOL program can be.  It is also
> obvious that the data volume would affect the relative CPU cost of the two
> methods, with small volume possibly favoring an internal SORT and large(r)
> volume possibly favoring the external SORT process, FSVO "large(r)".
> Compressed (z/OS compression, not disk subsystem compression) vs
> non-compressed data files could also be another factor in CPU differences.
>
> Has anyone else been asked to measure whether this claim is true or not,
> and if true where the "break" point in volume might be?
>
> TIA for any insight you can provide.
>
> Peter
> --
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to [email protected] with the message: INFO IBM-MAIN
>
>
>
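On the FASTSRT suggestion: as I understand it, FASTSRT only takes effect
when the SORT statement uses USING and GIVING rather than INPUT/OUTPUT
PROCEDUREs, so the sort product does the file I/O directly instead of the
COBOL run time handing it records one at a time. A minimal sketch of an
eligible SORT (the file and key names are invented):

```cobol
      * Compiled with the FASTSRT option, e.g. CBL FASTSRT
           SORT SORT-WORK-FILE
               ON ASCENDING KEY SW-ACCOUNT-NO
               USING INPUT-FILE      *> no INPUT PROCEDURE
               GIVING SORTED-FILE.   *> no OUTPUT PROCEDURE
```

If the program already qualifies, much of the claimed external-sort I/O
advantage may already be in hand without touching the JCL.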



-- 
This is clearly another case of too many mad scientists, and not enough
hunchbacks.

Maranatha! <><
John McKown

