It has been suggested to management here that there could be significant CPU 
savings from re-engineering application programs so that any SORTs are done in 
a separate step: a program with a single internal SORT would be broken up into 
a pre-SORT step that massages the data, an external SORT of that data, and a 
post-process step that reads the SORTed output.
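For concreteness, the restructured job would look roughly like this three-step 
sketch (the program names, data set names, and SORT control statement are all 
made up for illustration):

```jcl
//* Step 1: hypothetical COBOL pre-process writes the massaged records
//PRESORT  EXEC PGM=PRESORT1
//INFILE   DD DSN=PROD.INPUT.DATA,DISP=SHR
//OUTFILE  DD DSN=&&MASSAGED,DISP=(NEW,PASS),
//            UNIT=SYSDA,SPACE=(CYL,(50,10))
//*
//* Step 2: external SORT (DFSORT or Syncsort) of the massaged data
//SORTSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=&&MASSAGED,DISP=(OLD,PASS)
//SORTOUT  DD DSN=&&SORTED,DISP=(NEW,PASS),
//            UNIT=SYSDA,SPACE=(CYL,(50,10))
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
/*
//*
//* Step 3: hypothetical COBOL post-process reads the SORTed output
//POSTPROC EXEC PGM=POSTPRC1
//INFILE   DD DSN=&&SORTED,DISP=(OLD,PASS)
//REPORT   DD SYSOUT=*
```

versus the single step with the internal SORT (and its INPUT/OUTPUT 
PROCEDUREs) that the job runs today.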

The first obvious factor is that the SORT products (at least Syncsort and 
DFSORT) are *far* more efficient at I/O than any COBOL program can be.  It is 
also obvious that data volume would affect the relative CPU cost of the two 
methods: small volumes would possibly favor an internal SORT and large(r) 
volumes would possibly favor the external SORT, FSVO "large(r)".  Whether the 
data files are compressed (z/OS compression, not disk subsystem compression) 
could be another factor in the CPU differences.

Has anyone else been asked to measure whether this claim is true, and if so, 
where the break point in volume might be?

TIA for any insight you can provide.

Peter
--


----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
