On 11 November 2015 at 13:58, Bharath Nunepalli <[email protected]> wrote:
> 1. The size of the file on Optim server (before FTP to z/OS) is ~1.7 GB,
>     and records count is ~12 million

I have no idea what "Optim server" is, but from the context it appears
to generate or at least store a file in the NETDATA format that is the
input to RECEIVE. For historical reasons (transmission as card-image
datastreams), NETDATA format records are always 80 bytes, so
presumably that is what you mean by "records count".

> 2. The size of the Input Dataset (into which the Optim file is FTPed) is ~1.7 
> GB (32243 Tracks*56664 bytes),

OK - this is the number of tracks allocated by FTP, based on your FTP
statements:

> quote site blksize=3120 recfm=fb lrecl=80 conddisp=delete
> binary
> quote site blocks PRImary=483640 SECondary=10
> put "E:\Data\XXXXX\XXXXX003_DPEXT.000" "''XXXXXXX.XXXXX003.X000''"

In other words someone (FTP, SMS...) decided that 32243 tracks would
be necessary to contain the data when blocked (as you requested) at
3120.
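
That allocation arithmetic can be checked with a short sketch. The
figures below all come from this thread (the SITE command and the
poster's own 15-blocks-per-track figure); the 56664 bytes/track is the
standard 3390 track capacity:

```python
import math

BLKSIZE = 3120             # from the SITE command
BLOCKS_REQUESTED = 483640  # PRImary from the SITE command
BLOCKS_PER_TRACK = 15      # 3120-byte blocks per 3390 track (the poster's figure)
TRACK_CAPACITY = 56664     # usable bytes per 3390 track

tracks = math.ceil(BLOCKS_REQUESTED / BLOCKS_PER_TRACK)
print(tracks)                   # 32243 -- matches the allocation reported
print(tracks * TRACK_CAPACITY)  # ~1.83e9 bytes, i.e. roughly the ~1.7 GB quoted
```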

>     and the records count can be ~483,645 (for 3120 blksize, 15 
> Records/Track, so 32243*15)

No - the record count will still be ~12 million. Nothing in the FTP
changes the number of records. You can't calculate backwards to derive
a record count like this.
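
To see why, sketch the reblocking: for fixed-length (FB) data,
changing BLKSIZE changes only how many records are packed per block,
never how many records there are. The 27920 figure below is a common
half-track BLKSIZE for LRECL 80 on 3390; the record count is the
approximate one quoted in this thread:

```python
LRECL = 80
NRECORDS = 12_000_000  # approximate count quoted for the Optim file

for blksize in (3120, 27920):  # 27920 = 349 * 80
    recs_per_block = blksize // LRECL
    blocks = -(-NRECORDS // recs_per_block)  # ceiling division
    print(f"BLKSIZE={blksize}: {recs_per_block} records/block, {blocks} blocks")

# Either way the data set holds the same 12,000,000 logical records;
# only the block count (and hence the space used) differs.
```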

> 3. The size of the Output Dataset is ~8.91 GB,

Yes, this is because you specified a BLKSIZE of 132, which, as already
discussed, is extremely inefficient.
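
A rough comparison shows how badly tiny blocks waste a track. The
per-track block counts are the ones quoted in this thread; the 3390
track capacity and the half-track figure are standard values, and the
sketch ignores the exact gap arithmetic:

```python
TRACK = 56664  # usable bytes per 3390 track

cases = {
    132:   75,  # the output data set: tiny blocks, 75 blocks/track
    3120:  15,  # the input data set:  15 blocks/track
    27998:  2,  # half-track blocking, the usual target
}
for blksize, per_track in cases.items():
    data = blksize * per_track
    print(f"BLKSIZE={blksize:>5}: {data:>6} data bytes/track "
          f"({100 * data / TRACK:.0f}% of track)")
```

At BLKSIZE 132 only about 17% of each track holds data, which is why
the same records balloon to ~8.91 GB.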

>     and the records count can be ~12,659,996 (for 124 blksize, 75 
> Records/Track, so 168799.95*75)

The record count will be that of the original input to whatever
generated the NETDATA format on the "Optim server". You have said
nothing about what that is.

> The row count of the file in Optim Server is close to the row count of Output 
> Dataset.

OK - now you are bringing "row count" into the discussion for the
first time. I guess that this is the "record count" I mentioned in the
previous paragraph, i.e. the original count of logical records
wherever this data originates (SQL Server?).

> The size of the file in Optim server is close to the size of the Input 
> Dataset.

I don't know what "file" and "Input Dataset" you are referring to
here. Please be clear.

> Am I missing something here, or totally calculating in a wrong way??

Yes, I think so.

> Why is the size of Input Dataset is close to size of the Optim file, but with 
> less rows???

You cannot mix "rows" in some original data with "records" as encoded
in the NETDATA format. The NETDATA representation can use more or
fewer records than the original file, depending on the relative record
lengths. Typically it uses more.
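
As a very rough illustration of why it typically uses more, here is a
simplified model of the NETDATA encoding: logical records are carved
into segments carrying a 2-byte header, and the segment stream is
packed into 80-byte physical records. This ignores the control records
(INMR01 etc.) and other details, and the 132-byte row length is only an
assumption based on the figures in this thread:

```python
import math

def netdata_records(nrows, rowlen, max_seg_data=253):
    """Approximate count of 80-byte NETDATA records for nrows fixed-length
    logical records, modelling only segmentation overhead (2 bytes per
    segment); control records are ignored."""
    segs = math.ceil(rowlen / max_seg_data)       # segments per logical record
    stream = nrows * (rowlen + 2 * segs)          # total bytes in segment stream
    return math.ceil(stream / 80)                 # packed into 80-byte records

# ~12.66 million 132-byte rows, as discussed in this thread:
recs = netdata_records(12_659_996, 132)
print(recs, recs * 80)  # ~21 million 80-byte records, ~1.7e9 bytes
```

Under this model the NETDATA file holds considerably more 80-byte
records than the original row count, while its byte size comes out near
the ~1.7 GB reported.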

Did you do as I and others suggested, and specify (or allow to
default) the output BLKSIZE to something sensible?

Tony H.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
