My understanding of the difference between unsigned and signed packed decimal values is that the rightmost nibble, which holds the sign in a signed packed value, holds a digit in an unsigned one.
Consider a four-byte value. The hardware interprets a signed value as |d|d|d|d|d|d|d|s|, giving a precision of 7 decimal digits, and an unsigned one as |d|d|d|d|d|d|d|d|, a precision of 8 decimal digits. The two nibble types are thus easy to distinguish: the decimal-digit codes are the sequence 0000, 0001, . . . , 1001, and the sign codes are the [logically] larger four-bit values, 1010 through 1111. See pp. 9-2ff of the PrOp. I do not recommend mixing them.

John Gilmore, Ashland, MA 01721 - USA

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
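The nibble layouts John describes can be sketched in Python. This is a minimal illustration, not IBM code; decode_packed is a hypothetical helper name, and it assumes the z/Architecture sign conventions from the PrOp (0xB and 0xD are minus codes, the remaining values 0xA-0xF are plus codes).

```python
def decode_packed(data: bytes, signed: bool) -> int:
    """Decode a packed-decimal field (hypothetical helper, not an IBM API).

    Signed layout:   |d|d| ... |d|s|  -- the rightmost nibble is the sign.
    Unsigned layout: |d|d| ... |d|d|  -- every nibble is a digit.
    """
    # Split each byte into its high and low nibbles, left to right.
    nibbles = []
    for b in data:
        nibbles.append(b >> 4)
        nibbles.append(b & 0x0F)

    if signed:
        *digits, sign = nibbles
        # Sign codes are the logically larger four-bit values, 0xA-0xF.
        if sign < 0x0A:
            raise ValueError("sign nibble must be 0xA-0xF")
    else:
        digits, sign = nibbles, None

    # Digit codes are 0000 through 1001, i.e. 0-9.
    if any(d > 9 for d in digits):
        raise ValueError("digit nibbles must be 0-9")

    value = 0
    for d in digits:
        value = value * 10 + d

    # Per the PrOp conventions assumed here, 0xB and 0xD mean minus.
    if sign in (0x0B, 0x0D):
        value = -value
    return value
```

For example, the four bytes 12 34 56 7C decode as the signed value +1234567 (7 digits plus a sign), while the same-length field 12 34 56 78 decodes as the unsigned value 12345678 (8 digits).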
