> -----Original Message-----
> From: Paul Johnson [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, August 21, 2001 9:37 AM
> To: Bob Showalter
> Cc: '[EMAIL PROTECTED]'
> Subject: Re: question concerning signed char
> 
> 
> On Tue, Aug 21, 2001 at 09:24:06AM -0400, Bob Showalter wrote:
> 
> > a signed char is an integer data type with a size of one byte:
> > 7 bits of magnitude and 1 bit for sign (twos-complement). The
> > range is -128 to +127.
> 
> Technically, "twos complement" ne "sign and magnitude".  The
> difference, in 8 bits, is with 10000000, which is either -0 or -128,
> depending on your encoding.  Sign and magnitude has two bit patterns
> for 0 and a symmetric range.  Twos complement generally makes the
> hardware easier.

Actually, 10000000 is 128 as an unsigned char, no?

Yes, I understand the difference. I thought that writing simply
"twos-complement" would be an insufficient explanation. But you are
quite correct that the high bit is not, strictly speaking, a sign bit.
Otherwise, the bit pattern 10000000 would be -0 instead of -128.

Thanks.
