Michael Glaesemann <[EMAIL PROTECTED]> writes:
> On May 31, 2005, at 12:48 AM, Tom Lane wrote:
>> Actually, practically all of the Postgres code assumes int is at least
>> 32 bits.  Feel free to change pg_tm's field to be declared int32
>> instead of just int if that bothers you, but it is really quite academic.

> Thanks for the clarification. My instinct would be to change it so
> that it's no longer just an assumption. Is there any benefit to
> changing the other pg_tm int fields to int32? I imagine int is used
> quite a bit throughout the code, and I'd think assuming 32-bit ints
> would have bitten people in the past if it were invalid, so perhaps
> changing them is unnecessary.
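
For concreteness, the change under discussion would look about like
this.  This is only a sketch against the pg_tm declaration in
src/include/pgtime.h (field comments from memory), using the int32
typedef from c.h, and leaving the non-int fields alone:

    struct pg_tm
    {
        int32       tm_sec;
        int32       tm_min;
        int32       tm_hour;
        int32       tm_mday;
        int32       tm_mon;     /* origin 0, not 1 */
        int32       tm_year;    /* relative to 1900 */
        int32       tm_wday;
        int32       tm_yday;
        int32       tm_isdst;
        long int    tm_gmtoff;  /* non-integer-width fields untouched */
        const char *tm_zone;
    };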

As I understand it, the received wisdom of the C community is that
"int" means the machine's natural, most efficient word width.  The
C specification was written at a time when a fair percentage of hardware
thought that meant int16 (and I do remember programming such hardware).
But there are no longer any machines ... or at least none on which you'd
want to run Postgres ... for which int means int16; today I'd assume
that int means "probably int32, maybe int64 if that's really faster
on this machine".
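
If the implicit assumption bothers anyone, it's easy enough to make
the compiler enforce it rather than declaring anything.  A minimal
sketch (not anything that's actually in the tree) using the old
negative-array-size trick, which works in plain C89:

    #include <limits.h>

    /* Compilation fails with a constant-expression error if int is
     * narrower than 32 bits, because the array size goes negative. */
    typedef char int_is_at_least_32_bits[(INT_MAX >= 2147483647L) ? 1 : -1];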

                        regards, tom lane
