I have no official standing, but I've spent a lot of time reading the
JDBC spec and working with various implementations, and I concur. 
Precision should be the maximum number of significant digits the column
is capable of returning.
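
For illustration, a minimal sketch of how the reported value can be
observed from JDBC (the connection URL, credentials, and the table and
column names "t"/"id" are made up here):

import java.sql.*;

public class PrecisionCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/test", "user", "pass");
             Statement st = conn.createStatement();
             // "t" is any table with a bigint column "id"
             ResultSet rs = st.executeQuery("SELECT id FROM t")) {
            ResultSetMetaData md = rs.getMetaData();
            // Affected driver versions report 0 here for a bigint
            // column; the expected value is 19.
            System.out.println(md.getPrecision(1));
        }
    }
}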
 
-Kevin
 
>>> Gilles Dubochet <[EMAIL PROTECTED]> 06/13/05 3:27 AM >>>
>> With the JDBC driver, at least up to version 8.1dev-400, the result
>> of the getPrecision method of ResultSetMetaData on a bigint column
>> is 0 instead of the expected 19.
>>
>
> This has been reported before, but I haven't gotten around to fixing
> it yet. This is partly because I haven't seen a good explanation of
> exactly what we should be returning here -- what spec says we should
> return 19?
>

Well, in PostgreSQL, BIGINT uses 8 bytes (that is what the
documentation says, at least). Now, with 8 bytes (64 bits), the range
of numbers that can be represented is:

For 63 value bits + 1 sign bit: [-(2^63), 2^63 - 1] =
[-9223372036854775808, 9223372036854775807]
For 64 bits (unsigned): [0, 2^64 - 1] = [0, 18446744073709551615]

If you count the digits in these numbers, you'll notice that at most
19 decimal digits are required to represent any signed value (if the
sign comes for free, which seems to be assumed for other data types
such as INT or SMALLINT).
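
A quick way to check those digit counts (plain Java, nothing
driver-specific; BIGINT maps to Java's long):

public class DigitCount {
    public static void main(String[] args) {
        // 9223372036854775807 -> 19 digits
        System.out.println(String.valueOf(Long.MAX_VALUE).length());
        // -9223372036854775808 -> 19 digits once the sign is dropped
        System.out.println(String.valueOf(Long.MIN_VALUE).length() - 1);
    }
}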

For the unsigned range, 20 decimal digits would be required. But as
far as I understand the PostgreSQL reference, integers are always
signed; the serial data types are the only exception, and their range
is that of a signed number anyway (since they need to be compatible
with "normal" integer types to represent references).

This is why I believe 19 is the value the getPrecision method should
return. I don't think there is a standard reference that defines it,
but it seems pretty clear what the value should be.
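
If it helps, here is a hypothetical sketch of the mapping I have in
mind (not the driver's actual code; the helper name is made up):

import java.sql.Types;

class IntegerPrecision {
    // Maximum number of significant decimal digits for the
    // signed integer types, per the reasoning above.
    static int precisionFor(int sqlType) {
        switch (sqlType) {
            case Types.SMALLINT: return 5;   // 32767
            case Types.INTEGER:  return 10;  // 2147483647
            case Types.BIGINT:   return 19;  // 9223372036854775807
            default:             return 0;   // not handled here
        }
    }
}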

