Celia McInnis wrote:
> 
> The following bug has been logged online:
> 
> Bug reference:      1578
> Logged by:          Celia McInnis
> Email address:      [EMAIL PROTECTED]
> PostgreSQL version: 8.0.1
> Operating system:   Linux
> Description:        ::bit(n) behaves "differently" when applied to bit strings
> than to integers.
> Details: 
> 
> It's probably not good (at least for mathematicians!) to have the following
> give different results:
> 
> select B'1110110101'::bit(6);
> select B'1110110101'::integer::bit(6);
> 
> The first gives 110101 (the 6 least significant bits).
> The second gives 111011 (the 6 most significant bits).

I ran some tests on your example:

        test=> select B'1'::bit(6);
          bit
        --------
         100000
        (1 row)
        
        test=> select B'1'::integer::bit(6);
          bit
        --------
         000001
        (1 row)
        
        test=> select B'100000'::bit(6);
          bit
        --------
         100000
        (1 row)
        
        test=> select B'100000'::integer::bit(6);
          bit
        --------
         100000
        (1 row)

From this, it seems the issue is how ::bit should pad a string if the
number of bits supplied in the string is less than the length specified.
I think it is behaving correctly to pad with zeros on the end because it
is not a number but a string of bits.
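To spell that rule out, here is a rough Python sketch of what the bit-string cast appears to do, based on the test output above (my paraphrase of the observed behavior, not PostgreSQL's actual code):

```python
def bitstring_cast(bits: str, n: int) -> str:
    # ::bit(n) applied to a bit string: keep the leftmost n bits,
    # padding with zeros on the right if the input is shorter.
    # This treats the input as a string of bits, not a number.
    return bits[:n].ljust(n, "0")

# Matches the psql output above:
# bitstring_cast("1", 6)      -> "100000"
# bitstring_cast("100000", 6) -> "100000"
```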

What happens with the ::integer cast is that the string is expanded to
32 bits, and then cast to only six.  An argument could be made that it
should then take the left 6 bits and they should all be 0's.  In fact,
that's what it does if you supply just a string with zero padding:
        
        test=> select B'0000000000100000'::bit(6);
          bit
        --------
         000000
        (1 row)

        test=> select B'0000000000100000'::integer::bit(6);
          bit
        --------
         100000
        (1 row)

Looking at the code, backend/utils/adt/varbit.c::bitfromint4 is a
special function just for converting to bit from int4, and this piece of
the code seems most significant:

    /* drop any input bits that don't fit */
    srcbitsleft = Min(srcbitsleft, destbitsleft);

And I think it is done this way because int4 has a fixed width: a small
value's significant bits always sit at the low end of the 32-bit
representation, so the code assumes you want the lower n bits during the
conversion.  Though this is slightly inconsistent with how the cast
works on a bit string, it does seem the most useful approach.
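For comparison, the same sort of sketch for the integer path — again my paraphrase of the behavior described above, not the varbit.c code itself:

```python
def int_to_bit(value: int, n: int) -> str:
    # ::integer::bit(n): widen the value to its 32-bit binary
    # representation, then keep only the low-order (rightmost) n bits.
    return format(value & 0xFFFFFFFF, "032b")[-n:]

# Matches the psql output above:
# int_to_bit(int("1", 2), 6)      -> "000001"
# int_to_bit(int("100000", 2), 6) -> "100000"
```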

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
