> 2)
>
> static double
> uuid2num(const pg_uuid_t *i)
> {
>     return *((uint64 *) i);
> }
> This doesn't look like a correct transformation to me. Maybe it's
> better to transform to the numeric type (a UUID looks like a 16-digit
> hexadecimal number) and follow the gbt_numeric_penalty() logic (or
> even call it directly).
Thanks for the review! A UUID is actually not stored as a string of
hexadecimal digits. (It is normally displayed that way, but with 32
digits, not 16.) Rather, it is stored as an unstructured 128-bit value
(which in C is 16 unsigned chars). Here is the easy-to-misread
declaration from src/backend/utils/adt/uuid.c:
#define UUID_LEN 16

struct pg_uuid_t
{
    unsigned char data[UUID_LEN];
};
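
(To illustrate the storage-versus-display distinction, here is a tiny
standalone program, not Postgres code, with made-up byte values; it
prints 16 raw bytes the way a UUID is conventionally displayed, as 32
hex digits:)

#include <stdio.h>

int
main(void)
{
    /* 16 made-up bytes, as in pg_uuid_t's data array */
    unsigned char data[16] = {
        0xa0, 0xee, 0xbc, 0x99, 0x9c, 0x0b, 0x4e, 0xf8,
        0xbb, 0x6d, 0x6b, 0xb9, 0xbd, 0x38, 0x0a, 0x11
    };
    int i;

    for (i = 0; i < 16; i++)
        printf("%02x", data[i]);
    printf("\n");               /* 32 hex digits */
    return 0;
}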
I would love to just cast this to a 128-bit unsigned int. But it looks
like Postgres doesn't always have 128-bit ints available, so my code
takes the top half (the first 8 bytes) and uses that for penalty
calculations. It seemed to
me that was "good enough" for this purpose.
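
For what it's worth, here is a sketch of how the top half could be
taken byte by byte instead of through a pointer cast, so the result
agrees with the memcmp-based ordering of UUIDs regardless of host byte
order. It assumes access to pg_uuid_t's data member (which, per the
declaration above, is private to uuid.c), and converting to double
still rounds to 53 bits of mantissa:

static double
uuid2num(const pg_uuid_t *i)
{
    uint64      hi = 0;
    int         j;

    /* Fold the first 8 bytes into a uint64, most significant first. */
    for (j = 0; j < 8; j++)
        hi = (hi << 8) | (uint64) i->data[j];

    return (double) hi;
}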
The only other 128-bit type I found in btree_gist was Interval. For that
type we convert to a double using INTERVAL_TO_SEC, then call
penalty_num. By my read, that accepts a similar loss of precision.
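
To make the parallel concrete, here is roughly the shape I have in
mind for the penalty function, modeled on the numeric types' penalty
functions in btree_gist; uuidKEY (a lower/upper pair) is the key
struct name assumed here, so treat this as a sketch rather than the
final patch:

Datum
gbt_uuid_penalty(PG_FUNCTION_ARGS)
{
    GISTENTRY  *o = (GISTENTRY *) PG_GETARG_POINTER(0);
    GISTENTRY  *n = (GISTENTRY *) PG_GETARG_POINTER(1);
    float      *result = (float *) PG_GETARG_POINTER(2);
    uuidKEY    *origentry = (uuidKEY *) DatumGetPointer(o->key);
    uuidKEY    *newentry = (uuidKEY *) DatumGetPointer(n->key);
    double      olower,
                oupper,
                nlower,
                nupper;

    /* Reduce each 128-bit bound to a double, then reuse penalty_num. */
    olower = uuid2num(&origentry->lower);
    oupper = uuid2num(&origentry->upper);
    nlower = uuid2num(&newentry->lower);
    nupper = uuid2num(&newentry->upper);

    penalty_num(result, olower, oupper, nlower, nupper);

    PG_RETURN_POINTER(result);
}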
If I'm mistaken about 128-bit integer support, let me know, and maybe we
can do the penalty computation on the whole UUID. Or maybe I should just
convert the uint64 to a double before calling penalty_num? I don't
completely understand what the penalty calculation is all about, so I
welcome suggestions here.
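
In case it helps, here is the shape I would imagine for a whole-UUID
version, assuming the uint128 typedef that c.h provides when configure
detects a 128-bit integer type; uuid2num_full is just a name for this
sketch, I haven't compiled it, and either way the double result
carries only 53 bits of mantissa:

static double
uuid2num_full(const pg_uuid_t *i)
{
#ifdef HAVE_INT128
    uint128     v = 0;
    int         j;

    /* Fold all 16 bytes into a uint128, most significant first. */
    for (j = 0; j < UUID_LEN; j++)
        v = (v << 8) | (uint128) i->data[j];

    return (double) v;
#else
    return uuid2num(i);         /* fall back to the top 64 bits */
#endif
}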
Thanks again,
Paul