Dave Page wrote:
> Oliver reported:
>
> 2. The dump produced:
>
>     CREATE TABLE cust_alloc_history (
>         ...
>         "year" integer DEFAULT date_part('year'::text,
>             ('now'::text)::timestamp(6) with time zone) NOT NULL,
>         ...
>
>     ERROR:  Column "year" is of type integer but default expression is
>     of type double precision
>     You will need to rewrite or cast the expression
>
> For an original definition of:
>
>     year INTEGER DEFAULT date_part('year', CURRENT_TIMESTAMP)
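[Editor's note: as the error message itself suggests, a cast on the default expression should satisfy the type check. A minimal sketch of the workaround, assuming the table definition above, would be to edit the dump before restoring it:]

```sql
-- Hypothetical edited dump entry: cast the double precision result
-- of date_part() back to integer so it matches the column type.
CREATE TABLE cust_alloc_history (
    "year" integer DEFAULT (date_part('year'::text,
        ('now'::text)::timestamp(6) with time zone))::integer NOT NULL
);
```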
Wow. That is clear. Why are we returning "year" as a double? Yes, I see
now:

    test=> \df date_part
                                    List of functions
     Result data type |   Schema   |   Name    |        Argument data types
    ------------------+------------+-----------+-----------------------------------
     double precision | pg_catalog | date_part | text, abstime
     double precision | pg_catalog | date_part | text, date
     double precision | pg_catalog | date_part | text, interval
     double precision | pg_catalog | date_part | text, reltime
     double precision | pg_catalog | date_part | text, time with time zone
     double precision | pg_catalog | date_part | text, time without time zone
     double precision | pg_catalog | date_part | text, timestamp with time zone
     double precision | pg_catalog | date_part | text, timestamp without time zone

I would love to say that this is related to the change in casts, but
that isn't the case. It is the new double-precision handling of dates,
and I see no easy way to fix it. You can't fix it after the data load,
either, because the table was never created. Yuck.

I have to ask: why are we using a double here rather than a 64-bit
integer, if available?

--
  Bruce Momjian                   |  http://candle.pha.pa.us
  [EMAIL PROTECTED]               |  (610) 359-1001
  +  If your life is a hard drive,|  13 Roberts Road
  +  Christ can be your backup.   |  Newtown Square, Pennsylvania 19073

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster
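[Editor's note: the declared return type can also be confirmed directly from SQL. A sketch, with the caveat that `pg_typeof()` only exists in later PostgreSQL releases than the one discussed here:]

```sql
-- Ask the server what type the default expression actually yields.
-- On releases with pg_typeof(), this reports "double precision",
-- which is exactly why the integer column's default is rejected.
SELECT pg_typeof(date_part('year', CURRENT_TIMESTAMP));
```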