Re: [BUGS] BUG #1816: Insert null values on a null field

2005-08-11 Thread Richard Huxton

Rafael Barrios wrote:

Richard, thanks for your response.
here is the information you requested:


Don't forget to cc: the mailing list.


CREATE TABLE tbarrio (
   bar_coddep character varying(2) NOT NULL,
   bar_codmun character varying(3) NOT NULL,
   bar_codbarrio character varying(3) NOT NULL,
   bar_barrio character varying(40) NOT NULL,
   bar_zongeografica character varying(1) NOT NULL
);

ALTER TABLE ONLY tbarrio
   ADD CONSTRAINT pk_tbarrio PRIMARY KEY (bar_coddep, bar_codmun, 
bar_codbarrio);

ALTER TABLE ONLY tbarrio
   ADD CONSTRAINT fk_tbarrio_tmunicipio FOREIGN KEY (bar_coddep, bar_codmun)
REFERENCES tmunicipio(mun_coddep, mun_codmun) ON UPDATE CASCADE;



Failing insert (adds records to table 'tbarrio'; note the empty values):

INSERT INTO tbarrio VALUES ('05', '001', '', '', '');


These aren't null - they are empty strings. You haven't forbidden empty 
strings.


You'll want constraint(s) to do something like:

ALTER TABLE tbarrio
ADD CONSTRAINT no_empty_codbarrio
CHECK (bar_codbarrio > '');

http://www.postgresql.org/docs/8.0/static/sql-altertable.html
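
For illustration, with such a constraint in place (and similar ones on the
other columns, if desired), the insert above would be rejected rather than
slipping through; a sketch, where the exact error wording may differ:

INSERT INTO tbarrio VALUES ('05', '001', '', '', '');
ERROR:  new row for relation "tbarrio" violates check constraint "no_empty_codbarrio"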

HTH
--
  Richard Huxton
  Archonet Ltd



Re: [BUGS] Data Selection Slow From VB 6.0

2005-08-11 Thread Richard Huxton

Mahesh Shinde wrote:

Hi, I am using Postgres version *PostgreSQL 7.4.5 on
i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2 20030222 (Red Hat
Linux 3.2.2-5)* for a multi-user desktop application, with VB 6.0
as the front-end tool.


Mahesh - please post this to the general mailing list, not the bug-list.
You have not described a bug in PostgreSQL, and there are more people
who can help on the general list anyway.

--
  Richard Huxton
  Archonet Ltd



[BUGS] BUG #1819: COPY filename rejects Windows format path

2005-08-11 Thread Steve Peterson

The following bug has been logged online:

Bug reference:  1819
Logged by:  Steve Peterson
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.3
Operating system:   Windows XP SP 2
Description:        COPY filename rejects Windows format path
Details: 

Running COPY FROM on a Windows server; using a Windows-format fully
qualified path with backslashes results in the backslashes being interpreted
as escapes.



[BUGS] BUG #1816: Insert null values on a null field

2005-08-11 Thread Rafael Barrios
Thanks for your valuable response. 
-- Original Message ---
From: Richard Huxton 
To: Rafael Barrios <[EMAIL PROTECTED]>, pgsql-bugs@postgresql.org
Sent: Thu, 11 Aug 2005 08:14:13 +0100
Subject: Re: [BUGS] BUG #1816: Insert null values on a null field

> Rafael Barrios wrote:
> > Richard, thanks for your response.
> > here is the information you requested:
> 
> Don't forget to cc: the mailing list.
> 
> > CREATE TABLE tbarrio (
> >bar_coddep character varying(2) NOT NULL,
> >bar_codmun character varying(3) NOT NULL,
> >bar_codbarrio character varying(3) NOT NULL,
> >bar_barrio character varying(40) NOT NULL,
> >bar_zongeografica character varying(1) NOT NULL
> > );
> > 
> > ALTER TABLE ONLY tbarrio
> >ADD CONSTRAINT pk_tbarrio PRIMARY KEY (bar_coddep, bar_codmun,
bar_codbarrio);
> > 
> > ALTER TABLE ONLY tbarrio
> >ADD CONSTRAINT fk_tbarrio_tmunicipio FOREIGN KEY (bar_coddep, bar_codmun)
> > REFERENCES tmunicipio(mun_coddep, mun_codmun) ON UPDATE CASCADE;
> 
> > Failing insert (adds records to table 'tbarrio'; note the empty values):
> > 
> > INSERT INTO tbarrio VALUES ('05', '001', '', '', '');
> 
> These aren't null - they are empty strings. You haven't forbidden 
> empty strings.
> 
> You'll want constraint(s) to do something like:
> 
> ALTER TABLE tbarrio
> ADD CONSTRAINT no_empty_codbarrio
> CHECK (bar_codbarrio > '');
> 
> http://www.postgresql.org/docs/8.0/static/sql-altertable.html
> 
> HTH
> --
>Richard Huxton
>Archonet Ltd
--- End of Original Message ---




Re: [BUGS] BUG #1800: "unexpected chunk number" during pg_dump

2005-08-11 Thread Aaron Harsh
> >>> Alvaro Herrera <[EMAIL PROTECTED]> 08/10/05 9:03 AM >>>
> On Mon, Aug 01, 2005 at 06:02:30AM +0100, Aaron Harsh wrote:
> > pg_dump: ERROR:  unexpected chunk number 0 (expected 1) for toast value
> > ...
> Looks very much like the table was corrupted.  Maybe you should try to
> test your RAM and disks.  Not sure how to do that on x86-64 though,
> unless the test utility at www.memtest86.com has been ported to it.

The server is running off of ECC RAM on a RAID-10 set, so a one-off disk/RAM 
failure seems unlikely.  The server had been running beautifully for 6 months 
prior to this error, and hasn't been evidencing the problem since, so it seems 
unlikely that this is due to a bad DIMM or RAID controller.

The timing might be a coincidence, but this error happened within a day of our 
OID counter wrapping around back to 0.  (Although Tom Lane mentioned in 
pgsql-general that he was inclined to consider the timing a coincidence).


-- 
Aaron Harsh
[EMAIL PROTECTED]
503-284-7581 x347




[BUGS] [8.0.0] out of memory on large UPDATE

2005-08-11 Thread Marc G. Fournier


The table contains ~10 million rows:

# time psql -c "UPDATE xa_url SET url = url;" -U pgsql pareto
ERROR:  out of memory
DETAIL:  Failed on request of size 32.
0.000u 0.022s 2:41:14.76 0.0%   88+66k 12+0io 19pf+0w

And the server is running:

 PostgreSQL 8.0.0 on i386-portbld-freebsd4.10, compiled by GCC 2.95.4

I haven't had a chance to upgrade it to 8.0.3 yet ... didn't realize we 
had any limits on stuff like this ... bug, or really a limit?



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [BUGS] [8.0.0] out of memory on large UPDATE

2005-08-11 Thread Tom Lane
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> The table contains ~10 million rows:

> # time psql -c "UPDATE xa_url SET url = url;" -U pgsql pareto
> ERROR:  out of memory
> DETAIL:  Failed on request of size 32.

If you've got any AFTER UPDATE triggers on that table, you could be
running out of memory for the pending-triggers list.

regards, tom lane



Re: [BUGS] BUG #1819: COPY filename rejects Windows format path

2005-08-11 Thread Richard Huxton

Steve Peterson wrote:


Running COPY FROM on a Windows server; using a Windows-format fully
qualified path with backslashes results in the backslashes being interpreted
as escapes.


Did you escape the backslashes: C:\\Windows\\Path ?
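
For illustration, either of these forms sidesteps the escaping problem (the
table and path names are only placeholders):

COPY mytable FROM 'C:\\data\\input.csv';  -- backslashes doubled so they survive escape processing
COPY mytable FROM 'C:/data/input.csv';    -- forward slashes, which the Windows file APIs also accept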

--
  Richard Huxton
  Archonet Ltd



Re: [BUGS] BUG #1819: COPY filename rejects Windows format path

2005-08-11 Thread John R Pierce

Steve Peterson wrote:


Running COPY FROM on a Windows server; using a Windows-format fully
qualified path with backslashes results in the backslashes being interpreted
as escapes.


The Windows APIs are perfectly happy with regular forward slashes (/) in pathnames;
in fact, I use them everywhere BUT in command-line interfaces.




Re: [BUGS] [8.0.0] out of memory on large UPDATE

2005-08-11 Thread Marc G. Fournier

On Thu, 11 Aug 2005, Tom Lane wrote:


"Marc G. Fournier" <[EMAIL PROTECTED]> writes:

The table contains ~10 million rows:



# time psql -c "UPDATE xa_url SET url = url;" -U pgsql pareto
ERROR:  out of memory
DETAIL:  Failed on request of size 32.


If you've got any AFTER UPDATE triggers on that table, you could be
running out of memory for the pending-triggers list.


Nope, only have a BEFORE UPDATE, or would that be similar except for at 
which point it runs out of memory?


Triggers:
xa_url_domain_b_i_u BEFORE INSERT OR UPDATE ON xa_url FOR EACH ROW EXECUTE 
PROCEDURE xa_url_domain()


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [BUGS] BUG #1800: "unexpected chunk number" during pg_dump

2005-08-11 Thread Alvaro Herrera
On Wed, Aug 10, 2005 at 06:07:24PM -0700, Aaron Harsh wrote:
> > >>> Alvaro Herrera <[EMAIL PROTECTED]> 08/10/05 9:03 AM >>>
> > On Mon, Aug 01, 2005 at 06:02:30AM +0100, Aaron Harsh wrote:
> > > pg_dump: ERROR:  unexpected chunk number 0 (expected 1) for toast value
> > > ...
> > Looks very much like the table was corrupted.  Maybe you should try to
> > test your RAM and disks.  Not sure how to do that on x86-64 though,
> > unless the test utility at www.memtest86.com has been ported to it.
> 
> The server is running off of ECC RAM on a RAID-10 set, so a one-off
> disk/RAM failure seems unlikely.  The server had been running
> beautifully for 6 months prior to this error, and hasn't been
> evidencing the problem since, so it seems unlikely that this is due to
> a bad DIMM or RAID controller.
> 
> The timing might be a coincidence, but this error happened within a
> day of our OID counter wrapping around back to 0.  (Although Tom Lane
> mentioned in pgsql-general that he was inclined to consider the timing
> a coincidence).

Not sure what else to attribute the failure to, then.  But I should point
out that the OID counter normally wraps to FirstNormalObjectId (known as
BootstrapObjectIdData in previous sources), which is 16384, not 0.

Anyway I was originally thinking the problem data was 4294879152
(0xFFFEA7B0), not the 0.  Have you tried to manually extract the data
from the dataset_cache table?  You could try figuring out what page
contains the bad data, and manually peek into it using pg_filedump.
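
A sketch of that kind of poking around (dataset_cache and the chunk number are
from the report; the toast table name is a placeholder you would look up first):

SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'dataset_cache';
-- then, substituting the toast table name returned above:
SELECT chunk_id, chunk_seq, ctid
  FROM pg_toast.pg_toast_NNNNN
 WHERE chunk_id = '4294879152';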

-- 
Alvaro Herrera ()
"Uno puede defenderse de los ataques; contra los elogios se esta indefenso"



Re: [BUGS] [8.0.0] out of memory on large UPDATE

2005-08-11 Thread Marc G. Fournier

On Thu, 11 Aug 2005, Tom Lane wrote:


"Marc G. Fournier" <[EMAIL PROTECTED]> writes:

On Thu, 11 Aug 2005, Tom Lane wrote:

If you've got any AFTER UPDATE triggers on that table, you could be
running out of memory for the pending-triggers list.



Nope, only have a BEFORE UPDATE, or would that be similar except for at
which point it runs out of memory?


Nope, BEFORE UPDATE shouldn't result in any permanent memory
accumulation.

An out-of-memory error should result in a long report in the postmaster
log about how many bytes in each memory context --- can you post that?


This is all I'm seeing in the logs:

# grep "\[653\]" pgsql
Aug 11 08:45:47 pgsql80 pg[653]: [34-1] ERROR:  out of memory
Aug 11 08:45:47 pgsql80 pg[653]: [34-2] DETAIL:  Failed on request of size 32.




Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [BUGS] [8.0.0] out of memory on large UPDATE

2005-08-11 Thread Tom Lane
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> On Thu, 11 Aug 2005, Tom Lane wrote:
>> An out-of-memory error should result in a long report in the postmaster
>> log about how many bytes in each memory context --- can you post that?

> This is all I'm seeing in the logs:

> # grep "\[653\]" pgsql
> Aug 11 08:45:47 pgsql80 pg[653]: [34-1] ERROR:  out of memory
> Aug 11 08:45:47 pgsql80 pg[653]: [34-2] DETAIL:  Failed on request of size 32.

(looks at code...)  Hmm, it seems to print the report on stderr.  I
imagine we did that because there's possibly not enough memory to use
elog.  Anyway, can you arrange to capture the postmaster's stderr and
try it again?
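
One way to do that, assuming the postmaster is managed with pg_ctl (the paths
here are only placeholders):

# pg_ctl -D /usr/local/pgsql/data -l /var/log/pgsql/postmaster.log restart

The per-context memory report should then show up in that file along with the
rest of the server's stderr output.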

regards, tom lane



Re: [BUGS] [8.0.0] out of memory on large UPDATE

2005-08-11 Thread Tom Lane
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> On Thu, 11 Aug 2005, Tom Lane wrote:
>> If you've got any AFTER UPDATE triggers on that table, you could be
>> running out of memory for the pending-triggers list.

> Nope, only have a BEFORE UPDATE, or would that be similar except for at 
> which point it runs out of memory?

Nope, BEFORE UPDATE shouldn't result in any permanent memory
accumulation.

An out-of-memory error should result in a long report in the postmaster
log about how many bytes in each memory context --- can you post that?

regards, tom lane



[BUGS] Cascading updates run separately

2005-08-11 Thread Allan Wang
I'm running a fairly recent CVS head server, but I think this bug
applies in all versions.

talluria=# delete from items;
ERROR:  insert or update on table "players" violates foreign key constraint 
"players_accessory1_fkey"
DETAIL:  Key (accessory1)=(90205) is not present in table "items".
CONTEXT:  SQL statement "UPDATE ONLY "public"."players" SET "head" = NULL WHERE 
"head" = $1"
 
talluria=# \d players
Indexes:
    "users_pkey" PRIMARY KEY, btree (playerid) CLUSTER
"players_name_key" UNIQUE, btree (name)
"players_coord" btree (mapid, x, y)
"players_lastactive_key" btree (lastactive)
"players_username_lkey" btree (lower(name::text))
Foreign-key constraints:
"players_accessory1_fkey" FOREIGN KEY (accessory1) REFERENCES items(itemid) 
ON UPDATE CASCADE ON DELETE SET NULL
"players_accessory2_fkey" FOREIGN KEY (accessory2) REFERENCES items(itemid) 
ON UPDATE CASCADE ON DELETE SET NULL
"players_accessory3_fkey" FOREIGN KEY (accessory3) REFERENCES items(itemid) 
ON UPDATE CASCADE ON DELETE SET NULL
"players_feet_fkey" FOREIGN KEY (feet) REFERENCES items(itemid) ON UPDATE 
CASCADE ON DELETE SET NULL
"players_stylesheet_fkey" FOREIGN KEY (stylesheet) REFERENCES 
stylesheets(stylesheetid) ON UPDATE CASCADE ON DELETE SET DEFAULT
"users_arm" FOREIGN KEY (arm) REFERENCES items(itemid) ON UPDATE CASCADE ON 
DELETE SET NULL
"users_belt" FOREIGN KEY (belt) REFERENCES items(itemid) ON UPDATE CASCADE 
ON DELETE SET NULL
"users_body" FOREIGN KEY (body) REFERENCES items(itemid) ON UPDATE CASCADE 
ON DELETE SET NULL
"users_head" FOREIGN KEY (head) REFERENCES items(itemid) ON UPDATE CASCADE 
ON DELETE SET NULL
"users_lefthand" FOREIGN KEY (lefthand) REFERENCES items(itemid) ON UPDATE 
CASCADE ON DELETE SET NULL
"users_leg" FOREIGN KEY (leg) REFERENCES items(itemid) ON UPDATE CASCADE ON 
DELETE SET NULL
"users_righthand" FOREIGN KEY (righthand) REFERENCES items(itemid) ON 
UPDATE CASCADE ON DELETE SET NULL
 
talluria=# \d items
Table "public.items"
   Column    |  Type   |                      Modifiers
-------------+---------+-----------------------------------------------------
 itemid  | integer | not null default nextval('items_itemid_seq'::text)
 itemdataid  | integer | not null default 0
 playerid| integer |
 quantity| integer | not null default 1
 elementflag | integer | not null default 0
 shopid  | integer |
 map | integer |
 x   | integer |
 y   | integer |
 price   | integer |
Indexes:
"items_pkey" PRIMARY KEY, btree (itemid)
"items_coord" btree (map, x, y)
"items_playerid_idx" btree (playerid)
Foreign-key constraints:
"items_playerid_fkey" FOREIGN KEY (playerid) REFERENCES players(playerid) 
ON UPDATE CASCADE ON DELETE CASCADE
Triggers:
test_valid_item_trig BEFORE INSERT OR UPDATE ON items FOR EACH ROW EXECUTE 
PROCEDURE valid_item()

I'm told this:

"I think the problem is that the cascading updates all run separately,
which means that the one that does SET head = NULL, for example, fails
because all the other fields still have values pointing at the deleted
row, or at _a_ deleted row."

I made a testcase, but after some retrying (without the mistakes I made) I was
unable to reproduce it anymore.

allan=# create table items (itemid serial, playerid int);
NOTICE:  CREATE TABLE will create implicit sequence "items_itemid_seq1" for 
serial column "items.itemid"
CREATE TABLE
allan=# create table items (itemid serial, playerid int);
allan=# create table players (playerid serial, head int references 
items(itemid) on update cascade on delete set null, body int references 
items(itemid) on update cascade on delete set null);
NOTICE:  CREATE TABLE will create implicit sequence "players_playerid_seq1" for 
serial column "players.playerid"
ERROR:  there is no unique constraint matching given keys for referenced table 
"items"
allan=# \d items
  Table "public.items"
  Column  |  Type   | Modifiers
----------+---------+------------------------------------------------------------
 itemid   | integer | not null default nextval('public.items_itemid_seq1'::text)
 playerid | integer |

allan=# create unique index
information_schema.  pg_temp_1.  plays_pkey  public.  videos_pkey
pg_catalog.  pg_toast.  plays_videoid_key  videos_path_key
allan=# create unique index items_pkey on items using btree(itemid);
CREATE INDEX
allan=# create table players (playerid serial, head int references 
items(itemid) on update cascade on delete set null, body int references 
items(itemid) on update cascade on delete set null);
NOTICE:  CREATE TABLE will create implicit sequence "players_playerid_seq1" for 
serial column "players.playerid"
CREATE TABLE
allan=# insert into players
allan=# \d players
   Table "public.players"
  Column  |  Type   |   Modifiers
--+-

Re: [BUGS] Cascading updates run separately

2005-08-11 Thread Stephan Szabo
On Thu, 11 Aug 2005, Allan Wang wrote:

> I'm running a fairly recent CVS head server, but I think this bug
> applies in all versions.

It doesn't happen for me in 7.4.2 with the example below, although my
couple-of-months-old CVS server and an 8.0.x server do error.  My first guess
is that there's some side effect of one of the trigger timing changes
that's causing this, but I haven't looked yet.
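
For reference, a cleaned-up reading of the testcase from the previous message
(reconstructed from the transcript; Allan could not reproduce the error with
his own cleaned-up version, so whether the final DELETE errors appears to
depend on the server version and exact sequence):

CREATE TABLE items (itemid serial PRIMARY KEY, playerid int);
CREATE TABLE players (
    playerid serial PRIMARY KEY,
    head int REFERENCES items(itemid) ON UPDATE CASCADE ON DELETE SET NULL,
    body int REFERENCES items(itemid) ON UPDATE CASCADE ON DELETE SET NULL
);

INSERT INTO items (playerid) VALUES (1);
INSERT INTO players (head, body) VALUES (1, 1);
DELETE FROM items;  -- fires two separate ON DELETE SET NULL updates on players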



Re: [BUGS] [8.0.0] out of memory on large UPDATE

2005-08-11 Thread Marc G. Fournier

On Thu, 11 Aug 2005, Marc G. Fournier wrote:


On Thu, 11 Aug 2005, Tom Lane wrote:


"Marc G. Fournier" <[EMAIL PROTECTED]> writes:

On Thu, 11 Aug 2005, Tom Lane wrote:

If you've got any AFTER UPDATE triggers on that table, you could be
running out of memory for the pending-triggers list.



Nope, only have a BEFORE UPDATE, or would that be similar except for at
which point it runs out of memory?


Nope, BEFORE UPDATE shouldn't result in any permanent memory
accumulation.

An out-of-memory error should result in a long report in the postmaster
log about how many bytes in each memory context --- can you post that?


This is all I'm seeing in the logs:

# grep "\[653\]" pgsql
Aug 11 08:45:47 pgsql80 pg[653]: [34-1] ERROR:  out of memory
Aug 11 08:45:47 pgsql80 pg[653]: [34-2] DETAIL:  Failed on request of size 
32.


'k, does this help any?

TopMemoryContext: 40960 total in 4 blocks; 8632 free (10 chunks); 32328 used
SPI Plan: 3072 total in 2 blocks; 1728 free (0 chunks); 1344 used
TopTransactionContext: 534765568 total in 74 blocks; 2144 free (68 chunks); 
534763424 used
ExecutorState: 8192 total in 1 blocks; 7648 free (3 chunks); 544 used
MessageContext: 57344 total in 3 blocks; 30312 free (11 chunks); 27032 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1024 total in 1 blocks; 896 free (0 chunks); 128 used
ExecutorState: 122880 total in 4 blocks; 98912 free (127 chunks); 23968 used
ExprContext: 8192 total in 1 blocks; 7656 free (0 chunks); 536 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
CacheMemoryContext: 516096 total in 6 blocks; 134272 free (3 chunks); 381824 
used
xa_url_tuid_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
xa_url_spiderlite_signature_check_idx: 1024 total in 1 blocks; 912 free (0 
chunks); 112 used
xa_url_priority_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
urls_status: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
urls_logger_status: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
url_url: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
xa_url_id_pk: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_index_indrelid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 848 free (0 chunks); 
176 used
pg_type_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 
used
pg_shadow_usesysid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 
used
pg_proc_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 712 free (0 chunks); 
312 used
pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 712 free (0 chunks); 
312 used
pg_namespace_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 
used
pg_language_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_language_name_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 
used
pg_group_sysid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_group_name_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_conversion_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 
used
pg_conversion_default_index: 1024 total in 1 blocks; 712 free (0 chunks); 312 
used
pg_opclass_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 776 free (0 chunks); 248 
used
pg_cast_source_target_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 
used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 848 free (0 chunks); 
176 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_shadow_usename_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 848 free (0 chunks); 
176 used
pg_operator_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 776 free (0 chunks); 248 used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 776 free (0 chunks); 248 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 848 free (0 chunks); 
176 used
pg_class_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 776 free (0 chu

Re: [BUGS] [8.0.0] out of memory on large UPDATE

2005-08-11 Thread Marc G. Fournier


Just as a reminder, this is an 8.0.0 install, so if you think this might 
have been fixed in later sub-releases, plesae let me know and I'll 
upgrade/test again ...


On Thu, 11 Aug 2005, Marc G. Fournier wrote:


On Thu, 11 Aug 2005, Marc G. Fournier wrote:


On Thu, 11 Aug 2005, Tom Lane wrote:


"Marc G. Fournier" <[EMAIL PROTECTED]> writes:

On Thu, 11 Aug 2005, Tom Lane wrote:

If you've got any AFTER UPDATE triggers on that table, you could be
running out of memory for the pending-triggers list.



Nope, only have a BEFORE UPDATE, or would that be similar except for at
which point it runs out of memory?


Nope, BEFORE UPDATE shouldn't result in any permanent memory
accumulation.

An out-of-memory error should result in a long report in the postmaster
log about how many bytes in each memory context --- can you post that?


This is all I'm seeing in the logs:

# grep "\[653\]" pgsql
Aug 11 08:45:47 pgsql80 pg[653]: [34-1] ERROR:  out of memory
Aug 11 08:45:47 pgsql80 pg[653]: [34-2] DETAIL:  Failed on request of size 
32.


'k, does this help any?

TopMemoryContext: 40960 total in 4 blocks; 8632 free (10 chunks); 32328 used
SPI Plan: 3072 total in 2 blocks; 1728 free (0 chunks); 1344 used
TopTransactionContext: 534765568 total in 74 blocks; 2144 free (68 chunks); 
534763424 used

ExecutorState: 8192 total in 1 blocks; 7648 free (3 chunks); 544 used
MessageContext: 57344 total in 3 blocks; 30312 free (11 chunks); 27032 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1024 total in 1 blocks; 896 free (0 chunks); 128 used
ExecutorState: 122880 total in 4 blocks; 98912 free (127 chunks); 23968 used
ExprContext: 8192 total in 1 blocks; 7656 free (0 chunks); 536 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
CacheMemoryContext: 516096 total in 6 blocks; 134272 free (3 chunks); 381824 
used

xa_url_tuid_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
xa_url_spiderlite_signature_check_idx: 1024 total in 1 blocks; 912 free (0 
chunks); 112 used

xa_url_priority_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
urls_status: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
urls_logger_status: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
url_url: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
xa_url_id_pk: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_index_indrelid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 
used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 848 free (0 chunks); 
176 used

pg_type_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 
used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 848 free (0 chunks); 
176 used
pg_shadow_usesysid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 
used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 848 free (0 chunks); 
176 used

pg_proc_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 712 free (0 chunks); 
312 used
pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 712 free (0 chunks); 
312 used

pg_namespace_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 
used

pg_language_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_language_name_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 848 free (0 chunks); 
176 used

pg_group_sysid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_group_name_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_conversion_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 
used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 848 free (0 chunks); 
176 used
pg_conversion_default_index: 1024 total in 1 blocks; 712 free (0 chunks); 312 
used

pg_opclass_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 776 free (0 chunks); 
248 used
pg_cast_source_target_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 
used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 848 free (0 chunks); 
176 used

pg_amop_opr_opc_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 
used
pg_shadow_usename_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 
used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 848 free (0 chunks); 
176 used

pg_operator_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 776 free (0 chunks); 248 
used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 776 free (0 chunks); 248 
used
pg_index_indexrelid_index: 1024 total in 1 blocks; 912 free (0 chunks); 11

Re: [BUGS] [8.0.0] out of memory on large UPDATE

2005-08-11 Thread Tom Lane
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> 'k, does this help any?

> TopMemoryContext: 40960 total in 4 blocks; 8632 free (10 chunks); 32328 used
> SPI Plan: 3072 total in 2 blocks; 1728 free (0 chunks); 1344 used
> TopTransactionContext: 534765568 total in 74 blocks; 2144 free (68 chunks); 
> 534763424 used

Yeah, the leak is clearly in TopTransactionContext.  That doesn't let
the trigger code off the hook though, because the pending-triggers list
is kept there.  Are you *sure* there are no AFTER triggers here?
(Don't forget foreign-key checking triggers.)
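
For illustration, one way to list every trigger on the table, including the RI
check triggers that a plain trigger listing tends to hide (catalog columns as
of 8.0; the table name is taken from the thread):

SELECT tgname, tgtype, tgisconstraint
  FROM pg_trigger
 WHERE tgrelid = 'xa_url'::regclass;
-- tgisconstraint is true for the constraint (foreign-key) triggers, which fire AFTER ROW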

regards, tom lane



Re: [BUGS] Cascading updates run separately

2005-08-11 Thread Stephan Szabo

On Thu, 11 Aug 2005, Stephan Szabo wrote:

> On Thu, 11 Aug 2005, Allan Wang wrote:
>
> > I'm running a fairly recent CVS head server, but I think this bug
> > applies in all versions.
>
> It doesn't happen for me in 7.4.2 with the example below, although my
> couple month old CVS server and an 8.0.x server do error.  My first guess
> is that there's some side effect of one of the trigger timing changes
> that's causing this, but I haven't looked yet.

I think I have a theory for why the timing change would have affected
this.  The check in trigger.c that decides whether the key has changed only
skips the RI check when the key is unchanged and the row was not made by
this transaction.

In the deferred case, you could have an insert / update combination where
the insert trigger sees the row it's checking is no longer live and
doesn't do any checks, and then if the update only checked changed keys
some might be missed entirely.  I think for the case given in the example
if the constraint were deferred it would work because the second update
would have made the first update's check no longer live either, and so
only the final state is checked.

In the immediate case where the checks were deferred to end of outer
statement (7.4 and below), I believe the same applies.  By the time the
first check caused by the first update is run, the second update has
happened, so the first check doesn't actually do anything.

In the immediate case where the checks run directly on the update run by
the constraint (8.0 and above), the check happens before the second update
so the first check (with the half changed key) runs on both keys which
fails.

I don't think we can simply change the immediate-case behavior to
unconditionally check whether the key has changed, because that might break
for an update inside an AFTER INSERT trigger that updates the same row (if
the update happened before the insert's check).  Is there a way to detect
this case that wouldn't also catch two updates caused by separate ON UPDATE
actions?
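
For context, making one of the constraints deferred, which the paragraphs above
suggest would sidestep the immediate-check problem, would look roughly like
this (constraint and column names are taken from the earlier schema; a sketch
only):

ALTER TABLE players DROP CONSTRAINT players_accessory1_fkey;
ALTER TABLE players ADD CONSTRAINT players_accessory1_fkey
    FOREIGN KEY (accessory1) REFERENCES items(itemid)
    ON UPDATE CASCADE ON DELETE SET NULL
    DEFERRABLE INITIALLY DEFERRED;
-- With the constraint deferred, its checks run at commit, after all of the
-- cascaded SET NULL updates have finished, so only the final state is checked.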




Re: [BUGS] [8.0.0] out of memory on large UPDATE

2005-08-11 Thread Marc G. Fournier

On Thu, 11 Aug 2005, Tom Lane wrote:


"Marc G. Fournier" <[EMAIL PROTECTED]> writes:

'k, does this help any?



TopMemoryContext: 40960 total in 4 blocks; 8632 free (10 chunks); 32328 used
SPI Plan: 3072 total in 2 blocks; 1728 free (0 chunks); 1344 used
TopTransactionContext: 534765568 total in 74 blocks; 2144 free (68 chunks); 
534763424 used


Yeah, the leak is clearly in TopTransactionContext.  That doesn't let
the trigger code off the hook though, because the pending-triggers list
is kept there.  Are you *sure* there are no AFTER triggers here?
(Don't forget foreign-key checking triggers.)


This is all of them ... nothing AFTER, just ON or BEFORE ...

Foreign-key constraints:
"xa_classification_id_fk" FOREIGN KEY (classification_id) REFERENCES 
xa_classification(classification_id) ON UPDATE RESTRICT ON DELETE RESTRICT
"xa_ip_address_id_fk" FOREIGN KEY (ip_address_id) REFERENCES 
xa_ip_addresses(ip_address_id) ON UPDATE RESTRICT ON DELETE RESTRICT
"xa_logger_status_id_fk" FOREIGN KEY (logger_status_id) REFERENCES 
xa_logger_status(logger_status_id) ON UPDATE RESTRICT ON DELETE RESTRICT
"xa_url_queue_id_fk" FOREIGN KEY (url_queue_id) REFERENCES 
xa_url_queue(url_queue_id) ON UPDATE RESTRICT ON DELETE SET NULL
Triggers:
xa_url_domain_b_i_u BEFORE INSERT OR UPDATE ON xa_url FOR EACH ROW EXECUTE 
PROCEDURE xa_url_domain()


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [BUGS] [8.0.0] out of memory on large UPDATE

2005-08-11 Thread Tom Lane
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> On Thu, 11 Aug 2005, Tom Lane wrote:
>> Are you *sure* there are no AFTER triggers here?
>> (Don't forget foreign-key checking triggers.)

> This is all of them ... nothing AFTER, just ON or BEFORE ...

> Foreign-key constraints:
>  "xa_classification_id_fk" FOREIGN KEY (classification_id) REFERENCES 
> xa_classification(classification_id) ON UPDATE RESTRICT ON DELETE RESTRICT
>  "xa_ip_address_id_fk" FOREIGN KEY (ip_address_id) REFERENCES 
> xa_ip_addresses(ip_address_id) ON UPDATE RESTRICT ON DELETE RESTRICT
>  "xa_logger_status_id_fk" FOREIGN KEY (logger_status_id) REFERENCES 
> xa_logger_status(logger_status_id) ON UPDATE RESTRICT ON DELETE RESTRICT
>  "xa_url_queue_id_fk" FOREIGN KEY (url_queue_id) REFERENCES 
> xa_url_queue(url_queue_id) ON UPDATE RESTRICT ON DELETE SET NULL
> Triggers:
>  xa_url_domain_b_i_u BEFORE INSERT OR UPDATE ON xa_url FOR EACH ROW 
> EXECUTE PROCEDURE xa_url_domain()

Um, foreign-key triggers are always AFTER.

Can you afford to drop the FK constraints while you do the update?  I
can't think of any other short-term workaround.
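
A sketch of that workaround, filling in the constraint definitions quoted
above (whether the application can tolerate running without the constraints
for the duration is a separate question):

BEGIN;
ALTER TABLE xa_url DROP CONSTRAINT xa_url_queue_id_fk;
-- ...drop the other three FK constraints the same way...
UPDATE xa_url SET url = url;
ALTER TABLE xa_url ADD CONSTRAINT xa_url_queue_id_fk
    FOREIGN KEY (url_queue_id) REFERENCES xa_url_queue(url_queue_id)
    ON UPDATE RESTRICT ON DELETE SET NULL;
-- ...re-add the others; note that re-adding a FOREIGN KEY re-checks every row...
COMMIT;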

regards, tom lane
