[BUGS] IpcSemaphoreCreate: semget(...) failed: No space left on device

2003-11-22 Thread jerry
I shortened the error msg to fit it into the subject box.

I just completed a fresh install (not an upgrade) of MDK 9.2 onto a 1.0 GHz
Athlon with 512 MB of RAM.  The PostgreSQL 7.3.4 RPMs were installed during
the MDK 9.2 install, which also created the postgres account.

The 'short version' of the installation procedure worked perfectly up to the
following step:

initdb -D /usr/local/pgsql/data

which, after several years of trouble-free use, failed on me for the first
time, giving the error message shown in the Subject box.

I am not going to recompile MDK's kernel to raise the maximum number of
semaphores (SEMMNS), and the 'max_connections' parameter is already at 1
and can't go any lower, unless it is indexed starting at zero.
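
(Side note: on a 2.4 kernel the System V semaphore limits can be raised at
run time, without a recompile, by writing to /proc.  A minimal sketch -- the
four values are SEMMSL, SEMMNS, SEMOPM and SEMMNI, and the numbers shown are
only illustrative:

# echo 250 32000 32 128 > /proc/sys/kernel/sem

The change takes effect immediately, but is lost on reboot unless redone
from an init script.)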

I was unable to post this to the Mandrake Bugzilla page because it is down.

What is the possibility that I would encounter this error if I were to remove
the 7.3.4 RPMs and install the 7.4 tar files manually?

TIA!
--
GreyGeek



Re: [BUGS] RC1 question of reloading data

2003-11-22 Thread jerry
Theodore Petrosky wrote:

> So, I was logged on as postgres. but there is no
> database called 'postgres' (or maybe I don't
> understand the internals). Keep in mind that I am only
> looking for understanding of the process... and I want
> to make sure that doing this doesn't create problems
> later.
> 
> Ted
Postgres is not a 'database'; it is the account name that owns the commands
and the directories with which, and into which, the database structure is
created using "initdb".  If you followed the short installation version you
would have done something like this:

jerry$ su
[EMAIL PROTECTED] mkdir /usr/local/pgsql/data
[EMAIL PROTECTED] chown postgres /usr/local/pgsql/data
[EMAIL PROTECTED] su - postgres
-bash-2.05b$ initdb -D /usr/local/pgsql/data

Then you could log in using psql and create your user name and db template,
etc.
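
For example, something like this (the names are only illustrative, and the
OWNER clause assumes 7.3 or later):

-bash-2.05b$ psql template1
template1=# CREATE USER ted;
template1=# CREATE DATABASE teddb OWNER ted;
template1=# \q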

--
GreyGeek



Re: [BUGS] IpcSemaphoreCreate: semget(...) failed: No space left on device

2003-11-22 Thread jerry
jerry wrote:

> I shortened the error msg to fit it into the subject box.
> 
> I just completed a fresh install (not an upgrade) of MDK 9.2 onto a 1.0 GHz
> Athlon with 512 MB of RAM.  The PostgreSQL 7.3.4 RPMs were installed during
> the MDK 9.2 install, which also created the postgres account.
> 
> The 'short version' of the installation procedure worked perfectly up to the
> following step:
> 
> initdb -D /usr/local/pgsql/data
> 
> which, after several years of trouble-free use, failed on me for the first
> time, giving the error message shown in the Subject box.
> 
> I am not going to recompile MDK's kernel to raise the maximum number of
> semaphores (SEMMNS), and the 'max_connections' parameter is already at 1
> and can't go any lower, unless it is indexed starting at zero.
> 
> I was unable to post this to the Mandrake Bugzilla page because it is
> down.
> 
> What is the possibility that I would encounter this error if I were to
> remove the 7.3.4 RPMs and install the 7.4 tar files manually?
> 
> TIA!
> --
> GreyGeek

I'll answer my own question:  NO!

gmake[3]: Leaving directory `/home/jerry/postgresql-7.4/contrib/spi'
/bin/sh ./pg_regress --temp-install --top-builddir=../../.. --schedule=./parallel_schedule --multibyte=SQL_ASCII
== creating temporary installation ==
== initializing database system    ==

pg_regress: initdb failed
Examine ./log/initdb.log for the reason.

The initdb.log gave the exact same reason for the failure, except for:
 "reduce PostgreSQL's consumption of semaphores by reducing its
max_connections parameter (currently 10)."

I know from the error msg during the RPM installation that the failure
occurs even with 'max_connections' = 1.  It seems that the standard
MDK 9.2 kernel (2.4.22-10mdk) is not compatible with PostgreSQL 7.3.4 or
7.4.  ???
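
(For anyone who wants to compare: the kernel's current semaphore limits, and
any semaphore sets that are already allocated -- for example by the
RPM-installed postmaster, if it is still running -- can be inspected with:

$ cat /proc/sys/kernel/sem
$ ipcs -s

The four numbers from /proc are SEMMSL, SEMMNS, SEMOPM and SEMMNI.)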

--
GreyGeek



[BUGS] BUG #5321: Parallel restore temporarily deadlocked by autovacuum analyze

2010-02-09 Thread Jerry Gamache

The following bug has been logged online:

Bug reference:  5321
Logged by:  Jerry Gamache
Email address:  jerry.gama...@idilia.com
PostgreSQL version: 8.4.2
Operating system:   Linux
Description:    Parallel restore temporarily deadlocked by autovacuum analyze
Details:

While restoring a database in 8.4.2, pg_restore stopped for more than 30
minutes doing nothing:

pg_restore -d database1 -j 8 -v database1.pg_dump

The parallel restore showed very impressive performance as the verbose
output went by, but the restore suddenly stopped. The output of "ps" after
15 minutes of 0% CPU activity showed 4 processes in "Ss" state:

postgres: xxx database1 [local] CREATE INDEX waiting
postgres: xxx database1 [local] CREATE INDEX waiting
postgres: autovacuum worker process   database1
postgres: autovacuum worker process   database1

The output of pg_stat_activity for database1 showed:

database1=# select procpid,datname,current_query from pg_stat_activity where datname='database1' ORDER BY procpid;
 procpid |  datname  |                          current_query
---------+-----------+--------------------------------------------------------------------------
    6146 | database1 | autovacuum: ANALYZE public.table_x
    6270 | database1 | CREATE UNIQUE INDEX index1 ON table_x USING btree (col_a, col_b);
    6271 | database1 | CREATE UNIQUE INDEX index2 ON table_x USING btree (col_b, col_c, col_d);
    6307 | database1 | autovacuum: ANALYZE public.table_y
    6597 | database1 |
    6626 | database1 | select procpid,datname,current_query from pg_stat_activity where datname='database1' ORDER BY procpid;
(6 rows)

The output of pg_locks for the 4 stuck PIDs showed:

database1=# select * from pg_locks where pid IN (SELECT procpid FROM pg_stat_activity WHERE datname='database1') ORDER BY pid;
  locktype  | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid  |           mode           | granted
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+--------------------------+---------
 relation   |    16993 |    17597 |      |       |            |               |         |       |          | 11/57              | 6146 | AccessShareLock          | t
 relation   |    16993 |    17616 |      |       |            |               |         |       |          | 11/57              | 6146 | AccessShareLock          | t
 relation   |    16993 |    17663 |      |       |            |               |         |       |          | 11/57              | 6146 | AccessShareLock          | t
 virtualxid |          |          |      |       | 11/57      |               |         |       |          | 11/57              | 6146 | ExclusiveLock            | t
 relation   |    16993 |    17165 |      |       |            |               |         |       |          | 11/57              | 6146 | ShareUpdateExclusiveLock | t
 virtualxid |          |          |      |       | 13/100     |               |         |       |          | 13/100             | 6270 | ExclusiveLock            | t
 relation   |    16993 |    17165 |      |       |            |               |         |       |          | 13/100             | 6270 | ShareLock                | f
 relation   |    16993 |    17165 |      |       |            |               |         |       |          | 14/50              | 6271 | ShareLock                | f
 virtualxid |          |          |      |       | 14/50      |               |         |       |          | 14/50              | 6271 | ExclusiveLock            | t
 relation   |    16993 |    17188 |      |       |            |               |         |       |          | 3/844              | 6307 | ShareUpdateExclusiveLock | t
 virtualxid |          |          |      |       | 3/844      |               |         |       |          | 3/844              | 6307 | ExclusiveLock            | t
 relation   |    16993 |    17660 |      |       |            |               |         |       |          | 3/844              | 6307 | AccessShareLock          | t
 relation   |    16993 |    17590 |      |       |            |               |         |       |          | 3/844              | 6307 | AccessShareLock          | t
 relation   |        0 |     2677 |      |       |            |               |         |       |          | 7/187              | 6626 | AccessShareLock          | t
 relation   |        0 |     2676 |      |       |            |               |         |       |          | 7/187              | 6626 | AccessShareLock          | t
 relation   |    16993 |    11042 |      |       |            |               |         |       |          | 7/187              | 6626 | AccessShareL

Re: [BUGS] BUG #5321: Parallel restore temporarily deadlocked by autovacuum analyze

2010-02-10 Thread Jerry Gamache
All autovacuum and deadlock_timeout settings are at their default values 
(lines are commented out in postgresql.conf).
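
For reference, the defaults can be confirmed from any session; a quick
check (the values shown are the documented 8.4 defaults):

database1=# SHOW autovacuum_naptime;
 autovacuum_naptime
--------------------
 1min
(1 row)

database1=# SHOW deadlock_timeout;
 deadlock_timeout
------------------
 1s
(1 row)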


Alvaro Herrera wrote:

> Jerry Gamache wrote:
>> The restore resumed while I was writing this report, and I saw these new
>> entries in the logs:
>>
>> ERROR:  canceling autovacuum task
>> CONTEXT:  automatic analyze of table "database1.public.table_y"
>> ERROR:  canceling autovacuum task
>> CONTEXT:  automatic analyze of table "database1.public.table_y"
>
> This is what I would have expected to happen almost immediately ... what
> are your autovacuum settings?  And what's your deadlock timeout?





Re: [BUGS] BUG #5321: Parallel restore temporarily deadlocked by autovacuum analyze

2010-02-10 Thread Jerry Gamache
I was also surprised that table_y seemed to be involved. This is not a 
typo. Might be caused by a FK constraint between table_y and table_x. 
From the logs, the autovacuum on table_x was canceled before the one on 
table_y, but the restore only resumed after the autovacuum on table_y 
was canceled. It is possible (but I cannot confirm) that the autovacuum 
thread on table_x was blocked for a while after the cancellation message 
was written to the log. I added timestamps to log_line_prefix to be able 
to give more details if this happens again.
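
Something like this in postgresql.conf does the trick -- the exact escapes
are illustrative (%t is the timestamp, %p the PID), and the data directory
path is assumed:

log_line_prefix = '%t:%r:%u@%d:[%p]:'

$ pg_ctl reload -D /usr/local/pgsql/data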


Tom Lane wrote:

> Alvaro Herrera  writes:
>> Jerry Gamache wrote:
>>> The restore resumed while I was writing this report, and I saw these new
>>> entries in the logs:
>>>
>>> ERROR:  canceling autovacuum task
>>> CONTEXT:  automatic analyze of table "database1.public.table_y"
>>> ERROR:  canceling autovacuum task
>>> CONTEXT:  automatic analyze of table "database1.public.table_y"
>>
>> This is what I would have expected to happen almost immediately ... what
>> are your autovacuum settings?  And what's your deadlock timeout?
>
> The other details showed the blockage as being on table_x though.
> Was that a typo in obscuring the details, or did the cancel really
> happen on an unrelated table?
>
> regards, tom lane





Re: [BUGS] BUG #5321: Parallel restore temporarily deadlocked by autovacuum analyze

2010-02-10 Thread Jerry Gamache
I was not able to reproduce this with default parameters or at 15s naptime,
and at 1s naptime I got only 1 deadlock in 3 tests.
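
Naptime was lowered by editing postgresql.conf and reloading; a minimal
sketch (the data directory path is only illustrative):

autovacuum_naptime = 1s

$ pg_ctl reload -D /usr/local/pgsql/data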


This time the deadlock was with table_a, table_b and table_c (table_x 
and table_y were not involved).


  18395 | database1 | autovacuum: ANALYZE public.table_a
  18406 | database1 | autovacuum: ANALYZE public.table_b
  18510 | database1 | CREATE UNIQUE INDEX index_bg ON table_b USING btree (col_g);
  18567 | database1 | autovacuum: ANALYZE public.table_c
  18802 | database1 | select procpid,datname,current_query from pg_stat_activity where datname='database1' ORDER BY procpid;


There is a FK constraint between table_a and table_b, but table_c does 
not have any direct constraint relation with the other 2 tables.


The logs show that the autovacuum of table_b was canceled 20 minutes 
ago, but the thread is still alive and blocked.
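
A quick way to confirm from another session that a worker really is stuck
waiting on a lock is the "waiting" column -- a sketch:

database1=# SELECT procpid, waiting, current_query FROM pg_stat_activity WHERE current_query LIKE 'autovacuum:%';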


Alvaro Herrera wrote:

> Jerry Gamache wrote:
>> I was also surprised that table_y seemed to be involved. This is not
>> a typo. Might be caused by a FK constraint between table_y and
>> table_x. From the logs, the autovacuum on table_x was canceled
>> before the one on table_y, but the restore only resumed after the
>> autovacuum on table_y was canceled. It is possible (but I cannot
>> confirm) that the autovacuum thread on table_x was blocked for a
>> while after the cancellation message was written to the log. I added
>> timestamps to log_line_prefix to be able to give more details if
>> this happens again.
>
> Could you try to restore the whole dump again and see if you can
> reproduce it?  Maybe decreasing autovacuum_naptime makes it more
> probable.





Re: [BUGS] BUG #5321: Parallel restore temporarily deadlocked by autovacuum analyze

2010-02-10 Thread Jerry Gamache

Here is the pg_locks output.

Alvaro Herrera wrote:

> Jerry Gamache wrote:
>> I was not able to reproduce this with default parameters or at 15s
>> naptime, and at 1s naptime I got only 1 deadlock in 3 tests.
>>
>> This time the deadlock was with table_a, table_b and table_c
>> (table_x and table_y were not involved).
>>
>>   18395 | database1 | autovacuum: ANALYZE public.table_a
>>   18406 | database1 | autovacuum: ANALYZE public.table_b
>>   18510 | database1 | CREATE UNIQUE INDEX index_bg ON table_b USING btree (col_g);
>>   18567 | database1 | autovacuum: ANALYZE public.table_c
>>   18802 | database1 | select procpid,datname,current_query from pg_stat_activity where datname='database1' ORDER BY procpid;
>>
>> There is a FK constraint between table_a and table_b, but table_c
>> does not have any direct constraint relation with the other 2
>> tables.
>>
>> The logs show that the autovacuum of table_b was canceled 20 minutes
>> ago, but the thread is still alive and blocked.
>
> That's pretty strange.  Can we see a pg_locks snapshot?  (Please attach
> as a text file so that it doesn't get word-wrapped)


database1=# select locktype,database,relation,virtualxid,virtualtransaction,pid,mode,granted from pg_locks where pid IN (SELECT procpid FROM pg_stat_activity WHERE datname='database1') ORDER BY pid;
  locktype  | database | relation | virtualxid | virtualtransaction |  pid  |           mode           | granted
------------+----------+----------+------------+--------------------+-------+--------------------------+---------
 relation   |    20162 |    20893 |            | 4/72               | 18395 | AccessShareLock          | t
 virtualxid |          |          | 4/72       | 4/72               | 18395 | ExclusiveLock            | t
 relation   |    20162 |    20829 |            | 4/72               | 18395 | AccessShareLock          | t
 relation   |    20162 |    20894 |            | 4/72               | 18395 | AccessShareLock          | t
 relation   |    20162 |    20892 |            | 4/72               | 18395 | AccessShareLock          | t
 relation   |    20162 |    20515 |            | 4/72               | 18395 | ShareUpdateExclusiveLock | t
 relation   |    20162 |    20891 |            | 3/53               | 18406 | AccessShareLock          | t
 relation   |    20162 |    20813 |            | 3/53               | 18406 | AccessShareLock          | t
 relation   |    20162 |    20490 |            | 3/53               | 18406 | ShareUpdateExclusiveLock | t
 virtualxid |          |          | 3/53       | 3/53               | 18406 | ExclusiveLock            | t
 relation   |    20162 |    20490 |            | 8/162              | 18510 | ShareLock                | f
 virtualxid |          |          | 8/162      | 8/162              | 18510 | ExclusiveLock            | t
 relation   |    20162 |    20247 |            | 7/238              | 18567 | ShareUpdateExclusiveLock | t
 relation   |    20162 |    20872 |            | 7/238              | 18567 | AccessShareLock          | t
 relation   |    20162 |    20881 |            | 7/238              | 18567 | AccessShareLock          | t
 relation   |    20162 |    20880 |            | 7/238              | 18567 | AccessShareLock          | t
 relation   |    20162 |    20878 |            | 7/238              | 18567 | AccessShareLock          | t
 relation   |    20162 |    20873 |            | 7/238              | 18567 | AccessShareLock          | t
 virtualxid |          |          | 7/238      | 7/238              | 18567 | ExclusiveLock            | t
 relation   |    20162 |    20876 |            | 7/238              | 18567 | AccessShareLock          | t
 relation   |    20162 |    20882 |            | 7/238              | 18567 | AccessShareLock          | t
 relation   |    20162 |    20879 |            | 7/238              | 18567 | AccessShareLock          | t
 relation   |    20162 |    20797 |            | 7/238              | 18567 | AccessShareLock          | t
 relation   |    20162 |    20877 |            | 7/238              | 18567 | AccessShareLock          | t
 relation   |        0 |     2676 |            | 1/611              | 18802 | AccessShareLock          | t
 relation   |        0 |     2672 |            | 1/611              | 18802 | AccessShareLock          | t
 virtualxid |          |          | 1/611      | 1/611              | 18802 | ExclusiveLock            | t
 relation   |        0 |     1262 |            | 1/611              | 18802 | AccessShareLock          | t
 relation   |    20162 |    11042 |            | 1/611              | 18802 | AccessShareLock          | t
 relation   |        0 |     2671 |            | 1/611              | 18802 | AccessShareLock          | t
 relation   |        0 |     2677 |            | 1/611              | 18802 | AccessShareLock          | t
 relation   |    20162 |    10969 |            | 1/611              | 18802 | AccessSh

Re: [BUGS] BUG #5321: Parallel restore temporarily deadlocked by autovacuum analyze

2010-02-10 Thread Jerry Gamache
Yes, I have the PID in the logs now. The problem was observed around 13h30,
and there was no log output between 13h18 and 13h30.
There are messages for table_b (pid 18350) and table_c (pid 18406), but
none for table_a.


Alvaro Herrera wrote:

>> The logs show that the autovacuum of table_b was canceled 20 minutes
>> ago, but the thread is still alive and blocked.
>
> Well, it's clearly locked on table_b, and that autovac is still running
> ... maybe it was a previous run that was cancelled?  Do you have the PID
> in log_line_prefix?


2010-02-10 13:10:26 EST::@:[18301]:ERROR:  canceling autovacuum task
2010-02-10 13:10:26 EST::@:[18301]:CONTEXT:  automatic analyze of table "database1.public.table_d"
2010-02-10 13:10:42 EST::@:[18350]:ERROR:  canceling autovacuum task
2010-02-10 13:10:42 EST::@:[18350]:CONTEXT:  automatic analyze of table "database1.public.table_e"
2010-02-10 13:10:58 EST::@:[18320]:ERROR:  canceling autovacuum task
2010-02-10 13:10:58 EST::@:[18320]:CONTEXT:  automatic analyze of table "database1.public.table_f"
2010-02-10 13:11:00 EST::@:[18350]:ERROR:  canceling autovacuum task
2010-02-10 13:11:00 EST::@:[18350]:CONTEXT:  automatic analyze of table "database1.public.table_g"
2010-02-10 13:11:08 EST::@:[18320]:ERROR:  canceling autovacuum task
2010-02-10 13:11:08 EST::@:[18320]:CONTEXT:  automatic analyze of table "database1.public.table_h"
2010-02-10 13:11:17 EST::@:[18350]:ERROR:  canceling autovacuum task
2010-02-10 13:11:17 EST::@:[18350]:CONTEXT:  automatic analyze of table "database1.public.table_i"
2010-02-10 13:11:22 EST::@:[18320]:ERROR:  canceling autovacuum task
2010-02-10 13:11:22 EST::@:[18320]:CONTEXT:  automatic analyze of table "database1.public.table_j"
2010-02-10 13:11:42 EST::@:[18350]:ERROR:  canceling autovacuum task
2010-02-10 13:11:42 EST::@:[18350]:CONTEXT:  automatic analyze of table "database1.public.table_b"
2010-02-10 13:13:39 EST::@:[18406]:ERROR:  canceling autovacuum task
2010-02-10 13:13:39 EST::@:[18406]:CONTEXT:  automatic analyze of table "database1.public.table_e"
2010-02-10 13:13:39 EST::@:[18395]:ERROR:  canceling autovacuum task
2010-02-10 13:13:39 EST::@:[18395]:CONTEXT:  automatic analyze of table "database1.public.table_g"
2010-02-10 13:14:07 EST::@:[18406]:ERROR:  canceling autovacuum task
2010-02-10 13:14:07 EST::@:[18406]:CONTEXT:  automatic analyze of table "database1.public.table_c"
2010-02-10 13:14:49 EST::@:[18395]:ERROR:  canceling autovacuum task
2010-02-10 13:14:49 EST::@:[18395]:CONTEXT:  automatic analyze of table "database1.public.table_h"
2010-02-10 13:14:49 EST::@:[18406]:ERROR:  canceling autovacuum task
2010-02-10 13:14:49 EST::@:[18406]:CONTEXT:  automatic analyze of table "database1.public.table_k"
2010-02-10 13:14:49 EST::@:[18350]:ERROR:  canceling autovacuum task
2010-02-10 13:14:49 EST::@:[18350]:CONTEXT:  automatic analyze of table "database1.public.table_l"
2010-02-10 13:16:25 EST::@:[18350]:ERROR:  canceling autovacuum task
2010-02-10 13:16:25 EST::@:[18350]:CONTEXT:  automatic analyze of table "database1.public.table_m"
2010-02-10 13:17:13 EST::@:[18395]:ERROR:  canceling autovacuum task
2010-02-10 13:17:13 EST::@:[18395]:CONTEXT:  automatic analyze of table "database1.public.table_i"
2010-02-10 13:18:20 EST::@:[18567]:ERROR:  canceling autovacuum task
2010-02-10 13:18:20 EST::@:[18567]:CONTEXT:  automatic analyze of table "database1.public.table_e"
2010-02-10 13:18:31 EST::@:[18395]:ERROR:  canceling autovacuum task
2010-02-10 13:18:31 EST::@:[18395]:CONTEXT:  automatic analyze of table "database1.public.table_l"



Re: [BUGS] BUG #5321: Parallel restore temporarily deadlocked by autovacuum analyze

2010-02-10 Thread Jerry Gamache

Alvaro Herrera wrote:

> Jerry Gamache wrote:
>> Yes, I have the PID in the logs now. The problem was observed around
>> 13h30, and there was no log output between 13h18 and 13h30.
>> There are messages for table_b (pid 18350) and table_c (pid 18406),
>> but none for table_a.
>
> Eh, but according to the pg_locks snap you posted, the PID holding the
> table_b lock is (was?) 20490.

20490 is the relid of table_b. The PID holding the lock on table_b is 18406.
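
(To avoid that ambiguity, the relation OIDs in pg_locks can be resolved to
table names by joining against pg_class; a quick sketch:

database1=# SELECT l.pid, c.relname, l.mode, l.granted
database1-#   FROM pg_locks l JOIN pg_class c ON c.oid = l.relation
database1-#   WHERE l.relation IS NOT NULL ORDER BY l.pid;

Each row then shows the waiting or holding backend next to a table name
instead of a bare OID.)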



[BUGS] Problem with restoring database from a pg_dump generated script.

2001-05-11 Thread Jerry Davis


In short, when I try to restore a database from a pg_dump
generated SQL script, then try to insert additional records
into the restored database, it gives me a duplicate key error.

I am using the following version of the PostgreSQL database:

PostgreSQL 7.0.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66

and I am having a problem with dumping and restoring data.

First, I dump an existing database to a text sql file with the
following command from the shell:

pg_dump -D -n logindb > testdump.sql

It produces the following file:

=== start of testdump.sql file
\connect - jdavis
CREATE SEQUENCE login_primary_key_seq start 1 increment 1 maxvalue
2147483647 minvalue 1  cache 1 ;
SELECT nextval ('login_primary_key_seq');
CREATE TABLE login (
primary_key int4 DEFAULT nextval('login_primary_key_seq'::text) NOT NULL,
user_index int4 NOT NULL,
hash character varying(25) NOT NULL,
last_used timestamp NOT NULL,
expired character varying DEFAULT 0 NOT NULL,
PRIMARY KEY (primary_key)
);
INSERT INTO login (primary_key,user_index,hash,last_used,expired) VALUES
(1,1,'W5GGk7yEM+0Kw260tDeBaA==','2001-04-25 15:56:43-05','1');
INSERT INTO login (primary_key,user_index,hash,last_used,expired) VALUES
(2,1,'ufrmu+395SIfuBC6ombmwA==','2001-04-26 08:45:10-05','1');
INSERT INTO login (primary_key,user_index,hash,last_used,expired) VALUES
(3,1,'A7V4Rvo20cqb4CFDnWkOvg==','2001-04-26 11:14:26-05','1');
INSERT INTO login (primary_key,user_index,hash,last_used,expired) VALUES
(4,1,'fIlh5Su9iMc2KXt5tqCAEw==','2001-04-26 13:02:38-05','1');
INSERT INTO login (primary_key,user_index,hash,last_used,expired) VALUES
(5,1,'9eLdWa6wl/DrqOpq/0QSWA==','2001-04-26 16:39:51-05','1');
INSERT INTO login (primary_key,user_index,hash,last_used,expired) VALUES
(6,1,'x9P/sRNj4rFenORY+POq1w==','2001-05-01 08:48:20-05','1');
INSERT INTO login (primary_key,user_index,hash,last_used,expired) VALUES
(7,1,'4nT7+E4wBxKChA1BDgW0+Q==','2001-05-03 14:37:27-05','1');
INSERT INTO login (primary_key,user_index,hash,last_used,expired) VALUES
(8,1,'H9QDcbiNVLGlMXnngo9ICQ==','2001-05-03 16:51:26-05','1');
=== end of testdump.sql file

Now, I do the following commands from the shell:

dropdb logindb
createdb logindb
psql -e logindb < testdump.sql

This works fine.  It recreates the table and inserts the data
records.  It does give me one notice message:

NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'login_pkey' for
table 'login'

Then I go into the psql utility to add new records.

psql logindb

This works fine and brings me to the psql prompt.  I enter the
following command:

insert into login (user_index,hash,last_used,expired) values
(1,'abcdefghijklmnop','2001-05-07 08:00:00-05','1');

This attempt to insert gives me the following error message:

ERROR: Cannot insert a duplicate key into unique index login_pkey

If I continually try the additional insert, it fails 8 times then
finally succeeds on the 9th try, and when I do the following
query:

select * from login;

it shows that the following record has been appended:

9,1,'abcdefghijklmnop','2001-05-07 08:00:00-05','1'

This surprises me, since the SQL in the testdump.sql file was
created by the pg_dump utility, not by me, so I would have thought
it would have everything necessary to restore the database without
a problem.  I also would have expected that the INSERT INTO
statements in testdump.sql would have appropriately updated
the primary key sequence.

Is there something missing from the original database that got
dumped, or something I have to manually do to the dumped script
that will allow a no-problem restore?  Or is this a bug in
PostgreSQL?  I would appreciate any help you can offer.
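
(For what it's worth, the usual workaround is to resynchronize the sequence
with the table after the restore, along these lines:

logindb=# SELECT setval('login_primary_key_seq', (SELECT max(primary_key) FROM login));

After that, nextval() starts above the highest restored key and the
duplicate key errors stop.)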

Jerry Davis
TEAM Development
[EMAIL PROTECTED]






Re: [BUGS] BUG #1528: Rows returned that should be excluded by WHERE clause

2005-03-08 Thread Gill, Jerry T.
Just an interesting side note here: this behavior is identical to DB2's. I am not
sure if that makes it correct or not, but here is an example.

[EMAIL PROTECTED] gill]$ db2 "select 2 as id, max(apn3) from phoenix.client 
where 2 =1"

ID  2
--- --
  2  -

  1 record(s) selected.

-jgill

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Tom Lane
Sent: Tuesday, March 08, 2005 2:07 AM
To: Peter Wright
Cc: pgsql-bugs@postgresql.org
Subject: Re: [BUGS] BUG #1528: Rows returned that should be excluded by
WHERE clause 


"Peter Wright" <[EMAIL PROTECTED]> writes:
> Description:Rows returned that should be excluded by WHERE clause

Interesting point.  The view and union don't seem to be the issue;
I think the problem can be expressed as

regression=# select 2 as id, max(b) from t2 having 2 = 1;
 id | max
----+-----
  2 |
(1 row)

Now, if this were a WHERE clause, I think the answer would be right:

regression=# select 2 as id, max(b) from t2 where 2 = 1;
 id | max
----+-----
  2 |
(1 row)

but since it's HAVING I think this is probably wrong.  Looking at the
EXPLAIN output 

regression=# explain select 2 as id, max(b) from t2 having 2 = 1;
                         QUERY PLAN
------------------------------------------------------------
 Aggregate  (cost=3.68..3.68 rows=1 width=2)
   ->  Result  (cost=0.00..3.14 rows=214 width=2)
         One-Time Filter: false
         ->  Seq Scan on t2  (cost=0.00..3.14 rows=214 width=2)
(4 rows)

the issue is clearly that the known-false HAVING clause is pushed down
inside the aggregation, as though it were WHERE.  The existing code
pushes down HAVING to WHERE if the clause contains no aggregates, but
evidently this is too simplistic.  What are the correct conditions for
pushing down HAVING clauses to WHERE?

regards, tom lane



Re: [BUGS] BUG #1528: Rows returned that should be excluded by WHERE clause

2005-03-08 Thread Gill, Jerry T.
Sorry Tom, I missed a sentence in your previous email. My understanding of the
HAVING clause is that the row should be filtered out. Here is the same example
with the HAVING clause in DB2.


[EMAIL PROTECTED] gill]$ db2 "select 2 as id, max(apn3) from phoenix.client 
having 2 =1"

ID  2
--- --

  0 record(s) selected.

[EMAIL PROTECTED] gill]$ db2 "select 2 as id, max(apn3) from phoenix.client 
where 2 =1 having 2 = 1"

ID  2
--- --

  0 record(s) selected.

-jgill

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Tom Lane
Sent: Tuesday, March 08, 2005 11:15 AM
To: Gill, Jerry T.
Cc: pgsql-bugs@postgresql.org
Subject: Re: [BUGS] BUG #1528: Rows returned that should be excluded by
WHERE clause 


"Gill, Jerry T." <[EMAIL PROTECTED]> writes:
> Just an interesting side note here, this behavior is identical to DB2. I am 
> not sure if that makes it correct or not, but here is an example.
> [EMAIL PROTECTED] gill]$ db2 "select 2 as id, max(apn3) from phoenix.client 
> where 2 =1"

> ID  2
> --- --
>   2  -

>   1 record(s) selected.

In the WHERE case I think there's no question that the above is correct:
WHERE is defined to filter rows before application of aggregates, so
zero rows arrive at the MAX aggregate, and that means it produces a
NULL.

But HAVING is supposed to filter after aggregation, so I think probably
there should be no row out in that case.

What does DB2 do when you say HAVING 2 = 1?

regards, tom lane



Re: [BUGS] [HACKERS] We are not following the spec for HAVING without GROUP BY

2005-03-10 Thread Gill, Jerry T.
Here is your SQL run in a DB2 database.
connect to phoenix

   Database Connection Information

 Database server= DB2/LINUX 8.1.5
 SQL authorization ID   = GILL
 Local database alias   = PHOENIX


create table tab (col integer)
DB20000I  The SQL command completed successfully.

select 1 from tab having 1=0

1
---

  0 record(s) selected.


select 1 from tab having 1=1

1
---
  1

  1 record(s) selected.


insert into tab values(1)
DB20000I  The SQL command completed successfully.

insert into tab values(2)
DB20000I  The SQL command completed successfully.

select 1 from tab having 1=0

1
---

  0 record(s) selected.


select 1 from tab having 1=1

1
---
  1

  1 record(s) selected.

Hope that helps.
-Jgill

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Tom Lane
Sent: Thursday, March 10, 2005 11:45 AM
To: pgsql-hackers@postgresql.org; pgsql-bugs@postgresql.org
Subject: Re: [BUGS] [HACKERS] We are not following the spec for HAVING
without GROUP BY 


I wrote:
> This is quite clear that the output of a HAVING clause is a "grouped
> table" no matter whether the query uses GROUP BY or aggregates or not.

> What that means is that neither the HAVING clause nor the targetlist
> can use any ungrouped columns except within aggregate calls; that is,
>   select col from tab having 2>1
> is in fact illegal per SQL spec, because col isn't a grouping column
> (there are no grouping columns in this query).

Actually, it's even more than that: a query with HAVING and no GROUP BY
should always return 1 row (if the HAVING succeeds) or 0 rows (if not).
If there are no aggregates, the entire from/where clause can be thrown
away, because it can have no impact on the result!

Would those of you with access to other DBMSes try this:

create table tab (col integer);
select 1 from tab having 1=0;
select 1 from tab having 1=1;
insert into tab values(1);
insert into tab values(2);
select 1 from tab having 1=0;
select 1 from tab having 1=1;

I claim that a SQL-conformant database will return 0, 1, 0, and 1 rows
from the 4 selects --- that is, the contents of tab make no difference
at all.  (MySQL returns 0, 0, 0, and 2 rows, so they are definitely
copying our mistake...)

regards, tom lane
