Re: [BUGS] BUG #1552: massive performance hit between 7.4 and 8.0.1

2005-03-26 Thread Zeki
Have you tried running ANALYZE after the first 1,000 or so inserts?  Also,
you should be able to disable sequential scans for the duration of the
connection using SET enable_seqscan = false.
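
For example (a minimal sketch; the table name comes from the script in the
report below, and the batch size is a judgment call):

  -- after the first batch of inserts, refresh the planner's statistics
  ANALYZE requirements;

  -- or, to steer the planner away from sequential scans for this session:
  SET enable_seqscan = false;
  -- ... run the bulk inserts ...
  SET enable_seqscan = true;   -- restore the default when done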

-Zeki
Brian O'Reilly wrote:
The following bug has been logged online:
Bug reference:  1552
Logged by:  Brian O'Reilly
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.1
Operating system:   Linux 2.6.11
Description:massive performance hit between 7.4 and 8.0.1
Details: 

When doing a lot of inserts into an initially empty table with a foreign key
to another table, there is an incredible performance degradation on 8.0.1. I
have a program that inserts rows in an iterative loop, about 110,000 rows in
all. On PostgreSQL 7.4 on a Debian machine it takes a shade over 2 minutes to
complete. On an amd64 box running Gentoo and 8.0.1, it takes over an hour and
forty minutes. The query plan on the Debian host that completes quickly
follows:
 "Fast" machine, Debian, PSQL 7.4:


                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using requirements_pkey on requirements  (cost=0.00..4.82 rows=2
 width=0) (actual time=0.013..0.013 rows=0 loops=1)
   Index Cond: (reqid = 10::bigint)
 Total runtime: 0.134 ms
(3 rows)
and the query plan on the 'slow' machine:

                                  QUERY PLAN
------------------------------------------------------------------------------
 Seq Scan on requirements  (cost=0.00..0.00 rows=1 width=0) (actual
 time=0.002..0.002 rows=0 loops=1)
   Filter: (reqid = 10::bigint)
 Total runtime: 0.040 ms
(3 rows)
The script I am using to show this behaviour follows:
CREATE TABLE packages
 (name text PRIMARY KEY);
CREATE TABLE binary_packages
 (name text REFERENCES packages,
  version text,
  PRIMARY KEY(name, version));
CREATE TABLE requirements
 (reqid bigint PRIMARY KEY,
  name text,
  version text,
  FOREIGN KEY (name, version) REFERENCES
binary_packages);
CREATE TABLE constraints
 (constid bigint PRIMARY KEY,
  reqid bigint REFERENCES requirements,
  type text,
  name text REFERENCES packages,
  version text DEFAULT '',
  relation character(2));
explain analyze select 1 from only requirements where reqid='10';
The query optimiser seems to settle on a sequential scan whenever it plans
against an empty table. That is a fast strategy while the table is empty or
nearly so, but once the table fills up, each insert's check rescans the whole
table, so the overall load degrades to O(N^2). This is clearly a bug. Please
let me know if I can provide any more information.
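
For what it's worth, a sketch of the suspected mechanism (the trigger query
below is a reconstruction, not captured from the server): each insert into a
table referencing requirements fires a foreign-key check roughly equivalent
to

  SELECT 1 FROM ONLY requirements WHERE reqid = $1 FOR UPDATE;

and if that statement is planned and cached while requirements is empty, the
cached seqscan plan is reused for every later insert, rescanning the whole
(by then much larger) table each time.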
Brian O'Reilly
System Architect.,
DeepSky Media Resources


[BUGS] Permission denied errors

2005-08-23 Thread zeki

Hello all, I woke up this morning and found my installation of Postgres
(7.3.2) spewing the following error message:

ERROR:  cannot count blocks for ev_log -- open failed: Permission denied
ERROR:  cannot count blocks for ev_log -- open failed: Permission denied
ERROR:  cannot count blocks for ev_log -- open failed: Permission denied
ERROR:  cannot count blocks for ev_log -- open failed: Permission denied

Now it seems as though the database is somehow diseased.  I can select and
insert data in all of the other tables in the database, but I cannot
create tables:

#  create table test_tab (idfield int);
ERROR:  cannot create test_tab: Permission denied

I also cannot dump data out of ev_log:

pg_dump: ERROR:  cannot count blocks for ev_log -- open failed: Permission denied
pg_dump: lost synchronization with server, resetting connection
pg_dump: SQL command to dump the contents of table "ev_log" failed: PQendcopy() failed.
pg_dump: Error message from server:
pg_dump: The command was: COPY public.ev_log (logid, eventtype, username, ipaddress, eventdata, ts) TO stdout;

Any ideas?

I'm willing to dump and reload, but I'd like to get the data out of the 
log table if possible.
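
If the underlying permission problem can be repaired first, something like
this should pull the log data out (a sketch; COPY to a server-side file
requires superuser rights, and the output path is illustrative):

  COPY public.ev_log (logid, eventtype, username, ipaddress, eventdata, ts)
    TO '/tmp/ev_log.copy';

The column list is the one pg_dump printed above.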

-Zeki




[BUGS] Primary key duplicates

2004-08-13 Thread Zeki Mokhtarzada
I have a very strange bug with postgres 7.4.3.  I have a table with about 
15 million rows and recently, duplicate rows have started appearing.

For simplicity, let's say my table is as follows:

files
---
fileid   integer   -- Primary key generated by a sequence
userid   integer   -- id of user that owns the file
filename character varying(255) -- name of file

Indexes:  "files_pkey" primary key, btree (fileid)
          "files_userid" hash (userid)


When I do: 

 select fileid, userid from files where userid = 1898598 order by fileid;

I get:

 fileid  | userid
---------+---------
 3787433 | 1898598
 3787433 | 1898598
 3787563 | 1898598
 9554275 | 1898598


Notice that 3787433 is duplicated.  How could this have happened if that
column is flagged as the primary key?  Even more interesting:

select oid, fileid, userid from files where userid = 1898598 order by 
fileid;


    oid   | fileid  | userid
----------+---------+---------
  1573737 | 3787433 | 1898598
  1573737 | 3787433 | 1898598
  1573820 | 3787563 | 1898598
 18612041 | 9554275 | 1898598

The rows have the same OID!  So my question is: how do I delete the
duplicate row?  If I execute

 select fileid, userid from files where fileid = 1573737;

I get:

 fileid  | userid
---------+---------
 1573737 | 1008628


Similarly, if I try to delete both of the rows, only one of them gets
deleted; when I then select by userid, I see the one remaining row, but if
I select by fileid I get no rows returned.

I suspect a corrupt index is at fault here.  If that's the case, a reindex
will take quite some time and will lock the table causing a long period of
downtime.  Is that my only option?  Any other ideas?
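
For reference, the rebuild being weighed would be something like this (a
sketch using the index names shown above; on 7.4, REINDEX holds an exclusive
lock on the table while it runs, which is the downtime concern):

  REINDEX INDEX files_pkey;   -- rebuild just the primary-key index
  REINDEX TABLE files;        -- or rebuild every index on the table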

-Zeki 





Re: [BUGS] Primary key duplicates

2004-08-15 Thread Zeki Mokhtarzada
On Fri, 13 Aug 2004, Tom Lane wrote:

> It would be interesting to see the ctid, xmin, cmin, and xmax fields
> as well.

    ctid    | xmin |    cmin    |    xmax    |   oid   | fileid  | userid
------------+------+------------+------------+---------+---------+---------
 (53101,30) |    2 | 1478674026 | 1478674026 | 1573737 | 3787433 | 1898598
 (53101,39) |    2 |          0 |          0 | 1573737 | 3787433 | 1898598

> Possibly, but the duplicate OID entries suggest that a row was updated
> and then for some reason the old version appears to still be valid.
> (Or an attempted update was rolled back, but the new version got marked
> valid anyway.)  Have you had any crashes or other strange behavior lately?

I've run the query with SET enable_indexscan = off, so that rules out an
index problem.
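
For the record, that check was along these lines (a reconstruction; the
exact statement wasn't quoted):

  SET enable_indexscan = off;
  SELECT fileid, userid FROM files WHERE userid = 1898598 ORDER BY fileid;
  SET enable_indexscan = on;   -- restore the default

If the duplicates still show up with index scans disabled, the heap itself
contains two visible copies, so a corrupt index alone can't explain them.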

The system is running on a Dell PowerEdge 2650 running RedHat 8.  We had a
kernel halt about two weeks ago that was caused by one of our disk mirrors
failing.  It could be that these problems were caused at that point in
time and are just being noticed now.

It looks like you've seen this problem before:

http://archives.postgresql.org/pgsql-bugs/2003-12/msg00174.php

But you never stated how to delete the duplicate rows.  Any suggestions?  
Also, where can I find documentation on the purpose and values of the 
ctid, oid, xmin, xmax, cmin, cmax columns?

Thanks!

-Zeki




Re: [BUGS] Primary key duplicates

2004-08-15 Thread Zeki Mokhtarzada
It appears that I can select the rows, but not delete or update them.  

# select ctid from files where ctid = '(53101,30)';
    ctid
------------
 (53101,30)
(1 row)

# delete from files where ctid = '(53101,30)';
DELETE 0
# update files set fileid = 1000 where ctid = '(53101,30)';
UPDATE 0
#

-Zeki

On Fri, 13 Aug 2004, Tom Lane wrote:

> Zeki Mokhtarzada <[EMAIL PROTECTED]> writes:
> > The system is running on a Dell PowerEdge 2650 running RedHat 8.  We had a
> > kernel halt about two weeks ago that was caused by one of our disk mirrors
> > failing.  It could be that these problems were caused at that point in
> > time and are just being noticed now.
> 
> Not sure.  xmin=2 (FrozenXid) implies that those rows have been like
> that for a good long time (~ 1 billion transactions).
> 
> > But you never stated how to delete the duplicate rows.  Any suggestions?  
> 
> You can select 'em by ctid, viz WHERE ctid = '(53101,30)'
> 
>   regards, tom lane
> 




Re: [BUGS] Permission denied errors

2005-08-24 Thread Zeki Mokhtarzada
Very interesting...  We recently changed the location of the data files
using a symbolic link.  Although all of the files had the correct ownership,
the new parent directory was not owned by postgres.  The strange thing
is that we had been running like this for over a week, and even now,
only the log table was affected (the log table is the only table that
sees a large volume of data changes; most of the other tables are
relatively static).  Changing the owner of the parent directory to
postgres fixed the problem.


Thanks!

-Zeki

Tom Lane wrote:

> [EMAIL PROTECTED] writes:
> > Hello all, I woke up this morning and found my installation of Postgres
> > (7.3.2) spewing the following error message:
>
> > ERROR:  cannot count blocks for ev_log -- open failed: Permission denied
> > ERROR:  cannot count blocks for ev_log -- open failed: Permission denied
> > ERROR:  cannot count blocks for ev_log -- open failed: Permission denied
> > ERROR:  cannot count blocks for ev_log -- open failed: Permission denied
>
> Look around in the database directory.  It kinda looks like one or more
> files or directories had had its ownership or permissions changed so
> that the server can't read it anymore.
>
> Postgres itself would not do such a thing, so the culprit is either
> administrator error or flaky disk hardware.  After you fix the problem
> by hand, you'll want to try to figure out which.
>
>   regards, tom lane