Re: [BUGS] BUG #2225: Backend crash -- BIG table

2006-02-04 Thread Patrick Rotsaert



Is there some reason you can't add more swap space?
 


Yes, disk space. I have about 2 GB of swap space enabled.


How do you know it is Postgres that is using lots of memory? The OOM killer
doesn't just kill off memory hogs, so you can't assume that the processes
being killed tell you which processes were using lots of memory.

 

I keep 'top' running while I launch the query. One single postmaster 
climbs to the top, claiming 98% MEM and about 8% CPU.



The memory that Postgres uses is controlled in postgresql.conf. One particular
gotcha is that sortmem is per sort, so if you have a number of concurrent
sorts you might be using more memory than you expected.
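As an illustration of that gotcha (a sketch only; sort_mem is the 7.x/8.0-era parameter name, renamed work_mem in later releases, and the values here are arbitrary):

```sql
-- Assumption: PostgreSQL 8.0-era parameter name (later versions use work_mem).
SHOW sort_mem;         -- per-sort allowance, in KB
SET sort_mem = 4096;   -- cap each sort at ~4 MB for this session
-- Total memory use can approach sort_mem x (number of concurrent sorts),
-- so a single query with several sort nodes can multiply the footprint.
```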
 

I am very sure there are no other queries running at the same time. This 
is a development machine of which I have full control.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[BUGS] BUG #2235: Wal archiving problems

2006-02-04 Thread Per Lauvaas

The following bug has been logged online:

Bug reference:  2235
Logged by:  Per Lauvaas
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.3
Operating system:   Windows Server 2003 SE SP1
Description:Wal archiving problems
Details: 

Hi

We are using wal archiving. The wal segments are stored on another
computer:
(in postgresql.conf)
archive_command = 'copy /Y "%p"
Nasvolume1\\Backups\\campus\\20051125_backup\\wal_archive\\"%f"'

The following error has occurred twice in 2 months:

Log output:
2005-11-27 14:19:01 1592 LOG:  archived transaction log file
"0001000100D5"
...
2005-11-27 21:45:03 1580 PANIC:  could not open file
"D:/PostgreSQL/8.0/data/pg_xlog/0001000100D7" (log file 1,
segment 215): Invalid argument
This application has requested the Runtime to terminate it in an unusual
way.
Please contact the application's support team for more information.

I don't know if this is a bug, but maybe you know. The wal archiving is
working well except for the two occurrences.

Regards Per

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [BUGS] BUG #2225: Backend crash -- BIG table

2006-02-04 Thread Patrick Rotsaert


0 means overcommit is enabled.  You want to set it to something other 
than 0 to prevent overcommitting and the consequent surprising process 
deaths.  Exactly what other values are accepted varies, but 0 isn't the 
one for you.


 

I do not understand how 0 could mean overcommit is enabled. I do not 
know how it is in recent kernels, but the source code of the 2.4 kernel 
I use is this:


int vm_enough_memory(long pages)
{
    unsigned long free;

    /* Sometimes we want to use more memory than we have. */
    if (sysctl_overcommit_memory)
        return 1;

    // ...
}

seems pretty straightforward to me.
I also did a recursive grep through all of the kernel source and this is 
the only place where this parameter is used.

I tried setting the parameter to 1, but it did not make any difference.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [BUGS] BUG #2225: Backend crash -- BIG table

2006-02-04 Thread Patrick Rotsaert



One question is what does the explain (without analyze) plan look like for
the above and are the row estimates valid in the case of one of the hash
plans.
 

pointspp=# explain select trid, count(*) from pptran group by trid 
having count(*) > 1;

                               QUERY PLAN
-------------------------------------------------------------------------
 HashAggregate  (cost=1311899.28..1311902.78 rows=200 width=18)
   Filter: (count(*) > 1)
   ->  Seq Scan on pptran  (cost=0.00..1039731.02 rows=36289102 width=18)
(3 rows)


Failing that, how many rows should the above return?
 

That is exactly what I am trying to find out. I can only guess, but it 
should not be more than a few tens of thousands of rows.



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[BUGS] BUG #2236: extremely slow to get unescaped bytea data from db

2006-02-04 Thread Kai Ronan

The following bug has been logged online:

Bug reference:  2236
Logged by:  Kai Ronan
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.1
Operating system:   redhat linux
Description:extremely slow to get unescaped bytea data from db
Details: 

Using php 5.1.2, trying to store images in database using bytea column and
pg_unescape_bytea() which is a PHP wrapper for PQunescapeBytea().

It took 77 seconds to extract a 400K gif image from the database.  Using a
text column and base64 escaping, the same image took <1 second to extract
from the database.  lo functions also work fast.  Loading images to the db
is fast in all cases.
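A workaround in the spirit of the base64 comparison above (a sketch, not from the report; the table and column names are taken from the reply later in this thread) is to let the server do the encoding, so the client never calls pg_unescape_bytea():

```sql
-- Assumption: a table image(name text, data bytea), as implied by this thread.
-- Encode server-side; the client then decodes with base64_decode() (PHP)
-- instead of the slow pg_unescape_bytea() path.
SELECT encode(data, 'base64') AS data_b64
FROM image
WHERE name = 'big.gif';
```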

Code snippet is:



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[BUGS] BUG #2237: SELECT optimizer drops everything improperly

2006-02-04 Thread Alexis Wilke

The following bug has been logged online:

Bug reference:  2237
Logged by:  Alexis Wilke
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.1
Operating system:   Linux (RH9)
Description:SELECT optimizer drops everything improperly
Details: 

Hi guys,

It looks like it could have been fixed by Tom in 8.1.1

* Fix incorrect optimizations of outer-join conditions (Tom)

But I cannot upgrade just right now to test whether it would work.

The following is a very simple set of SQL commands.

Create a new database, then do:

  psql -f <file>.sql <database>

where <file>.sql is the following SQL saved in a file and <database> is
that empty database.

Note that it is written so you can run it multiple times, which means I DROP
the tables first; the first time you run it you will get some warnings, which
is fine.

Hope this helps.

Best regards,
Alexis




--
-- Sample script to create several tables and show that
-- a select does not behave as it should.
--

-- Assuming a database named empty_table_bug exists
-- and we are connected to it somehow.

-- In case you want to try several times, generates warnings...
DROP TABLE phpbb_users;
DROP TABLE phpbb_topics;
DROP TABLE phpbb_forums_watch;
DROP TABLE phpbb_topics_watch;

-- Found the bug hacking phpBB so tables are in link
-- with what phpBB uses.

CREATE TABLE phpbb_users (user_id int, user_name text);

CREATE TABLE phpbb_topics (topic_id int, user_id int, forum_id int,
topic_title text);

CREATE TABLE phpbb_forums_watch (forum_id int, user_id int);

CREATE TABLE phpbb_topics_watch (topic_id int, user_id int);

INSERT INTO phpbb_users (user_id, user_name) VALUES (1, 'alexis');
INSERT INTO phpbb_users (user_id, user_name) VALUES (2, 'john');

INSERT INTO phpbb_topics (topic_id, user_id, forum_id, topic_title)
VALUES (1, 1, 1, 'Misc');

INSERT INTO phpbb_forums_watch (forum_id, user_id)
VALUES (1, 1);

-- This did not happen yet and therefore phpbb_topics_watch is empty
-- INSERT INTO phpbb_topics_watch (topic_id, user_id) VALUES (1, 1);

-- This is what I need to work, it works with this simple SELECT
SELECT 'The next SELECT finds 1 row. Perfect!' AS message;
SELECT u.user_id, u.user_name, t.topic_title
FROM phpbb_users u, phpbb_forums_watch fw, phpbb_topics t
WHERE
t.topic_id = 1 -- some variable id
AND fw.forum_id = t.forum_id
AND fw.user_id = u.user_id;

-- In this select, it detects that the phpbb_topics_watch is
-- empty and thus ignores the WHERE clause thinking since that
-- table is empty the SELECT will be empty
SELECT 'The next SELECT finds 0 row. It should find the same row!' AS
message;
SELECT u.user_id, u.user_name, t.topic_title
FROM phpbb_users u, phpbb_forums_watch fw, phpbb_topics t,
phpbb_topics_watch tw
WHERE
(t.topic_id = 1 -- some variable id
AND fw.forum_id = t.forum_id
AND fw.user_id = u.user_id)
OR
(tw.topic_id = 1
AND u.user_id = tw.user_id
AND t.topic_id = 1);

-- insert a dummy value...
INSERT INTO phpbb_topics_watch (topic_id, user_id)
VALUES (0, 0);

-- now it works!
SELECT 'Make the phpbb_topics_watch table ''non-empty'' and it works' AS
message;
SELECT u.user_id, u.user_name, t.topic_title
FROM phpbb_users u, phpbb_forums_watch fw, phpbb_topics t,
phpbb_topics_watch tw
WHERE
(t.topic_id = 1 -- some variable id
AND fw.forum_id = t.forum_id
AND fw.user_id = u.user_id)
OR
(tw.topic_id = 1
AND u.user_id = tw.user_id
AND t.topic_id = 1);

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [BUGS] BUG #2225: Backend crash -- BIG table

2006-02-04 Thread Patrick Rotsaert



Hmm, if you do an enable_hashagg=false and then run the query (without
explain) does it work then?
 


pointspp=# set enable_hashagg = false;
SET
pointspp=# select trid, count(*) from pptran group by trid having 
count(*) > 1;
ERROR:  could not write block 661582 of temporary file: No space left on 
device

HINT:  Perhaps out of disk space?

Still does not work, but it no longer consumes the same amount of memory.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [BUGS] BUG #2225: Backend crash -- BIG table

2006-02-04 Thread Patrick Rotsaert



The problem is that the HashAgg will have to maintain a counter for
every distinct value of trid, not just those that occur more than
once.  So if there are a huge number of one-time-only values you could
still blow out memory (and HashAgg doesn't currently know how to spill
to disk).
 

One-time-only values are in my case more probable, so it will use a lot 
of counters.



That "rows=200" estimate looks suspiciously like a default.  Has this
table been ANALYZEd recently?  I'd expect the planner not to choose
HashAgg if it has a more realistic estimate of the number of groups.

regards, tom lane
 


I did a vacuum analyze, now the explain gives different results.

pointspp=# vacuum analyze;
VACUUM

pointspp=# explain select trid, count(*) from pptran group by trid 
having count(*) > 1;

                                  QUERY PLAN
--------------------------------------------------------------------------------
 GroupAggregate  (cost=9842885.29..10840821.57 rows=36288592 width=18)
   Filter: (count(*) > 1)
   ->  Sort  (cost=9842885.29..9933606.77 rows=36288592 width=18)
         Sort Key: trid
         ->  Seq Scan on pptran  (cost=0.00..1039725.92 rows=36288592 width=18)
(5 rows)

pointspp=# select trid, count(*) from pptran group by trid having 
count(*) > 1;
ERROR:  could not write block 661572 of temporary file: No space left on 
device

HINT:  Perhaps out of disk space?

I have 5.1GB of free disk space. If this is the cause, I have a 
problem... or is there another way to extract (and remove) duplicate rows?
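One common pattern for removing duplicates (a sketch, not from the thread; it assumes rows count as duplicates when trid matches, and uses PostgreSQL's physical row identifier ctid to pick a survivor):

```sql
-- Keep one physical row per trid and delete the rest.
-- ctid identifies the physical location of a row; comparing ctids is a
-- cheap way to choose which duplicate survives.  Note the correlated
-- subselect can still be slow against 36M rows without an index on trid.
DELETE FROM pptran a
WHERE EXISTS (
    SELECT 1
    FROM pptran b
    WHERE b.trid = a.trid
      AND b.ctid < a.ctid
);
```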



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[BUGS] BUG #2238: Query failed: ERROR

2006-02-04 Thread HOBY

The following bug has been logged online:

Bug reference:  2238
Logged by:  HOBY
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 7.3.2
Operating system:   Mandrake 9.1
Description:Query failed: ERROR
Details: 

Warning: pg_query() [function.pg-query]: Query failed: ERROR: Attribute
unnamed_join.element must be GROUPed or used in an aggregate function . in
fiche.php on line 154.

In postgres 7.2 the same query works normally!

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [BUGS] BUG #2225: Backend crash -- BIG table

2006-02-04 Thread Bruno Wolff III
On Fri, Feb 03, 2006 at 19:38:04 +0100,
  Patrick Rotsaert <[EMAIL PROTECTED]> wrote:
> 
> I have 5.1GB of free disk space. If this is the cause, I have a 
> problem... or is there another way to extract (and remove) duplicate rows?

How about processing a subset of the ids in one pass and then making
multiple passes to check all of the ids? As long as you don't have to use
chunks that are too small, this might work for you.
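That chunking idea could be sketched like this (the slice boundaries are hypothetical; trid's actual type and value distribution determine sensible chunk edges):

```sql
-- Pass 1 of N: restrict the aggregate to a slice of the trid domain so the
-- sort state and temporary files stay within the available 5.1 GB of disk.
SELECT trid, count(*)
FROM pptran
WHERE trid >= '0000' AND trid < '4000'   -- hypothetical slice boundaries
GROUP BY trid
HAVING count(*) > 1;
-- Repeat with the next slice until the whole trid domain is covered.
```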

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [BUGS] BUG #2236: extremely slow to get unescaped bytea data from db

2006-02-04 Thread Alvaro Herrera
Kai Ronan wrote:

>  // Get the bytea data
>  $res = pg_query("SELECT data FROM image WHERE name='big.gif'");

Do you have an index on the image.name column?  What does an 

EXPLAIN ANALYZE SELECT data FROM image WHERE name='big.gif'

say?
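If no such index exists, one could be added (the index name here is illustrative):

```sql
-- Without this, every lookup by name is a sequential scan over the table.
CREATE INDEX image_name_idx ON image (name);
```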

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [BUGS] BUG #2237: SELECT optimizer drops everything improperly

2006-02-04 Thread Tom Lane
"Alexis Wilke" <[EMAIL PROTECTED]> writes:
> -- In this select, it detects that the phpbb_topics_watch is
> -- empty and thus ignores the WHERE clause thinking since that
> -- table is empty the SELECT will be empty
> SELECT 'The next SELECT finds 0 row. It should find the same row!' AS
> message;
> SELECT u.user_id, u.user_name, t.topic_title
>   FROM phpbb_users u, phpbb_forums_watch fw, phpbb_topics t,
> phpbb_topics_watch tw
>   WHERE
>   (t.topic_id = 1 -- some variable id
>   AND fw.forum_id = t.forum_id
>   AND fw.user_id = u.user_id)
>   OR
>   (tw.topic_id = 1
>   AND u.user_id = tw.user_id
>   AND t.topic_id = 1);

I see no bug here.  This SELECT is defined to return the rows in the
cartesian product of the four FROM tables that satisfy the WHERE
condition.  Since one of the tables is empty, so is the cartesian
product.

Perhaps you meant to use a LEFT JOIN?

regards, tom lane
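A LEFT JOIN rewrite along the lines Tom suggests might look like this (a sketch, not from the thread; it keeps each user/topic pairing even when a watch table has no match, then requires at least one kind of watch):

```sql
-- Users watching topic 1's forum OR topic 1 itself; an empty watch table
-- no longer empties the whole result, because the LEFT JOINs preserve rows.
SELECT u.user_id, u.user_name, t.topic_title
FROM phpbb_topics t
CROSS JOIN phpbb_users u
LEFT JOIN phpbb_forums_watch fw
       ON fw.forum_id = t.forum_id AND fw.user_id = u.user_id
LEFT JOIN phpbb_topics_watch tw
       ON tw.topic_id = t.topic_id AND tw.user_id = u.user_id
WHERE t.topic_id = 1
  AND (fw.user_id IS NOT NULL OR tw.user_id IS NOT NULL);
```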

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [BUGS] BUG #2238: Query failed: ERROR

2006-02-04 Thread Tom Lane
"HOBY" <[EMAIL PROTECTED]> writes:
> Warning: pg_query() [function.pg-query]: Query failed: ERROR: Attribute
> unnamed_join.element must be GROUPed or used in an aggregate function . in
> fiche.php on line 154.

It's unlikely that we are going to be able to help when you didn't show
us the query or the table definitions.  However, some browsing in the
CVS logs shows this 7.3.3 bug fix:

2003-04-03 13:04  tgl

* src/backend/parser/parse_agg.c (REL7_3_STABLE): Repair incorrect
checking of grouped/ungrouped variables in the presence of unnamed
joins; per pghackers discussion 31-Mar-03.

so my first advice is to update to something newer than 7.3.2 and see if
you still have a problem.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster