Is there some reason you can't add more swap space?
Yes, disk space. I have about 2 GB of swap space enabled.
How do you know it is Postgres that is using lots of memory? The OOM killer
doesn't just kill off memory hogs, so you can't just assume that the
processes being killed tell you which process is actually using the memory.
The following bug has been logged online:
Bug reference: 2235
Logged by: Per Lauvaas
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.0.3
Operating system: Windows Server 2003 SE SP1
Description: WAL archiving problems
Details:
Hi
We are using WAL archiving. T
0 means overcommit is enabled. You want to set it to something other
than 0 to prevent overcommitting and the consequent surprising process
deaths. Exactly which other values are accepted varies, but 0 isn't the
one for you.
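For the record, on Linux the knob in question is the vm.overcommit_memory
sysctl: 0 is heuristic overcommit, 1 always overcommits, and 2 enables
strict accounting. A minimal sketch, assuming a stock Linux kernel:

# /etc/sysctl.conf -- 2 = strict accounting, i.e. no overcommitting;
# load the setting without a reboot via `sysctl -p`
vm.overcommit_memory = 2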
I do not understand how 0 could mean overcommit is enabled. I do
One question is what the explain (without analyze) plan looks like for
the above, and whether the row estimates are valid in the case of one of
the hash plans.
pointspp=# explain select trid, count(*) from pptran group by trid
having count(*) > 1;
QUERY PLAN
--
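A hedged way to sanity-check those estimates (a sketch; the statistics
may simply be stale):

-- refresh the planner's statistics for the table, then re-plan
ANALYZE pptran;
EXPLAIN SELECT trid, count(*) FROM pptran GROUP BY trid HAVING count(*) > 1;

If the estimated group count in the new plan still looks nothing like the
real number of distinct trid values, that points at the statistics rather
than the executor.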
The following bug has been logged online:
Bug reference: 2236
Logged by: Kai Ronan
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.0.1
Operating system: Red Hat Linux
Description: extremely slow to get unescaped bytea data from db
Details:
Using PHP 5.1.2, tryi
The following bug has been logged online:
Bug reference: 2237
Logged by: Alexis Wilke
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.0.1
Operating system: Linux (RH9)
Description: SELECT optimizer drops everything improperly
Details:
Hi guys,
It looks like i
Hmm, if you do an enable_hashagg=false and then run the query (without
explain), does it work then?
pointspp=# set enable_hashagg = false;
SET
pointspp=# select trid, count(*) from pptran group by trid having
count(*) > 1;
ERROR: could not write block 661582 of temporary file: No space left
on device
The problem is that the HashAgg will have to maintain a counter for
every distinct value of trid, not just those that occur more than
once. So if there are a huge number of one-time-only values you could
still blow out memory (and HashAgg doesn't currently know how to spill
to disk).
One-tim
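A hedged way to gauge how many counters that hash table would need (a
sketch using the standard pg_stats view; a negative n_distinct is a
fraction of the table's row count):

-- planner's estimate of the number of distinct trid values
SELECT n_distinct FROM pg_stats
WHERE tablename = 'pptran' AND attname = 'trid';
-- exact count, but requires a full scan (and a big sort)
SELECT count(DISTINCT trid) FROM pptran;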
The following bug has been logged online:
Bug reference: 2238
Logged by: HOBY
Email address: [EMAIL PROTECTED]
PostgreSQL version: 7.3.2
Operating system: Mandrake 9.1
Description:Query failed: ERROR
Details:
Warning: pg_query() [function.pg-query]: Query failed: ER
On Fri, Feb 03, 2006 at 19:38:04 +0100,
Patrick Rotsaert <[EMAIL PROTECTED]> wrote:
>
> I have 5.1GB of free disk space. If this is the cause, I have a
> problem... or is there another way to extract (and remove) duplicate rows?
How about processing a subset of the ids in one pass and then making
further passes over the remaining ids?
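A minimal sketch of that idea, assuming trid is an integer; the cutoff
value 500000 is made up for illustration. Since all duplicates of a given
trid fall into the same range, no duplicates are missed:

-- pass 1: ids below an arbitrary cutoff
SELECT trid, count(*) FROM pptran
WHERE trid < 500000
GROUP BY trid HAVING count(*) > 1;
-- pass 2: the rest (split further if a pass still runs out of space)
SELECT trid, count(*) FROM pptran
WHERE trid >= 500000
GROUP BY trid HAVING count(*) > 1;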
Kai Ronan wrote:
> // Get the bytea data
> $res = pg_query("SELECT data FROM image WHERE name='big.gif'");
Do you have an index on the image.name column? What does an
EXPLAIN ANALYZE SELECT data FROM image WHERE name='big.gif'
say?
--
Alvaro Herrera        http://w
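If the index turns out to be missing, a hedged sketch of the obvious fix
(the index name image_name_idx is invented):

CREATE INDEX image_name_idx ON image (name);
-- re-run the suggested plan check; it should now show an index scan
EXPLAIN ANALYZE SELECT data FROM image WHERE name = 'big.gif';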
"Alexis Wilke" <[EMAIL PROTECTED]> writes:
> -- In this select, it detects that the phpbb_topics_watch is
> -- empty and thus ignores the WHERE clause thinking since that
> -- table is empty the SELECT will be empty
> SELECT 'The next SELECT finds 0 row. It should find the same row!' AS
> message;
"HOBY" <[EMAIL PROTECTED]> writes:
> Warning: pg_query() [function.pg-query]: Query failed: ERROR: Attribute
> unnamed_join.element must be GROUPed or used in an aggregate function . in
> fiche.php on line 154.
It's unlikely that we are going to be able to help when you didn't show
us the query or the table definitions involved.
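For what it's worth, a hedged guess at the general shape of that error
(table and column names invented): every column in the select list that
is not inside an aggregate has to appear in GROUP BY.

-- fails: element is neither grouped nor aggregated
SELECT element, count(*) FROM fiche GROUP BY category;
-- fix 1: group by it
SELECT element, count(*) FROM fiche GROUP BY element;
-- fix 2: aggregate it
SELECT category, max(element) FROM fiche GROUP BY category;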