henk de wit <[EMAIL PROTECTED]> writes:
> I'm not exactly sure what to look for in the log. I'll do my best though
> and see what I can come up with.
It'll be a bunch of lines like
TopMemoryContext: 49832 total in 6 blocks; 8528 free (6 chunks); 41304 used
immediately in front of the out-of-memory error.
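[Editorial note: a dump of this kind is a list of one line per memory
context, in the same format as the TopMemoryContext line above. An
invented excerpt, showing how it sits directly before the error in the
log:

  TopMemoryContext: 49832 total in 6 blocks; 8528 free (6 chunks); 41304 used
  TopTransactionContext: 8192 total in 1 blocks; 7856 free (0 chunks); 336 used
  MessageContext: 24576 total in 2 blocks; 11128 free (10 chunks); 13448 used
  CacheMemoryContext: 516096 total in 6 blocks; 125776 free (1 chunks); 390320 used
  ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
  ERROR:  out of memory
  DETAIL:  Failed on request of size 4194304.
]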
> This error should have produced a map of per-context memory use in the
> postmaster log. Please show us that.
I'm not exactly sure what to look for in the log. I'll do my best though and
see what I can come up with.
> How is the memory consumed? How are you measuring it? I assume you
> mean the postgres process that is running the query uses the memory.
> If so, which tool(s) are you using and what's the output that shows it
> being used?
It's periodically measured and recorded by a script from which the rel
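[Editorial note: one way to tie such measurements to the right process,
sketched here with stock tools rather than the poster's actual script,
is to look up the backend PID from inside the database and then watch
that PID's resident set size with ps or top. On 8.2 the PID column is
procpid:

  -- list active backends and what they are running (PostgreSQL 8.2);
  -- idle sessions report '<IDLE>' as their current query
  SELECT procpid, usename, current_query
  FROM pg_stat_activity
  WHERE current_query <> '<IDLE>';
]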
On 10/11/07, Andrew - Supernews <[EMAIL PROTECTED]> wrote:
> On 2007-10-10, Theo Kramer <[EMAIL PROTECTED]> wrote:
> > When doing a 'manual' prepare and explain analyze I get the following
> >
> > rascal=# prepare cq (char(12), smallint, integer) as SELECT oid,
> > calllog_mainteng, calllog_phase,
On 2007-10-10, Theo Kramer <[EMAIL PROTECTED]> wrote:
> When doing a 'manual' prepare and explain analyze I get the following
>
> rascal=# prepare cq (char(12), smallint, integer) as SELECT oid,
> calllog_mainteng, calllog_phase, calllog_self FROM calllog
> WHERE calllog_mainteng = $1
> AND calllog
Erik Jones <[EMAIL PROTECTED]> writes:
> Tom, are there any docs anywhere that explain how to interpret those
> per-context memory dumps?
No, not really. What you have to do is grovel around in the code and
see where contexts with particular names might get created.
> For example, when I see
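[Editorial note: each line of such a dump has the form

  ContextName: T total in B blocks; F free (C chunks); U used

meaning T bytes are allocated to that context in B blocks, F of them
are currently free (spread over C chunks), and U = T - F are in use.
Child contexts are listed beneath their parents, so a context with a
large "used" figure points at the code path that created it.]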
On 10/11/07, Theo Kramer <[EMAIL PROTECTED]> wrote:
> On Thu, 2007-10-11 at 10:12 +0100, Richard Huxton wrote:
> > Theo Kramer wrote:
> > >
> > > So I suspect that there is something more fundamental here...
> >
> > OK, so there must be something different between the two scenarios. It
> > can only be one of:
On Oct 11, 2007, at 9:51 AM, Tom Lane wrote:
henk de wit <[EMAIL PROTECTED]> writes:
ERROR: out of memory
DETAIL: Failed on request of size 4194304.
This error should have produced a map of per-context memory use in the
postmaster log. Please show us that.
regards, tom lane
Oh, I see. I didn't look carefully at the EXPLAIN ANALYZE I posted.
So, is there a solution to the rank problem?
Benjamin
On Oct 11, 2007, at 8:53 AM, Tom Lane wrote:
Benjamin Arai <[EMAIL PROTECTED]> writes:
It appears that the ORDER BY rank operation is the slowing factor.
If I remove it then the query is pretty fast.
Benjamin Arai <[EMAIL PROTECTED]> writes:
> It appears that the ORDER BY rank operation is the slowing factor.
> If I remove it then the query is pretty fast. Is there another way
> to perform ORDER BY such that it does not do a sort?
I think you misunderstood: it's not the sort that's slow, it's having
to compute the rank for every matching row before anything can be sorted.
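[Editorial note: a sketch of the pattern under discussion, using an
invented table documents(id, body_tsv tsvector) and the core full-text
syntax of later releases (the contrib/tsearch2 spellings on 8.2
differ). The rank must be computed for every row matching the @@
condition before the sort can begin, which is why dropping the ORDER BY
looks like it removes a slow sort:

  -- the rank is evaluated once per matching row; the sort merely
  -- orders the already-computed values
  SELECT id, ts_rank(body_tsv, q) AS rank
  FROM documents, to_tsquery('english', 'postgres & tuning') AS q
  WHERE body_tsv @@ q
  ORDER BY rank DESC
  LIMIT 10;
]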
It appears that the ORDER BY rank operation is the slowing factor.
If I remove it then the query is pretty fast. Is there another way
to perform ORDER BY such that it does not do a sort?
Benjamin
On Oct 5, 2007, at 3:57 PM, Benjamin Arai wrote:
On Oct 5, 2007, at 8:32 AM, Oleg Bartunov wrote:
henk de wit <[EMAIL PROTECTED]> writes:
> ERROR: out of memory
> DETAIL: Failed on request of size 4194304.
This error should have produced a map of per-context memory use in the
postmaster log. Please show us that.
regards, tom lane
On 10/11/07, henk de wit <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I'm running into a problem with PostgreSQL 8.2.4 (running on 32 bit Debian
> Etch/2x dual core C2D/8GB mem). The thing is that I have a huge transaction
> that does 2 things: 1) delete about 300,000 rows from a table with about 15
> million rows and 2) do some (heavy) calculations and re-insert a
henk de wit wrote:
Hi,
I'm running into a problem with PostgreSQL 8.2.4 (running on 32 bit
Debian Etch/2x dual core C2D/8GB mem). The thing is that I have a
huge transaction that does 2 things: 1) delete about 300,000 rows
from a table with about 15 million rows and 2) do some (heavy)
calculations and re-insert a
Hi,
I'm running into a problem with PostgreSQL 8.2.4 (running on 32 bit Debian
Etch/2x dual core C2D/8GB mem). The thing is that I have a huge transaction
that does 2 things: 1) delete about 300,000 rows from a table with about 15
million rows and 2) do some (heavy) calculations and re-insert a
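[Editorial note: for concreteness, the shape of such a transaction,
with invented names standing in for the poster's schema and logic:

  BEGIN;
  -- 1) delete roughly 300,000 rows out of a ~15 million row table
  DELETE FROM facts WHERE batch_id = 42;
  -- 2) recompute and re-insert the replacement rows;
  --    recompute_value() is a hypothetical stand-in for the
  --    "heavy calculations"
  INSERT INTO facts (batch_id, item, value)
  SELECT 42, item, recompute_value(item)
  FROM staging;
  COMMIT;
]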
On Thu, 2007-10-11 at 10:12 +0100, Richard Huxton wrote:
> Theo Kramer wrote:
> >
> > So I suspect that there is something more fundamental here...
>
> OK, so there must be something different between the two scenarios. It
> can only be one of:
> 1. Query
> 2. DB Environment (user, locale, settings)
Theo Kramer wrote:
Thanks, had missed that. However, I am afraid that I fail to see how
preparing a query using PQprepare() and then executing it using
PQexecPrepared() can be roughly 8,000 times slower than executing it
directly (403386.583 ms / 50.0 ms ≈ 8067).
When doing a 'manual' prepare and explain analyze I get the following
Hi,
On Thursday 11 October 2007, Kevin Kempter wrote:
> I'm preparing to create a test suite of very complex queries that can be
> profiled in terms of load and performance. The ultimate goal is to define a
> load/performance profile during a run of the old application code base and
> then again with the new code base.
Theo Kramer wrote:
On Wed, 2007-10-10 at 17:00 +0200, Cédric Villemain wrote:
Reading the manual, you can learn that a prepared statement may not
follow the same plan as a direct query: the plan is made before
PostgreSQL knows the values of the parameters.
See 'Notes' http://www.postgresql.org/doc
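[Editorial note: on 8.2 the plan for a prepared statement is built
once, before any parameter value is known, so the generic plan can be
put next to the value-specific one. A minimal comparison, with an
invented parameter value and the query abbreviated from the one quoted
earlier in this digest:

  -- generic plan, chosen without knowing the parameter value
  PREPARE cq (char(12)) AS
    SELECT oid, calllog_mainteng, calllog_phase, calllog_self
    FROM calllog
    WHERE calllog_mainteng = $1;
  EXPLAIN ANALYZE EXECUTE cq ('SE01-000001');

  -- value-specific plan for the same query
  EXPLAIN ANALYZE
  SELECT oid, calllog_mainteng, calllog_phase, calllog_self
  FROM calllog
  WHERE calllog_mainteng = 'SE01-000001';
]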