On Wed, Oct 01, 2003 at 09:01:23PM -0400, Neil Conway wrote:
> On Wed, 2003-10-01 at 20:25, Jingren Zhou wrote:
> > From the document, it seems that PREPARE/EXECUTE works only in the same
> > session. I am wondering whether postgres can prepare a query (save the plan)
> > for different backends.
On Wed, 2003-10-01 at 22:43, Tom Lane wrote:
> Another issue is that we currently don't have a mechanism for flushing
> query plans when they become obsolete (eg, an index is added or
> removed). Locally-cached plans are relatively easy to refresh: just
> start a fresh session. A shared plan cache
Neil Conway <[EMAIL PROTECTED]> writes:
> The decision to store prepared statements per-backend, rather than in
> shared memory, was made deliberately. In fact, an early version of the
> PREPARE/EXECUTE patch (written by Karel Zak) stored prepared statements
> in shared memory. But I decided to remove
On Wed, 1 Oct 2003, Jingren Zhou wrote:
> Hi,
>
> From the document, it seems that PREPARE/EXECUTE works only in the same
> session. I am wondering whether postgres can prepare a query (save the plan)
> for different backends.
>
> I am working on a project which requires executing "psql -c 'query'" in
On Wed, 2003-10-01 at 20:25, Jingren Zhou wrote:
> From the document, it seems that PREPARE/EXECUTE works only in the same
> session. I am wondering whether postgres can prepare a query (save the plan)
> for different backends.
The decision to store prepared statements per-backend, rather than
Hi,
From the document, it seems that PREPARE/EXECUTE works only in the same
session. I am wondering whether postgres can prepare a query (save the plan)
for different backends.
I am working on a project which requires executing "psql -c 'query'" in
command line multiple times. Since the performance
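A minimal sketch of the single-session pattern being discussed: the saving only materializes if the connection stays open, since PREPARE is parsed and planned once and each EXECUTE reuses that plan. (Statement, table, and column names here are invented for illustration.)

```sql
-- Planned once in this session:
PREPARE big_query (integer) AS
    SELECT * FROM orders WHERE customer_id = $1;

EXECUTE big_query (42);   -- skips parse/rewrite/plan
EXECUTE big_query (99);   -- reuses the same plan

DEALLOCATE big_query;     -- the statement dies with the session anyway
```

Running `psql -c '...'` repeatedly gets none of this benefit, because each invocation starts a fresh backend with an empty statement cache.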
The standard approach to such a scenario would imho be to write stored procedures
for the complex queries (e.g. plpgsql) and use that from the client.
Maybe even eliminate a few ping pongs between client and server.
Andreas
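A sketch of Andreas' suggestion, in the string-quoted pre-8.0 function syntax with `$1` for the first argument; the table and column names are invented:

```sql
-- Wrap the complex query server-side so the client sends one short call.
CREATE FUNCTION customer_total(integer) RETURNS numeric AS '
    DECLARE
        total numeric;
    BEGIN
        SELECT sum(o.amount) INTO total
          FROM orders o
         WHERE o.customer_id = $1;
        RETURN total;
    END;
' LANGUAGE 'plpgsql';

SELECT customer_total(42);
```

This also answers the planner-time question below in part: PL/pgSQL prepares and caches the plans for the queries inside a function the first time it runs them in a session, so repeated calls skip planning, though still only within one session.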
Does it reduce the time taken by the planner?
Are server-side SQL functions
On Wed, 2002-10-23 at 10:39, Greg Copeland wrote:
> If you were using them that frequently, couldn't you just keep a
> persistent connection? If it's not used that often, wouldn't the
> overhead of preparing the query following a new connection become noise?
Especially by the time you add in the
On Wed, Oct 23, 2002 at 11:02:14AM -0400, Tom Lane wrote:
> Hans-Jürgen Schönig <[EMAIL PROTECTED]> writes:
> > I wonder if there is a way to store a parsed/rewritten/planned query in
> > a table so that it can be loaded again.
>
> The original version of the PREPARE patch used a shared-across-backends
> cache for PREPAREd statements.
"Zeugswetter Andreas SB SD" <[EMAIL PROTECTED]> writes:
> The standard approach to such a scenario would imho be to write
> stored procedures for the complex queries (e.g. plpgsql) and use
> that from the client. Maybe even eliminate a few ping pongs between
> client and server.
Since PL/PgSQL caches
> The idea is not to have it across multiple backends and have it in
> sync with the tables in the database. This is not the point.
> My problem is that I have seen many performance critical applications
> sending just a few complex queries to the server. The problem is: If you
> have many queries where
Greg Copeland wrote:
> Could you use some form of connection proxy where the proxy is actually
> keeping persistent connections but your application is making transient
> connections to the proxy? I believe this would result in the desired
> performance boost and behavior.
> Now, the next obvious question...anyone know of any
Could you use some form of connection proxy where the proxy is actually
keeping persistent connections but your application is making transient
connections to the proxy? I believe this would result in the desired
performance boost and behavior.
Now, the next obvious question...anyone know of any
The idea is not to have it across multiple backends and have it in
sync with the tables in the database. This is not the point.
My problem is that I have seen many performance critical applications
sending just a few complex queries to the server. The problem is: If you
have many queries where
This is exactly what we do in case of complex stuff. I know that it can
help to reduce the problem for the planner.
However: If you have explicit joins across 10 tables, the SQL statement
is not that readable any more and it is still slower than a prepared
execution plan.
I guess it is worth thinking
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> Hans-Jürgen Schönig <[EMAIL PROTECTED]> wrote:
>> I have a join across 10 tables + 2 subselects across 4 tables
>> on the machine I use for testing:
>> planner: 12 seconds
>> executor: 1 second
> One option you have is to explicitly give the join order
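A sketch of Bruno's suggestion, with an invented schema: in the planner of that era, writing explicit JOIN syntax pins the join order to the order written, so the planner stops searching all orderings. For a many-table query like the 10-table example above, that can cut planning time dramatically, at the risk of a worse plan if the hand-chosen order is poor.

```sql
-- The planner joins customers to orders first, then to invoices,
-- exactly as written, instead of considering every ordering.
SELECT c.name, sum(i.amount)
  FROM customers c
  JOIN orders o   ON o.customer_id = c.id
  JOIN invoices i ON i.order_id    = o.id
 GROUP BY c.name;
```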
On Wed, Oct 23, 2002 at 18:04:01 +0200,
Hans-Jürgen Schönig <[EMAIL PROTECTED]> wrote:
>
> An example:
> I have a join across 10 tables + 2 subselects across 4 tables
> on the machine I use for testing:
> planner: 12 seconds
> executor: 1 second
>
> The application will stay the same for
Hans-Jürgen Schönig <[EMAIL PROTECTED]> writes:
> I wonder if there is a way to store a parsed/rewritten/planned query in
> a table so that it can be loaded again.
The original version of the PREPARE patch used a shared-across-backends
cache for PREPAREd statements. We rejected
If you were using them that frequently, couldn't you just keep a
persistent connection? If it's not used that often, wouldn't the
overhead of preparing the query following a new connection become noise?
Greg
On Wed, 2002-10-23 at 09:24, Hans-Jürgen Schönig wrote:
> First of all PREPARE/EXECUTE
First of all, PREPARE/EXECUTE is a wonderful thing to speed things up
significantly.
I wonder if there is a way to store a parsed/rewritten/planned query in
a table so that it can be loaded again.
This might be useful when it comes to VERY complex queries (> 10 tables).
In many applications the si