On Mon, Dec 8, 2008 at 09:17, Tomasz Ostrowski <[EMAIL PROTECTED]> wrote:
> So:
>
> sql_md5 = md5(sql);
> try {
>     PREPARE sql_md5 AS sql;
> } catch (SQLException e) {
>     if (! e.getSQLState().equals("42P05")) {
>         throw e;
>     }
> }
> EXECUTE sql_md5;
Yeah, well, li
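Tomasz's pseudocode can be sketched as runnable Python. Since no live connection is available here, the server's per-session statement cache is simulated by a small stand-in class (an assumption for illustration); the real point is the control flow: swallow only SQLSTATE 42P05 ("prepared statement already exists") and re-raise anything else.

```python
import hashlib

class DuplicatePreparedStatement(Exception):
    """Stands in for SQLSTATE 42P05 ('prepared statement ... already exists')."""
    sqlstate = "42P05"

class FakeSession:
    """Toy stand-in for a PostgreSQL session's prepared-statement cache."""
    def __init__(self):
        self.prepared = {}

    def prepare(self, name, sql):
        if name in self.prepared:
            raise DuplicatePreparedStatement(name)
        self.prepared[name] = sql

    def execute(self, name, args):
        # A real EXECUTE runs the server-side plan; here we just confirm
        # the statement exists and echo it back for demonstration.
        return (self.prepared[name], args)

def execute_cached(session, sql, args):
    """PREPARE-once / EXECUTE-many: swallow only the 'already exists' error."""
    name = "stmt_" + hashlib.md5(sql.encode("utf-8")).hexdigest()
    try:
        session.prepare(name, sql)
    except DuplicatePreparedStatement as e:
        if e.sqlstate != "42P05":
            raise
    return session.execute(name, args)

session = FakeSession()
sql = "SELECT * FROM images WHERE owner = $1"
execute_cached(session, sql, (1,))   # prepares, then executes
execute_cached(session, sql, (2,))   # hits the 42P05 path, executes anyway
print(len(session.prepared))         # 1 -- one cached plan per distinct query
```

With a real driver, the `except` clause would inspect the exception's SQLSTATE attribute instead of this stand-in class.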
On Thu, Nov 20, 2008 at 19:19, Daniel Verite <[EMAIL PROTECTED]> wrote:
> By the way, why do the prepared statements require to be named at all?
> With other DBMS such as Oracle or MySQL, one can prepare statements without
> providing any name for them: the prepare() step returns a "statement handl
On Thu, Nov 20, 2008 at 20:40, Craig Ringer <[EMAIL PROTECTED]> wrote:
> I'm not even sure how you'd achieve that (exactly 3 randomly selected
> images per user) with a single query. Then again, it's stupidly late
> here, so my brain may not be working. Any chance you can post a query
> that sho
On Thu, Nov 20, 2008 at 16:07, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
> I guess if connections are persistent, you could clear them before each
> usage with DISCARD (8.3 only)
Again, I'd be losing the advantage of the already prepared statements.
Basically, what it comes down to is I want to be
On Thu, Nov 20, 2008 at 15:45, Sam Mason <[EMAIL PROTECTED]> wrote:
> On Wed, Nov 19, 2008 at 09:42:33PM +0200, WireSpot wrote:
>> I also imagined some workarounds in the code (PHP), such as defining a
>> global/static hash table and registering statement names with it. But
On Thu, Nov 20, 2008 at 15:05, Craig Ringer <[EMAIL PROTECTED]> wrote:
> That's probably going to be the case. PostgreSQL won't need to read the
> redundant info in from disk each time, and relative to the image data it's
> going to be pretty small. By doing it all in one join you're avoiding the
> Merlin Moncure escribió:
>> pg_prepared_statements (on recent versions of postgresql)
Thank you, that's one of the things I wanted to know.
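For reference, that view lists the statements prepared in the current session; per the documentation it was added in 8.2, which is presumably what "recent versions" means here:

```sql
SELECT name, statement FROM pg_prepared_statements;
```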
On Thu, Nov 20, 2008 at 15:30, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
> Merlin Moncure escribió:
>> also, watch out for race conditions.
>
> What race c
On Thu, Nov 20, 2008 at 10:56, Albe Laurenz <[EMAIL PROTECTED]> wrote:
> Do you still need the old prepared statement?
>
> If not, you can simple DEALLOCATE it and then try the PREPARE again.
Yes, I'd like to keep the old statements; that's one of the perks --
if a query will be repeated it will
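In SQL terms, Albe's suggestion looks like this (the statement name and query are placeholders, not from the thread):

```sql
DEALLOCATE stmt_abc123;          -- drop the old plan
PREPARE stmt_abc123 (int) AS     -- re-prepare under the same name
    SELECT * FROM images WHERE id = $1;
```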
I have a schema with galleries, people and images. Each person has a
bunch of private images. People can join any number of galleries and
can publish any of their images to the galleries they join (or not).
I'd like to retrieve a data set where for a given gallery id I get all
the people AND all t
I'm trying to use prepared statements in an application and I'm
running into this error: "Query failed: prepared statement already
exists".
The reason is obvious. What I want to know is the best way to avoid
getting this error. The client application sets statement names as MD5
of the actual query
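The naming scheme described above can be sketched as follows; the `stmt_` prefix is my assumption, added because a raw md5 hex string can start with a digit, and such an identifier would need quoting in PREPARE/EXECUTE:

```python
import hashlib

def statement_name(sql: str) -> str:
    # Same query text always yields the same name, so the client can
    # re-derive the name without tracking state.
    return "stmt_" + hashlib.md5(sql.encode("utf-8")).hexdigest()

name = statement_name("SELECT * FROM galleries WHERE id = $1")
print(name.startswith("stmt_"))  # True
print(len(name))                 # 37 (5-char prefix + 32 hex digits)
```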
On 10/25/05, Csaba Nagy <[EMAIL PROTECTED]> wrote:
> I guess your master table has a considerable tree of child records for
> each deleted record, that's why the deletion takes so long. We have this
> situation too on top level object deletion. Our solution is to make it
> asynchronous, i.e. the us
On 10/25/05, WireSpot <[EMAIL PROTECTED]> wrote:
> Thanks for the tip, guys. I'll go punch in some indexes and I'll be
> back to report how much of a difference it made.
Adding indexes made the dropping of entries for the "master" table
roughly 6 (six) times faster
On 10/25/05, Martijn van Oosterhout wrote:
> BTW, have you considered TRUNCATE? (although it may not work with
> foreign keys).
It doesn't :)
"TRUNCATE cannot be used if there are foreign-key references to the
table from other tables. Checking validity in such cases would require
table scans, an
On 10/25/05, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> Are there indexes on the foreign key columns? That is, given
> the following example,
>
> CREATE TABLE foo (id integer PRIMARY KEY);
> CREATE TABLE bar (fooid integer NOT NULL REFERENCES foo ON DELETE CASCADE);
>
> do you have an index on bar.
I have an application that makes heavy use of foreign keys all over
the tables. This is very nice since the data is very consistent. There
is also this "central" table which holds "sites" in it. A site pretty
much is the crux of it all. Deleting a site will very precisely
eliminate all data regarding
On 10/7/05, David Fetter <[EMAIL PROTECTED]> wrote:
> Here's my thoughts on a summary:
>
> [-t [table | glob]]...              # 0 or more -t options
> [-T [table | glob]]...              # 0 or more -T options
> [--include-tables-from-file f]
> [--exclude-tables-from-file f]
>
> where globs get expand
On 10/6/05, A. Kretschmer <[EMAIL PROTECTED]> wrote:
> On 06.10.2005 at 13:59:44 +0300, WireSpot wrote the following:
> > Is it possible to dump an entire database but to skip one or two tables? Or,
> > conversely, to restore an entire dump except for one or two tables?
> >
Is it possible to dump an entire database but to skip one or two
tables? Or, conversely, to restore an entire dump except for one or two
tables? (Although I'd prefer the first version.)
The only related option for both pg_dump and pg_restore is --table,
which only takes 1 (one) table name. If only
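Two sketches, hedged by version (table and file names are illustrative): the repeated -t/-T switches with shell-style patterns that David Fetter's summary describes landed in pg_dump in 8.2, while selective restore by editing the TOC of an archive-format dump already works in 8.0:

```
# 8.2+: dump everything except a couple of tables
pg_dump -T log -T 'tmp_*' mydb > partial.sql

# Any version, archive-format dump: list the TOC, delete unwanted
# lines from toc.list, then restore only what remains
pg_restore -l mydb.dump > toc.list
pg_restore -L toc.list -d mydb mydb.dump
```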
I have the exact same database on two different systems, both using
8.0.3. I've installed the database from the same dump. Here's the
query I'm trying on each:
EXPLAIN ANALYZE SELECT
answers.*,i18n.field1,i18n.field2,i18n.field3,i18n.field4,i18n.field5
FROM answers LEFT OUTER JOIN i18n on answers.
I have two practically identical Debian-testing systems installed on
two harddrives on the same machine. I've compiled Postgres 8.0.3 with
exactly the same options on both. Both HDD use the same kernel, have
DMA enabled and so on. I have the same database and web applications
installed in both syst
On Wed, 02 Feb 2005 14:48:41 +, Richard Huxton wrote:
> Think about it, you'd need an index that ordered use_name so that
> (john_doe, Ajohn_doe, Zjohn_doe1234) were all next to each other.
>
> If you anchor the search (LIKE 'john_doe%') and are using the C locale
> then an index can be u
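For completeness: in a non-C locale, the anchored-LIKE optimization Richard describes can be recovered with an operator-class index (available since 7.4 per the docs; table and column names here are illustrative, not from the thread):

```sql
CREATE INDEX users_name_pattern_idx
    ON users (user_name varchar_pattern_ops);
-- Lets the planner use the index for: WHERE user_name LIKE 'john_doe%'
```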
I have a table with about 200,000 entries. Among other things, it
contains an integer field I use as a timestamp, and a variable
character field I use for user names. Certain queries are taking too
long IMO. I'm trying this on both 7.4 and 8.0.
If I do a direct comparison (using =) on the user nam