On Nov 24, 2005, at 12:14 PM, Bealach-na Bo wrote:
The consensus seems to be that I need more indexes, and that I also need
to look into the NOT IN statement as a possible bottleneck. I've
introduced the indexes, which has led to a DRAMATIC change in response
time. Now I have to experiment with INNER …

----- Original Message -----
From: "Andreas Pflug" <[EMAIL PROTECTED]>

Create a table "sibling" with parent_id, sibling_id and appropriate FKs,
allowing the model to reflect the relation. At the same time, you can drop
"mother" and "father", because this relation is covered too.
Something …
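
[Aside, not from the mail: a minimal sketch of the suggested relation table. It assumes the person(id bigint PRIMARY KEY) table quoted later in the thread; the column types and the composite key are my assumptions.]

CREATE TABLE sibling (
    parent_id  bigint NOT NULL REFERENCES person(id),
    sibling_id bigint NOT NULL REFERENCES person(id),
    PRIMARY KEY (parent_id, sibling_id)
);
-- Each row records one parent/child pair, so the "mother" and "father"
-- columns on person become redundant, as the message says.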

On Sat, 3 Dec 2005, Luke Lonergan wrote:
Tom,
On 12/3/05 12:32 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
"Luke Lonergan" <[EMAIL PROTECTED]> writes:
Last I looked at the Postgres binary dump format, it was not portable or
efficient enough to suit the need. The efficiency problem with it was …

Rodrigo Madera wrote:
Imagine a table named Person with "first_name" and "age".
Now let's make it fancy and put "mother" and "father" fields that are
references to the same table (Person). And to get even fuzzier, let's
drop in some siblings:

CREATE TABLE person(
id bigint PRIMARY KEY,
first_name TEXT, …

On Fri, 2005-12-02 at 23:03 -0500, Luke Lonergan wrote:
> And how do we compose the binary data on the client? Do we trust that the
> client encoding conversion logic is identical to the backend's?
Well, my newbieness is undoubtedly showing already, so I might as well
continue with my line of du…

Imagine a table named Person with "first_name" and "age".
Now let's make it fancy and put "mother" and "father" fields that are
references to the same table (Person). And to get even fuzzier, let's
drop in some siblings:

CREATE TABLE person(
id bigint PRIMARY KEY,
first_name TEXT,
age IN…
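
[The snippet is cut off above. A hypothetical completion, based only on the fields the message describes; the age type and the FK details are assumptions:]

CREATE TABLE person(
    id         bigint PRIMARY KEY,
    first_name TEXT,
    age        INTEGER,
    mother     bigint REFERENCES person(id),
    father     bigint REFERENCES person(id)
);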

Tom,
On 12/3/05 12:32 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> "Luke Lonergan" <[EMAIL PROTECTED]> writes:
>> Last I looked at the Postgres binary dump format, it was not portable or
>> efficient enough to suit the need. The efficiency problem with it was that
>> there was descriptive information …

"Luke Lonergan" <[EMAIL PROTECTED]> writes:
> Last I looked at the Postgres binary dump format, it was not portable or
> efficient enough to suit the need. The efficiency problem with it was that
> there was descriptive information attached to each individual data item, as
> compared to the approach …
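
[Aside, not from the thread: the "descriptive information attached to each individual data item" is visible in the binary COPY format itself, where every tuple carries a 16-bit field count and every field a 32-bit length word before its bytes. Producing such a dump, in the modern option syntax (the 8.x-era spelling was COPY ... BINARY):]

COPY person TO '/tmp/person.pgcopy' WITH (FORMAT binary);
-- File layout per row: int16 field count, then for each field
-- an int32 byte length followed by the raw field bytes.
COPY person FROM '/tmp/person.pgcopy' WITH (FORMAT binary);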

Tom,
On 12/2/05 3:00 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
>
> Sure it does ... at least as long as you are willing to assume everybody
> uses IEEE floats, and if they don't you have semantic problems
> translating float datums anyhow.
>
> What we lack is documentation, more than functionality …
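
[Aside, not from the mail: the float representation in question can be inspected directly, since the binary send functions are callable from SQL:]

SELECT float8send(1.5);
-- \x3ff8000000000000: the IEEE 754 big-endian encoding of 1.5,
-- which is what binary COPY and binary-mode libpq transmit.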

On 12/3/2005 11:41 AM, Michael Riess wrote:
Alvaro Herrera wrote:
Michael Riess wrote:
Shared memory ... I currently use 1500 buffers for 50 connections, and
performance really suffered when I used 3000 buffers. The problem is
that it is a 1GB machine, and Apache + Tomcat need about 400MB.
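
[Aside, not from the mail: with PostgreSQL's default 8 kB block size these settings are small in absolute terms: 1500 buffers is about 12 MB of shared memory and 3000 buffers about 24 MB. Both figures can be checked from psql:]

SHOW block_size;      -- typically 8192 bytes per buffer page
SHOW shared_buffers;  -- on 8.x this is a buffer count: 3000 * 8192 bytes ≈ 24 MB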

On 12/3/05, Michael Riess <[EMAIL PROTECTED]> wrote:
> Alvaro Herrera wrote:
> > Michael Riess wrote:
> >
> >> Shared memory ... I currently use 1500 buffers for 50 connections, and
> >> performance really suffered when I used 3000 buffers. The problem is
> >> that it is a 1GB machine, and Apache …

Alvaro Herrera wrote:
Michael Riess wrote:
Shared memory ... I currently use 1500 buffers for 50 connections, and
performance really suffered when I used 3000 buffers. The problem is
that it is a 1GB machine, and Apache + Tomcat need about 400MB.
Well, I'd think that's where your problem is …

Michael Riess wrote:
> Shared memory ... I currently use 1500 buffers for 50 connections, and
> performance really suffered when I used 3000 buffers. The problem is
> that it is a 1GB machine, and Apache + Tomcat need about 400MB.
Well, I'd think that's where your problem is. Not only do you have …

Jan Wieck wrote:
On 12/2/2005 6:01 PM, Michael Riess wrote:
Hi,
thanks for your comments so far - I appreciate it. I'd like to narrow
down my problem a bit:
As I said in the other thread, I estimate that only 20% of the 15,000
tables are accessed regularly. So I don't think that vacuuming …

On 12/2/2005 6:01 PM, Michael Riess wrote:
Hi,
thanks for your comments so far - I appreciate it. I'd like to narrow
down my problem a bit:
As I said in the other thread, I estimate that only 20% of the 15,000
tables are accessed regularly. So I don't think that vacuuming or the
number of …
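
[Aside, not from the thread: the statistics collector gives one way to verify which of the 15,000 tables are actually touched. A sketch using the stats views available since the 8.x era:]

SELECT relname, seq_scan, idx_scan
FROM pg_stat_user_tables
ORDER BY seq_scan + COALESCE(idx_scan, 0) DESC
LIMIT 50;
-- Tables whose counters stay at zero since the last stats reset
-- are candidates for the rarely-accessed 80%.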

On Fri, 2005-12-02 at 15:18 -0500, Stephen Frost wrote:
> The other thought, of course, is that you could use PITR for your
> backups instead of pg_dump...
Yes, it is much faster that way.
Over on -hackers a few optimizations of COPY have been discussed; one of
those is to optimize COPY when it is …
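
[Aside, not part of the message: on 8.x-era PostgreSQL a PITR base backup looked roughly like this, assuming archive_command is already configured; the functions were later renamed pg_backup_start/pg_backup_stop in version 15:]

SELECT pg_start_backup('nightly');
-- ... copy the data directory with tar or rsync at the filesystem level ...
SELECT pg_stop_backup();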

On Fri, 2 Dec 2005, Luke Lonergan wrote:
And how do we compose the binary data on the client? Do we trust that
the client encoding conversion logic is identical to the backend's? If
there is a difference, what happens if the same file loaded from
different client machines has different results …

On Fri, 2 Dec 2005, Luke Lonergan wrote:
Michael,
On 12/2/05 1:46 PM, "Michael Stone" <[EMAIL PROTECTED]> wrote:
Not necessarily; you may be betting that it's more *efficient* to do the
parsing on a bunch of lightly loaded clients than on your server. Even if
you're using the same code this may …

On Fri, 2 Dec 2005, Luke Lonergan wrote:
Stephen,
On 12/2/05 1:19 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote:
I've used the binary mode stuff before; sure, Postgres may have to
convert some things, but I have a hard time believing it'd be more
expensive to do a network_encoding -> host_encoding …
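
[Aside, not from the thread: the conversion in question is the client/server encoding translation the backend applies to text-format data. A quick illustration:]

SHOW server_encoding;            -- e.g. UTF8, fixed per database
SET client_encoding = 'LATIN1';  -- text-mode COPY and query results are
                                 -- converted between the two encodings
-- Where (and whether) this translation happens for binary-mode data is
-- exactly what is being debated in these messages.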

On Fri, 2 Dec 2005, Michael Stone wrote:
On Fri, Dec 02, 2005 at 01:24:31PM -0800, Luke Lonergan wrote:
From a performance standpoint, no argument, although you're betting that you
can do parsing / conversion faster than the COPY core in the backend can.
Not necessarily; you may be betting that …

On Fri, 2 Dec 2005, Luke Lonergan wrote:
Stephen,
On 12/2/05 12:18 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote:
Just a thought, but couldn't psql be made to use the binary mode of
libpq and do at least some of the conversion on the client side? Or
does binary mode not work with copy (that …