Hi Ulrich, did you try
SELECT p.speed FROM processor p
INNER JOIN users_processors up ON p.id=up.processorid
AND up.userid=1
?
Or is your question only about IN and EXISTS?
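For illustration, the JOIN above and the IN/EXISTS forms being asked about typically return the same rows. A minimal sketch using Python's sqlite3 module; the table and column names follow the query above, but the sample data is invented:

```python
import sqlite3

# Toy schema mirroring the query in this thread; the rows are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE processor (id INTEGER PRIMARY KEY, speed TEXT);
    CREATE TABLE users_processors (userid INTEGER, processorid INTEGER);
    INSERT INTO processor VALUES (1, '2.4 GHz'), (2, '3.0 GHz');
    INSERT INTO users_processors VALUES (1, 1), (2, 2);
""")

join_q = """SELECT p.speed FROM processor p
            INNER JOIN users_processors up
            ON p.id = up.processorid AND up.userid = 1"""

in_q = """SELECT p.speed FROM processor p
          WHERE p.id IN (SELECT processorid FROM users_processors
                         WHERE userid = 1)"""

exists_q = """SELECT p.speed FROM processor p
              WHERE EXISTS (SELECT 1 FROM users_processors up
                            WHERE up.processorid = p.id
                              AND up.userid = 1)"""

# All three formulations yield the same result set here.
results = [conn.execute(q).fetchall() for q in (join_q, in_q, exists_q)]
print(results)  # three identical lists of rows
```

Whether the planner treats them identically is a separate question, of course; this only shows they are interchangeable for this kind of semi-join.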
regards,
Sergio Gabriel Rodriguez
Corrientes - Argentina
http://www.3trex.com.ar
On Mon, Jun 30
On Sat, Jun 28
Our production database, postgres 8.4, has an approximate size of 200 GB;
most of the data are large objects (174 GB). Until a few months ago we used
pg_dump to perform backups, and the whole process took about 3-4 hours.
Some time ago the process became interminable, taking one or two
days to pr
On Thu, Sep 20, 2012 at 11:35 AM, Tom Lane wrote:
> You wouldn't happen to be
> trying to use a 9.0 or later pg_dump would you? Exactly what 8.4.x
> release is this, anyway?
Tom, thanks for replying. Yes, we tried it with postgres 9.1 and
9.2 and the behavior is exactly the same.
32 GB RAM
OS (SLES 10) + logs --> RAID 6
data --> RAID 6
thanks!
On Thu, Oct 11, 2012 at 7:16 PM, Tom Lane wrote:
>
> It's pretty hard to say without knowing a lot more info about your system
> than you provided. One thing that would shed some light is if you spent
> some time finding out where the time is going --- is the system
> constantly I/O busy, or is
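Tom's question, whether the box is constantly I/O busy during the dump, is usually answered with tools like iostat or vmstat. As a rough stdlib-only sketch of the same measurement (Linux-specific, device auto-detected from /proc/diskstats; the helper names are my own, not from any tool in this thread):

```python
import time

def _read_io_ms(dev=None):
    """Return (device, ms spent doing I/O) from /proc/diskstats.

    If dev is None, the first device listed is used (Linux-specific).
    """
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # field 13 (index 12) is total milliseconds spent doing I/O
            if dev is None or fields[2] == dev:
                return fields[2], int(fields[12])
    raise ValueError("no matching device in /proc/diskstats")

def io_busy_percent(dev=None, interval=1.0):
    """Rough equivalent of iostat's %util over one sampling interval."""
    dev, start = _read_io_ms(dev)
    time.sleep(interval)
    _, end = _read_io_ms(dev)
    return 100.0 * (end - start) / (interval * 1000.0)

print(f"disk busy: {io_busy_percent(interval=0.5):.1f}%")
```

Sampling this while pg_dump runs would show whether the dump is disk-bound or whether the time is going somewhere else (CPU, server round-trips).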
On Fri, Oct 12, 2012 at 10:31 PM, Tom Lane wrote:
> So I think the original assumption that we didn't need to optimize
> pg_dump's object management infrastructure for blobs still holds good.
> If there's anything that is worth fixing here, it's the number of server
> roundtrips being used ...