Re: [GENERAL] [postgis-users] Query with LIMIT but as random result set?

2013-01-11 Thread Bosco Rama
On 01/11/13 09:31, Gavin Flower wrote:
> -- theta in radians
> -- for radius = 100
> 
> INSERT INTO ranpoint
>  (id, theta, r)
> VALUES
>  (generate_series(1, 10), pi() * random(), 100 * random());

Shouldn't the value for theta be:
 2 * pi() * random()

Bosco.




Re: [GENERAL] [postgis-users] Query with LIMIT but as random result set?

2013-01-11 Thread Gavin Flower

On 12/01/13 06:45, Bosco Rama wrote:

On 01/11/13 09:31, Gavin Flower wrote:

-- theta in radians
-- for radius = 100

INSERT INTO ranpoint
  (id, theta, r)
VALUES
  (generate_series(1, 10), pi() * random(), 100 * random());

Shouldn't the value for theta be:
  2 * pi() * random()

Bosco.



Very definitely! :-)

Me bad, as the saying goes...
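For the record, the corrected statement (same ranpoint table as in the
original post) would be:

INSERT INTO ranpoint
  (id, theta, r)
VALUES
  (generate_series(1, 10), 2 * pi() * random(), 100 * random());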


Cheers,
Gavin


[GENERAL] psql copy from through bash

2013-01-11 Thread Kirk Wythers
Can anyone see what I'm missing? I am trying to run a psql "copy from" command
through a bash script to load a bunch of CSV files into the same table. I'm
getting an error about the file "infile" not existing.

#!/bin/sh

for infile in /path_to_files/*.csv
do
   cat infile | psql dbname -c "\copy table_name FROM stdin with delimiter as 
',' NULL AS 'NA' CSV HEADER"
done


Thanks in advance



Re: [GENERAL] psql copy from through bash

2013-01-11 Thread Szymon Guz
On 11 January 2013 19:13, Kirk Wythers  wrote:

> Can anyone see what I'm missing? I am trying to run a psql "copy from"
> command through a bash script to load a bunch of CSV files into the same
> table. I'm getting an error about the file "infile" not existing.
>
> #!/bin/sh
>
> for infile in /path_to_files/*.csv
> do
>cat infile | psql dbname -c "\copy table_name FROM stdin with delimiter
> as ',' NULL AS 'NA' CSV HEADER"
> done
>
>
> Thanks in advance
>
>


Hi Kirk,
maybe try this:

cat $infile |

- Szymon


Re: [GENERAL] psql copy from through bash

2013-01-11 Thread Szymon Guz
On 11 January 2013 19:32, Kirk Wythers  wrote:

>
> On Jan 11, 2013, at 12:18 PM, Szymon Guz  wrote:
>
>
>
>
> On 11 January 2013 19:13, Kirk Wythers  wrote:
>
>> Can anyone see what I'm missing? I am trying to run a psql "copy from"
>> command through a bash script to load a bunch of CSV files into the same
>> table. I'm getting an error about the file "infile" not existing.
>>
>> #!/bin/sh
>>
>> for infile in /path_to_files/*.csv
>> do
>>cat infile | psql dbname -c "\copy table_name FROM stdin with
>> delimiter as ',' NULL AS 'NA' CSV HEADER"
>> done
>>
>>
>> Thanks in advance
>>
>>
>
>
> Hi Kirk,
> maybe try this:
>
> cat $infile |
>
>
>
> Oh my goodness! Thank you.
>
> One more quickie. It seems that I am going to be asked for my password
> every time psql loops through the copy statement.
>
> What is considered best practice for handling authentication? I am
> connecting locally, as myself, and I'm being asked for my user
> password. I added the -w (no-password) option to the psql statement, but now
> assume I need to add a .pgpass file or something.
>
> Suggestions?
>
>
Add the password to ~/.pgpass
http://www.postgresql.org/docs/9.1/static/libpq-pgpass.html
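
A .pgpass entry is one line per connection, of the form

hostname:port:database:username:password

where any field may be the wildcard *. The file must also be private to
you (chmod 0600 ~/.pgpass) or libpq will ignore it.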

Szymon


Re: [GENERAL] psql copy from through bash

2013-01-11 Thread Pavel Stehule
Hello

>>
>> One more quickie. It seems that I am going to be asked for my password
>> every time psql loops through the copy statement.
>>
>> What is considered best practice for handling authentication? I am
>> connecting locally, as myself, and I'm being asked for my user
>> password. I added the -w (no-password) option to the psql statement, but now
>> assume I need to add a .pgpass file or something.
>>
>> Suggestions?
>>
>
> Add the password to ~/.pgpass
> http://www.postgresql.org/docs/9.1/static/libpq-pgpass.html

or

PGPASSWORD=mypassword psql database -c "copy ..."

Regards

Pavel
>
> Szymon




Re: [GENERAL] psql copy from through bash

2013-01-11 Thread Jerry Sievers
Kirk Wythers  writes:

> Can anyone see what I'm missing? I am trying to run a psql "copy from"
> command through a bash script to load a bunch of CSV files into the same
> table. I'm getting an error about the file "infile" not existing.
>
> #!/bin/sh
>
> for infile in /path_to_files/*.csv
> do
>cat infile | psql dbname -c "\copy table_name FROM stdin with delimiter as 
> ',' NULL AS 'NA' CSV HEADER"
> done

Well, I don't know what else could be wrong but suggest you get rid of
the backslash as in \copy and just say COPY which is the SQL command.
\copy is a psql macro and I'm not sure it's appropriate here.

And you win the "useless use of cat award" here too.

psql ... < $infile

Just beware that you might one day type > by mistake instead of < and
clobber your data.  Er, some shells have a no-clobber option though.

HTH
> Thanks in advance
>
>

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800




[GENERAL] changes "during checkpointing"

2013-01-11 Thread Sahagian, David
With regard to 9.1.x, I would like to learn some details of the nature of
"checkpointing".

=== Question 1 ===

- page 123 is dirty

- "checkpointing" starts

- page 123 gets written to disk, as part of this checkpoint

- page 123 gets modified again
  ? Does it get written to disk again, as part of this checkpoint?

- "checkpointing" finishes


=== Question 2 ===

- page 123 is dirty

- "checkpointing" starts

- page 123 gets modified again

- page 123 gets written to disk, as part of this checkpoint
  ? So does the most recent mod get written to disk, even if that mod is not 
committed yet ?

- "checkpointing" finishes


=== Question 3 ===

When does the full-page writing to WAL happen?
Is it after the start of the "checkpointing" or after the finish of the
"checkpointing"?

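For anyone experimenting with these scenarios, checkpoint activity can be
observed from the outside via the standard pg_stat_bgwriter view (a sketch;
these columns exist in 9.1):

SELECT checkpoints_timed,   -- checkpoints triggered by checkpoint_timeout
       checkpoints_req,     -- checkpoints requested (e.g. by WAL volume)
       buffers_checkpoint   -- buffers written out during checkpoints
FROM pg_stat_bgwriter;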

Thanks,
-dvs-



Re: [GENERAL] [postgis-users] Query with LIMIT but as random result set?

2013-01-11 Thread Gavan Schneider

On Saturday, January 12, 2013 at 04:49, Gavin Flower wrote:


On 12/01/13 06:45, Bosco Rama wrote:

Shouldn't the value for theta be:
2 * pi() * random()

Bosco.



Very definitely! :-)


One could also ask if the value for theta shouldn't be:
tau() * random()

 :-)

Regards
Gavan





Re: [GENERAL] [postgis-users] Query with LIMIT but as random result set?

2013-01-11 Thread Gavin Flower

On 12/01/13 10:44, Gavan Schneider wrote:

On Saturday, January 12, 2013 at 04:49, Gavin Flower wrote:


On 12/01/13 06:45, Bosco Rama wrote:

Shouldn't the value for theta be:
2 * pi() * random()

Bosco.



Very definitely! :-)


One could also ask if the value for theta shouldn't be:
tau() * random()

 :-)

Regards
Gavan




Well Gavan,

I must bow down before your greater wisdom, as I am forced to agree with 
you!


Especially as your name sorts before mine, yet our names are almost 
exactly the same.  :-)



Cheers,
Gavin

P.S. Is tau() a standard part of pg core? If not, when will it be?
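
tau() is not in core; for the impatient, a stand-in is a one-liner (a sketch):

CREATE FUNCTION tau() RETURNS double precision
    LANGUAGE sql IMMUTABLE AS 'SELECT 2 * pi()';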



[GENERAL] Getting Mysql data into Postgres: least painful methods?

2013-01-11 Thread Ken Tanzer
I'm wondering if anyone can point me towards a good method for moving mysql
data into Postgres?  I've done some web searching, and found documentation
from various years, but it's not clear what's current and what works best.
Much of what I found seems to be flame war material (why Postgres is
better), or is both old and seemingly involved and complex.

Here's the fuller description of what I'm trying to do.  I've got a dataset
(a UMLS Metathesaurus subset) that I need to get into a Postgres
database.  It's all reference data, and so will be read-only.  There are no
functions or logic involved. I anticipate having to update it at least
quarterly, so I'd like to get to a well-grooved import process.

The data as distributed can be had in Oracle or Mysql formats.  (I already
gave them my two cents to include Postgres.)  I did see some information
about modifying the Mysql distribution files to make them
Postgres-compatible, but I thought (perhaps foolishly) it would be easier
to bring them into Mysql, and from there export them to Postgres.

A recurring idea seemed to be to use:

mysqldump -v --compatible=postgresql umls_test > dumpfile.sql

followed by

sed -i "s/\\\'/\'\'/g" dumpfile.sql


but that didn't bring me much success.  I figure this has to be a fairly
common need, and hopefully by 2013 there's an easy solution.  Thanks in
advance!

Ken

-- 
AGENCY Software
A data system that puts you in control
http://agency-software.org/
ken.tan...@agency-software.org
(253) 245-3801


Re: [GENERAL] Getting Mysql data into Postgres: least painful methods?

2013-01-11 Thread Adrian Klaver
On 01/11/2013 03:54 PM, Ken Tanzer wrote:
>
> 
> 
> but that didn't bring me much success.  I figure this has to be a fairly 
> common need, and hopefully by 2013 there's an easy solution.  Thanks in 
> advance!

Have you looked at Foreign Data Wrappers (FDW)?

http://www.postgresql.org/docs/9.1/static/sql-createforeigndatawrapper.html

If you use Python there is Multicorn:
http://multicorn.org/

There is also mysql_fdw:
http://wiki.postgresql.org/wiki/Foreign_data_wrappers#mysql_fdw
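
The general shape with a wrapper is sketched below. The server and table
OPTIONS names here are assumptions (they differ between wrappers and
versions), and the foreign table and its columns are hypothetical, so
check the mysql_fdw docs:

CREATE EXTENSION mysql_fdw;   -- assumes the wrapper is installed

CREATE SERVER mysql_server
  FOREIGN DATA WRAPPER mysql_fdw
  OPTIONS (address '127.0.0.1', port '3306');

CREATE USER MAPPING FOR CURRENT_USER SERVER mysql_server
  OPTIONS (username 'umls', password 'secret');

CREATE FOREIGN TABLE umls_concepts (cui text, str text)
  SERVER mysql_server
  OPTIONS (database 'umls', table_name 'MRCONSO');

SELECT * FROM umls_concepts LIMIT 10;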

> 
> Ken
> 
> -- 


-- 
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Getting Mysql data into Postgres: least painful methods?

2013-01-11 Thread wd
You can search from google,
https://www.google.com/search?q=mysql2pg&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a


On Sat, Jan 12, 2013 at 7:54 AM, Ken Tanzer  wrote:

> I'm wondering if anyone can point me towards a good method for moving
> mysql data into Postgres?  I've done some web searching, and found
> documentation from various years, but it's not clear what's current and
> what works best.  Much of what I found seems to be flame war material (why
> Postgres is better), or is both old and seemingly involved and complex.
>
> Here's the fuller description of what I'm trying to do.  I've got a
> dataset (a UMLS Metathesaurus subset) that I need to get into a
> Postgres database.  It's all reference data, and so will be read-only.
> There are no functions or logic involved. I anticipate having to update it at
> least quarterly, so I'd like to get to a well-grooved import process.
>
> The data as distributed can be had in Oracle or Mysql formats.  (I already
> gave them my two cents to include Postgres.)  I did see some information
> about modifying the Mysql distribution files to make them
> Postgres-compatible, but I thought (perhaps foolishly) it would be easier
> to bring them into Mysql, and from there export them to Postgres.
>
> A recurring idea seemed to be to use:
>
> mysqldump -v --compatible=postgresql umls_test > dumpfile.sql
>
> followed by
>
> sed -i "s/\\\'/\'\'/g" dumpfile.sql
>
>
> but that didn't bring me much success.  I figure this has to be a fairly
> common need, and hopefully by 2013 there's an easy solution.  Thanks in
> advance!
>
> Ken
>
> --
> AGENCY Software
> A data system that puts you in control
> http://agency-software.org/
> ken.tan...@agency-software.org
> (253) 245-3801
>
>


Re: [GENERAL] Getting Mysql data into Postgres: least painful methods?

2013-01-11 Thread Rich Shepard

On Fri, 11 Jan 2013, Ken Tanzer wrote:


I'm wondering if anyone can point me towards a good method for moving
mysql data into Postgres?


  I had to do this last year with the ITIS (Integrated Taxonomic Information
System) maintained by the US Geological Survey.

  Some MySQL keywords were immediately recognized and I used emacs's
global search-and-replace to change them to postgres words. Then I tried
reading in individual tables to a newly created database and redirected
errors to a disk file. I fixed the errors postgres identified, dropped the
table, and repeated until there were no errors. Took a bit of time but
worked just fine.

  Then I sent the USGS database maintainer a dump of the postgres database
because he wanted to migrate from mysql to postgres there. I think of it as
a public service. :-)

Rich





Re: [GENERAL] psql copy from through bash

2013-01-11 Thread Kirk Wythers

On Jan 11, 2013, at 12:18 PM, Szymon Guz  wrote:

> 
> 
> 
> On 11 January 2013 19:13, Kirk Wythers  wrote:
> Can anyone see what I'm missing? I am trying to run a psql "copy from"
> command through a bash script to load a bunch of CSV files into the same
> table. I'm getting an error about the file "infile" not existing.
> 
> #!/bin/sh
> 
> for infile in /path_to_files/*.csv
> do
>cat infile | psql dbname -c "\copy table_name FROM stdin with delimiter as 
> ',' NULL AS 'NA' CSV HEADER"
> done
> 
> 
> Thanks in advance
> 
> 
> 
> Hi Kirk,
> maybe try this:
> 
> cat $infile |
> 
> 

Oh my goodness! Thank you.

One more quickie. It seems that I am going to be asked for my password every
time psql loops through the copy statement. 

What is considered best practice for handling authentication? I am connecting
locally, as myself, and I'm being asked for my user password. I
added the -w (no-password) option to the psql statement, but now assume I need
to add a .pgpass file or something.

Suggestions?



[GENERAL] Libpq and multithreading

2013-01-11 Thread Asia
Hello,

I am trying to use libpq in two threads; the issue is that I am getting an
access violation after several successful connections.
Each thread connects to a different db and disconnects immediately after
making a connection.

So my question is: is this supported by libpq? Is it possible to use it in
more than one thread and make connections at the same time?

Kind regards,
Joanna




Re: [GENERAL] Database connections seemingly hanging

2013-01-11 Thread Fredrik HuitfeldtMadsen
Hi All, 

@ Tom
Thank you for your response. While working on your suggestions, we seem to 
have found the cause of our problems.

@ Yugo
Thank you for your response. We are running pgpool in replication mode 
with load balancing enabled. If you have further questions to aid in 
debugging the situation, please let me know. 


It seems that the root cause was that pgpool acquired the locks in the 
wrong order. If the resource is called A it seems that pgpool allows child 
X to acquire A on node1 and at the same time, child Y acquires A on node2. 
This leaves X wanting A on node2 and Y wanting A on node1, so both
children hang indefinitely. It also leaves both Postgres servers
blissfully unaware of the deadlock, which thus escapes Postgres's
deadlock detection.

We have included a summary of the system state here:
http://pastebin.com/9f6gjxLA

We have used netstat to trace the connections between the pgpool children
and the Postgres servers. pgpool child 7606 has acquired a lock on the .204
server but waits for the same lock on the .202 server. At the same time
pgpool child 7681 has the lock on the .202 server and waits for it on the
.204 server. pgpool is running on the .204 server.

If anyone is interested, we have included the full outputs in the 
following pastebins:

pg_locks on 10.216.73.202: http://pastebin.com/uRQh5Env
pg_locks on 10.216.73.204: http://pastebin.com/BXpirVQ2
netstat -p on 10.216.73.202: http://pastebin.com/b9kV7Wz4
netstat -p on 10.216.73.204: http://pastebin.com/tPz8gwRG

Kind regards,
Fredrik & friends





Tom Lane wrote on 2013/01/10 05:30
To: fredrik.huitfeldtmad...@schneider-electric.com
cc: pgsql-general@postgresql.org, pgpool-gene...@pgpool.net
Subject: Re: [GENERAL] Database connections seemingly hanging

fredrik.huitfeldtmad...@schneider-electric.com writes:
> We have a setup where 2 JBoss (5.1) servers communicate with 1 instance of
> PgPool (3.04), which again communicates with 2 Postgresql (8.4) servers.

> The JBoss servers host some Java code for us and as part of that they run
> some quartz jobs.

> These jobs are triggered right after startup and as part of that we get
> what seems to get stuck. At least we can see in the database, when
> inspecting pg_locks, that there exists a virtual transaction that has all
> desired locks granted but seems to be stuck. When we inspect
> pg_stat_activity, it seems that the process is still waiting for the query
> (SELECT ... FOR UPDATE) to finish.

> The locking transaction is described here: http://pastebin.com/3pEn6vPe

What that shows is several sessions running SELECT FOR UPDATE, but none
of them seem to be waiting.  What else is going on?  In particular, are
there any idle-in-transaction sessions?  Also, would any of these
SELECTs return enough rows that the sessions might be blocked trying to
send data back to their clients?  That wouldn't show as waiting = true,
though I think you could detect it by strace'ing the backends to see if
they are stopped in a send() kernel call.
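
A quick way to spot both waiting and idle-in-transaction sessions (a sketch;
on 8.4 the pid and query columns are named procpid and current_query):

SELECT procpid, usename, waiting, current_query
FROM pg_stat_activity
WHERE waiting
   OR current_query = '<IDLE> in transaction';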

> We suspect that a connection to the database acquires its locks but
> somehow does not return to the application. If this is true, it would
> either be a postgresql or a pgpool problem. We would appreciate any help
> in further debugging or resolving the situation.

It seems like a good guess would be that you have a deadlock situation
that cannot be detected by the database because part of the blockage is
on the client side --- that is, client thread A is waiting on its
database query, that query is waiting on some lock held by client thread
B's database session, and thread B is somehow waiting for A on the
client side.  It's not too hard to get into this type of situation when
B is sitting on an open idle-in-transaction session: B isn't waiting for
the database to do anything, but is doing something itself, and so it's
not obvious that there's any risk.  Thus my question about what idle
sessions there might be.  This does usually lead to a visibly waiting
database session for client A, though, so it's probably too simple as an
explanation for your issue.  We have seen some harder-to-debug cases
where the database sessions weren't visibly "waiting" because they were
blocked on client I/O, so maybe you've got something like that.

Another line of thought to pursue is possible misuse of pgpool.  If
pgpool doesn't realize you're inside a transaction and swaps the
connection to some other client thread, all kinds of confusion ensues.

Also, I hope you're running a reasonably recent 8.4.x minor release.
A quick look through the commit logs didn't show anything about deadlock
fixes in the 8.4 branch, but I might have missed something that was
fixed a long time ago.

 regards, tom lane


Re: [GENERAL] Libpq and multithreading

2013-01-11 Thread Bruce Momjian
On Fri, Jan 11, 2013 at 04:27:42PM +0100, Asia wrote:
> Hello,
> 
> I am trying to use libpq in two threads; the issue is that I am getting an
> access violation after several successful connections.
> Each thread connects to a different db and disconnects immediately after
> making a connection.
> 
> So my question is: is this supported by libpq? Is it possible to use it in
> more than one thread and make connections at the same time?

Each connection can be created and accessed from only one thread.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




[GENERAL] reducing number of ANDs speeds up query

2013-01-11 Thread T. E. Lawrence
Hello,

I have a pretty standard query with two tables:

SELECT a.id FROM table_a a, table_b b WHERE ... AND ... AND b.value=...;

With the last "AND b.value=..." the query is extremely slow (did not wait for 
it to end, but more than a minute), because the value column is not indexed 
(contains items longer than 8K).

However the previous conditions "WHERE ... AND ... AND" should have already 
reduced the candidate rows to just a few (table_b contains over 50m rows). And 
indeed, removing the last "AND b.value=..." speeds the query to just a 
millisecond.

Is there a way to instruct PostgreSQL to do first the initial "WHERE ... AND 
... AND" and then the last "AND b.value=..." on the (very small) result?

Thank you and kind regards,
T.




Re: [GENERAL] psql copy from through bash

2013-01-11 Thread Rob Sargent

On 01/11/2013 11:32 AM, Kirk Wythers wrote:


On Jan 11, 2013, at 12:18 PM, Szymon Guz wrote:






On 11 January 2013 19:13, Kirk Wythers wrote:


Can anyone see what I'm missing? I am trying to run a psql "copy
from" command through a bash script to load a bunch of CSV files
into the same table. I'm getting an error about the file "infile"
not existing.

#!/bin/sh

for infile in /path_to_files/*.csv
do
   cat infile | psql dbname -c "\copy table_name FROM stdin with
delimiter as ',' NULL AS 'NA' CSV HEADER"
done


Thanks in advance




Hi Kirk,
maybe try this:

cat $infile |




Oh my goodness! Thank you.

One more quickie. It seems that I am going to be asked for my
password every time psql loops through the copy statement.


What is considered best practice for handling authentication? I am
connecting locally, as myself, and I'm being asked for my
user password. I added the -w (no-password) option to the psql statement, but
now assume I need to add a .pgpass file or something.


Suggestions?



Yes, a .pgpass file would work nicely.




Re: [GENERAL] Getting Mysql data into Postgres: least painful methods?

2013-01-11 Thread John R Pierce

On 1/11/2013 3:54 PM, Ken Tanzer wrote:
Here's the fuller description of what I'm trying to do.  I've got a
dataset (a UMLS Metathesaurus subset) that I need to get into a
Postgres database.  It's all reference data, and so will be
read-only.  There are no functions or logic involved. I anticipate
having to update it at least quarterly, so I'd like to get to a
well-grooved import process.



How many tables?  If it's just one or a couple of tables, can you get the
data as CSV?  Then it would be trivial to import into postgres, using
the COPY command (or \copy from psql)...
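
For a single-table CSV that really is one command (a sketch; the path and
table are placeholders):

COPY umls_subset FROM '/path/to/umls_subset.csv' CSV HEADER;

or, reading the file client-side without needing superuser rights:

\copy umls_subset FROM 'umls_subset.csv' CSV HEADER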


Another alternative: investigate "ETL" tools. These are general-purpose
data manglers that can connect to a source database (usually any of
about 20 supported), extract data, transform it if needed, and load it
into a destination database (from a similarly long list of supported targets).






Re: [GENERAL] reducing number of ANDs speeds up query

2013-01-11 Thread Amit Kapila

On Saturday, January 12, 2013 7:17 AM T. E. Lawrence wrote:
> Hello,

> I have a pretty standard query with two tables:

> SELECT a.id FROM table_a a, table_b b WHERE ... AND ... AND b.value=...;

> With the last "AND b.value=..." the query is extremely slow (did not wait for 
> it to end, but more than a minute), because the value column is not indexed 
> (contains items longer than 8K).

> However the previous conditions "WHERE ... AND ... AND" should have already 
> reduced the candidate rows to just a few (table_b contains over 50m rows). 
> And indeed, removing the last "AND b.value=..." speeds the query to just a 
> millisecond.

> Is there a way to instruct PostgreSQL to do first the initial "WHERE ... AND 
> ... AND" and then the last "AND b.value=..." on the (very small) result?

You can try the query below:

SELECT *
FROM (SELECT a.id, b.value
      FROM table_a a, table_b b
      WHERE ... AND ...) X
WHERE X.value = ...;

If this doesn't work, can you send the EXPLAIN output for both queries (the
query you are using and the one I have suggested)?
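
Another way to pin the evaluation order on 9.x is a CTE, which the planner
treats as an optimization fence (a sketch; the already-selective conditions
go inside the CTE):

WITH candidates AS (
    SELECT a.id, b.value
    FROM table_a a, table_b b
    WHERE ... AND ...   -- the selective conditions
)
SELECT id FROM candidates WHERE value = ...;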


With Regards,
Amit Kapila.
