Merlin Moncure wrote:
> On 2/10/07, Mark Stosberg <[EMAIL PROTECTED]> wrote:
>>
>> With the help of some of this list, I was able to successfully set up
>> and benchmark a cube-based replacement for geo_distance() calculations.
>>
>> On a development box, the cube-based variations benchmarked cons
On 2/12/07, Merlin Moncure <[EMAIL PROTECTED]> wrote:
cube_distance(pets.earth_coords, earth_coords ) / 1609.344 AS RADIUS
this should read:
cube_distance(pets.earth_coords, zipcodes.earth_coords ) / 1609.344 AS RADIUS
merlin
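The point of Merlin's correction is that the bare `earth_coords` in the second argument was ambiguous; the distance should be measured between a pet's coordinates and the zipcode's coordinates. A hypothetical sketch of the corrected expression in a full query (the `pets`/`zipcodes` table and `earth_coords` column names come from the thread; the join shape, the `pet_id` column, and the `'90210'` literal are assumptions for illustration):

```sql
-- Sketch only: assumes cube-typed earth_coords columns on both tables.
SELECT p.pet_id,
       -- chord distance in meters, divided by 1609.344 meters per mile
       cube_distance(p.earth_coords, z.earth_coords) / 1609.344 AS radius
FROM pets p
CROSS JOIN zipcodes z
WHERE z.zipcode = '90210';
```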
Merlin--
Thanks so much for your help. Some follow-ups are below.
Merlin Moncure wrote:
>
>> Here's the basic query I'm using:
>> SELECT
>>   -- 1609.344 is a constant for "meters per mile"
>>   cube_distance( (SELECT earth_coords FROM zipcodes WHERE zipcode = '90210'),
>>                  earth_coords ) / 1609.344
>>
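For a radius search, that scalar subquery is typically combined with a distance filter. A hedged sketch of one common shape ("pets within 50 miles of 90210"); everything except `zipcodes`, `earth_coords`, and `zipcode` is an assumption:

```sql
-- Sketch only: a plain distance filter like this cannot use an index on
-- earth_coords; an indexable variant would pre-filter with cube_enlarge()
-- and the cube @> containment operator before computing exact distances.
SELECT p.*,
       cube_distance(z.earth_coords, p.earth_coords) / 1609.344 AS miles
FROM pets p, zipcodes z
WHERE z.zipcode = '90210'
  AND cube_distance(z.earth_coords, p.earth_coords) / 1609.344 < 50
ORDER BY miles;
```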
There are 1.9M rows in ts_defects, and indexes on b.ts_id (primary key), d.ts_biz_event_id, and d.ts_occur_date. Both queries below return 0 rows. The 1st runs fast and the 2nd is > 400x slower. The 2nd query differs from the 1st only by the addition of "limit 1".
Why the big difference in performance?
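The two queries themselves are not quoted in full anywhere in the thread; the sketch below is an assumed shape based on the join condition that appears later, with a hypothetical date predicate. The usual mechanism behind this symptom is that LIMIT changes the planner's cost target: it may switch from a hash join to a plan it expects to satisfy quickly (e.g. a nested loop), which is disastrous when zero rows actually match.

```sql
-- Assumed shape only; the WHERE predicate is hypothetical.
-- Query 1 (fast):
SELECT *
FROM ts_defects d
JOIN ts_biz_events b ON b.ts_id = d.ts_biz_event_id
WHERE d.ts_occur_date > '2007-01-01';

-- Query 2 (> 400x slower), differing only by the LIMIT:
SELECT *
FROM ts_defects d
JOIN ts_biz_events b ON b.ts_id = d.ts_biz_event_id
WHERE d.ts_occur_date > '2007-01-01'
LIMIT 1;
```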
Hi Heikki,
Thanks for your response.
> Please run EXPLAIN ANALYZE on both queries, and send back the results.
[EMAIL PROTECTED] jsp]$ PGPASSWORD=quality psql -U admin -d cemdb -h
192.168.1.30 -c 'explain analyze select * from ts_defects d join
ts_biz_events b on b.ts_id = d.ts_biz_event_id wh
Brian Cox <[EMAIL PROTECTED]> writes:
>> Please run EXPLAIN ANALYZE on both queries, and send back the results.
> [ results... ]
The reason the hash plan is fairly fast is that the hash join code has a
special hack: if it reads the inner relation and finds it contains no
rows, it knows there can be no join result rows, so it falls out without
scanning the outer relation at all.
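That short-circuit is easy to observe directly. A minimal, self-contained sketch (table names invented; runs in any PostgreSQL session): with an empty inner side, EXPLAIN ANALYZE should show the hash join completing almost instantly, because building the hash table finds zero rows and the large outer table is never scanned.

```sql
-- Sketch: demonstrate the empty-inner-relation shortcut in hash joins.
CREATE TEMP TABLE big_outer AS
  SELECT g AS id FROM generate_series(1, 1000000) g;
CREATE TEMP TABLE empty_inner (id int);

EXPLAIN ANALYZE
SELECT *
FROM big_outer o
JOIN empty_inner i ON o.id = i.id;
```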