Re: [PERFORM] Bad row estimates

2006-03-04 Thread Greg Stark

Greg Stark <[EMAIL PROTECTED]> writes:

> The "now() BETWEEN test_table_1.start_ts AND test_table_1.end_ts can't be
> answered completely using a btree index. You could try using a GIST index here
> but I'm not clear how much it would help you (or how much work it would be).

To add to my own comment: you could also try creating two separate indexes on
start_ts and end_ts. Postgres can combine the two indexes using a bitmap scan.
It's not a complete solution like a GIST index would be, though.

It also doesn't help at all with the planner estimating how many records will
actually match.
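
In SQL that would look something like this (a minimal sketch; the index names
are mine, not from the original thread):

CREATE INDEX test_table_1_start_ts_idx ON test_table_1 (start_ts);
CREATE INDEX test_table_1_end_ts_idx   ON test_table_1 (end_ts);

-- With both indexes in place the planner can AND the two bitmaps
-- (a BitmapAnd node) for a condition like:
--   WHERE now() BETWEEN start_ts AND end_ts
-- Each btree scan is still open-ended on one side, though, and the
-- row estimate stays just as bad.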

-- 
greg


---(end of broadcast)---
TIP 6: explain analyze is your friend


[PERFORM] How to query and index for customer with lastname and city

2006-03-04 Thread Joost Kraaijeveld

Hi,

I have two tables:

Customer: objectid, lastname, fk_address
Address: objectid, city

I want to select all customers with a name >= some_name who live in a city >= some_city, with all comparisons case insensitive.

Below is what I actually have. Given the fact that it takes forever to get a result (> 6 seconds), there must be something wrong with my solution or my expectation. Can anyone tell me what I should do to make this query go faster (or convince me to wait for the result ;-()?


SELECT customers.objectid
FROM prototype.customers, prototype.addresses
WHERE
  customers.contactAddress = addresses.objectId
  AND
  (
    TRIM(UPPER(lastname)) >= TRIM(UPPER('some_name'))
    AND
    TRIM(UPPER(city)) >= TRIM(UPPER('some_city'))
  )
ORDER BY TRIM(UPPER(lastname)), TRIM(UPPER(city))

EXPLAIN ANALYZE output after a VACUUM FULL ANALYSE:

Sort  (cost=54710.68..54954.39 rows=97484 width=111) (actual time=7398.971..7680.405 rows=96041 loops=1)
  Sort Key: btrim(upper(customers.lastname)), btrim(upper(addresses.city))
  ->  Hash Join  (cost=14341.12..46632.73 rows=97484 width=111) (actual time=1068.862..5472.788 rows=96041 loops=1)
        Hash Cond: ("outer".contactaddress = "inner".objectid)
        ->  Seq Scan on customers  (cost=0.00..24094.01 rows=227197 width=116) (actual time=0.018..1902.646 rows=223990 loops=1)
              Filter: (btrim(upper(lastname)) >= 'JANSEN'::text)
        ->  Hash  (cost=13944.94..13944.94 rows=158473 width=75) (actual time=1068.467..1068.467 rows=158003 loops=1)
              ->  Bitmap Heap Scan on addresses  (cost=1189.66..13944.94 rows=158473 width=75) (actual time=71.259..530.986 rows=158003 loops=1)
                    Recheck Cond: (btrim(upper(city)) >= 'NIJMEGEN'::text)
                    ->  Bitmap Index Scan on prototype_addresses_trim_upper_city  (cost=0.00..1189.66 rows=158473 width=0) (actual time=68.290..68.290 rows=158003 loops=1)
                          Index Cond: (btrim(upper(city)) >= 'NIJMEGEN'::text)
Total runtime: 7941.095 ms


I have indexes on:
fki_customers_addresses
customers.lastname (both lastname and trim(upper(lastname)))
addresses.city (both city and trim(upper(city)))
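
In SQL terms, roughly (a sketch; only prototype_addresses_trim_upper_city is
confirmed by the plan above, the other index names are assumed):

CREATE INDEX fki_customers_addresses ON prototype.customers (contactAddress);
CREATE INDEX customers_trim_upper_lastname ON prototype.customers (trim(upper(lastname)));
CREATE INDEX prototype_addresses_trim_upper_city ON prototype.addresses (trim(upper(city)));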

-- 
Groeten,

Joost Kraaijeveld
Askesis B.V.
Molukkenstraat 14
6524NB Nijmegen
tel: 024-3888063 / 06-51855277
fax: 024-3608416
e-mail: [EMAIL PROTECTED]
web: www.askesis.nl



Re: [PERFORM] How to query and index for customer with lastname and city

2006-03-04 Thread hubert depesz lubaczewski
On 3/4/06, Joost Kraaijeveld <[EMAIL PROTECTED]> wrote:
>  Below is what I actually have. Given the fact that it takes forever to get
>  a result (> 6 seconds), there must be something wrong with my solution or
>  my expectation. Can anyone tell me what I should do to make this query go
>  faster (or convince me to wait for the result ;-()?
>
>  EXPLAIN ANALYZE output after a VACUUM FULL ANALYSE:
>
>  Sort  (cost=54710.68..54954.39 rows=97484 width=111) (actual time=7398.971..7680.405 rows=96041 loops=1)
>    Sort Key: btrim(upper(customers.lastname)), btrim(upper(addresses.city))
>    ->  Hash Join  (cost=14341.12..46632.73 rows=97484 width=111) (actual time=1068.862..5472.788 rows=96041 loops=1)
>          Hash Cond: ("outer".contactaddress = "inner".objectid)
>          ->  Seq Scan on customers  (cost=0.00..24094.01 rows=227197 width=116) (actual time=0.018..1902.646 rows=223990 loops=1)
>                Filter: (btrim(upper(lastname)) >= 'JANSEN'::text)
>          ->  Hash  (cost=13944.94..13944.94 rows=158473 width=75) (actual time=1068.467..1068.467 rows=158003 loops=1)
>                ->  Bitmap Heap Scan on addresses  (cost=1189.66..13944.94 rows=158473 width=75) (actual time=71.259..530.986 rows=158003 loops=1)
>                      Recheck Cond: (btrim(upper(city)) >= 'NIJMEGEN'::text)
>                      ->  Bitmap Index Scan on prototype_addresses_trim_upper_city  (cost=0.00..1189.66 rows=158473 width=0) (actual time=68.290..68.290 rows=158003 loops=1)
>                            Index Cond: (btrim(upper(city)) >= 'NIJMEGEN'::text)
>  Total runtime: 7941.095 ms

explain clearly shows that the index is used for the addresses scan,
but not for customers.
explain estimates that 227197 customers match the lastname criteria -
which looks awfully high.
how many records do you have in the customers table?

i would try to create index test on customers(contactAddress,
trim(upper(lastname)));
or with the other ordering of the fields.

try this - create the index, run ANALYZE on the customers table, and
recheck the explain.
then try the second index in the same manner.
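
in sql, something like this (a sketch - the schema prefix is taken from your
query, and the second index name is made up):

CREATE INDEX test ON prototype.customers (contactAddress, trim(upper(lastname)));
ANALYZE prototype.customers;
-- recheck EXPLAIN ANALYZE, then try the reversed column order:
DROP INDEX prototype.test;
CREATE INDEX test2 ON prototype.customers (trim(upper(lastname)), contactAddress);
ANALYZE prototype.customers;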

maybe this could be of some help...

depesz

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] How to query and index for customer with lastname and city

2006-03-04 Thread Joost Kraaijeveld
Hi Hubert,

On Sat, 2006-03-04 at 14:49 +0100, hubert depesz lubaczewski wrote:
> >  Sort  (cost=54710.68..54954.39 rows=97484 width=111) (actual time=7398.971..7680.405 rows=96041 loops=1)
> >    Sort Key: btrim(upper(customers.lastname)), btrim(upper(addresses.city))
> >    ->  Hash Join  (cost=14341.12..46632.73 rows=97484 width=111) (actual time=1068.862..5472.788 rows=96041 loops=1)
> >          Hash Cond: ("outer".contactaddress = "inner".objectid)
> >          ->  Seq Scan on customers  (cost=0.00..24094.01 rows=227197 width=116) (actual time=0.018..1902.646 rows=223990 loops=1)
> >                Filter: (btrim(upper(lastname)) >= 'JANSEN'::text)
> >          ->  Hash  (cost=13944.94..13944.94 rows=158473 width=75) (actual time=1068.467..1068.467 rows=158003 loops=1)
> >                ->  Bitmap Heap Scan on addresses  (cost=1189.66..13944.94 rows=158473 width=75) (actual time=71.259..530.986 rows=158003 loops=1)
> >                      Recheck Cond: (btrim(upper(city)) >= 'NIJMEGEN'::text)
> >                      ->  Bitmap Index Scan on prototype_addresses_trim_upper_city  (cost=0.00..1189.66 rows=158473 width=0) (actual time=68.290..68.290 rows=158003 loops=1)
> >                            Index Cond: (btrim(upper(city)) >= 'NIJMEGEN'::text)
> >  Total runtime: 7941.095 ms
> 
> explain clearly shows that the index is used for the addresses scan,
Yes, but I do not understand why I have both a "Bitmap Index Scan" and
a "Bitmap Heap Scan" on (btrim(upper(city)) >= 'NIJMEGEN'::text)?

> but not for customers.
> explain estimates that 227197 customers match the lastname criteria -
> which looks awfully high.
> how many records do you have in the customers table?
368915 of which 222465 actually meet the condition.

From what I understand from the mailing list, PostgreSQL prefers a table
scan whenever it expects that the number of records in the result set
will be more than ~10% of the total number of records in the table. Which
explains the table scan for customers, but then again, it does not
explain why it uses the index on addresses: it has 369337 addresses of
which 158003 meet the condition.

> i would try to create index test on customers(contactAddress,
> trim(upper(lastname)));
> or with the other ordering of the fields.
>
> try this - create the index, run ANALYZE on the customers table, and
> recheck the explain.
> then try the second index in the same manner.
Makes no difference.


-- 
Groeten,

Joost Kraaijeveld
Askesis B.V.
Molukkenstraat 14
6524NB Nijmegen
tel: 024-3888063 / 06-51855277
fax: 024-3608416
e-mail: [EMAIL PROTECTED]
web: www.askesis.nl


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] How to query and index for customer with lastname and city

2006-03-04 Thread hubert depesz lubaczewski
On 3/4/06, Joost Kraaijeveld <[EMAIL PROTECTED]> wrote:
> > how many records do you have in the customers table?
> 368915 of which 222465 actually meet the condition.
>
> From what I understand from the mailing list, PostgreSQL prefers a table
> scan whenever it expects that the number of records in the result set
> will be more than ~10% of the total number of records in the table. Which
> explains the table scan for customers, but then again, it does not
> explain why it uses the index on addresses: it has 369337 addresses of
> which 158003 meet the condition.


bitmap index scan is faster than a sequential table scan. that's all. it
was introduced in 8.1 as far as i remember.
basically - i doubt you can get better performance from the query when
the result row count is that high.

out of curiosity though - why do you need so many rows? it's not
possible to view them, nor to do anything meaningful with 200 thousand
rows!

depesz

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] Bad row estimates

2006-03-04 Thread Jim C. Nasby
On Sat, Mar 04, 2006 at 02:01:35AM -0500, Greg Stark wrote:
> Alex Adriaanse <[EMAIL PROTECTED]> writes:
> 
> > SELECT count(*) FROM test_table_1
> > INNER JOIN test_table_2 ON
> > (test_table_2.s_id = 13300613 AND test_table_1.id = test_table_2.n_id)
> > WHERE now() BETWEEN test_table_1.start_ts AND test_table_1.end_ts
> > AND test_table_1.id = test_table_1.g_id;

Something else that helps in cases like this is to place both an upper
and lower boundary on one (or both) fields if possible. For example, if
you know that start_ts and end_ts will always be within 1 hour of each
other, adding the following will help:

AND start_ts >= now() - '1 hour'::interval
AND end_ts <= now() + '1 hour'::interval
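
Applied to the query quoted above, it would look something like this (a
sketch, valid only under the assumed 1-hour maximum width of the interval):

SELECT count(*) FROM test_table_1
INNER JOIN test_table_2 ON
    (test_table_2.s_id = 13300613 AND test_table_1.id = test_table_2.n_id)
WHERE now() BETWEEN test_table_1.start_ts AND test_table_1.end_ts
  AND test_table_1.start_ts >= now() - '1 hour'::interval
  AND test_table_1.end_ts <= now() + '1 hour'::interval
  AND test_table_1.id = test_table_1.g_id;
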
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Bad row estimates

2006-03-04 Thread Tom Lane
Greg Stark <[EMAIL PROTECTED]> writes:
> (I don't think the end_ts in the index is buying you much, despite its
> appearance in the Index Cond in the plan.) 

Well, it saves some trips to the heap, but the indexscan is still going
to run from the beginning of the index to start_ts = now(), because
btree has no idea that there's any correlation between the two index
columns.

If you could put some a-priori bound on the interval width, you could
add a WHERE constraint "AND now() - max_width <= start_ts", which'd
constrain the index scan and possibly also get you a better planner
estimate.  Otherwise I think you really need a special datatype for time
intervals and a GIST or r-tree index on it :-(.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Bad row estimates

2006-03-04 Thread Greg Stark
Tom Lane <[EMAIL PROTECTED]> writes:

> Otherwise I think you really need a special datatype for time
> intervals and a GIST or r-tree index on it :-(.

You could actually take shortcuts using expression indexes to do this. If it
works out well then you might want to implement a real data type to avoid the
overhead of the SQL conversion functions.

Here's an example. If I were to do this for real I would look for a better
datatype than the box datatype and I would wrap the whole conversion in an SQL
function. But this will serve to demonstrate:

stark=> create table interval_test (start_ts timestamp with time zone,
        end_ts timestamp with time zone);
CREATE TABLE

stark=> create index interval_idx on interval_test using gist
        (box(point(start_ts::abstime::integer, end_ts::abstime::integer),
             point(start_ts::abstime::integer, end_ts::abstime::integer)));
CREATE INDEX

stark=> explain select * from interval_test where
        box(point(now()::abstime::integer, now()::abstime::integer),
            point(now()::abstime::integer, now()::abstime::integer))
        ~ box(point(start_ts::abstime::integer, end_ts::abstime::integer),
              point(start_ts::abstime::integer, end_ts::abstime::integer));

                                QUERY PLAN
--------------------------------------------------------------------------
 Index Scan using interval_idx on interval_test  (cost=0.07..8.36 rows=2 width=16)
   Index Cond: (box(point((((now())::abstime)::integer)::double precision,
               (((now())::abstime)::integer)::double precision),
               point((((now())::abstime)::integer)::double precision,
               (((now())::abstime)::integer)::double precision))
               ~ box(point((((start_ts)::abstime)::integer)::double precision,
               (((end_ts)::abstime)::integer)::double precision),
               point((((start_ts)::abstime)::integer)::double precision,
               (((end_ts)::abstime)::integer)::double precision)))
(2 rows)

-- 
greg


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] How to query and index for customer with lastname and city

2006-03-04 Thread Kevin Brown
On Saturday 04 March 2006 08:23, hubert depesz lubaczewski wrote:
> On 3/4/06, Joost Kraaijeveld <[EMAIL PROTECTED]> wrote:
> > > how many records do you have in the customers table?
> >
> > 368915 of which 222465 actually meet the condition.
> >
> > From what I understand from the mailing list, PostgreSQL prefers a table
> > scan whenever it expects that the number of records in the result set
> > will be more than ~10% of the total number of records in the table. Which
> > explains the table scan for customers, but then again, it does not
> > explain why it uses the index on addresses: it has 369337 addresses of
> > which 158003 meet the condition.
>
> bitmap index scan is faster than a sequential table scan. that's all. it
> was introduced in 8.1 as far as i remember.
> basically - i doubt you can get better performance from the query when
> the result row count is that high.
>
> out of curiosity though - why do you need so many rows? it's not
> possible to view them, nor to do anything meaningful with 200 thousand
> rows!
>
> depesz

If you're just displaying, use LIMIT and OFFSET to grab one page at a time.
If you're manipulating the data, it would be a good idea to do something in a
stored procedure.
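
For example, paging the query from earlier in the thread might look like this
(a sketch; the page size of 50 is arbitrary, and the table and column names
come from the original post):

SELECT customers.objectid
FROM prototype.customers
JOIN prototype.addresses ON customers.contactAddress = addresses.objectId
WHERE TRIM(UPPER(lastname)) >= TRIM(UPPER('some_name'))
  AND TRIM(UPPER(city)) >= TRIM(UPPER('some_city'))
ORDER BY TRIM(UPPER(lastname)), TRIM(UPPER(city))
LIMIT 50 OFFSET 0;  -- page 1; the next page uses OFFSET 50, and so on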

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly