Hello Richard,

Wednesday, March 15, 2006, 3:35:26 PM, you wrote:
> Abbath wrote:
>> is slow for the first time (7-15 sec), but then using the same keyword
>> next time it is fast (10-100 ms). The reason is, as I read, that the
>> first time nothing is cached at all, but the next time the index pages
>> are cached, so it is fast.
>>
>> I think in a real-world application, in this form, it is useless,
>> because a 15 sec search time is not acceptable for any user.

> What, never? Even if this facility is only used once a year by one user
> and you have 1000 other users who need their queries to complete in 0.2
> secs at most? What you mean is that it's not useful for *your*
> application - don't assume the same applies to all applications.

The search function will be a frequently used one, so it shouldn't be slow. OK, maybe it is not suitable for me if I want a user-friendly search function.

>> Is there a way to initially cache the index or tune some postgres
>> parameter? (I tried increasing "shared_buffers", "effective_cache_size"
>> and "work_mem", but it had no effect.) Or should I look for another
>> search technology? (I just want to provide a 2 sec maximum search time
>> on 1 million records; I don't think that is a big expectation nowadays.)

> If you want to force the data to be cached, just put a cron-job in to
> run a query for "abc" or whatever once a minute.

I can't guess what the users will want to search for.

> Of course, if it turns out that your users aren't running the query very
> often then you're wasting resources, and if they are running it often
> then again you're wasting resources. But - if you really need to, that's
> the solution.

It seems that running VACUUM ANALYZE periodically helps reduce the search time... I will see how I can tune it further.

Thanks for the reply.

Abbath

---------------------------(end of broadcast)---------------------------
TIP 3: Have you checked our extensive FAQ?

               http://www.postgresql.org/docs/faq
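P.S. For the record, the cron-job cache-warming idea Richard describes would look roughly like the sketch below. The database name "mydb", table "docs", column "body" and the search term are made-up placeholders, not names from this thread; it assumes the search runs over an indexed text column and that a live PostgreSQL server is reachable via psql.

```shell
# Hypothetical crontab entry: once a minute, run a representative search
# so the relevant index pages stay resident in the buffer / OS cache.
# -t/-A suppress headers and alignment; output is discarded.
* * * * *  psql -d mydb -tAc "SELECT count(*) FROM docs WHERE body LIKE '%keyword%'" >/dev/null
```

As Richard notes, the trade-off is that this keeps the cache warm whether or not anyone actually searches, so it spends resources either way.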