Hi there,

Thanks for your suggestions. I do have an application running on the machine all the time; in fact, it keeps writing real-time monitoring data into the database. Based on my understanding of your messages, there isn't much I can do to speed up the first search, so I will probably just show users a progress bar while they wait for the results.

Thanks for your help.
ouyang

2009/5/27 Scott Marlowe <scott.marl...@gmail.com>
> On Tue, May 26, 2009 at 7:43 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> > Greg Smith <gsm...@gregsmith.com> writes:
> >> On Tue, 26 May 2009, Scott Marlowe wrote:
> >>> Also, in the morning, have a cron job crank up that does "select * from
> >>> mybigtable" for each big table to load it into cache.
> >
> >> Just to clarify: on 8.3 and later versions, doing this doesn't do what
> >> some people expect. Sequential scans like that will continuously re-use a
> >> 256KB section of the PostgreSQL shared_buffers space, so this won't cause
> >> all of that to get paged back in if the problem is related to it being
> >> swapped out. It will pass everything through the OS buffer cache though
> >> and prime it usefully, which might be all that's actually needed.
> >
> > Bearing in mind that this is a Windows server ... I seem to recall that
> > the conventional wisdom is still to keep shared_buffers relatively small
> > on Windows. So priming the OS cache is exactly what it's about.
> > (Keeping that down should also help avoid the other scenario Scott was
> > worried about, where shared memory itself gets paged out.)
>
> Yeah, I thought it was pretty obvious I was talking OS cache up there.
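
P.S. For anyone finding this thread later, a minimal sketch of the warm-up job Scott described might look like the lines below. The table name, database name, and schedule are placeholders, and since the server here runs Windows the same psql command would be scheduled with Task Scheduler rather than cron:

    # Warm the OS file cache each morning before users arrive.
    # "mydb", "mybigtable", and the 6:00 schedule are placeholders; repeat
    # the psql line for each big table.
    0 6 * * *  psql -d mydb -c "SELECT * FROM mybigtable" > /dev/null 2>&1

As Greg and Tom note above, on 8.3 such a scan only cycles through a small ring of shared_buffers, so the point of the job is to populate the OS cache, not PostgreSQL's own buffers.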