Mike Rylander wrote:

On 8/17/05, Manfred Koizar <[EMAIL PROTECTED]> wrote:
On Mon, 25 Jul 2005 17:50:55 -0400, Kevin Murphy
<[EMAIL PROTECTED]> wrote:
and because the number of possible search terms is so large, it
would be nice if the entire index could somehow be preloaded into memory
and encouraged to stay there.

You could try to copy the relevant index
file(s) to /dev/null to populate the OS cache ...

That actually works fine.  When I had big problems with a large GiST
index, I just used cat to dump it to /dev/null and the OS grabbed it.
Of course, that was on Linux, so YMMV.
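
For instance, something along these lines (illustrative only: the path
assumes a default /usr/local/pgsql/data layout, and 12345 stands in for
the index's relfilenode, which you can look up as described below):

    # Stream every segment of the index file through the OS page cache.
    # Run via sh as the postgres user so the glob expands with
    # permission to read the data directory.
    sudo -u postgres sh -c 'cat /usr/local/pgsql/data/base/*/12345* > /dev/null'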

Thanks, Manfred & Mike.  That is a very nice solution.  And just for
the sake of the archive ... I can find the file name(s) of the relevant
index or table by looking up pg_class.relfilenode where pg_class.relname
is the name of the entity, then doing, e.g.:

    sudo -u postgres find /usr/local/pgsql/data -name "somerelfilenode*"
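
Putting it together, the whole sequence might look like this (database
"mydb" and index "myindex" are hypothetical names; substitute your own):

    # Look up the relfilenode for the index.
    sudo -u postgres psql -At -c \
        "SELECT relfilenode FROM pg_class WHERE relname = 'myindex'" mydb
    # Suppose that prints 12345; locate the file(s) and warm the cache.
    sudo -u postgres find /usr/local/pgsql/data -name "12345*" \
        -exec cat {} \; > /dev/null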

-Kevin Murphy


