We have a fixed pool of 16 PG backend processes. Once created, they stay alive forever. OLTP load is distributed over them in a shortest-queue-first (SQF) fashion.

As loading a TSearch dictionary takes a few moments, we have a script that connects to each backend on startup and loads the dictionary into RAM simply by calling ts_debug('foo');
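
For illustration, the warm-up script is essentially something like this (host, database and user are placeholders; whether each connection really lands on a distinct backend depends on the pooler):

    #!/bin/sh
    # Prime the TSearch dictionary in each of the 16 persistent
    # backends by opening 16 concurrent sessions and calling
    # ts_debug() once in each.
    for i in $(seq 1 16); do
        psql -h localhost -U app -d mydb \
             -c "SELECT ts_debug('foo');" >/dev/null &
    done
    wait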

The dictionary file is 9.8 MB on disk.

If we don't run the script, `free' prints:

             total       used       free     shared    buffers     cached
Mem:       4048056     953192    3094864          0          4     359300
-/+ buffers/cache:     593888    3454168

After the script has called ts_debug('foo') on each backend:

             total       used       free     shared    buffers     cached
Mem:       4048056    2374508    1673548          0          4     370340
-/+ buffers/cache:    2004164    2043892
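
A rough way to cross-check where the memory goes is to sum the resident set sizes of the backends before and after the warm-up run (the process name is an assumption and may be "postmaster" on 8.2; RSS counts shared memory once per process, so the sum overstates the real footprint):

    # Sum the resident set sizes (KB) of all PG backend processes;
    # run this before and after the warm-up script and compare.
    ps -C postgres -o rss= | awk '{ sum += $1 } END { print sum " KB" }'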


That is an increase of roughly 1.4 GB in used memory, i.e. almost 90 MB per backend for a 9.8 MB dictionary file. Is it really supposed to eat that much memory?

This is PG 8.2.4 on x86_64.


--
Regards,
Hannes Dorbath


