On Wed, 18 Nov 2015 20:10:00 -0500
Jonathan Vanasco <postg...@2xlp.com> wrote:

> As a temporary fix I need to write some uploaded image files to PostgreSQL 
> until a task server can read/process/delete them.  
> 
> The problem I've run into (via server load tests that model our production
> environment) is that these reads/writes end up pushing the indexes used by
> other queries out of memory, causing them to be re-read from disk. These
> files can be anywhere from 200k to 5MB.
> 
> Has anyone dealt with situations like this before, and does anyone have
> suggestions? I could use a dedicated db connection if that would open up
> any options.

PostgreSQL doesn't have any provision for pinning or preferring
particular tables or indexes in memory; every page competes for the
same shared_buffers cache.

The easiest thing I can think of would be to add memory to the machine
(or configure Postgres to use more) so that those files aren't pushing
enough other pages out of memory to have a problematic impact.
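
For what it's worth, the knobs involved are shared_buffers and
effective_cache_size in postgresql.conf. The values below are only
illustrative; size them for your own hardware and workload:

    # postgresql.conf -- illustrative values, not a recommendation
    shared_buffers = 8GB           # Postgres' own page cache (restart required)
    effective_cache_size = 24GB    # planner's estimate of available cache (OS cache + shared_buffers)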

Another idea would be to put the image database on a different
physical server, or to run two instances of Postgres on a single
machine: keep the files in one cluster configured with a low
shared_buffers value, and the rest of the data in the other cluster
configured with a higher shared_buffers.
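
Very roughly, the second cluster could be set up something like this
(the data directory and port below are just placeholders for the sketch):

    # create and start a small, separate cluster just for the image blobs
    initdb -D /var/lib/postgresql/images
    echo "shared_buffers = 128MB" >> /var/lib/postgresql/images/postgresql.conf
    echo "port = 5433"            >> /var/lib/postgresql/images/postgresql.conf
    pg_ctl -D /var/lib/postgresql/images -l /var/log/postgresql/images.log start

Your upload code would then use a dedicated connection to port 5433,
so the image pages only churn the small cache in that cluster and
leave the main server's buffers alone.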

I know these probably aren't the kind of answers you're looking
for, but I don't have anything better to suggest; and the rest
of the mailing list seems to be devoid of ideas as well.

-- 
Bill Moran


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
