On 2018-03-01 14:49:26 -0500, Robert Haas wrote:
> On Thu, Mar 1, 2018 at 2:29 PM, Andres Freund <and...@anarazel.de> wrote:
> > Right. Which might be very painful latency wise. And with poolers it's
> > pretty easy to get into situations like that, without the app
> > influencing it.
> 
> Really?  I'm not sure I believe that.  You're talking perhaps a few
> milliseconds - maybe less - of additional latency on a connection
> that's been idle for many minutes.

I've seen latency increases in the second-plus range due to empty
cat/sys/rel caches.  And the connection doesn't have to be idle; it might
just have been active for a different application doing different things,
and thus accessing different cache entries.  With a pooler you can
trivially end up occasionally switching connections between different
[parts of] applications, and you don't want performance to suck after
each switch.  I entirely agree that you also don't want to use up all
memory.


> Anyway, I don't mind making the exact timeout a GUC (with 0 disabling
> the feature altogether) if that addresses your concern, but in general
> I think that it's reasonable to accept that a connection that's been
> idle for a long time may have a little bit more latency than usual
> when you start using it again.

I don't think that'd quite address my concern. I just don't think that
the granularity (drop all entries older than xxx sec at the next resize)
is right. For one I don't want to drop stuff if the cache size isn't a
problem for the current memory budget. For another, I'm not convinced
that dropping entries from the current "generation" at resize won't end
up throwing away too much.

If we had a GUC 'syscache_memory_target' and we only started pruning once
above it, I'd be much happier.
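
For concreteness, here's a minimal standalone sketch of the gating I have
in mind - this is not catcache code, and all the names
(cache_memory_target, cache_prune_min_age, CacheEntry, prune_cache) are
made up for illustration.  The point is just that age-based eviction only
kicks in while the cache is over its memory target:

/*
 * Illustrative sketch only, not PostgreSQL catcache code.  All names
 * are hypothetical; the point is the gating: entries are only evicted
 * by age while the cache exceeds its memory target.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

typedef struct CacheEntry
{
    size_t  size;           /* memory accounted to this entry */
    time_t  last_access;    /* updated on every cache hit */
    struct CacheEntry *next;
} CacheEntry;

static CacheEntry *entries = NULL;      /* naive list standing in for the hash */
static size_t total_cache_size = 0;     /* running total of entry sizes */

/* Hypothetical knobs: memory target in bytes, minimum age in seconds. */
static size_t cache_memory_target = 8 * 1024 * 1024;
static int    cache_prune_min_age = 600;

/*
 * Evict entries not touched for cache_prune_min_age seconds, but only
 * while the cache actually exceeds its memory target.  Within budget,
 * everything is kept, so an idle connection doesn't pay a rebuild
 * penalty for no reason.
 */
static void
prune_cache(time_t now)
{
    CacheEntry **prev = &entries;

    while (*prev != NULL && total_cache_size > cache_memory_target)
    {
        CacheEntry *e = *prev;

        if (now - e->last_access >= cache_prune_min_age)
        {
            *prev = e->next;
            total_cache_size -= e->size;
            free(e);
        }
        else
            prev = &e->next;
    }
}

static void
add_entry(size_t size, time_t last_access)
{
    CacheEntry *e = malloc(sizeof(CacheEntry));    /* error handling omitted */

    e->size = size;
    e->last_access = last_access;
    e->next = entries;
    entries = e;
    total_cache_size += size;
}

int
main(void)
{
    time_t  now = time(NULL);

    /* Two stale entries and one recently used one, 11MB total. */
    add_entry(5 * 1024 * 1024, now - 3600);
    add_entry(5 * 1024 * 1024, now - 3600);
    add_entry(1 * 1024 * 1024, now);

    prune_cache(now);

    /* Pruning stops as soon as we drop back under the 8MB target. */
    printf("cache size after pruning: %zu bytes\n", total_cache_size);
    return 0;
}

With that gating the age threshold alone never throws anything away; it
just decides what gets evicted first once we're actually over budget.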


Greetings,

Andres Freund
