In article <[EMAIL PROTECTED]>,
Edmund Dengler <[EMAIL PROTECTED]> writes:
> But on the other hand, general algorithms which are designed to work under
> a wide variety of circumstances may fail in specific cases. I am thinking
> of VACUUM which would kill most caching algorithms simply because we
Greetings!
On Fri, 2 Jul 2004, Mike Rylander wrote:
> I find that experience does not bear this out. There is a saying a coworker
> of mine has about apps that try to solve problems, in this case caching,
> that are well understood and generally handled well at other levels of the
> "software st
> It's not that making the cache bigger is inefficient, it's that the cache is
> not used the way you are thinking.
Ok, I think I've got it now. The missing piece of the puzzle was the
existence of the Linux buffer cache. So that's what the
effective_cache_size value is for(!)
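For reference, both settings can be checked from any psql session (on the 7.x
series they are counted in 8kB disk pages, if I'm reading the docs right):

    -- Show PostgreSQL's own buffer pool size (in 8kB pages)
    SHOW shared_buffers;
    -- Show the planner's assumption about how much of the kernel's disk
    -- cache will hold PostgreSQL data (also in 8kB pages)
    SHOW effective_cache_size;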
I read Shridhar's reply:
> If all the data fits into memory, then this cache thrashing won't occur, yes?
No - it *can* occur in a two-tier cache strategy.
The critical question here is: *if* the data PostgreSQL needs is in the
Linux buffer cache, what (if anything) does the OS have to do to make it
available to the postmaster?
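One crude way I can think of to watch the two tiers from inside the database
(assuming block-level stats are switched on, i.e. stats_block_level = true,
and using the standard pg_statio_user_tables view; the table name is just an
example):

    -- heap_blks_hit  = block requests satisfied from PostgreSQL's shared buffers
    -- heap_blks_read = requests passed to the OS via read(); these may still be
    --                  served from the Linux buffer cache, i.e. a system call
    --                  plus a memory copy rather than physical disk I/O
    SELECT relname, heap_blks_read, heap_blks_hit
      FROM pg_statio_user_tables
     WHERE relname = 'my_big_table';

My understanding is that a "read" here still costs a read() call and a copy
from the kernel cache into the backend's shared buffer, but no actual disk
access if the page is already cached.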
Hello again Mike,
Thanks for the replies! Here's my next salvo!
> Perhaps I'm a bit naive about complex data structure caching strategies, but
> it seems to me that the overhead of tracking tuples (which is what you
> would want if you are going to manage your own cache, as opposed to simply
> caching pages).
Hello Shridhar,
Thanks for the reply.
> There is no reason why you should not do it. How remains a point of
> disagreement, though. You don't allocate 16GB of shared buffers to
> PostgreSQL. That won't give you the performance you need.
I think in the other thread, Tom was alluding to this too.
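So, if I follow, the split people are suggesting for a box with (say) 16GB of
RAM would look something like this in postgresql.conf - the numbers are only
illustrative guesses for a 7.4-era server (both settings counted in 8kB
pages), not anything I've benchmarked:

    # Keep PostgreSQL's own buffer pool modest...
    shared_buffers = 10000            # 10000 x 8kB = ~80MB
    # ...and tell the planner the kernel will cache most of the data files
    effective_cache_size = 1500000    # 1500000 x 8kB = ~12GB

The idea, as I understand it, being that the kernel does the bulk of the
caching and shared_buffers only has to cover the hot working set of the
backends.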
> There's a good bit of depth in the archives of this list. I would start
> searching back for discussions of effective_cache_size, as that is involved
> in *costing* the caching job that the OS is doing.
Thanks - that's just what I need to sink my teeth into. I'll have a trawl
and get back later.
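In the meantime, here's the sort of experiment I'm planning to try to see the
costing effect for myself (the table and values are hypothetical;
effective_cache_size looks to be settable per session with SET):

    -- With a small assumed cache, repeated index page fetches are costed
    -- as physical reads, so the planner leans towards sequential scans
    SET effective_cache_size = 1000;        -- ~8MB
    EXPLAIN SELECT * FROM my_big_table WHERE some_col BETWEEN 1 AND 1000;

    -- With a large assumed cache, those fetches are costed as cache hits,
    -- so an index scan may now come out cheaper
    SET effective_cache_size = 1500000;     -- ~12GB
    EXPLAIN SELECT * FROM my_big_table WHERE some_col BETWEEN 1 AND 1000;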
> Briefly, my problem:
> --
> I expect the database I'm working on to reach something in the order of
> 12-16 Gigabytes, and I am interested in understanding as much as I can
> about how I can make this go as fast as possible on a linux system. I
> haven't run such a large database before.