If I bought one of these box/OS combos as a PostgreSQL database server,
would PostgreSQL be able to make the best use of it with a huge (e.g. 40GB)
database?
Box: HP ProLiant DL585, with 4 AMD64 CPUs and 64GB of RAM. (other
vendor options also exist)
OS: SUSE Linux Enterprise Server 8 for AMD64
Hello,
Sorry for this newbish question.
Briefly, my problem:
--
I expect the database I'm working on to reach something on the order of
12-16 gigabytes, and I am interested in understanding as much as I can about
how I can make this go as fast as possible on a Linux system. I have
Ok - just to end this thread, I think I understand what I was missing.
I'll stop this thread, and just comment on my first thread.
Thank you everyone who helped
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
      joining column's datatypes do not match
> There's a good bit of depth in the archives of this list. I would start
> searching back for discussions of effective_cache_size, as that is
> involved in *costing* the caching job that the OS is doing.
Thanks - that's just what I need to sink my teeth into. I'll have a trawl
and get back later.
Hello Shridhar,
Thanks for the reply.
> There is no reason why you should not do it. How remains to be a point of
> disagreement though. You don't allocate 16GB of shared buffers to
> postgresql. That won't give you the performance you need.
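The usual way to act on that advice is to keep shared_buffers modest and instead tell the planner about the OS cache via effective_cache_size. A minimal postgresql.conf sketch for a 64GB box, with illustrative numbers that are my own assumptions rather than anything from this thread (in PostgreSQL of this era both settings are counted in 8KB pages):

```
# Hypothetical sizing for a 64GB dedicated server -- numbers are
# illustrative assumptions, not recommendations from this thread.
# Both values are in 8KB pages in PostgreSQL 7.x.
shared_buffers = 65536          # ~512MB for PostgreSQL's own buffer pool
effective_cache_size = 6291456  # ~48GB: what we expect the OS to cache
```

The point of the split is that the planner then costs index scans as if most of the database is already in memory, without PostgreSQL trying to manage all that memory itself.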
I think in the other thread, Tom was alluding to this too.
> I get the feeling that, regardless of 64-bit support or not, the
> *concept* of a database which just happens to fit easily within RAM
> isn't one that gets the thumbs down...

Oops, I meant to say it *is* one that gets the thumbs down...
Hello again Mike,
Thanks for the replies! Here's my next salvo!
> Perhaps I'm a bit naive about complex data structure caching strategies, but
> it seems to me that the overhead of tracking tuples (which is what you
> would want if you are going to manage your own cache, as opposed to simply
> ca
> If all the data fits into memory, then this cache thrashing won't occur,
> yes?
No - it *can* occur in a two-tier cache strategy.

The critical question here is: *if* the data PostgreSQL needs is in the
Linux buffer cache, what (if anything) does the OS have to do to make it
available to the postmaster?
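To make that question concrete, here is a rough sketch (mine, not from the thread; `/tmp/demo_table` is just a stand-in file): even when a file is entirely in the Linux buffer cache, the process still issues read() syscalls and the kernel copies the data into the process's address space. Cached data is cheap to reach, but not free.

```shell
# Stand-in for a table file -- hypothetical, for illustration only.
printf 'row1\nrow2\n' > /tmp/demo_table
cat /tmp/demo_table > /dev/null   # first read: warms the OS buffer cache
cat /tmp/demo_table               # repeat read: no disk I/O, but still a
                                  # syscall plus a kernel-to-user memory copy
rm /tmp/demo_table
```

So a page that is "in RAM" in the OS cache is not in quite the same place as a page in PostgreSQL's own shared buffers, which is what makes this a two-tier arrangement.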
> It's not that making the cache bigger is inefficient, it's that the cache is
> not used the way you are thinking.
Ok, I think I've got it now. The missing piece of the puzzle was the
existence of the Linux buffer cache. So that's what the
effective_cache_size value is for(!)
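One hedged way to pick a value for effective_cache_size, assuming a box dedicated to PostgreSQL (the 48GB figure below is my own illustrative assumption, not from the thread): estimate how much memory ends up as OS buffers/cache and convert it to 8KB pages, since that is the unit PostgreSQL of this era expects.

```shell
# Hypothetical sizing arithmetic: suppose ~48GB of a 64GB box ends up in
# the Linux buffer cache.  effective_cache_size is counted in 8KB pages.
os_cache_kb=$(( 48 * 1024 * 1024 ))   # 48GB expressed in KB
echo $(( os_cache_kb / 8 ))           # number of 8KB pages: prints 6291456
```

In practice you would base the estimate on what `free` reports as buffers/cache under normal load rather than on a fixed fraction.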
I read Shridhar
Hello again and thanks to everyone for the replies so far.
Tom, and all, I hear what you are all saying, and furthermore, in cases
where the amount of RAM is much smaller than the database size, I agree
totally. However, I'm *only* talking about a particular scenario which, till
now, has really on