On Wed, 2004-10-27 at 01:39, Josh Berkus wrote:
> Thomas,
>
> > As a result, I was intending to inflate the value of
> > effective_cache_size to closer to the amount of unused RAM on some of
> > the machines I admin (once I've verified that they all have a unified
> > buffer cache). Is that correct?
Jan Wieck <[EMAIL PROTECTED]> writes:
> On 10/26/2004 1:53 AM, Tom Lane wrote:
>> Greg Stark <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>>> Another issue is what we do with the effective_cache_size value once we
>>> have a number we trust. We can't readily change the size of the ARC
>>> lists on the fly.
On 10/26/2004 1:53 AM, Tom Lane wrote:
Greg Stark <[EMAIL PROTECTED]> writes:
Tom Lane <[EMAIL PROTECTED]> writes:
Another issue is what we do with the effective_cache_size value once we
have a number we trust. We can't readily change the size of the ARC
lists on the fly.
Huh? I thought effective_cache_size was just used
Tom Lane wrote:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > So I would suggest using something like 100us as the threshold for
> > determining whether a buffer fetch came from cache.
>
> I see no reason to hardwire such a number. On any hardware, the
> distribution is going to be double-humped,
On Mon, Oct 25, 2004 at 11:34:25AM -0400, Jan Wieck wrote:
> On 10/22/2004 4:09 PM, Kenneth Marshall wrote:
>
> > On Fri, Oct 22, 2004 at 03:35:49PM -0400, Jan Wieck wrote:
> >> On 10/22/2004 2:50 PM, Simon Riggs wrote:
> >>
> >> >I've been using the ARC debug options to analyse memory usage on the
> >> >PostgreSQL 8.0 server.
On Mon, Oct 25, 2004 at 05:53:25PM -0400, Tom Lane wrote:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > So I would suggest using something like 100us as the threshold for
> > determining whether a buffer fetch came from cache.
>
> I see no reason to hardwire such a number. On any hardware, the
> distribution is going to be double-humped, and it will be pretty easy to
> determine a cutoff after minimal accumulation of data.
On Mon, 2004-10-25 at 23:53, Tom Lane wrote:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > Tom Lane <[EMAIL PROTECTED]> writes:
> >> Another issue is what we do with the effective_cache_size value once we
> >> have a number we trust. We can't readily change the size of the ARC
> >> lists on the fly.
On Wed, 26 Oct 2004, Greg Stark wrote:
> > I don't see why mmap is any more out of reach than O_DIRECT; it's not
> > all that much harder to implement, and mmap (and madvise!) is more
> > widely available.
>
> Because there's no way to prevent a write-out from occurring and no way to be
> notified
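For readers following the mmap-vs-O_DIRECT argument, here is the interface being debated, sketched through Python's mmap module standing in for the C calls (the file and hint choices are mine, purely illustrative). Note what is *absent*: there is no portable call here to pin a dirty page or to get notified before the kernel writes it back, which is exactly the objection quoted above.

```python
import mmap
import tempfile

# Map a scratch file and hand the kernel madvise() hints, then write
# through the mapping. Illustrative sketch only -- not PostgreSQL code.
with tempfile.NamedTemporaryFile() as f:
    f.write(b"\0" * mmap.PAGESIZE * 4)
    f.flush()
    with mmap.mmap(f.fileno(), 0) as m:
        # madvise() is the "more widely available" hint API mentioned
        # above (exposed in Python 3.8+ on POSIX platforms).
        if hasattr(m, "madvise") and hasattr(mmap, "MADV_WILLNEED"):
            m.madvise(mmap.MADV_WILLNEED)   # prefetch hint
        m[0:4] = b"page"                    # dirty a page via the mapping;
                                            # nothing stops its write-out
        first_bytes = bytes(m[0:4])
```

The write succeeds and is visible through the mapping, but the kernel is free to flush the dirtied page whenever it likes, with no callback to the application.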
Thomas,
> As a result, I was intending to inflate the value of
> effective_cache_size to closer to the amount of unused RAM on some of
> the machines I admin (once I've verified that they all have a unified
> buffer cache). Is that correct?
Currently, yes. Right now, e_c_s is used just to inform the planner.
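As a rough illustration of that planner-informing role (the function name and the linear min() model below are mine, for illustration only; PostgreSQL's actual estimate uses the more refined Mackert-Lohman formula):

```python
def cached_fraction(relation_pages, effective_cache_size_pages):
    """Crude stand-in for how a cost model can use effective_cache_size:
    estimate what share of a relation's pages may be assumed resident
    in (shared buffers + OS) cache. Hypothetical sketch, not the real
    PostgreSQL formula."""
    if relation_pages <= 0:
        return 1.0
    return min(1.0, effective_cache_size_pages / relation_pages)

# A relation twice the size of the assumed cache: half its pages are
# expected to be cached, so repeated index fetches get costed cheaper
# than raw random I/O, but not free.
frac = cached_fraction(relation_pages=1000, effective_cache_size_pages=500)
```

This is why inflating the setting only changes cost estimates, not any actual memory allocation.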
Curt Sampson <[EMAIL PROTECTED]> writes:
> On Tue, 26 Oct 2004, Greg Stark wrote:
>
> > I see mmap or O_DIRECT being the only viable long-term stable states. My
> > natural inclination was the former but after the latest thread on the subject
> > I suspect it'll be forever out of reach. That makes O_DIRECT and a
> > Postgres managed cache the only r
On Tue, 2004-10-26 at 09:49, Simon Riggs wrote:
> On Mon, 2004-10-25 at 16:34, Jan Wieck wrote:
> > The problem is, with a too small directory ARC cannot guesstimate what
> > might be in the kernel buffers. Nor can it guesstimate what recently was
> > in the kernel buffers and got pushed out from there.
On Tue, 2004-10-26 at 06:53, Tom Lane wrote:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > Tom Lane <[EMAIL PROTECTED]> writes:
> >> Another issue is what we do with the effective_cache_size value once we
> >> have a number we trust. We can't readily change the size of the ARC
> >> lists on the fly.
On Mon, 2004-10-25 at 16:34, Jan Wieck wrote:
> The problem is, with a too small directory ARC cannot guesstimate what
> might be in the kernel buffers. Nor can it guesstimate what recently was
> in the kernel buffers and got pushed out from there. That results in a
> way too small B1 list, and
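The B1 "ghost list" idea Jan refers to can be shown with a toy cache (class and method names are mine; this is a deliberately simplified single-list version, not the real ARC algorithm): when a page is evicted, its key — but not its data — is remembered, so a later re-reference proves the cache was too small. Too small a directory means too short a ghost list, and that evidence is lost.

```python
from collections import OrderedDict

class LRUWithGhost:
    """Toy illustration of ARC's B1 ghost list: remember the keys of
    recently evicted pages so re-references to them can be detected."""
    def __init__(self, cache_size, ghost_size):
        self.cache = OrderedDict()   # key -> data (like T1, simplified)
        self.ghost = OrderedDict()   # key -> None (like B1: keys only)
        self.cache_size, self.ghost_size = cache_size, ghost_size
        self.ghost_hits = 0          # evidence the cache is undersized

    def access(self, key, data):
        if key in self.cache:
            self.cache.move_to_end(key)
            return True              # real hit
        if key in self.ghost:        # recently evicted: a "ghost hit"
            self.ghost_hits += 1
            del self.ghost[key]
        self.cache[key] = data
        if len(self.cache) > self.cache_size:
            old, _ = self.cache.popitem(last=False)  # evict LRU page
            self.ghost[old] = None                   # remember its key
            if len(self.ghost) > self.ghost_size:
                self.ghost.popitem(last=False)       # bound the ghost list
        return False
```

Accessing pages 1, 2, 3 with a 2-page cache evicts page 1; touching page 1 again is a miss, but the ghost list records that a bigger cache would have kept it.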
On Tue, 26 Oct 2004, Greg Stark wrote:
> I see mmap or O_DIRECT being the only viable long-term stable states. My
> natural inclination was the former but after the latest thread on the subject
> I suspect it'll be forever out of reach. That makes O_DIRECT and a Postgres
> managed cache the only r
Greg Stark <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> Another issue is what we do with the effective_cache_size value once we
>> have a number we trust. We can't readily change the size of the ARC
>> lists on the fly.
> Huh? I thought effective_cache_size was just used
Is something broken with the list software? I'm receiving other emails from
the list but I haven't received any of the mails in this thread. I'm only able
to follow the thread based on the emails people are cc'ing to me directly.
I think I've caught this behaviour in the past as well. Is it a mis
Tom Lane <[EMAIL PROTECTED]> writes:
> I see no reason to hardwire such a number. On any hardware, the
> distribution is going to be double-humped, and it will be pretty easy to
> determine a cutoff after minimal accumulation of data.
Well my stats-fu isn't up to the task. My hunch is that th
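The stats needn't be deep. One plausible non-hardwired approach (the heuristic and names below are mine, not a proposal from the thread): sort the timed fetches and put the cutoff at the widest gap between consecutive samples in log space — cache hits cluster around microseconds, physical reads around milliseconds, and the valley between the humps is unmistakable.

```python
import math

def pick_cutoff(samples):
    """Given latencies drawn from a double-humped (bimodal)
    distribution, return the geometric midpoint of the widest
    log-space gap between consecutive sorted samples. Toy heuristic."""
    xs = sorted(samples)
    best_gap, cutoff = 0.0, None
    for lo, hi in zip(xs, xs[1:]):
        gap = math.log(hi / lo)
        if gap > best_gap:
            best_gap, cutoff = gap, math.sqrt(lo * hi)
    return cutoff

# Cached fetches in microseconds, real reads in milliseconds: the
# chosen cutoff lands between the two humps, with no magic 100us.
cut = pick_cutoff([2e-6, 3e-6, 5e-6, 8e-6, 4e-3, 6e-3, 9e-3])
```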
Kenneth Marshall <[EMAIL PROTECTED]> writes:
> How invasive would reading the "CPU counter" be, if it is available?
Invasive or not, this is out of the question; too unportable.
regards, tom lane
Greg Stark <[EMAIL PROTECTED]> writes:
> So I would suggest using something like 100us as the threshold for
> determining whether a buffer fetch came from cache.
I see no reason to hardwire such a number. On any hardware, the
distribution is going to be double-humped, and it will be pretty easy to
determine a cutoff after minimal accumulation of data.
Greg Stark <[EMAIL PROTECTED]> writes:
> However I wonder about another approach entirely. If postgres timed how long
> reads took it shouldn't find it very hard to distinguish between a cached
> buffer being copied and an actual i/o operation. It should be able to track
> the percentage of time t
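Greg's timing idea is easy to sketch (the helper below is mine, using a temp file; the 100us figure is the one floated in the thread, not a PostgreSQL constant): time each read and compare against a threshold. Re-reads of a just-written block are cache hits and come in well under it.

```python
import os
import tempfile
import time

def timed_pread(fd, size, offset):
    """Time a single pread() so the caller can judge from elapsed wall
    time whether the fetch came from the OS buffer cache (microseconds)
    or needed physical I/O (milliseconds)."""
    t0 = time.perf_counter()
    data = os.pread(fd, size, offset)
    return data, time.perf_counter() - t0

# Usage: write a block, then read it several times; the re-reads are
# cache hits, so the best observed time falls under the 100us cutoff.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 8192)
    path = f.name
fd = os.open(path, os.O_RDONLY)
timings = [timed_pread(fd, 8192, 0) for _ in range(5)]
os.close(fd)
os.remove(path)
best = min(t for _, t in timings)
cached = best < 100e-6   # the suggested (not hardwired!) threshold
```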
Tom Lane <[EMAIL PROTECTED]> writes:
> However, I'm still really nervous about the idea of using
> effective_cache_size to control the ARC algorithm. That number is
> usually entirely bogus.
It wouldn't be too hard to have a port-specific function that tries to guess
the total amount of memory
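In the spirit of that port-specific probe, POSIX systems can get most of the way with sysconf() (a sketch; the function name is mine, and the "port-specific" part is precisely that each non-POSIX platform would need its own version):

```python
import os

def guess_total_memory_bytes():
    """Best-effort guess at physical RAM via POSIX sysconf().
    Returns None where the names are unavailable, which is where a
    real implementation would fall back to a per-platform probe."""
    try:
        pages = os.sysconf("SC_PHYS_PAGES")
        page_size = os.sysconf("SC_PAGE_SIZE")
    except (ValueError, OSError, AttributeError):
        return None
    if pages <= 0 or page_size <= 0:
        return None
    return pages * page_size
```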
Jan Wieck <[EMAIL PROTECTED]> writes:
> This all only holds water, if the OS is allowed to swap out shared
> memory. And that was my initial question, how likely is it to find this
> to be true these days?
I think it's more likely than not that the OS will consider shared
memory to be potentially
On 10/22/2004 4:09 PM, Kenneth Marshall wrote:
On Fri, Oct 22, 2004 at 03:35:49PM -0400, Jan Wieck wrote:
On 10/22/2004 2:50 PM, Simon Riggs wrote:
>I've been using the ARC debug options to analyse memory usage on the
>PostgreSQL 8.0 server. This is a precursor to more complex performance
>analysis work on the OSDL test suite.
On Fri, Oct 22, 2004 at 03:35:49PM -0400, Jan Wieck wrote:
> On 10/22/2004 2:50 PM, Simon Riggs wrote:
>
> >I've been using the ARC debug options to analyse memory usage on the
> >PostgreSQL 8.0 server. This is a precursor to more complex performance
> >analysis work on the OSDL test suite.
> >
>
On Fri, 2004-10-22 at 21:45, Tom Lane wrote:
> Jan Wieck <[EMAIL PROTECTED]> writes:
> > What do you think about my other theory to make C actually 2x effective
> > cache size and NOT to keep T1 in shared buffers but to assume T1 lives
> > in the OS buffer cache?
>
> What will you do when initially fetching a page? It's not supposed to
> go directly in
Jan Wieck <[EMAIL PROTECTED]> writes:
> What do you think about my other theory to make C actually 2x effective
> cache size and NOT to keep T1 in shared buffers but to assume T1 lives
> in the OS buffer cache?
What will you do when initially fetching a page? It's not supposed to
go directly in
On 10/22/2004 4:21 PM, Simon Riggs wrote:
On Fri, 2004-10-22 at 20:35, Jan Wieck wrote:
On 10/22/2004 2:50 PM, Simon Riggs wrote:
>
> My proposal is to alter the code to allow an array of memory linked
> lists. The actual list would be [0] - other additional lists would be
> created dynamically as required i.e. not using IFD
On Fri, 2004-10-22 at 20:35, Jan Wieck wrote:
> On 10/22/2004 2:50 PM, Simon Riggs wrote:
>
> >
> > My proposal is to alter the code to allow an array of memory linked
> > lists. The actual list would be [0] - other additional lists would be
> > created dynamically as required i.e. not using IFD
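Simon's mail is truncated, but as I read the proposal — slot [0] is the live list, with further lists created on first use rather than declared statically — it amounts to a grow-on-demand array of lists (the class below is my guess at the shape, purely illustrative):

```python
class ListArray:
    """Sketch of a dynamically grown array of lists: index [0] always
    exists; higher slots are materialized only when first touched.
    Hypothetical reading of a truncated proposal, not PostgreSQL code."""
    def __init__(self):
        self.lists = [[]]            # [0] is the actual list

    def get(self, i):
        while len(self.lists) <= i:  # create intermediate lists on demand
            self.lists.append([])
        return self.lists[i]
```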
On 10/22/2004 2:50 PM, Simon Riggs wrote:
I've been using the ARC debug options to analyse memory usage on the
PostgreSQL 8.0 server. This is a precursor to more complex performance
analysis work on the OSDL test suite.
I've simplified some of the ARC reporting into a single log line, which
is enclosed here as a patch on freelist.c.
I've been using the ARC debug options to analyse memory usage on the
PostgreSQL 8.0 server. This is a precursor to more complex performance
analysis work on the OSDL test suite.
I've simplified some of the ARC reporting into a single log line, which
is enclosed here as a patch on freelist.c. This