I wrote:
> I don't have the URL at hand but it was posted just a few days ago.
... actually, it was the beginning of this here thread ...
regards, tom lane
Curt Sampson <[EMAIL PROTECTED]> writes:
> On Sun, 24 Oct 2004, Tom Lane wrote:
>> Considering that the available numbers suggest we could win just a few
>> percent...
> I must confess that I was completely unaware of these "numbers." Where
> do I find them?
The only numbers I've seen that direct
On Sun, 24 Oct 2004, Tom Lane wrote:
> Considering that the available numbers suggest we could win just a few
> percent...
I must confess that I was completely unaware of these "numbers." Where
do I find them?
cjs
--
Curt Sampson <[EMAIL PROTECTED]> +81 90 7737 2974 http://www.NetBSD.org
Curt Sampson <[EMAIL PROTECTED]> writes:
> I see the OS issues related to mapping that much memory as a much bigger
> potential problem.
I see potential problems everywhere I look ;-)
Considering that the available numbers suggest we could win just a few
percent (and that's assuming that all this
On Sun, 24 Oct 2004, Tom Lane wrote:
> > Well, one really can't know without testing, but memory copies are
> > extremely expensive if they go outside of the cache.
>
> Sure, but what about all the copying from write queue to page?
There's a pretty big difference between few-hundred-bytes-on-write
Curt Sampson <[EMAIL PROTECTED]> writes:
> On Sat, 23 Oct 2004, Tom Lane wrote:
>> Seems to me the overhead of any such scheme would swamp the savings from
>> avoiding kernel/userspace copies ...
> Well, one really can't know without testing, but memory copies are
> extremely expensive if they go outside of the cache.
On Sat, 23 Oct 2004, Tom Lane wrote:
> Seems to me the overhead of any such scheme would swamp the savings from
> avoiding kernel/userspace copies ...
Well, one really can't know without testing, but memory copies are
extremely expensive if they go outside of the cache.
> the locking issues alone
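The cost claim above can be illustrated with a quick throughput sketch (illustrative only, not from the thread; sizes and the function name are my own choices):

```python
# Quick throughput sketch: copy a buffer that fits in CPU cache vs. one
# that does not. Absolute numbers vary by machine, but the per-byte cost
# typically rises sharply once the working set exceeds the cache, which
# is the point being argued above.
import time

def copy_throughput(size, repeats):
    src = bytearray(size)
    dst = bytearray(size)
    t0 = time.perf_counter()
    for _ in range(repeats):
        dst[:] = src                      # memcpy-equivalent in CPython
    elapsed = time.perf_counter() - t0
    return size * repeats / elapsed       # bytes per second

small = copy_throughput(16 * 1024, 20000)     # ~16 KB: cache-resident
large = copy_throughput(64 * 1024 * 1024, 5)  # ~64 MB: spills to RAM
print(f"small-buffer copy: {small / 1e9:.1f} GB/s")
print(f"large-buffer copy: {large / 1e9:.1f} GB/s")
```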
Curt Sampson <[EMAIL PROTECTED]> writes:
> Back when I was working out how to do this, I reckoned that you could
> use mmap by keeping a write queue for each modified page. Reading,
> you'd have to read the datum from the page and then check the write
> queue for that page to see if that datum had
On Sat, 9 Oct 2004, Tom Lane wrote:
> mmap provides msync which is comparable to fsync, but AFAICS it
> provides no way to prevent an in-memory change from reaching disk too
> soon. This would mean that WAL entries would have to be written *and
> flushed* before we could make the data change at a
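The per-page write-queue idea being discussed can be sketched roughly as follows (a toy illustration with invented names, not PostgreSQL code). Changes are queued instead of applied to the mapped page, so nothing becomes visible to the OS until the covering WAL record has been flushed:

```python
# Toy sketch of a per-page write queue over an mmap'ed page: writes are
# queued, reads consult the page and then overlay any queued writes, and
# the queue is only applied (letting the change reach the OS-visible
# page, and hence disk) after the corresponding WAL flush.
PAGE_SIZE = 8192

class BufferedPage:
    def __init__(self, data=b"\x00" * PAGE_SIZE):
        self.page = bytearray(data)   # stands in for the mmap'ed page
        self.write_queue = []         # [(offset, bytes)] not yet applied

    def write(self, offset, data):
        # Queue the change; do NOT touch self.page yet.
        self.write_queue.append((offset, bytes(data)))

    def read(self, offset, length):
        # Read from the page, then overlay queued writes that cover the
        # requested range (later queue entries win).
        result = bytearray(self.page[offset:offset + length])
        for woff, wdata in self.write_queue:
            lo = max(offset, woff)
            hi = min(offset + length, woff + len(wdata))
            if lo < hi:
                result[lo - offset:hi - offset] = wdata[lo - woff:hi - woff]
        return bytes(result)

    def apply_queue(self):
        # Called only once the WAL covering these changes is flushed.
        for woff, wdata in self.write_queue:
            self.page[woff:woff + len(wdata)] = wdata
        self.write_queue.clear()

p = BufferedPage()
p.write(100, b"hello")
assert p.read(100, 5) == b"hello"              # visible via the overlay
assert bytes(p.page[100:105]) == b"\x00" * 5   # page itself untouched
p.apply_queue()                                # after WAL flush
assert bytes(p.page[100:105]) == b"hello"
```

The overlay on the read path is the overhead Tom is weighing against the saved kernel/userspace copies.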
On 10/14/2004 8:10 PM, Christopher Browne wrote:
> Quoth [EMAIL PROTECTED] ("Simon Riggs"):
>> I say this: ARC in 8.0 PostgreSQL allows us to sensibly allocate as
>> large a shared_buffers cache as is required by the database
>> workload, and this should not be constrained to a small percentage
>> of server RAM.
On 10/14/2004 6:36 PM, Simon Riggs wrote:
> [...]
> I think Jan has said this also in far fewer words, but I'll leave that to
> Jan to agree/disagree...
I do agree. The total DB size has as little to do with the optimum
shared buffer cache size as the total available RAM of the machine.
After reading y
this. The SUS text is a bit weaselly ("the application must ensure
correct synchronization") but the HPUX mmap man page, among others,
lays it on the line:
    It is also unspecified whether write references to a memory region
    mapped with MAP_SHARED are visible to processes reading the file
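A minimal Unix-only demonstration of the synchronization point that spec text is about, using Python's mmap wrapper (`flush()` is msync). On many modern systems with a unified page cache the store is visible to read() immediately, but portable code can only rely on it after msync:

```python
# Store through a MAP_SHARED mapping, then msync: only after the msync
# is read() on the file guaranteed to see the change. The temp file is
# incidental scaffolding.
import mmap, os, tempfile

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"\x00" * 4096)            # give the file one page
    m = mmap.mmap(fd, 4096, mmap.MAP_SHARED,
                  mmap.PROT_READ | mmap.PROT_WRITE)
    m[0:5] = b"HELLO"                       # store through the mapping
    m.flush()                               # msync(MS_SYNC)
    assert os.pread(fd, 5, 0) == b"HELLO"   # guaranteed visible now
    m.close()
finally:
    os.close(fd)
    os.unlink(path)
```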
Tom Lane wrote:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > Hmm...something just occurred to me about this.
>
> > Would a hybrid approach be possible? That is, use mmap() to handle
> > reads, and use write() to handle writes?
>
> Nope. Have you read the specs regarding mmap-vs-stdio synchronization?
Kevin Brown <[EMAIL PROTECTED]> writes:
> Hmm...something just occurred to me about this.
> Would a hybrid approach be possible? That is, use mmap() to handle
> reads, and use write() to handle writes?
Nope. Have you read the specs regarding mmap-vs-stdio synchronization?
Basically it says that
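The hybrid scheme under discussion might look roughly like this (a toy sketch with invented helper names, not a workable buffer manager), with the catch Tom is pointing at spelled out in the comments:

```python
# Reads via mmap, writes via pwrite(). SUS leaves it unspecified whether
# a write() to the file is visible through an existing MAP_SHARED
# mapping, so a portable implementation would have to re-validate or
# re-map after writing -- Linux's unified page cache merely happens to
# make it coherent.
import mmap, os, tempfile

PAGE = 4096

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"A" * PAGE)                     # initial file contents
    m = mmap.mmap(fd, PAGE, prot=mmap.PROT_READ)  # read-only mapping

    def read_page(off, n):
        # Read path: straight out of the mapping, no read() syscall.
        return m[off:off + n]

    def write_page(off, data):
        # Write path: ordinary pwrite(); the kernel decides when it
        # reaches disk (plus fsync for durability).
        os.pwrite(fd, data, off)

    assert read_page(0, 4) == b"AAAA"  # mapping sees pre-mapping contents
    write_page(0, b"BBBB")
    # Portable code could NOT assert read_page(0, 4) == b"BBBB" here.
    m.close()
finally:
    os.close(fd)
    os.unlink(path)
```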
Quoth [EMAIL PROTECTED] ("Simon Riggs"):
> I say this: ARC in 8.0 PostgreSQL allows us to sensibly allocate as
> large a shared_buffers cache as is required by the database
> workload, and this should not be constrained to a small percentage
> of server RAM.
I don't think that this particularly fo
Simon,
> If you draw a graph of speedup (y) against cache size as a
> % of total database size, the graph looks like an upside-down "L" - i.e.
> the graph rises steeply as you give it more memory, then turns sharply at a
> particular point, after which it flattens out. The "turning point" is th
First off, I'd like to get involved with these tests - pressure of other
work only has prevented me.
Here's my take on the results so far:
I think taking the ratio of the memory allocated to shared_buffers against
the total memory available on the server is completely fallacious. That is
why the
Tom Lane wrote:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> mmap() is Right Out because it does not afford us sufficient control
> >> over when changes to the in-memory data will propagate to disk.
>
> > ... that's especially true if we simply cannot
> > have the page written to disk in a partially-modified state
Jan Wieck <[EMAIL PROTECTED]> writes:
> On 10/8/2004 10:10 PM, Christopher Browne wrote:
>
> > [EMAIL PROTECTED] (Josh Berkus) wrote:
> >> I've been trying to peg the "sweet spot" for shared memory using
> >> OSDL's equipment. With Jan's new ARC patch, I was expecting that
> >> the desired amount of shared_buffers to be greatly increased. This
> >> has not turned out to be the case.
On 10/13/2004 11:52 PM, Greg Stark wrote:
> Jan Wieck <[EMAIL PROTECTED]> writes:
>> On 10/8/2004 10:10 PM, Christopher Browne wrote:
>>> [EMAIL PROTECTED] (Josh Berkus) wrote:
>>>> I've been trying to peg the "sweet spot" for shared memory using
>>>> OSDL's equipment. With Jan's new ARC patch, I was expecting that
Jan Wieck <[EMAIL PROTECTED]> writes:
> Which would require that shared memory is not allowed to be swapped out, and
> that is allowed in Linux by default IIRC, not to completely distort the entire
> test.
Well if it's getting swapped out then it's clearly not being used effectively.
There are A
On 10/14/2004 12:22 AM, Greg Stark wrote:
> Jan Wieck <[EMAIL PROTECTED]> writes:
>> Which would require that shared memory is not allowed to be swapped out, and
>> that is allowed in Linux by default IIRC, not to completely distort the entire
>> test.
> Well if it's getting swapped out then it's clearly not being used effectively.
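On Linux, keeping memory from being swapped out takes mlock(2) (or shmctl with SHM_LOCK for System V segments). A hedged ctypes sketch, since the Python standard library has no mlock wrapper; unprivileged processes are usually capped by RLIMIT_MEMLOCK (often only 64 KB), so the call is allowed to fail here:

```python
# Pin one page of memory so it cannot be swapped. mlock may legitimately
# fail with EPERM/ENOMEM for unprivileged processes, so failure is
# reported rather than treated as an error.
import ctypes, resource

libc = ctypes.CDLL(None, use_errno=True)

soft, hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)
print("RLIMIT_MEMLOCK soft/hard:", soft, hard)

buf = ctypes.create_string_buffer(4096)            # one page to pin
rc = libc.mlock(ctypes.byref(buf), ctypes.c_size_t(4096))
if rc == 0:
    print("page locked in RAM; unlocking")
    libc.munlock(ctypes.byref(buf), ctypes.c_size_t(4096))
else:
    print("mlock refused, errno", ctypes.get_errno())
```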
On 10/9/2004 7:20 AM, Kevin Brown wrote:
> Christopher Browne wrote:
>> Increasing the number of cache buffers _is_ likely to lead to some
>> slowdowns:
>> - Data that passes through the cache also passes through kernel
>> cache, so it's recorded twice, and read twice...
> Even worse, memory that's used for th
On Fri, 8 Oct 2004, Josh Berkus wrote:
> As you can see, the "sweet spot" appears to be between 5% and 10% of RAM,
> which is if anything *lower* than recommendations for 7.4!
What recommendation is that? To have shared buffers being about 10% of the
ram sounds familiar to me. What was recommended
Kevin Brown <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> mmap() is Right Out because it does not afford us sufficient control
>> over when changes to the in-memory data will propagate to disk.
> ... that's especially true if we simply cannot
> have the page written to disk in a partially-modified state
I wrote:
> That said, if it's typical for many changes to made to a page
> internally before PG needs to commit that page to disk, then your
> argument makes sense, and that's especially true if we simply cannot
> have the page written to disk in a partially-modified state (something
> I can easily
Tom Lane wrote:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > This is why I sometimes wonder whether or not it would be a win to use
> > mmap() to access the data and index files --
>
> mmap() is Right Out because it does not afford us sufficient control
> over when changes to the in-memory data will propagate to disk.
Kevin Brown <[EMAIL PROTECTED]> writes:
> This is why I sometimes wonder whether or not it would be a win to use
> mmap() to access the data and index files --
mmap() is Right Out because it does not afford us sufficient control
over when changes to the in-memory data will propagate to disk. The
Christopher Browne wrote:
> [EMAIL PROTECTED] (Josh Berkus) wrote:
>> This result is so surprising that I want people to take a look at it
>> and tell me if there's something wrong with the tests or some
>> bottlenecking factor that I've not seen.
> I'm aware of two conspicuous scenarios where ARC would
Christopher Browne wrote:
> Increasing the number of cache buffers _is_ likely to lead to some
> slowdowns:
>
> - Data that passes through the cache also passes through kernel
>cache, so it's recorded twice, and read twice...
Even worse, memory that's used for the PG cache is memory that's n
[EMAIL PROTECTED] (Josh Berkus) wrote:
> I've been trying to peg the "sweet spot" for shared memory using
> OSDL's equipment. With Jan's new ARC patch, I was expecting that
> the desired amount of shared_buffers to be greatly increased. This
> has not turned out to be the case.
That doesn't surprise me
Tom,
> BTW, what is the actual size of the test database (disk footprint wise)
> and how much of that do you think is heavily accessed during the run?
> It's possible that the test conditions are such that adjusting
> shared_buffers isn't going to mean anything anyway.
The raw data is 32GB, but a
Josh Berkus <[EMAIL PROTECTED]> writes:
> Here's a top-level summary:
> shared_buffers   % RAM   NOTPM20*
>  1000            0.2%    1287
> 23000            5%      1507
> 46000            10%     1481
> 69000            15%     1382