Curt Sampson <[EMAIL PROTECTED]> writes:
> Back when I was working out how to do this, I reckoned that you could
> use mmap by keeping a write queue for each modified page. Reading,
> you'd have to read the datum from the page and then check the write
> queue for that page to see if that datum had
this. The SUS text is a bit weaselly ("the application must ensure
correct synchronization") but the HPUX mmap man page, among others,
lays it on the line:
    It is also unspecified whether write references to a memory region
    mapped with MAP_SHARED are visible to processes reading the file
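
A minimal sketch of the write-queue scheme Curt describes, assuming
queue entries are kept oldest-first; every name below is illustrative,
not a PostgreSQL API. Reads start from the mapped bytes, then lay any
still-queued writes on top:

    #include <stddef.h>
    #include <string.h>

    /* Illustrative only: one pending write() whose effect may not yet
     * be visible through the MAP_SHARED mapping. */
    typedef struct PendingWrite
    {
        struct PendingWrite *next;  /* oldest-first list for this page */
        size_t  offset;             /* byte offset within the page */
        size_t  len;                /* number of queued bytes */
        char    data[];             /* the queued bytes themselves */
    } PendingWrite;

    /* Read len bytes at offset: take the mapped page as the base, then
     * apply queued writes in order, so the newest data wins. */
    static void
    read_through_queue(const char *mapped_page, const PendingWrite *queue,
                       size_t offset, size_t len, char *out)
    {
        memcpy(out, mapped_page + offset, len);
        for (const PendingWrite *w = queue; w != NULL; w = w->next)
        {
            size_t  lo = offset > w->offset ? offset : w->offset;
            size_t  hi = offset + len < w->offset + w->len ?
                         offset + len : w->offset + w->len;

            if (lo < hi)
                memcpy(out + (lo - offset), w->data + (lo - w->offset),
                       hi - lo);
        }
    }
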
Tom Lane wrote:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > Hmm...something just occurred to me about this.
>
> > Would a hybrid approach be possible? That is, use mmap() to handle
> > reads, and use write() to handle writes?
>
> Nope. Have you read the specs regarding mmap-vs-stdio synchronization?

Kevin Brown <[EMAIL PROTECTED]> writes:
> Hmm...something just occurred to me about this.
> Would a hybrid approach be possible? That is, use mmap() to handle
> reads, and use write() to handle writes?
Nope. Have you read the specs regarding mmap-vs-stdio synchronization?
Basically it says that
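
The synchronization the spec demands would look something like the
sketch below - untested, and it assumes block offsets are page-aligned
(PostgreSQL's 8K blocks would be). Without the msync() call, SUS leaves
it unspecified whether the mapping ever sees the write():

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hybrid scheme: read through a MAP_SHARED mapping, modify with
     * write(), then force the mapping back into sync. */
    int
    update_block(int fd, char *map, off_t blkoff, const void *buf,
                 size_t len)
    {
        if (pwrite(fd, buf, len, blkoff) != (ssize_t) len)
            return -1;
        /* Discard cached copies so later reads through the mapping see
         * the file's current contents. */
        return msync(map + blkoff, len, MS_ASYNC | MS_INVALIDATE);
    }
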
Quoth [EMAIL PROTECTED] ("Simon Riggs"):
> I say this: ARC in 8.0 PostgreSQL allows us to sensibly allocate as
> large a shared_buffers cache as is required by the database
> workload, and this should not be constrained to a small percentage
> of server RAM.
I don't think that this particularly follows.

Simon,
> If you draw a graph of speedup (y) against cache size as a
> % of total database size, the graph looks like an upside-down "L" - i.e.
> the graph rises steeply as you give it more memory, then turns sharply at a
> particular point, after which it flattens out. The "turning point" is th
First off, I'd like to get involved with these tests - only pressure of
other work has prevented me.
Here's my take on the results so far:
I think taking the ratio of the memory allocated to shared_buffers against
the total memory available on the server is completely fallacious. That is
why the
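
A toy model (my own illustration, not from the thread) makes the shape
concrete: with a uniformly-accessed hot set of H bytes and a cache of C
bytes, the hit rate rises roughly as C/H and then pins at 1.0, so the
"turning point" is simply C = H:

    /* Purely illustrative: cache hit rate for a uniformly accessed hot
     * set.  Rises linearly, then flattens once the cache covers the
     * hot set -- the upside-down "L". */
    double
    hit_rate(double cache_bytes, double hot_set_bytes)
    {
        return cache_bytes >= hot_set_bytes
               ? 1.0
               : cache_bytes / hot_set_bytes;
    }
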
Tom Lane wrote:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> mmap() is Right Out because it does not afford us sufficient control
> >> over when changes to the in-memory data will propagate to disk.
>
> > ... that's especially true if we simply cannot
> > have the page written to disk in a partially-modified state

Kevin Brown <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> mmap() is Right Out because it does not afford us sufficient control
>> over when changes to the in-memory data will propagate to disk.
> ... that's especially true if we simply cannot
> have the page written to disk in a partially-modified state

I wrote:
> That said, if it's typical for many changes to be made to a page
> internally before PG needs to commit that page to disk, then your
> argument makes sense, and that's especially true if we simply cannot
> have the page written to disk in a partially-modified state (something
> I can easily
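
For the record, the defense PostgreSQL relies on is writing a complete
image of a page into WAL the first time the page is modified after a
checkpoint, so a torn (partially-written) page can be rebuilt during
recovery. A simplified sketch, with the WAL primitives stubbed out as
hypothetical externs:

    #include <stddef.h>

    typedef unsigned long long XLogRecPtr;  /* a WAL position */

    typedef struct Page
    {
        XLogRecPtr  lsn;    /* WAL position of the last change here */
        char        data[8192 - sizeof(XLogRecPtr)];
    } Page;

    /* Hypothetical stand-ins for the real WAL-insert primitives. */
    extern void       xlog_full_page_image(const Page *page);
    extern void       xlog_delta(const void *change, size_t len);
    extern XLogRecPtr xlog_current_lsn(void);

    /* First change after a checkpoint logs the whole page, so replay
     * can restore it even if the on-disk copy was torn; later changes
     * need only a delta record. */
    void
    log_page_change(Page *page, XLogRecPtr checkpoint_redo,
                    const void *change, size_t len)
    {
        if (page->lsn <= checkpoint_redo)
            xlog_full_page_image(page);
        else
            xlog_delta(change, len);
        page->lsn = xlog_current_lsn();
    }
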
Tom Lane wrote:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > This is why I sometimes wonder whether or not it would be a win to use
> > mmap() to access the data and index files --
>
> mmap() is Right Out because it does not afford us sufficient control
> over when changes to the in-memory data will propagate to disk.

Kevin Brown <[EMAIL PROTECTED]> writes:
> This is why I sometimes wonder whether or not it would be a win to use
> mmap() to access the data and index files --
mmap() is Right Out because it does not afford us sufficient control
over when changes to the in-memory data will propagate to disk. The
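
The control being lost is the write-ahead rule: the WAL describing a
page's changes must reach disk before the page itself does. With
write() the buffer manager can enforce that ordering explicitly,
roughly as below; with MAP_SHARED the kernel may write a dirty mapped
page back whenever it pleases, and there is no portable hook to stop
it. Helper names here are simplified stand-ins, not the real functions:

    typedef unsigned long long XLogRecPtr;  /* a WAL position */

    /* Hypothetical stand-ins for WAL flush and low-level block write. */
    extern void xlog_flush(XLogRecPtr upto);
    extern void smgr_write(int file, long blkno, const void *page);

    /* The ordering mmap() cannot guarantee: WAL first, page second. */
    void
    flush_buffer(int file, long blkno, const void *page,
                 XLogRecPtr page_lsn)
    {
        xlog_flush(page_lsn);           /* WAL records out to disk... */
        smgr_write(file, blkno, page);  /* ...only then the data page */
    }
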
Christopher Browne wrote:
> Increasing the number of cache buffers _is_ likely to lead to some
> slowdowns:
>
> - Data that passes through the cache also passes through kernel
>   cache, so it's recorded twice, and read twice...
Even worse, memory that's used for the PG cache is memory that's not
available to the kernel's page cache.

[EMAIL PROTECTED] (Josh Berkus) wrote:
> I've been trying to peg the "sweet spot" for shared memory using
> OSDL's equipment. With Jan's new ARC patch, I was expecting that
> the desired amount of shared_buffers to be greatly increased. This
> has not turned out to be the case.
That doesn't surprise me.

Tom,
> BTW, what is the actual size of the test database (disk footprint wise)
> and how much of that do you think is heavily accessed during the run?
> It's possible that the test conditions are such that adjusting
> shared_buffers isn't going to mean anything anyway.
The raw data is 32GB, but a

Josh Berkus <[EMAIL PROTECTED]> writes:
> Here's a top-level summary:
> shared_buffers   % RAM   NOTPM20*
>           1000    0.2%       1287
>          23000      5%       1507
>          46000     10%       1481
>          69000     15%       1382