Josh Berkus wrote:
> The whole point of tablespaces is to allow placing individual tables
> and indexes on separate volumes.
That was one reason. I seem to recall several more:
* Putting data on cost-appropriate media
Mentioned previously in this thread
* Balancing I/O across spindles
Also mentioned previously in this thread
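For concreteness, here is a sketch of what that buys you, using the
tablespace syntax that eventually appeared in 8.0 (nothing like it
exists in 7.4); the paths and names are made up:

CREATE TABLESPACE fast_disks LOCATION '/raid10/pgdata';   -- e.g. striped/mirrored volume
CREATE TABLESPACE cheap_disks LOCATION '/sata/pgdata';    -- e.g. big, slow drives

-- Rarely touched archive data lives on the cheap media ...
CREATE TABLE order_archive (order_id integer, filed_on date)
    TABLESPACE cheap_disks;

-- ... while the index it is searched through sits on the fast spindles,
-- which also spreads the I/O across devices.
CREATE INDEX order_archive_id_idx ON order_archive (order_id)
    TABLESPACE fast_disks;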
Has anyone seriously looked at how it would impact things to give the
DBA the option of storing certain indices in RAM instead of on disk?
Queries (both select and insert/update) against heavily indexed tables
do most of the reads and writes to the index trees and relatively little
reading and writing of the table data itself.
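As far as I know there is no "keep this index in RAM" knob; the nearest
approximations are making shared_buffers (plus the OS cache) large enough
that the hot index pages simply stay resident, or, once tablespaces
exist, putting an index on a RAM-backed filesystem and accepting that it
must be rebuilt after every restart. A rough way to check whether an
index could be cached in its entirety (the index name is hypothetical;
relpages is an estimate maintained by VACUUM/ANALYZE, counted in 8 KB
blocks):

SELECT relname, relpages, relpages * 8 AS approx_kb
FROM pg_class
WHERE relname = 'some_hot_index';

If that figure fits comfortably inside shared_buffers (also counted in
8 KB pages), the index is effectively memory-resident after its first
scan anyway.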
I combed the archives but could not find anything on this, and am
amazed it hasn't been discussed.
My experience with Oracle (and now limited experience with Pg) is that
the major choke point in performance is not the CPU or read I/O; it is
the log performance of big update and select statements.
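Some rough arithmetic shows why a big update leans so hard on the log
(the 100 bytes/row figure is purely illustrative):

  10,000,000 rows updated x ~100 bytes of WAL per row  ~= 1 GB of log
  default checkpoint_segments = 3, 16 MB per segment   ~= 48 MB per checkpoint cycle

so one bulk UPDATE can force a couple of dozen checkpoints while it
runs, each flushing dirty buffers behind it. That is exactly the
pressure the larger wal_buffers / checkpoint_segments settings mentioned
later in this thread relieve.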
I can see that, and I have considered it.
The seed state would need to be saved, or any particular command that is
not reproducible would need to be exempted from this sort of logging.
Again, this would apply only to situations where a small SQL command
created huge changes.
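To illustrate the exemption (table and column names are invented): a
statement is replayable from its text only if it computes the same
result every time, so anything involving now(), random(), sequences and
the like would have to fall back to ordinary logging.

-- Deterministic: replaying the text reproduces exactly the same changes.
UPDATE accounts SET dormant = true WHERE last_login < '2004-01-01';

-- Not reproducible: now() evaluates differently on replay, so this kind
-- of statement would be exempt and logged the normal way.
UPDATE sessions SET expires_at = now() + interval '1 hour';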
Marty
Rod Taylor wrote:
I
updates happen atomically so that they don't disrupt web activity.
Maybe this is not a "traditional" RDBMS app, but I am not in the mood to
write my own storage infrastructure for it.
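For what it is worth, the atomic swap you describe falls out of ordinary
transactions and MVCC; a hypothetical nightly load could be as simple as:

BEGIN;
DELETE FROM catalog;                                 -- drop yesterday's rows
INSERT INTO catalog SELECT * FROM catalog_staging;   -- load today's
COMMIT;

Web sessions reading catalog keep seeing the old contents, without
blocking, until the COMMIT; they never observe the half-loaded state.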
Then again, maybe I don't know what I am talking about...
Marty
Sailesh Krishnamurthy wrote:
ice as often.
I did that and it helped tremendously. Without proper tuning, I just
made the numbers pretty large:
shared_buffers = 10
sort_mem = 131072            # in KB, so 128 MB per sort
vacuum_mem = 65536           # in KB, so 64 MB
wal_buffers = 8192           # in 8 KB pages, so 64 MB of WAL buffers
checkpoint_segments = 32     # 16 MB each, so roughly 512 MB of WAL between checkpoints
Thanks for your feedback.
Sincerely,
Marty
Tom Lane wrote:
Marty Scholes <[EMAIL PROTECTED]> writes:
I think you may be right. I suspect that most "busy" installations run
a large number of "light" update/delete/insert statements.
In this scenario, the kind of logging I am talking about would make
things worse, much worse.
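Rough numbers (guesses, but the right order of magnitude) make the
point: a single-row UPDATE generates maybe a few hundred bytes of WAL,
and its SQL text is about the same size, so logging the statement saves
nothing; replaying it also means parsing, planning and an index descent
instead of a blind page write. The idea only wins in the opposite case,
e.g. a 30-byte "UPDATE big_table SET flag = 0" that rewrites millions of
rows and gigabytes of WAL.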
Marty
Rod Taylor wrote:
On Thu, 2004-03-11 at
nder it hasn't been implemented yet. ;-)
Thanks again,
Marty
Sailesh Krishnamurthy wrote:
(Just a note: my comments are not pg-specific .. indeed I don't know
much about pg recovery).
"Marty" == Marty Scholes <[EMAIL PROTECTED]> writes:
Marty> If the DB state cannot be
If I understand WAL correctly (and I may not), it is essentially a write
cache for writes to the data files, because:
1. Data file writes are notoriously random, and writing the log is
sequential. Ironically, the sectors mapped by the OS to the disk are
likely not at all sequential, but they are still laid out far more
favorably than scattered writes to the data files would be.
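Back-of-the-envelope, with illustrative numbers for today's disks:
flushing 10,000 dirty 8 KB pages scattered across the data files at
roughly 150 random writes/second takes about a minute; the same 80 MB
appended to the log at 30-40 MB/s sequential takes two or three seconds.
Even if the log's blocks are not physically contiguous, the drive spends
far less time seeking.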
Tom Lane wrote:
Your analysis is missing an important point, which is what happens when
multiple transactions successively modify the same page. With a
sync-the-data-files approach, we'd have to write the data page again for
each commit. With WAL, the data page will likely not get written at all
until the next checkpoint.
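A schematic example: if 100 successive transactions each commit a change
to rows on the same 8 KB page, syncing the data files at every commit
means writing (and fsyncing) that page 100 times. With WAL it is 100
small records appended sequentially to the log, while the data page
itself goes to disk roughly once, at the next checkpoint (plus one full
copy of the page into the WAL the first time it is touched after a
checkpoint).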