On Oct 17, 2008, at 4:30 AM, Vladimir Sitnikov wrote:
> Decibel! <[EMAIL PROTECTED]> wrote:
>> I had tried to use a normal table to store stats information, but several acrobatic hacks are needed to keep performance.
>
> I guess it is not really required to synchronize the stats into some physical table immediately. I would suggest keeping all the data in memory and having a job that periodically dumps snapshots into physical tables (with WAL etc.). In that case one would be able to compute the database workload as the difference between two given snapshots. From my point of view, it does not look like a performance killer to have snapshots every 15 minutes. It does not look too bad to lose the last 15 minutes of statistics in case of a database crash either.
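[Editor's note: a minimal sketch of the snapshot-difference idea described above. This is illustrative only, not PostgreSQL code; the structure and function names (StatsSnapshot, diff_snapshots) are assumptions.]

/*
 * Cumulative counters are dumped periodically; the workload for an
 * interval is the per-counter difference between two snapshots.
 */
#include <stdio.h>

#define NUM_COUNTERS 3

static const char *counter_names[NUM_COUNTERS] = {
    "tuples_inserted", "tuples_updated", "blocks_read"
};

typedef struct
{
    long        taken_at;               /* e.g. epoch seconds of the dump */
    long        values[NUM_COUNTERS];   /* cumulative counters at that time */
} StatsSnapshot;

/* Workload over an interval = later snapshot minus earlier snapshot. */
static void
diff_snapshots(const StatsSnapshot *older, const StatsSnapshot *newer)
{
    int         i;

    printf("workload over %ld seconds:\n", newer->taken_at - older->taken_at);
    for (i = 0; i < NUM_COUNTERS; i++)
        printf("  %-16s %ld\n", counter_names[i],
               newer->values[i] - older->values[i]);
}

int
main(void)
{
    /* Two snapshots 15 minutes (900 s) apart, with made-up cumulative values. */
    StatsSnapshot s1 = {1224230400, {1000, 200, 5000}};
    StatsSnapshot s2 = {1224231300, {1450, 320, 7200}};

    diff_snapshots(&s1, &s2);
    return 0;
}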
Yeah, that's exactly what I had in mind. I agree that trying to write to a real table for every counter update would be insane.
My thought was to treat the shared memory area as a buffer of stats counters. When you go to increment a counter, if it's not in the buffer then you'd read it out of the table, stick it in the buffer and increment it. As items age, they'd get pushed out of the buffer.
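[Editor's note: a minimal sketch of the buffer-of-counters idea above, with read-through loading and age-based eviction. It is not PostgreSQL code; all names (StatsSlot, load_counter_from_table, flush_counter_to_table) are hypothetical, and a real implementation would live in shared memory and write to an actual stats table.]

#include <stdio.h>
#include <string.h>

#define BUFFER_SLOTS 4          /* tiny on purpose, to show eviction */

typedef struct
{
    char        name[64];       /* counter identifier, e.g. "rel:12345" */
    long        value;          /* current in-memory count */
    long        last_used;      /* logical clock for aging */
    int         in_use;
} StatsSlot;

static StatsSlot buffer[BUFFER_SLOTS];
static long clock_tick = 0;

/* Stand-in for reading the persisted value out of the stats table. */
static long
load_counter_from_table(const char *name)
{
    (void) name;
    return 0;                   /* assume no prior history for the demo */
}

/* Stand-in for writing an aged-out counter back to the stats table. */
static void
flush_counter_to_table(const StatsSlot *slot)
{
    printf("flushing %s = %ld to table\n", slot->name, slot->value);
}

/* Find the counter in the buffer, loading (and possibly evicting) as needed. */
static StatsSlot *
get_slot(const char *name)
{
    StatsSlot  *victim = &buffer[0];
    int         i;

    for (i = 0; i < BUFFER_SLOTS; i++)
    {
        if (buffer[i].in_use && strcmp(buffer[i].name, name) == 0)
            return &buffer[i];
        if (!buffer[i].in_use)
            victim = &buffer[i];            /* prefer an empty slot */
        else if (victim->in_use && buffer[i].last_used < victim->last_used)
            victim = &buffer[i];            /* otherwise remember the oldest */
    }

    if (victim->in_use)
        flush_counter_to_table(victim);     /* push aged-out entry to the table */

    strncpy(victim->name, name, sizeof(victim->name) - 1);
    victim->name[sizeof(victim->name) - 1] = '\0';
    victim->value = load_counter_from_table(name);
    victim->in_use = 1;
    return victim;
}

static void
increment_counter(const char *name)
{
    StatsSlot  *slot = get_slot(name);

    slot->value++;
    slot->last_used = ++clock_tick;
}

int
main(void)
{
    increment_counter("rel:1001");
    increment_counter("rel:1002");
    increment_counter("rel:1001");
    increment_counter("rel:1003");
    increment_counter("rel:1004");
    increment_counter("rel:1005");  /* forces eviction of the oldest entry */
    return 0;
}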
-- 
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828