> [EMAIL PROTECTED] writes:
>>> No, it's all about time penalties and loss of concurrency.
>
>> I don't think that the amount of time it would take to calculate and
>> test the sum is even important. It may be in older CPUs, but these
>> days CPUs are so fast in RAM and a block is very small. On x86
>> systems, depending on page alignment, we are talking about two or
>> three pages that will be "in memory" (they were used to read the
>> block from disk or previously accessed).
>
> Your optimism is showing ;-). XLogInsert routinely shows up as a major
> CPU hog in any update-intensive test, and AFAICT that's mostly from the
> CRC calculation for WAL records.
>
> We could possibly use something cheaper than a real CRC, though. A
> word-wide XOR (ie, effectively a parity calculation) would be sufficient
> to detect most problems.
That was something I mentioned in my first response. If the *only*
purpose of the check is to produce a "pass" or "fail" status, and not
to locate where in the block the corruption is or to attempt to
regenerate the data, then we could certainly optimize the check
algorithm. A simple checksum may be good enough.
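
Just to make the idea concrete, here is a rough sketch of what a
word-wide XOR check over a block could look like. This is illustrative
C only, not anything from the backend; the function name and the
assumption that the buffer is word-aligned with a length that is a
multiple of sizeof(uint32_t) are mine:

#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative only: fold a buffer into a single word by XOR.
 * Catches any single-bit flip and many multi-bit errors, but
 * unlike a CRC it misses errors that cancel out across words.
 * Assumes data is word-aligned and len is a multiple of 4.
 */
static uint32_t
block_xor_parity(const void *data, size_t len)
{
    const uint32_t *words = (const uint32_t *) data;
    size_t          nwords = len / sizeof(uint32_t);
    uint32_t        acc = 0;

    for (size_t i = 0; i < nwords; i++)
        acc ^= words[i];

    return acc;
}

The appeal is that it is branch-free, needs no lookup table, and
vectorizes trivially; the trade-off is a much weaker guarantee than a
CRC, since for example two flips in the same bit position of different
words cancel out.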