At 12:41 PM 4/23/02 -0400, Bruce Momjian wrote:
>This is an interesting point, that an index scan may fit in the cache
>while a sequential scan may not.
If so, I would expect the number of pages read to be significantly smaller than it was with a sequential scan. And if that's the case, doesn't that mean the optimizer made the wrong choice anyway?

BTW, I just did a quick walk down this chain of code to see what happens during a sequential scan:

    access/heap/heapam.c
    storage/buffer/bufmgr.c
    storage/smgr/smgr.c
    storage/smgr/md.c

and it looks to me like individual reads are done in BLCKSZ chunks, whether we're scanning or not.

I've heard that during a sequential scan it's more efficient to read in multiples of your block size -- say, 64K chunks rather than 8K chunks -- for each read operation you pass to the OS. Does anybody have experience indicating whether this is indeed the case? Has anybody ever added this to PostgreSQL and benchmarked it? Certainly if disks have a limit on I/O transactions per second as well as a throughput limit, it would be better to read in larger chunks.

cjs
--
Curt Sampson <[EMAIL PROTECTED]>  +81 90 7737 2974  http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC