On Wed, Dec 20, 2023 at 7:11 AM Tomas Vondra
<tomas.von...@enterprisedb.com> wrote:

> I was going through it to understand the idea; a couple of observations:
--

+ for (int i = 0; i < PREFETCH_LRU_SIZE; i++)
+ {
+     entry = &prefetch->prefetchCache[lru * PREFETCH_LRU_SIZE + i];
+
+     /* Is this the oldest prefetch request in this LRU? */
+     if (entry->request < oldestRequest)
+     {
+         oldestRequest = entry->request;
+         oldestIndex = i;
+     }
+
+     /*
+      * If the entry is unused (identified by request being set to 0),
+      * we're done. Notice the field is uint64, so empty entry is
+      * guaranteed to be the oldest one.
+      */
+     if (entry->request == 0)
+         continue;

If 'entry->request == 0', then we should break instead of continue, right?

---

/*
 * Used to detect sequential patterns (and disable prefetching).
 */
#define PREFETCH_QUEUE_HISTORY 8
#define PREFETCH_SEQ_PATTERN_BLOCKS 4

If we search only 4 blocks for sequential patterns, why are we maintaining history for 8 blocks?

---

+ *
+ * XXX Perhaps this should be tied to effective_io_concurrency somehow?
+ *
+ * XXX Could it be harmful that we read the queue backwards? Maybe memory
+ * prefetching works better for the forward direction?
+ */
+ for (int i = 1; i < PREFETCH_SEQ_PATTERN_BLOCKS; i++)

Correct; I think if we fetch this forward, it will have an advantage with memory prefetching.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com