On Wed, Apr 22, 2020 at 3:55 PM Ranier Vilela <ranier...@gmail.com> wrote:
> per Coverity.
Some Postgres hackers have access to a dedicated Coverity installation,
and this issue has probably already been dismissed there.

> 1. assign_zero: Assigning: opaque = NULL.
> 2. Condition buf < 0, taking true branch.
> 3. Condition !(((PageHeader)page)->pd_upper == 0), taking false branch.
> 4. Condition blkno != orig_blkno, taking true branch.
> 5. Condition _bt_page_recyclable(page), taking false branch.
> CID 1314742 (#2 of 2): Explicit null dereferenced (FORWARD_NULL)
> 6. var_deref_op: Dereferencing null pointer opaque.

This is a false positive. btvacuumpage() is nominally a recursive
function, but in practice the only caller always passes the same block
number for both blkno and orig_blkno -- the tail recursion is actually
implemented using goto/a loop.

Maybe we should make the btvacuumpage() orig_blkno argument into a
local variable, though. It doesn't feel particularly natural to think
of btvacuumpage() as a recursive function.

I don't like this comment:

    /*
     * This is really tail recursion, but if the compiler is too stupid to
     * optimize it as such, we'd eat an uncomfortably large amount of stack
     * space per recursion level (due to the arrays used to track details of
     * deletable/updatable items).  A failure is improbable since the number
     * of levels isn't likely to be large ... but just in case, let's
     * hand-optimize into a loop.
     */
    if (recurse_to != P_NONE)
    {
        blkno = recurse_to;
        goto restart;
    }

This almost sounds like it's talking about the number of levels of the
tree, where having more than 5 levels is highly unlikely, and having
more than 10 levels is probably absolutely impossible with standard
BLCKSZ, even with the largest possible tuples (I have tried). We do
process levels of the tree recursively during page splits (and during
page deletions, which are quite unrelated to this code), but this is
very different. Roughly speaking, the depth of the "recursion" here is
bounded by the number of page splits that occur after the VACUUM
begins. I'm willing to believe that that's quite rare, but I also
think that there might be workloads where the physical scan has to
"backtrack" thousands of times.
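To be concrete, here is a minimal sketch of the refactoring I have in
mind -- illustrative only, not a patch. The real btvacuumpage() carries
much more state and performs many more page checks, and "scanblkno" is
just a name I'm picking here for the block that the caller's
physical-order scan has reached (it assumes the usual nbtree.h
declarations):

    /*
     * Sketch: orig_blkno demoted from parameter to local state, so the
     * function signature no longer pretends to be recursive.
     */
    static void
    btvacuumpage(BTVacState *vstate, BlockNumber scanblkno)
    {
        BlockNumber blkno = scanblkno;  /* current target page; only
                                         * differs from scanblkno when
                                         * we backtrack */
        BlockNumber recurse_to;

    restart:
        recurse_to = P_NONE;

        /*
         * ... read and examine page "blkno" here; when backtracking
         * (blkno != scanblkno), only a live leaf page stamped with the
         * current vacuum cycle ID needs any work ...
         */

        if (recurse_to != P_NONE)
        {
            /* hand-optimized tail recursion -- no per-"level" stack use */
            blkno = recurse_to;
            goto restart;
        }
    }

With blkno purely local it's obvious at a glance that the first
iteration can never take the blkno != scanblkno path, which is
precisely the invariant that Coverity fails to see.

--
Peter Geoghegan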