Hi Robert, I was rebasing my patch to convert RELSEG_SIZE into an initdb-time setting, and thus a runtime variable, and I noticed this stack variable in src/backend/backup/basebackup_incremental.c:
GetFileBackupMethod(IncrementalBackupInfo *ib, const char *path,
                    BlockNumber *relative_block_numbers,
                    unsigned *truncation_block_length)
{
	BlockNumber absolute_block_numbers[RELSEG_SIZE];

I'll have to move that sucker onto the heap (we banned C99 variable length arrays and we don't use non-standard alloca()), but I think the coding in master is already a bit dodgy: that's 131072 * sizeof(BlockNumber) = 512kB with default configure options, but --with-segsize X could create a stack variable of up to 16GB, corresponding to segment size 32TB (meaning no segmentation at all). That wouldn't work. Shouldn't we move it to the heap? See attached draft patch.

Even on the heap, 16GB is too much to assume we can allocate during a base backup. I don't claim that's a real-world problem for incremental backup right now in master, because I don't have any evidence that anyone ever really uses --with-segsize (do they?), but if we make it an initdb option it will be more popular and this will become a problem. Hmm.
From 1d183245e9676ef45ca6a93e7d442ee903a2a14c Mon Sep 17 00:00:00 2001
From: Thomas Munro <thomas.mu...@gmail.com>
Date: Wed, 6 Mar 2024 17:44:19 +1300
Subject: [PATCH] Fix potential stack overflow in incremental basebackup.

Since the user can set RELSEG_SIZE to a high number at compile time,
it seems unwise to use it to size an array on the stack.
---
 src/backend/backup/basebackup_incremental.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/src/backend/backup/basebackup_incremental.c b/src/backend/backup/basebackup_incremental.c
index ebc41f28be5..3d0e7894fc3 100644
--- a/src/backend/backup/basebackup_incremental.c
+++ b/src/backend/backup/basebackup_incremental.c
@@ -28,6 +28,7 @@
 #include "common/int.h"
 #include "datatype/timestamp.h"
 #include "postmaster/walsummarizer.h"
+#include "storage/smgr.h"
 #include "utils/timestamp.h"
 
 #define BLOCKS_PER_READ			512
@@ -712,7 +713,7 @@ GetFileBackupMethod(IncrementalBackupInfo *ib, const char *path,
 					BlockNumber *relative_block_numbers,
 					unsigned *truncation_block_length)
 {
-	BlockNumber absolute_block_numbers[RELSEG_SIZE];
+	BlockNumber *absolute_block_numbers;
 	BlockNumber limit_block;
 	BlockNumber start_blkno;
 	BlockNumber stop_blkno;
@@ -839,8 +840,10 @@ GetFileBackupMethod(IncrementalBackupInfo *ib, const char *path,
 				errcode(ERRCODE_INTERNAL_ERROR),
 				errmsg_internal("overflow computing block number bounds for segment %u with size %zu",
 								segno, size));
+	absolute_block_numbers = palloc(sizeof(BlockNumber) * RELSEG_SIZE);
 	nblocks = BlockRefTableEntryGetBlocks(brtentry, start_blkno, stop_blkno,
-										  absolute_block_numbers, RELSEG_SIZE);
+										  absolute_block_numbers,
+										  RELSEG_SIZE);
 	Assert(nblocks <= RELSEG_SIZE);
 
 	/*
@@ -856,7 +859,10 @@ GetFileBackupMethod(IncrementalBackupInfo *ib, const char *path,
 	 * nothing good about sending an incremental file in that case.
 	 */
 	if (nblocks * BLCKSZ > size * 0.9)
+	{
+		pfree(absolute_block_numbers);
 		return BACK_UP_FILE_FULLY;
+	}
 
 	/*
 	 * Looks like we can send an incremental file, so sort the absolute the
@@ -872,6 +878,7 @@ GetFileBackupMethod(IncrementalBackupInfo *ib, const char *path,
 						  compare_block_numbers);
 	for (i = 0; i < nblocks; ++i)
 		relative_block_numbers[i] = absolute_block_numbers[i] - start_blkno;
+	pfree(absolute_block_numbers);
 	*num_blocks_required = nblocks;
 
 	/*
-- 
2.43.0