Hi!

On Sun, 2022-11-13 at 00:17:36 +0100, Aurelien Jarno wrote:
> On 2022-11-12 22:28, Guillem Jover wrote:
> > On Fri, 2022-11-11 at 19:15:59 +0100, Manuel A. Fernandez Montecelo wrote:
> > > Package: dpkg
> > > Version: 1.21.9
> > > Severity: normal
> > > X-Debbugs-Cc: m...@debian.org, debian-wb-team@lists.debian.org
> >
> > > After some investigation by aurel32 and myself, this was traced back
> > > to the commit f8d254943051e085040367d689048c00f31514c3 [2], in which
> > > the calculation of the memory that can be used, to determine the
> > > number of threads to use, was changed from half of the physical mem
> > > to be based on the memory available.
> >
> > Ah, thanks for tracking this down! I think the problem is the usual
> > "available" memory does not really mean what people think it means. :/
> > And I unfortunately missed that (even though I was aware of it) when
> > reviewing the patch.
> >
> > Attached is something I just quickly prepared, which I'll clean up and
> > merge for the upcoming 1.21.10. Let me know if that solves the issue
> > for you, otherwise we'd need to look for further changes.
>
> Thanks for providing a patch. I have not been able yet to try it for the
> case where we have found the issue, i.e. building linux. However I have
> tried to set up a similar environment:
> - I took a just booted VM with 4 GB RAM, 4 GB swap and 4 GB tmpfs, and
>   very few things running on it.
> - I filled the tmpfs with 4 GB of random data, which means that after
>   moving the content of the tmpfs to the swap, 4 GB could still be used
>   without issue.
> - I ended up with the following /proc/meminfo:
>   MemTotal:        3951508 kB
>   MemFree:          130976 kB
>   MemAvailable:      10584 kB
>   Buffers:            2448 kB
>   Cached:          3694676 kB
>   SwapCached:        12936 kB
>   Active:          3111920 kB
>   Inactive:         610376 kB
>   Active(anon):    3102668 kB
>   Inactive(anon):   606952 kB
>   Active(file):       9252 kB
>   Inactive(file):     3424 kB
>   Unevictable:           0 kB
>   Mlocked:               0 kB
>   SwapTotal:       4194300 kB
>   SwapFree:        3777400 kB
>   Zswap:                 0 kB
>   Zswapped:              0 kB
>   Dirty:                 0 kB
>   Writeback:             0 kB
>   AnonPages:         12960 kB
>   Mapped:             6700 kB
>   Shmem:           3684416 kB
>   KReclaimable:      27616 kB
>   Slab:              54652 kB
>   SReclaimable:      27616 kB
>   SUnreclaim:        27036 kB
>   KernelStack:        2496 kB
>   PageTables:         1516 kB
>   NFS_Unstable:          0 kB
>   Bounce:                0 kB
>   WritebackTmp:          0 kB
>   CommitLimit:     6170052 kB
>   Committed_AS:    4212940 kB
>   VmallocTotal:   34359738367 kB
>   VmallocUsed:       16116 kB
>   VmallocChunk:          0 kB
>   Percpu:             2288 kB
>   HardwareCorrupted:     0 kB
>   AnonHugePages:         0 kB
>   ShmemHugePages:        0 kB
>   ShmemPmdMapped:        0 kB
>   FileHugePages:         0 kB
>   FilePmdMapped:         0 kB
>   HugePages_Total:       0
>   HugePages_Free:        0
>   HugePages_Rsvd:        0
>   HugePages_Surp:        0
>   Hugepagesize:       2048 kB
>   Hugetlb:               0 kB
>   DirectMap4k:      110452 kB
>   DirectMap2M:     5132288 kB
>   DirectMap1G:     5242880 kB
>
> With the current version of dpkg, it means it considers that 10584 kB are
> available (note however that there are 130976 kB of unused physical RAM).
> With your patch, it's a bit better, as it would be 123408 kB. Still far
> less than what the VM is capable of.

Err sorry, the patch was computing the used memory and not the truly
available one! The updated patch should do better. :)

Thanks,
Guillem
diff --git i/lib/dpkg/compress.c w/lib/dpkg/compress.c
index 8cfba80cc..9b02b48b7 100644
--- i/lib/dpkg/compress.c
+++ w/lib/dpkg/compress.c
@@ -605,8 +605,14 @@ filter_lzma_error(struct io_lzma *io, lzma_ret ret)
  * page cache may be purged, not everything will be reclaimed that might be
  * reclaimed, watermarks are considered.
  */
-static const char str_MemAvailable[] = "MemAvailable";
-static const size_t len_MemAvailable = sizeof(str_MemAvailable) - 1;
+
+struct mem_field {
+	const char *name;
+	ssize_t len;
+	int tag;
+	uint64_t *var;
+};
+#define MEM_FIELD(name, tag, var) name, sizeof(name) - 1, tag, &var
 
 static int
 get_avail_mem(uint64_t *val)
@@ -615,6 +621,14 @@ get_avail_mem(uint64_t *val)
 	char *str;
 	ssize_t bytes;
 	int fd;
+	uint64_t mem_free, mem_buffers, mem_cached;
+	struct mem_field fields[] = {
+		{ MEM_FIELD("MemFree", 0x1, mem_free) },
+		{ MEM_FIELD("Buffers", 0x2, mem_buffers) },
+		{ MEM_FIELD("Cached", 0x4, mem_cached) },
+	};
+	const int want_tags = 0x7;
+	int seen_tags = 0;
 
 	*val = 0;
 
@@ -632,14 +646,23 @@ get_avail_mem(uint64_t *val)
 
 	str = buf;
 	while (1) {
+		struct mem_field *field = NULL;
 		char *end;
+		size_t f;
 
 		end = strchr(str, ':');
 		if (end == 0)
 			break;
 
-		if ((end - str) == len_MemAvailable &&
-		    strncmp(str, str_MemAvailable, len_MemAvailable) == 0) {
+		for (f = 0; f < array_count(fields); f++) {
+			if ((end - str) == fields[f].len &&
+			    strncmp(str, fields[f].name, fields[f].len) == 0) {
+				field = &fields[f];
+				break;
+			}
+		}
+
+		if (field) {
 			intmax_t num;
 
 			str = end + 1;
@@ -657,16 +680,25 @@
 			/* This should not overflow, but just in case. */
 			if (num < (INTMAX_MAX / 1024))
 				num *= 1024;
-			*val = num;
-			return 0;
+
+			*field->var = num;
+			seen_tags |= field->tag;
 		}
 
+		if (seen_tags == want_tags)
+			break;
+
 		end = strchr(end + 1, '\n');
 		if (end == 0)
 			break;
 		str = end + 1;
 	}
-	return -1;
+
+	if (seen_tags != want_tags)
+		return -1;
+
+	*val = mem_free + mem_buffers + mem_cached;
+	return 0;
 }
 #else
 static int