johannes hanika (2019-Jan-28, excerpt):
> re: malloc() and 0: linux overcommits, i.e. it will likely never
> return 0 even if your memory is all full. this means checking for 0 is
> completely useless in this context.

Actually, Linux allows disabling overcommit [1], and some admins
consider that a good idea (I'm not judging; a quick search will give
you plenty of arguments on either side).  You can easily try it:

    # sysctl vm.overcommit_memory=2

Unfortunately, overcommit has fostered a programming style of not
checking what `malloc` returns.  Reacting to a failed `malloc` should
be a matter of routine, perhaps via a simple wrapper.  Whatever the
wrapper does, it can hardly be worse than continuing to run on the
assumption that the memory is available.  While a plain `err(1, "out
of memory")` might end the program with locked databases and such,
similar damage can arise from an ignored failed `malloc`.

It's not only `malloc`, though: Some time ago I looked into the
darktable source dealing with sidecar files and found that even the
return codes from `open` were ignored.  It was programmed only for
the happy case.

____________________
[1] https://www.kernel.org/doc/Documentation/vm/overcommit-accounting


-- 
http://stefan-klinger.de                                        o/X
I prefer receiving plain text messages, not exceeding 32kB.     /\/
                                                                  \
___________________________________________________________________________
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org
