Re: [PATCHv2] libelf: Return already gotten Elf_Data from elf_getdata_rawchunk
Hi,

On Fri, 2022-04-01 at 16:15 +0200, Mark Wielaard wrote:
> elf_getdata_rawchunk keeps a list of Elf_Data_Chunk to track which
> Elf_Data structures have already been requested. This allows elf_end
> to clean up all internal data structures and the Elf_Data d_buf if
> it was malloced.
>
> But it didn't check if a chunk was already requested earlier. This
> meant that if for example dwelf_elf_gnu_build_id was called multiple
> times to look up a build-id from the phdrs, a new Elf_Data_Chunk was
> created. This could slowly leak memory.
>
> So also keep track of the offset from which the size and type of
> the rawdata was requested, so we can return existing data if it is
> requested multiple times.
>
> Note that the current cache is a simple linked list, but the chain
> is normally not that long. It is normally used to get chunks from
> the phdrs, and there are normally fewer than 10.

I pushed this.

Cheers,

Mark
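[For illustration, the fix described above amounts to a keyed lookup
before allocation. A minimal sketch in C, not the actual libelf code;
the struct layout and function name are simplified stand-ins for
libelf's internal Elf_Data_Chunk handling:]

#include <stdint.h>
#include <stdlib.h>

struct data_chunk
{
  int64_t offset;            /* Raw offset the chunk was read from.   */
  size_t size;
  int type;                  /* Stands in for libelf's Elf_Type.      */
  void *buf;                 /* Freed by elf_end along with the list. */
  struct data_chunk *next;   /* Simple linked list, normally short.   */
};

static struct data_chunk *
get_chunk (struct data_chunk **head, int64_t offset, size_t size, int type)
{
  /* Return an already requested chunk if there is one...  */
  for (struct data_chunk *c = *head; c != NULL; c = c->next)
    if (c->offset == offset && c->size == size && c->type == type)
      return c;

  /* ...otherwise allocate a new one and prepend it to the list.  */
  struct data_chunk *c = calloc (1, sizeof *c);
  if (c == NULL)
    return NULL;
  c->offset = offset;
  c->size = size;
  c->type = type;
  c->next = *head;
  *head = c;
  return c;
}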
[Bug debuginfod/28708] run-debuginfod-webapi-concurrency.sh seems to be flaky
https://sourceware.org/bugzilla/show_bug.cgi?id=28708

--- Comment #13 from Mark Wielaard ---
(In reply to Evgeny Vereshchagin from comment #12)
> FWIW with
> https://sourceware.org/git/?p=elfutils.git;a=commit;h=e646e363e72e06e0ed5574c929236d815ddcbbaf
> applied the test appears to be flaky on Packit on s390x:
> https://copr-be.cloud.fedoraproject.org/results/packit/evverx-elfutils-73/fedora-35-s390x/03942110-elfutils/builder-live.log.gz

So that log contains the feared:

error_count{libmicrohttpd="Server reached connection limit. Closing inbound connection.\n"} 35

And sadly I have also been able to replicate that on another s390x
setup, even with all the latest patches. The thing they seem to have in
common is that they are both s390x and have only 2 cores.

If I lower the -C100 to -C32 in run-debuginfod-webapi-concurrency.sh it
does seem to always pass. But with -C50 or higher it does occasionally
fail (the higher, the more frequently it fails).

BTW. run-debuginfod-webapi-concurrency.sh seems stable on any other
system I've thrown it at. So it isn't exactly clear what "such a
system" is. Is it s390x specific?

-- 
You are receiving this mail because:
You are on the CC list for the bug.
[Bug debuginfod/29022] 000-permissions files cause problems for backups
https://sourceware.org/bugzilla/show_bug.cgi?id=29022

--- Comment #3 from Frank Ch. Eigler ---
I can't come up with a convincing example why 0-length files would be
bad, just general unease at the ambiguity. I don't mind switching to
them for now; at worst, later on we may have to revise and then arrange
to clean out caches once.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
Re: Using libcurl in another library, when/if to call curl_global_init?
* Mark Wielaard:

> Hi,
>
> On Thu, Mar 31, 2022 at 04:00:16PM +0300, Catalin Raceanu via
> curl-library wrote:
>> On 31-Mar-22 15:04, Mark Wielaard wrote:
>> > whether there is a thread-safe way to call
>> > curl_global_init at a later time (to get rid of the library
>> > constructor init function).
>>
>> I believe that this is an exact fit for C++11's std::call_once().
>> Boost also has an equivalent, which most likely predates it, in case
>> an older C++ standard is used.
>
> Thanks. Our library is pure C, but we can probably rely on
> pthread_once if it is allowable to call curl_global_init at a later
> time when multiple threads are already running.

The problem is that every caller of pthread_once needs to use the same
pthread_once_t synchronization object, so you still have the same
problem.

I strongly believe that library-safe code needs to perform lazy
initialization, that is, initialize the library on first use, and do
that in a thread-safe manner. It solves the synchronization issue
between different users of the API, and it's the only way to report
errors properly (for example, a failure in an ELF constructor can only
be reported via process termination). At least on Linux, the need to
support multiple different threading libraries is a thing of the past.

Thanks,
Florian
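[For illustration, the pthread_once pattern under discussion might look
like the sketch below; the names are made up. Note it only serializes
callers within one library, which is exactly Florian's objection:
another libcurl user in the same process could still race against it.]

#include <curl/curl.h>
#include <pthread.h>

static pthread_once_t curl_once = PTHREAD_ONCE_INIT;
static CURLcode curl_init_result = CURLE_FAILED_INIT;

static void
curl_do_init (void)
{
  /* Runs at most once, no matter how many of our threads race here.  */
  curl_init_result = curl_global_init (CURL_GLOBAL_DEFAULT);
}

/* Call at the top of every entry point that needs libcurl; returns 0
   on success, so a failure can be reported to the caller instead of
   aborting the process from an ELF constructor.  */
static int
ensure_curl_init (void)
{
  if (pthread_once (&curl_once, curl_do_init) != 0)
    return -1;
  return curl_init_result == CURLE_OK ? 0 : -1;
}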
Re: Using libcurl in another library, when/if to call curl_global_init?
On Thu, 31 Mar 2022, Mark Wielaard via curl-library wrote:

> But we are struggling a bit with how to safely/correctly initialize
> libcurl.

Are you struggling to meet the requirement as per the documentation, or
are you seeing actual runtime issues? There isn't much left in the
third-party libraries that isn't thread-safe, so I'm quite interested
in knowing. The "not thread-safe" part is mostly theoretical now with
modern versions of libraries (such as OpenSSL).

-- 
 / daniel.haxx.se | Commercial curl support up to 24x7 is available!
                  | Private help, bug fixes, support, ports, new features
                  | https://curl.se/support.html
Re: Using libcurl in another library, when/if to call curl_global_init?
On Tue, 2022-04-05 at 17:36 +0200, Daniel Stenberg wrote:
> On Thu, 31 Mar 2022, Mark Wielaard via curl-library wrote:
>
> > But we are struggling a bit with how to safely/correctly initialize
> > libcurl.
>
> Are you struggling to meet the requirement as per the documentation
> or are you seeing actual runtime issues?

Struggling with the requirement as per the documentation: "You must not
call it [curl_global_init] when any other thread in the program is
running". Especially combined with having users who would not like to
pay the cost of initializing libcurl (in FIPS mode) when the program
isn't explicitly using the remote resource functionality of our library
(but is still linked to it).

We are a library that is (sometimes) dlopened, so we cannot guarantee
that when we call curl_global_init no other thread in the program is
running. But we try to mitigate that by having an
__attribute__((constructor)) ctor that just does

  curl_global_init(CURL_GLOBAL_DEFAULT);

which in most cases (especially if we aren't dlopened) should make sure
we run before the program has any chance to start any new threads. But
that of course also means that we (or the program linking to our
library) always pay the cost of calling curl_global_init, which at
least in FIPS mode does some non-trivial checks.

So we are wondering if instead we could do lazy initialization just
before we know we will actually need to load a remote resource. We
could wrap such an initialization inside a pthread_once, so at least
with multiple threads we don't race against ourselves, given that
curl_global_init itself is not thread-safe (it uses an unguarded
initialized int). But Florian just pointed out that we would still race
against other uses of libcurl in the program, in case they also didn't
call curl_global_init before starting any other threads.

Cheers,

Mark
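[For illustration, the constructor mitigation Mark describes boils down
to something like the following; the function name is made up and the
attribute is GCC/Clang specific:]

#include <curl/curl.h>

/* Runs when the library is loaded: before main() in the common
   (non-dlopened) case, so usually before any other thread exists.
   The downside described above: every program linking the library
   pays the init cost, and failures cannot be reported.  */
__attribute__((constructor))
static void
curl_init_ctor (void)
{
  curl_global_init (CURL_GLOBAL_DEFAULT);
}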
[PATCH] PR29022: 000-permissions files cause problems for backups
000-permission files currently used for negative caching can cause
permission problems for some backup software and disk usage checkers.
Fix this by using empty files for negative caching instead.

https://sourceware.org/bugzilla/show_bug.cgi?id=29022

Signed-off-by: Aaron Merey
---
 debuginfod/ChangeLog                          |  5 ++
 debuginfod/debuginfod-client.c                | 90 +++
 tests/ChangeLog                               |  5 ++
 tests/Makefile.am                             |  4 +-
 tests/run-debuginfod-federation-link.sh       |  4 +-
 tests/run-debuginfod-federation-metrics.sh    |  4 +-
 tests/run-debuginfod-federation-sqlite.sh     |  4 +-
 ...on.sh => run-debuginfod-negative-cache.sh} |  6 +-
 tests/run-debuginfod-tmp-home.sh              |  2 +-
 9 files changed, 77 insertions(+), 47 deletions(-)
 rename tests/{run-debuginfod-000-permission.sh => run-debuginfod-negative-cache.sh} (92%)

diff --git a/debuginfod/ChangeLog b/debuginfod/ChangeLog
index 578d951d..6c2edbdf 100644
--- a/debuginfod/ChangeLog
+++ b/debuginfod/ChangeLog
@@ -1,3 +1,8 @@
+2022-04-05  Aaron Merey
+
+	* debuginfod-client.c: Represent negative caching with empty files
+	instead of 000-permission files.
+
 2022-04-03  Frank Ch. Eigler
 
 	* debuginfod.cxx (main): Use single dual-stack daemon setup,

diff --git a/debuginfod/debuginfod-client.c b/debuginfod/debuginfod-client.c
index 41ba88a5..0fe791a0 100644
--- a/debuginfod/debuginfod-client.c
+++ b/debuginfod/debuginfod-client.c
@@ -138,7 +138,7 @@ static const char *cache_clean_interval_filename = "cache_clean_interval_s";
 static const long cache_clean_default_interval_s = 86400; /* 1 day */
 
 /* The cache_miss_default_s within the debuginfod cache specifies how
-   frequently the 000-permision file should be released.*/
+   frequently the empty file should be released.*/
 static const long cache_miss_default_s = 600; /* 10 min */
 static const char *cache_miss_filename = "cache_miss_s";
 
@@ -767,41 +767,61 @@ debuginfod_query_server (debuginfod_client *c,
   if (rc != 0)
     goto out;
 
-  struct stat st;
-  /* Check if the file exists and it's of permission 000; must check
-     explicitly rather than trying to open it first (PR28240). */
-  if (stat(target_cache_path, &st) == 0
-      && (st.st_mode & 0777) == 0)
+  /* Check if the target is already in the cache. */
+  int fd = open(target_cache_path, O_RDONLY);
+  if (fd >= 0)
     {
-      time_t cache_miss;
-
-      rc = debuginfod_config_cache(cache_miss_path, cache_miss_default_s, &st);
-      if (rc < 0)
-        goto out;
+      struct stat st;
+      if (fstat(fd, &st) != 0)
+        {
+          close (fd);
+          rc = -errno;
+          goto out;
+        }
 
-      cache_miss = (time_t)rc;
-      if (time(NULL) - st.st_mtime <= cache_miss)
+      /* If the file is non-empty, then we are done. */
+      if (st.st_size > 0)
         {
-          rc = -ENOENT;
-          goto out;
-        }
+          if (path != NULL)
+            {
+              *path = strdup(target_cache_path);
+              if (*path == NULL)
+                {
+                  close (fd);
+                  rc = -errno;
+                  goto out;
+                }
+            }
+          /* Success */
+          rc = fd;
+          goto out;
+        }
       else
-        /* TOCTOU non-problem: if another task races, puts a working
-           download or a 000 file in its place, unlinking here just
-           means WE will try to download again as uncached. */
-        unlink(target_cache_path);
-    }
-
-  /* If the target is already in the cache (known not-000 - PR28240),
-     then we are done. */
-  int fd = open (target_cache_path, O_RDONLY);
-  if (fd >= 0)
-    {
-      /* Success */
-      if (path != NULL)
-        *path = strdup(target_cache_path);
-      rc = fd;
-      goto out;
+        {
+          /* The file is empty. Attempt to download only if enough time
+             has passed since the last attempt. */
+          time_t cache_miss;
+          rc = debuginfod_config_cache(cache_miss_path,
+                                       cache_miss_default_s, &st);
+          if (rc < 0)
+            {
+              close(fd);
+              goto out;
+            }
+
+          cache_miss = (time_t)rc;
+          if (time(NULL) - st.st_mtime <= cache_miss)
+            {
+              close(fd);
+              rc = -ENOENT;
+              goto out;
+            }
+          else
+            /* TOCTOU non-problem: if another task races, puts a working
+               download or an empty file in its place, unlinking here just
+               means WE will try to download again as uncached. */
+            unlink(target_cache_path);
+        }
     }
 
   long timeout = default_timeout;
@@ -1298,11 +1318,11 @@ debuginfod_query_server (debuginfod_client *c,
         }
     }
   while (num_msg > 0);
 
-  /* Create a 000-permission file named as $HOME/.
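[Distilled from the hunk above for readability, the new cache check
amounts to the following standalone sketch; this is not the patch
itself, and the helper name and return convention are invented for
illustration:]

#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

/* Returns the open fd on a cache hit, -ENOENT while the empty
   negative-cache entry is still fresh, and 1 (with a stale entry
   unlinked) when a fresh download should be attempted.  cache_miss_s
   stands in for the value read from the cache_miss_s config file.  */
static int
check_cache (const char *path, time_t cache_miss_s)
{
  int fd = open (path, O_RDONLY);
  if (fd < 0)
    return 1;                      /* Not cached at all: download.     */

  struct stat st;
  if (fstat (fd, &st) != 0)
    {
      close (fd);
      return -errno;
    }

  if (st.st_size > 0)
    return fd;                     /* Non-empty file: a real hit.      */

  /* Empty file: a negative-cache entry.  */
  close (fd);
  if (time (NULL) - st.st_mtime <= cache_miss_s)
    return -ENOENT;                /* Recent miss: don't retry yet.    */

  unlink (path);                   /* Stale miss: retry as uncached.   */
  return 1;
}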