Hey Samuel,

For me the llvm build takes an ungodly amount of time inside Hurd (limited
memory, limited CPU).
I have looked at the code paths I consider suspect but couldn't pin down a
definitive problem. I also wasn't able to write a program that reproduces
the bug.

I'm assuming you have a place where you build things. How about you patch
ext2fs/libdiskfs with the small patch attached below (so not for prod, just
for the llvm build place) and then retry building llvm?

This new assert SHOULD fail, and if it does it will give us the info needed
to fix the problem.
If it doesn't actually fail, then we will have eliminated a red herring.
Either way, it's a step forward.

Let me know,
Milos



On Sun, Feb 15, 2026 at 9:34 AM Samuel Thibault <[email protected]>
wrote:

> Milos Nikic, le dim. 15 févr. 2026 09:06:30 -0800, a écrit:
> > Just a quick question has this started happening recently?
>
> I have already seen such odd file accesses in the past. It probably
> happens more on llvm-toolchain-21 nowadays just because it has grown so
> much.
>
> Samuel
>
From abbfcf41329a190c3568f421cc7ec7e774d74c00 Mon Sep 17 00:00:00 2001
From: Milos Nikic <[email protected]>
Date: Fri, 27 Feb 2026 12:02:10 -0800
Subject: [PATCH] Probe llvm failure

---
 libdiskfs/name-cache.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/libdiskfs/name-cache.c b/libdiskfs/name-cache.c
index 99591754..7a84fdb3 100644
--- a/libdiskfs/name-cache.c
+++ b/libdiskfs/name-cache.c
@@ -197,7 +197,11 @@ diskfs_enter_lookup_cache (struct node *dir, struct node *np, const char *name)
     add_entry (bucket, i, name, key, dir->cache_id, value);
   else
     if (bucket->node_cache_id[i] != value)
-      bucket->node_cache_id[i] = value;
+      {
+	/* DO NOT MERGE: Catch threads clobbering valid entries with negative ones */
+	assert_backtrace (!(value == 0 && bucket->node_cache_id[i] != 0));
+	bucket->node_cache_id[i] = value;
+      }
 
   pthread_mutex_unlock (&cache_lock);
 }
-- 
2.53.0
