On 4/25/25 19:31, Christoph Lameter (Ampere) wrote:
> On Fri, 25 Apr 2025, Vlastimil Babka wrote:
> 
>> @@ -4195,7 +4793,11 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
>>      if (unlikely(object))
>>              goto out;
>>
>> -    object = __slab_alloc_node(s, gfpflags, node, addr, orig_size);
>> +    if (s->cpu_sheaves && node == NUMA_NO_NODE)
>> +            object = alloc_from_pcs(s, gfpflags);
> 
> The node to use is determined in __slab_alloc_node() only based on the
> memory policy etc. NUMA_NO_NODE allocations can be redirected by memory
> policies and this check disables it.

To handle that, alloc_from_pcs() contains this:

#ifdef CONFIG_NUMA
        if (static_branch_unlikely(&strict_numa)) {
                if (current->mempolicy)
                        return NULL;
        }
#endif

And so there will be a fallback. It doesn't (currently) try to evaluate
whether the local node is compatible with the policy, because this check
happens before taking the local lock (which is what prevents migration).


>> @@ -4653,7 +5483,10 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
>>      memcg_slab_free_hook(s, slab, &object, 1);
>>      alloc_tagging_slab_free_hook(s, slab, &object, 1);
>>
>> -    if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false)))
>> +    if (unlikely(!slab_free_hook(s, object, slab_want_init_on_free(s), false)))
>> +            return;
>> +
>> +    if (!s->cpu_sheaves || !free_to_pcs(s, object))
>>              do_slab_free(s, slab, object, object, 1, addr);
>>  }
> 
> We free to pcs even if the object is remote?
> 

