On 7/30/19 1:46 PM, Uladzislau Rezki wrote:
>> +            /*
>> +             * If required width exceeds current VA block, move
>> +             * base downwards and then recheck.
>> +             */
>> +            if (base + end > va->va_end) {
>> +                    base = pvm_determine_end_from_reverse(&va, align) - end;
>> +                    term_area = area;
>> +                    continue;
>> +            }
>> +
>>              /*
>>               * If this VA does not fit, move base downwards and recheck.
>>               */
>> -            if (base + start < va->va_start || base + end > va->va_end) {
>> +            if (base + start < va->va_start) {
>>                      va = node_to_va(rb_prev(&va->rb_node));
>>                      base = pvm_determine_end_from_reverse(&va, align) - end;
>>                      term_area = area;
> I guess it is a NUMA-related issue, I mean when we have several
> areas/sizes/offsets. Is that correct?

I don't think NUMA has anything to do with it.  The vmalloc() area
itself doesn't have any NUMA properties that I can think of.  As far as
I know, we don't, for instance, partition it into per-node areas.

I did encounter this issue on a system with ~100 logical CPUs, which is
a moderate amount these days.
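FWIW, here is a tiny standalone sketch of what that hunk is getting at:
when only the high bound overflows the current block, pull base down
within the *same* block and recheck, and only fall back to the previous
block when the low bound no longer fits.  As I read the old combined
test, it would step to rb_prev() even when the current block could
still satisfy the request after moving base down.  The names below
(va_block, fit_in_block) are made up for illustration, alignment is
ignored, and this is not the kernel code; the real loop is the
pcpu_get_vm_areas() search in mm/vmalloc.c.

#include <stdio.h>

/* Simplified stand-in for a free vmap area: [va_start, va_end). */
struct va_block {
        unsigned long va_start;
        unsigned long va_end;
};

/*
 * Try to place [base + start, base + end) inside one block, moving
 * base downwards if only the high bound overflows.  Returns 1 and
 * updates *basep on success, 0 if the caller should step to the
 * previous block (what the kernel loop does via rb_prev()).
 */
static int fit_in_block(const struct va_block *va, unsigned long *basep,
                        unsigned long start, unsigned long end)
{
        unsigned long base = *basep;

        /* Sketch-only guard so the subtraction below cannot wrap. */
        if (end > va->va_end)
                return 0;

        /*
         * Required width exceeds the current block: move base down so
         * the high bound lands at va_end, then recheck the low bound.
         * In the kernel this is the new "continue" branch that uses
         * pvm_determine_end_from_reverse().
         */
        if (base + end > va->va_end)
                base = va->va_end - end;

        /* Low bound is below the block: caller must try the previous one. */
        if (base + start < va->va_start)
                return 0;

        *basep = base;
        return 1;
}

int main(void)
{
        struct va_block va = { .va_start = 0x1000, .va_end = 0x9000 };
        unsigned long base = 0x8000, start = 0x0, end = 0x4000;

        if (fit_in_block(&va, &base, start, end))
                printf("placed: [0x%lx, 0x%lx)\n", base + start, base + end);
        else
                printf("no fit here, step to the previous block\n");
        return 0;
}

With the example numbers above, the old combined check would have given
up on this block; the split checks instead slide base down to 0x5000
and succeed.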
