On 26/03/2025 10:15 am, Jan Beulich wrote:
> On 25.03.2025 18:04, Oleksii Kurochko wrote:
>> On 3/25/25 5:46 PM, Jan Beulich wrote:
>>> On 25.03.2025 17:35, Oleksii Kurochko wrote:
>>>> On 3/7/25 1:12 PM, Andrew Cooper wrote:
>>>>> On 07/03/2025 12:01 pm, Jan Beulich wrote:
>>>>>> On 07.03.2025 12:50, Oleksii Kurochko wrote:
>>>>>>> On 3/6/25 9:19 PM, Andrew Cooper wrote:
>>>>>>>> On 05/03/2025 7:34 am, Jan Beulich wrote:
>>>>>>>>> I was actually hoping to eliminate BITS_PER_LONG at some point,
>>>>>>>>> in favor of using sizeof(long) * BITS_PER_BYTE. (Surely in common
>>>>>>>>> code we could retain a shorthand of that name, if so desired, but
>>>>>>>>> I see no reason why each arch would need to provide all three
>>>>>>>>> BITS_PER_{BYTE,INT,LONG}.)
>>>>>>>> The concern is legibility and clarity.
>>>>>>>>
>>>>>>>> This:
>>>>>>>>
>>>>>>>>        ((x) ? 32 - __builtin_clz(x) : 0)
>>>>>>>>
>>>>>>>> is a clear expression in a way that this:
>>>>>>>>
>>>>>>>>        ((x) ? (sizeof(int) * BITS_PER_BYTE) - __builtin_clz(x) : 0)
>>>>>>>>
>>>>>>>> is not.  The problem is the extra binary expression, and this:
>>>>>>>>
>>>>>>>>        ((x) ? BITS_PER_INT - __builtin_clz(x) : 0)
>>>>>>>>
>>>>>>>> is still clear, because the reader doesn't have to perform a
>>>>>>>> multiply just to figure out what's going on.
>>>>>>>>
>>>>>>>>
>>>>>>>> It is definitely stupid to have each architecture provide their own
>>>>>>>> BITS_PER_*.  The compiler is in a superior position to provide those
>>>>>>>> details, and it should be in a common location.
>>>>>>>>
>>>>>>>> I don't particularly mind how those constants are derived, but one
>>>>>>>> key thing that BITS_PER_* can do which sizeof() can't is be used in
>>>>>>>> #ifdef/etc.
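To make that last point concrete, a minimal sketch (the second form is
rejected because the preprocessor cannot evaluate sizeof()):

    #if BITS_PER_LONG == 64                    /* fine: plain integer constant */
    ...
    #endif

    #if (sizeof(long) * BITS_PER_BYTE) == 64   /* broken: #if can't do sizeof() */
    ...
    #endif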
>>>>>>> What about moving them to xen/config.h? (If that isn't the best
>>>>>>> place, is there a better suggestion?)
>>>>>>>
>>>>>>> #define BYTES_PER_INT  (1 << INT_BYTEORDER)
>>>>>>> #define BITS_PER_INT  (BYTES_PER_INT << 3)
>>>>>>>
>>>>>>> #define BYTES_PER_LONG (1 << LONG_BYTEORDER)
>>>>>>> #define BITS_PER_LONG (BYTES_PER_LONG << 3)
>>>>>>> #define BITS_PER_BYTE 8
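(For reference, on a 64-bit arch LONG_BYTEORDER would be 3, so the above
works out as BYTES_PER_LONG = 1 << 3 = 8 and BITS_PER_LONG = 8 << 3 = 64.)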
>>>>> The *_BYTEORDER's are useless indirection.  BITS_PER_* should be defined
>>>>> straight up, and this will simplify quite a lot of preprocessing.
>>>> Could we really drop *_BYTEORDER?
>>>>
>>>> For example, LONG_BYTEORDER for Arm could be either 2 or 3, depending
>>>> on whether Arm32 or Arm64 is used.
>>> They can still #define BITS_PER_LONG to 32 or 64 respectively, can't they?
>> Yes, but then if we want to move it to xen/config.h, we would need
>> something like the following in Arm's asm/config.h:
>>    #ifdef CONFIG_ARM_32
>>        #define BITS_PER_LONG 32
>>    #endif
>>
>> and then in xen/config.h
>>    #ifndef BITS_PER_LONG
>>        #define BITS_PER_LONG 64
>>    #endif
>>
>> But I wanted to avoid having #ifndef BITS_PER_LONG in xen/config.h (or
>> using CONFIG_ARM_32 in xen/config.h).
> Whatever the fundamental definitions are that can vary per arch - those
> should imo live in asm/config.h. For the case here, if we deem *_BYTEORDER
> fundamental, they go there (and BITS_PER_* go into xen/config.h). If we
> deem BITS_PER_* fundamental, they go into asm/config.h.
>
> Stuff that we expect to remain uniform across ports (BITS_PER_BYTE,
> possibly also BITS_PER_INT and BITS_PER_LLONG) may also go into
> xen/config.h, as long as
> we're okay with the possible future churn if a port appeared that has
> different needs.
>
> I agree that it would be better not to have such an #ifndef in xen/config.h.
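A sketch of what that could look like if *_BYTEORDER is taken as the
fundamental, per-arch value (Arm values as in the example above), with no
#ifndef needed in xen/config.h:

    /* Arm's asm/config.h (fundamental, per-arch): */
    #ifdef CONFIG_ARM_32
    # define LONG_BYTEORDER 2
    #else
    # define LONG_BYTEORDER 3
    #endif

    /* xen/config.h (derived, common): */
    #define BITS_PER_BYTE  8
    #define BYTES_PER_LONG (1 << LONG_BYTEORDER)
    #define BITS_PER_LONG  (BYTES_PER_LONG << 3)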

With a new toolchain baseline, we get both of these concepts directly
from the compiler environment.

I don't see any need for arch-specific overrides for these.  They
ultimately come from the -march used.
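For illustration, a minimal sketch assuming only the GCC/Clang predefined
macros (__CHAR_BIT__ and the __SIZEOF_*__ family), which expand to integer
constants and hence also work in #if:

    #define BITS_PER_BYTE  __CHAR_BIT__
    #define BITS_PER_INT   (__SIZEOF_INT__       * __CHAR_BIT__)
    #define BITS_PER_LONG  (__SIZEOF_LONG__      * __CHAR_BIT__)
    #define BITS_PER_LLONG (__SIZEOF_LONG_LONG__ * __CHAR_BIT__)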

~Andrew
