On 24.05.2024 22:03, Andrew Cooper wrote:
> The asm in arch_ffs() is safe but inefficient.
>
> CMOV would be an improvement over a conditional branch, but for 64bit CPUs
> both Intel and AMD have provided enough details about the behaviour for a zero
> input. It is safe to pre-load the destination register with -1 and drop the
> conditional logic.
>
> --- a/xen/arch/x86/include/asm/bitops.h
> +++ b/xen/arch/x86/include/asm/bitops.h
> @@ -432,12 +432,28 @@ static inline int ffsl(unsigned long x)
>
>  static always_inline unsigned int arch_ffs(unsigned int x)
>  {
> -    int r;
> +    unsigned int r;
>
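
For anyone wanting to see the trick in isolation, below is a minimal
standalone sketch (not the actual Xen patch; the my_ffs() name, the
always_inline stand-in and the exact constraints are only illustrative) of
what pre-loading the destination with -1 looks like in GCC extended asm.
The matching "[res]" input constraint makes the compiler load -1 into the
output register before the BSF, so a zero input ends up returning
r + 1 == 0 without any conditional logic after the instruction.

#include <stdio.h>

/* Local stand-in for Xen's always_inline, purely for this example. */
#define always_inline inline __attribute__((always_inline))

static always_inline unsigned int my_ffs(unsigned int x)
{
    unsigned int r;

    /*
     * BSF's destination is formally undefined for a zero source, but per
     * the vendor statements referenced above it is left unmodified in
     * practice.  Pre-loading r with -1 therefore yields r + 1 == 0 for
     * x == 0, with no branch or CMOV needed afterwards.
     */
    asm ( "bsf %[val], %[res]"
          : [res] "=r" (r)
          : [val] "rm" (x), "[res]" (-1) );

    return r + 1;
}

int main(void)
{
    /* Expected on x86-64, assuming the documented BSF behaviour: 0 4 1 */
    printf("%u %u %u\n", my_ffs(0), my_ffs(8), my_ffs(~0u));
    return 0;
}

Compiled with gcc on x86-64 this should print "0 4 1", matching the usual
ffs() convention of returning 0 for a zero input and a 1-based bit index
otherwise.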