On Sat, Mar 23, 2024, at 00:40, Jeremy Linton wrote:
> On 3/8/24 14:29, Arnd Bergmann wrote:
>> On Fri, Mar 8, 2024, at 17:49, Jeremy Linton wrote:
>>> On 3/7/24 05:10, Arnd Bergmann wrote:
I'm not sure I understand the logic. Do you mean that accessing
CNTVCT itself is slow, or that reseeding based on CNTVCT is slow
because of the overhead of reseeding?
Hi,
Sorry about the delay here; PTO, and I actually wanted to verify my
assumptions.
On 3/8/24 14:29, Arnd Bergmann wrote:
On Fri, Mar 8, 2024, at 17:49, Jeremy Linton wrote:
On 3/7/24 05:10, Arnd Bergmann wrote:
I'm not sure I understand the logic. Do you mean that accessing
CNTVCT itself is slow, or that reseeding based on CNTVCT is slow
because of the overhead of reseeding?
On Fri, Mar 8, 2024, at 17:49, Jeremy Linton wrote:
> On 3/7/24 05:10, Arnd Bergmann wrote:
>>
>> I'm not sure I understand the logic. Do you mean that accessing
>> CNTVCT itself is slow, or that reseeding based on CNTVCT is slow
>> because of the overhead of reseeding?
>
> Slow, as in, it's running
Hi,
On 3/7/24 05:10, Arnd Bergmann wrote:
On Wed, Mar 6, 2024, at 22:54, Jeremy Linton wrote:
On 3/6/24 14:46, Arnd Bergmann wrote:
On Wed, Mar 6, 2024, at 00:33, Kees Cook wrote:
On Tue, Mar 05, 2024 at 04:18:24PM -0600, Jeremy Linton wrote:
The existing arm64 stack randomization uses the kernel rng to acquire
5 bits of address space randomization.
On Thu, Mar 7, 2024, at 20:15, Kees Cook wrote:
> On Thu, Mar 07, 2024 at 12:10:34PM +0100, Arnd Bergmann wrote:
>> There is not even any attempt to use the most random bits of
>> the cycle counter, as both the high 22 to 24 bits get masked
>> out (to keep the wasted stack space small) and the low 3 or 4
>> bits get ignored because of stack alignment.
On Thu, Mar 7, 2024, at 20:10, Kees Cook wrote:
> On Thu, Mar 07, 2024 at 12:10:34PM +0100, Arnd Bergmann wrote:
>> For the strength, we have at least four options:
>>
>> - strong rng, most expensive
>> - your new prng, less strong but somewhat cheaper and/or more
>> predictable overhead
>> - cycle counter, cheap but probably even less strong,
On Thu, Mar 07, 2024 at 12:10:34PM +0100, Arnd Bergmann wrote:
> There is not even any attempt to use the most random bits of
> the cycle counter, as both the high 22 to 24 bits get masked
> out (to keep the wasted stack space small) and the low 3 or 4
> bits get ignored because of stack alignment.
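
To make that bit accounting concrete, here is a small, self-contained C
sketch; the 10-bit mask and the 16-byte stack alignment are assumptions
taken from the discussion above, not the kernel's exact constants:

/*
 * Illustrative only: how many bits of a cycle-counter value survive
 * when it is used as a stack offset. Masking to the low 10 bits drops
 * the high 22 bits of a 32-bit value; 16-byte alignment then drops 4
 * of the remaining bits, leaving roughly 6 usable bits.
 */
#include <stdio.h>

int main(void)
{
	unsigned int counter = 0x12345678u;      /* stand-in for a CNTVCT read */
	unsigned int offset = counter & 0x3FFu;  /* keep only the low 10 bits */
	unsigned int aligned = offset & ~0xFu;   /* stack alignment drops 4 more */

	printf("raw=0x%08x masked=0x%03x aligned=0x%03x\n",
	       counter, offset, aligned);
	return 0;
}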
On Thu, Mar 07, 2024 at 12:10:34PM +0100, Arnd Bergmann wrote:
> For the strength, we have at least four options:
>
> - strong rng, most expensive
> - your new prng, less strong but somewhat cheaper and/or more
> predictable overhead
> - cycle counter, cheap but probably even less strong,
> ne
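
For reference, the "cycle counter" option in this list boils down to a
raw read of the ARMv8 virtual counter. The helper below is only a
sketch, not the kernel's actual accessor (that lives behind the arch
timer code):

/* Read CNTVCT_EL0; the ISB keeps the read from being speculated early. */
static inline unsigned long read_cntvct(void)
{
	unsigned long val;

	asm volatile("isb\n\tmrs %0, cntvct_el0" : "=r" (val));
	return val;
}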
Hi Jeremy,
kernel test robot noticed the following build warnings:
[auto build test WARNING on arm64/for-next/core]
[also build test WARNING on arm/for-next arm/fixes kvmarm/next soc/for-next
linus/master v6.8-rc7 next-20240307]
[If your patch is applied to the wrong git tree, kindly drop us a note]
On Wed, Mar 6, 2024, at 22:54, Jeremy Linton wrote:
> On 3/6/24 14:46, Arnd Bergmann wrote:
>> On Wed, Mar 6, 2024, at 00:33, Kees Cook wrote:
>>> On Tue, Mar 05, 2024 at 04:18:24PM -0600, Jeremy Linton wrote:
The existing arm64 stack randomization uses the kernel rng to acquire
5 bits of address space randomization.
Hi,
On 3/6/24 14:46, Arnd Bergmann wrote:
On Wed, Mar 6, 2024, at 00:33, Kees Cook wrote:
On Tue, Mar 05, 2024 at 04:18:24PM -0600, Jeremy Linton wrote:
The existing arm64 stack randomization uses the kernel rng to acquire
5 bits of address space randomization. This is problematic because it
creates non determinism in the syscall path when the rng needs to be
generated or reseeded.
On Wed, Mar 6, 2024, at 00:33, Kees Cook wrote:
> On Tue, Mar 05, 2024 at 04:18:24PM -0600, Jeremy Linton wrote:
>> The existing arm64 stack randomization uses the kernel rng to acquire
>> 5 bits of address space randomization. This is problematic because it
>> creates non determinism in the syscall path when the rng needs to be
>> generated or reseeded.
On Tue, Mar 05, 2024 at 04:18:24PM -0600, Jeremy Linton wrote:
> The existing arm64 stack randomization uses the kernel rng to acquire
> 5 bits of address space randomization. This is problematic because it
> creates non determinism in the syscall path when the rng needs to be
> generated or reseeded.
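
For context, the mechanism being discussed sits in the syscall path
roughly as sketched below. This is a paraphrase of the generic
randomize_kstack hooks, not the exact arm64 code, and the masking the
kernel applies to the random value is omitted:

#include <linux/randomize_kstack.h>
#include <linux/random.h>

static void invoke_syscall_sketch(void)
{
	/* apply the offset that was chosen at the end of the previous syscall */
	add_random_kstack_offset();

	/* ... dispatch the actual system call ... */

	/* feed in entropy for the next syscall; this is the rng use in question */
	choose_random_kstack_offset(get_random_u16());
}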