these
tests, and any other suggestions would also be greatly appreciated.
This was used in conjunction with lib/test_user_copy.c to help verify
the following usercopy fixup patches:
https://lore.kernel.org/lkml/20200914150958.2200-1-oli.sw...@arm.com/
Thanks in advance,
Oli
Oliver Swede (1
that the user buffer base
address is set to be close to an invalid page so the copy intentionally
faults at the specified index. The return value (number of bytes not
copied due to the fault) is then compared against the number of bytes
remaining in the user buffer.
Signed-off-by: Oliver Swede
Signed-off-by: Robin Murphy
Signed-off-by: Oliver Swede
---
arch/arm64/lib/memmove.S | 232 +--
1 file changed, 78 insertions(+), 154 deletions(-)
diff --git a/arch/arm64/lib/memmove.S b/arch/arm64/lib/memmove.S
index 02cda2e33bde..d0977d0ad745 100644
--- a/arch/arm6
Signed-off-by: Robin Murphy
Signed-off-by: Oliver Swede
---
arch/arm64/lib/memcmp.S | 333 ++--
1 file changed, 117 insertions(+), 216 deletions(-)
diff --git a/arch/arm64/lib/memcmp.S b/arch/arm64/lib/memcmp.S
index c0671e793ea9..580dd0b12ccb 100644
--- a/arch/arm6
o separate patch, use UL(), expand commit message ]
Signed-off-by: Robin Murphy
[ os: move insertion to condition block for rebase onto bpf changes]
Signed-off-by: Oliver Swede
---
arch/arm64/include/asm/assembler.h | 9 +
arch/arm64/include/asm/extable.h | 11 ++-
arc
Signed-off-by: Robin Murphy
Signed-off-by: Oliver Swede
---
arch/arm64/lib/strlen.S | 247 +++-
1 file changed, 168 insertions(+), 79 deletions(-)
diff --git a/arch/arm64/lib/strlen.S b/arch/arm64/lib/strlen.S
index ee3ed882dd79..974b67dcc186 100644
--- a/arch/arm6
world is already very on fire.
Thus we can reasonably drop the call from kprobe_fault_handler() and
leave uaccess fixups to the regular flow.
Signed-off-by: Robin Murphy
Signed-off-by: Oliver Swede
---
arch/arm64/kernel/probes/kprobes.c | 7 ---
1 file changed, 7 deletions(-)
diff --git a/
re-evaluated when importing new optimized copy
routines to determine if the property still holds, or e.g. if N needs
to be increased, to ensure the fixup remains precise.
Signed-off-by: Oliver Swede
---
arch/arm64/lib/copy_user_fixup.S | 16
1 file changed, 16 insertions(+)
diff
cases.
Signed-off-by: Oliver Swede
---
arch/arm64/lib/copy_user_fixup.S | 13 +++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/lib/copy_user_fixup.S b/arch/arm64/lib/copy_user_fixup.S
index 6a7b2406d948..4858edd55994 100644
--- a/arch/arm64/lib
fixup from the second instruction fault.
Signed-off-by: Oliver Swede
---
arch/arm64/lib/copy_from_user.S | 18 ++-
arch/arm64/lib/copy_in_user.S | 16 ++
arch/arm64/lib/copy_template_user.S | 2 ++
arch/arm64/lib/copy_to_user.S | 16 ++
arch/arm64
pointer is restored to its initial position either from
the fixup code in the case of a fault, or at the end of the copy
algorithm otherwise. The .Luaccess_finish label is also moved
to copy_template_user.S as the code is common to all usercopy
functions.
Signed-off-by: Oliver Swede
---
arch/arm64
From: Robin Murphy
To match the way the USER() shorthand wraps _asm_extable entries,
introduce USER_F() to wrap _asm_extable_faultaddr and clean up a bit.
Signed-off-by: Robin Murphy
Signed-off-by: Oliver Swede
---
arch/arm64/include/asm/assembler.h | 4 ++
arch/arm64/lib/copy_from_user.S
Signed-off-by: Robin Murphy
Signed-off-by: Oliver Swede
---
arch/arm64/lib/strncmp.S | 363 ++-
1 file changed, 163 insertions(+), 200 deletions(-)
diff --git a/arch/arm64/lib/strncmp.S b/arch/arm64/lib/strncmp.S
index 2a7ee949ed47..b954e0fd93be 100644
--- a/arch/arm6
for explicitly from within the routines themselves), and
independent of any specific implementation, it should be suitable
to return the full copy width back to the kernel code path calling
the usercopy function.
Signed-off-by: Oliver Swede
---
arch/arm64/lib/copy_from_user.S | 24
4c175c8be12 in
https://github.com/ARM-software/optimized-routines.
Signed-off-by: Sam Tebbs
[ rm: add UAO fixups, streamline copy_exit paths, expand commit message ]
Signed-off-by: Robin Murphy
[ os: import newer memcpy algorithm, update commit message ]
Signed-off-by: Oliver Swede
---
arch/arm64/inclu
s 9-14 for clarity and to reflect the
new changes.
This revision was tested on two machines (UAO & non-UAO) internally using a
custom test module (planning on posting this shortly).
v4:
https://lore.kernel.org/linux-arm-kernel/f52401d9-787c-667b-c1ec-8b91106d6...@arm.com/
Oliver Swede (5):
a
Signed-off-by: Robin Murphy
Signed-off-by: Oliver Swede
---
arch/arm64/lib/strcmp.S | 272 +---
1 file changed, 113 insertions(+), 159 deletions(-)
diff --git a/arch/arm64/lib/strcmp.S b/arch/arm64/lib/strcmp.S
index 4e79566726c8..e00ff46c4ffc 100644
--- a/arch/arm6
requirement to follow through with the copying of data that may
reside in temporary registers on a fault, as this would greatly
increase the fixup's complexity.
Signed-off-by: Oliver Swede
---
arch/arm64/lib/copy_user_fixup.S | 172 ++-
1 file changed, 168 inser
from analysing the copy algorithm)
enable fixups to be written that are modular and accurate for each
case. The fixup logic should be straightforward to modify in the
future, e.g. if there are further improvements to the memcpy routine.
Signed-off-by: Oliver Swede
---
arch/arm64/lib/copy_from_u
o separate patch, use UL(), expand commit message ]
Signed-off-by: Robin Murphy
Signed-off-by: Oliver Swede
---
arch/arm64/include/asm/assembler.h | 9 +
arch/arm64/include/asm/extable.h | 10 +-
arch/arm64/mm/extable.c | 13 +
arch/arm64/
alternatives are inserted in-line.
Signed-off-by: Oliver Swede
---
arch/arm64/lib/copy_user_fixup.S | 47
1 file changed, 41 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/lib/copy_user_fixup.S b/arch/arm64/lib/copy_user_fixup.S
index 37ca3d99a02a..2d413f9ba5d3
sage ]
Signed-off-by: Oliver Swede
---
arch/arm64/include/asm/alternative.h | 36 ---
arch/arm64/lib/copy_from_user.S | 113 ++--
arch/arm64/lib/copy_in_user.S | 129 +++--
arch/arm64/lib/copy_template.S | 375 +++
arch/arm64/lib/copy_template_us
).
Signed-off-by: Oliver Swede
---
arch/arm64/lib/copy_from_user.S | 3 ---
arch/arm64/lib/copy_in_user.S | 3 ---
arch/arm64/lib/copy_template_user.S | 6 ++
arch/arm64/lib/copy_to_user.S | 3 ---
arch/arm64/lib/copy_user_fixup.S | 1 +
5 files changed, 7 insertions(+), 9
is specific to the new copy template,
which uses the latest memcpy implementation.
Signed-off-by: Oliver Swede
---
arch/arm64/lib/copy_user_fixup.S | 217 ++-
1 file changed, 215 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/lib/copy_user_fixup.S b/arch/arm64
se of the fault address to return the exact
number of bytes that haven't yet been copied.
[1]
https://lore.kernel.org/linux-arm-kernel/e70f7b9de7e601b9e4a6fedad8eaf64d304b1637.1571326276.git.robin.mur...@arm.com/
Oliver Swede (5):
arm64: Store the arguments to copy_*_user on the stack
a
e usercopy fixup routine in v2 with multiple longer
fixups that each make use of the fault address to return the exact
number of bytes that haven't yet been copied.
[1]
https://lore.kernel.org/linux-arm-kernel/e70f7b9de7e601b9e4a6fedad8eaf64d304b1637.1571326276.git.robin.mur...@arm.com/
M
requirement to follow through with the copying of data that may
reside in temporary registers on a fault, as this would greatly
increase the fixup's complexity.
Signed-off-by: Oliver Swede
---
arch/arm64/lib/copy_user_fixup.S | 170 ++-
1 file changed, 165 inser
is specific to the new copy template,
which uses the latest memcpy implementation.
Signed-off-by: Oliver Swede
---
arch/arm64/lib/copy_user_fixup.S | 96
1 file changed, 96 insertions(+)
diff --git a/arch/arm64/lib/copy_user_fixup.S b/arch/arm64/lib
sage ]
Signed-off-by: Oliver Swede
---
arch/arm64/include/asm/alternative.h | 36 ---
arch/arm64/lib/copy_from_user.S | 115 ++--
arch/arm64/lib/copy_in_user.S | 130 --
arch/arm64/lib/copy_template.S | 375 +++
arch/arm64/lib/copy_template_us
lysis of the copy algorithm,
enable fixups to be written that are modular and accurate for each
case. In this way the fixup logic should be straightforward to
modify in the future, e.g. if there are further improvements to the
memcpy routine.
Signed-off-by: Oliver Swede
---
arch/arm6