I've just 'fallen over' this 'optimisation' on a new Intel Atom processor.
AFAICT all the copy functions get patched to 'rep movsb'.
The problem arises when one of the buffers is uncached; in that case the
'rep movsb' has to perform single-byte transfers.
So memcpy_toio() and memcpy_fromio() need to avoid the 'rep movsb' variant.
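E.g. a minimal sketch of a qword-at-a-time I/O copy (the routine name and the
multiple-of-8 length assumption are illustrative only), which issues one
8-byte access per iteration instead of the byte-sized accesses 'rep movsb'
ends up doing on an uncached range:

	/* dst in %rdi, src in %rsi, byte count in %rdx (non-zero, multiple of 8) */
mmio_copy_qwords:
	movq	(%rsi), %rax	/* one 8-byte read from the source */
	movq	%rax, (%rdi)	/* one 8-byte write to the destination */
	addq	$8, %rsi
	addq	$8, %rdi
	subq	$8, %rdx
	jnz	mmio_copy_qwords
	ret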
On Thu, Mar 05, 2015 at 01:26:40AM +0100, Ingo Molnar wrote:
> I.e. while I'd not want to patch in memcpy_orig (it's legacy, really),
> the other two variants, ERMS and REP MOVSQ, could be patched in
> directly via ALTERNATIVE_2()?

Certainly worth a try; it is on the TODO list.
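One possible shape for that, as a sketch only (assuming the asm-side macro
form "ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2"):
keep a REP MOVSQ body in memcpy() as the fall-through default and patch
jumps for the other two cases at boot:

ENTRY(memcpy)
	/*
	 * Default (oldinstr): jump to the unrolled memcpy_orig.
	 * REP_GOOD CPUs get the jump replaced with NOPs and fall
	 * through to the rep-movsq body; ERMS CPUs get a jump to
	 * memcpy_erms patched in instead.
	 */
	ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
		      "jmp memcpy_erms", X86_FEATURE_ERMS

	/* ... rep-movsq body ... */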
On Wed, Mar 04, 2015 at 08:26:33AM +0100, Ingo Molnar wrote:
> Since most CPUs we care about have ERMS, wouldn't it be better to
> patch in the actual memcpy_erms sequence into the primary memcpy()
> function? It's just about 9 bytes AFAICT.
Actually, most set REP_GOOD - all Intel family 6 and all
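For reference, the memcpy_erms sequence being discussed is essentially just
'rep movsb' with the return value and count set up, e.g.:

ENTRY(memcpy_erms)
	movq	%rdi, %rax	/* memcpy() returns dst */
	movq	%rdx, %rcx	/* byte count for rep movsb */
	rep movsb		/* ERMS: fast microcoded byte copy */
	ret
ENDPROC(memcpy_erms)

which does indeed come to roughly 9 bytes of code.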
From: Borislav Petkov

Make REP_GOOD variant the default after alternatives have run.

Signed-off-by: Borislav Petkov
---
 arch/x86/lib/memcpy_64.S | 68 +++++++++++++++++++++-----------------------------------------------
1 file changed, 21 insertions(+), 47 deletions(-)
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
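The REP_GOOD variant that the patch makes the default is, in essence, a
rep-movsq bulk copy with a 0..7 byte tail; a sketch of that body (not the
exact hunk from the patch):

	movq	%rdi, %rax	/* memcpy() returns dst */
	movq	%rdx, %rcx
	shrq	$3, %rcx	/* number of 8-byte words */
	andl	$7, %edx	/* 0..7 remaining tail bytes */
	rep movsq
	movl	%edx, %ecx
	rep movsb		/* copy the tail */
	ret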