Re: [PATCH 2/2] docs: lzo: fix incorrect statement about distance zero for EOS

2020-05-22 Thread Dave Rodgman
Looks good to me, thanks Dave On 22/05/2020, 15:11, "C. Masloch" wrote: The encoded distance bits are zero, but the distance that is calculated from this is actually equal to 16384. So correct this statement to read that the 0001HLLL instruction means EOS when a distance of 163
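For reference, the 0001HLLL instruction derives its distance from the following little-endian 16-bit field as distance = 16384 + (H << 14) + D (per Documentation/lzo.txt), so all-zero encoded bits still yield 16384. A minimal sketch of that arithmetic, with the field extraction assumed from the documented layout rather than taken from the patch:

    /* 0001HLLL instruction byte, followed by LE16 "DDDDDDDD DDDDDDSS" */
    unsigned int h = (insn >> 3) & 1;       /* the H bit                   */
    unsigned int d = le16_word >> 2;        /* 14 distance bits, low 2 = S */
    unsigned int distance = 16384 + (h << 14) + d;
    /* All encoded bits zero => distance == 16384 => end of stream.        */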

Re: [PATCH 1/2] docs: lzo: fix first byte interpretation off-by-one

2020-05-22 Thread Dave Rodgman
Your update looks correct to me, thanks. Dave On 22/05/2020, 15:11, "C. Masloch" wrote: There was an error in the description of the initial byte's interpretation. While "18..21" was listed as "copy 0..3 literals", it should actually be interpreted as "copy 1..4 literals". Th
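A small sketch of what the corrected interpretation means for a decoder; variable names are illustrative, not taken from the patch:

    /* A first byte in 18..21 means "copy (byte - 17) literals", i.e. 1..4, not 0..3. */
    if (first_byte >= 18 && first_byte <= 21) {
            size_t nliterals = first_byte - 17;   /* 18 -> 1, ..., 21 -> 4 */
            memcpy(out, in, nliterals);           /* copy literals, then keep decoding */
            in  += nliterals;
            out += nliterals;
    }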

[PATCH] lib/lzo: fix ambiguous encoding bug in lzo-rle

2020-05-07 Thread Dave Rodgman
input sequences; for all of these cases, updated lzo-rle worked correctly. There is no significant impact to performance or compression ratio. Cc: sta...@vger.kernel.org Signed-off-by: Dave Rodgman --- Documentation/lzo.txt| 8 ++-- lib/lzo/lzo1x_compress.c | 13 + 2 files

[PATCH] lib/lzo: fix alignment bug in lzo-rle

2019-09-12 Thread Dave Rodgman
Fix an unaligned access which breaks on platforms where this is not permitted (e.g., Sparc). Signed-off-by: Dave Rodgman --- lib/lzo/lzo1x_compress.c | 14 -- 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/lib/lzo/lzo1x_compress.c b/lib/lzo/lzo1x_compress.c index
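A minimal sketch of the usual kernel remedy for this class of bug, loading a possibly misaligned word through get_unaligned() instead of a cast dereference; variable names are illustrative, not the exact hunk from lzo1x_compress.c:

    #include <asm/unaligned.h>

    /* u32 dv = *(const u32 *)ip;  -- traps on strict-alignment CPUs (e.g. Sparc) */
    u32 dv = get_unaligned((const u32 *)ip);   /* safe regardless of alignment */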

Re: [sparc64] Kernel unaligned access at TPC lzo1x_1_do_compress

2019-09-09 Thread Dave Rodgman
Thanks Anatoly, I'll take a look at this. Could you please let me know the exact hardware you're running on? thanks Dave From: linux-kernel-ow...@vger.kernel.org on behalf of Anatoly Pugachev Sent: 08 September 2019 11:58:08 To: Sparc kernel list Cc:

[PATCH] lib/lzo: fix bugs for very short or empty input

2019-03-26 Thread Dave Rodgman
> 0 require a minimum stream length of 5. Also fixes a bug in handling the tail for very short inputs when a bitstream version is present. Change-Id: Ifcf7a1b9acc46a25cb3ef746eccfe26937209560 Signed-off-by: Dave Rodgman --- Documentation/lzo.txt | 8 +--- lib/lzo/lzo1x_compres
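An illustrative guard implied by the description above; the exact check in the patch may differ:

    /* Streams carrying a bitstream-version byte (17) need at least 5 bytes. */
    if (in_len < 5)
            return LZO_E_INPUT_OVERRUN;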

Re: Kernel LZO compressor

2019-03-25 Thread Dave Rodgman
On 19/03/2019 8:15 pm, Nick Terrell wrote: > Hi Dave, > > I just saw you patches adding LZO-RLE, so I decided to fuzz the LZO > compressor and decompressor. I didn't find any crashes, but I found some edge > cases in the decompressor. Hi Nick, Thanks - I will take a look at this. These cases won'

[PATCH v5 2/3] lib/lzo: separate lzo-rle from lzo

2019-02-05 Thread Dave Rodgman
To prevent any issues with persistent data, separate lzo-rle from lzo so that it is treated as a separate algorithm, and lzo is still available. Signed-off-by: Dave Rodgman Cc: David S. Miller Cc: Greg Kroah-Hartman Cc: Herbert Xu Cc: Markus F.X.J. Oberhumer Cc: Matt Sealey Cc: Minchan Kim

[PATCH v5 1/3] lib/lzo: implement run-length encoding

2019-02-05 Thread Dave Rodgman
nimal. Compression ratio is within a few percent in all cases. This modifies the bitstream in a way which is backwards compatible (i.e., we can decompress old bitstreams, but old versions of lzo cannot decompress new bitstreams). Signed-off-by: Dave Rodgman Cc: David S. Miller Cc: Greg Kroah-Hartma
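As a rough illustration of the run-length idea only (this is not the actual lzo-rle bitstream, which is specified in Documentation/lzo.txt): long runs of zero bytes are emitted as one compact token instead of as literals. Helper names and thresholds below are hypothetical:

    size_t run = 0;
    while (ip < in_end && *ip == 0) {
            ip++;
            run++;
    }
    if (run >= MIN_ZERO_RUN)
            emit_zero_run(run);              /* one token for the whole run */
    else
            emit_literals(ip - run, run);    /* short runs stay as literals */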

[PATCH v5 3/3] zram: default to lzo-rle instead of lzo

2019-02-05 Thread Dave Rodgman
lzo-rle gives higher performance and similar compression ratios to lzo. Signed-off-by: Dave Rodgman --- drivers/block/zram/zram_drv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index 04ca65912638
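In spirit, the referenced one-liner swaps the default compressor string in drivers/block/zram/zram_drv.c; a sketch of the change, not the verbatim diff:

    -static const char *default_compressor = "lzo";
    +static const char *default_compressor = "lzo-rle";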

[PATCH v5 0/3] lib/lzo: run-length encoding support

2019-02-05 Thread Dave Rodgman
Hi, Following on from the previous lzo-rle patchset: https://lkml.org/lkml/2018/11/30/972 This patchset contains only the RLE patches, and should be applied on top of the non-RLE patches ( https://lkml.org/lkml/2019/2/5/366 ). Previously, some questions were raised around the RLE patches. I've

[PATCH v5 1/3] lib/lzo: tidy-up ifdefs

2019-02-05 Thread Dave Rodgman
Modify the ifdefs in lzodefs.h to be more consistent with normal kernel macros (e.g., change __aarch64__ to CONFIG_ARM64). Signed-off-by: Dave Rodgman Cc: Herbert Xu Cc: David S. Miller Cc: Nitin Gupta Cc: Richard Purdie Cc: Markus F.X.J. Oberhumer Cc: Minchan Kim Cc: Sergey Senozhatsky
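An illustrative hunk of the kind of change described (not the full patch):

    -#if defined(__aarch64__)
    +#if defined(CONFIG_ARM64)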

[PATCH v5 2/3] lib/lzo: 64-bit CTZ on arm64

2019-02-05 Thread Dave Rodgman
://lkml.kernel.org/r/20181127161913.23863-5-dave.rodg...@arm.com Signed-off-by: Matt Sealey Signed-off-by: Dave Rodgman Cc: David S. Miller Cc: Greg Kroah-Hartman Cc: Herbert Xu Cc: Markus F.X.J. Oberhumer Cc: Minchan Kim Cc: Nitin Gupta Cc: Richard Purdie Cc: Sergey Senozhatsky Cc: Sonny Rao

[PATCH v5 3/3] lib/lzo: fast 8-byte copy on arm64

2019-02-05 Thread Dave Rodgman
From: Matt Sealey Enable faster 8-byte copies on arm64. Link: http://lkml.kernel.org/r/20181127161913.23863-6-dave.rodg...@arm.com Signed-off-by: Matt Sealey Signed-off-by: Dave Rodgman Cc: David S. Miller Cc: Greg Kroah-Hartman Cc: Herbert Xu Cc: Markus F.X.J. Oberhumer Cc: Minchan Kim
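For context, the 8-byte copy here is an unaligned 64-bit load/store pair, which arm64 handles efficiently. The macro below is a sketch of that shape; the name and exact definition are assumed rather than quoted from the patch:

    #include <asm/unaligned.h>

    #define COPY8(dst, src) \
            put_unaligned(get_unaligned((const u64 *)(src)), (u64 *)(dst))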

[PATCH v5 0/3] lib/lzo: performance improvements

2019-02-05 Thread Dave Rodgman
Hi, Following on from the previous lzo-rle patchset: https://lkml.org/lkml/2018/11/30/972 This patchset contains only the patches which were ack'd by Markus (i.e., not the RLE patches). I believe Markus was happy to land these (please shout if that's not the case). Regarding the RLE patches, I'

Re: [PATCH v4 0/7] lib/lzo: performance improvements

2019-01-07 Thread Dave Rodgman
ls on this please? I have not seen any crashes in my testing so I'm not able to look into this without more data. On 07/12/2018 3:54 pm, Dave Rodgman wrote: > Hi Markus, > > On 06/12/2018 3:47 pm, Markus F.X.J. Oberhumer wrote: > Request 3 - add lzo-rle; *NOT* acked by me >

Re: [PATCH v4 0/7] lib/lzo: performance improvements

2018-12-07 Thread Dave Rodgman
Hi Markus, On 06/12/2018 3:47 pm, Markus F.X.J. Oberhumer wrote: > Request 3 - add lzo-rle; *NOT* acked by me > > [PATCH 6/8] lib/lzo: implement run-length encoding > [PATCH 7/8] lib/lzo: separate lzo-rle from lzo > [PATCH 8/8] zram: default to lzo-rle instead of lzo > > It (1) s

Re: [PATCH v4 0/7] lib/lzo: performance improvements

2018-12-05 Thread Dave Rodgman
On 05/12/2018 7:30 am, Sergey Senozhatsky wrote: > Hi Dave, > > Noticed this warning today: > > lib/lzo/lzo1x_compress.c: In function ‘lzo1x_1_do_compress’: > lib/lzo/lzo1x_compress.c:239:14: warning: ‘m_pos’ may be used uninitialized > in this function [-Wmaybe-uninitialized] > 239 | m_off =
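The warning is a false positive (the v4 cover letter notes the previous code was still safe). A generic sketch of the pattern and the usual remedy, with made-up names, not the actual lzo1x code:

    static int demo(int have_match, int off)
    {
            int pos = 0;    /* explicit init silences -Wmaybe-uninitialized;
                             * without it gcc may warn even though 'pos' is
                             * only read when have_match is set, i.e. after
                             * the assignment below */
            if (have_match)
                    pos = off - 1;
            return have_match ? pos : -1;
    }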

Re: linux-next: build warning after merge of the akpm tree

2018-11-30 Thread Dave Rodgman
On 30/11/2018 5:40 am, Stephen Rothwell wrote: > After merging the akpm tree, today's linux-next build (arm > multi_v7_defconfig) produced this warning: > > lib/lzo/lzo1x_compress.c: In function 'lzo1x_1_do_compress': > lib/lzo/lzo1x_compress.c:239:14: warning: 'm_pos' may be used uninitialized >

[PATCH 5/8] lib/lzo: fast 8-byte copy on arm64

2018-11-30 Thread Dave Rodgman
From: Matt Sealey Enable faster 8-byte copies on arm64. Link: http://lkml.kernel.org/r/20181127161913.23863-6-dave.rodg...@arm.com Signed-off-by: Matt Sealey Signed-off-by: Dave Rodgman Cc: David S. Miller Cc: Greg Kroah-Hartman Cc: Herbert Xu Cc: Markus F.X.J. Oberhumer Cc: Minchan Kim

[PATCH 8/8] zram: default to lzo-rle instead of lzo

2018-11-30 Thread Dave Rodgman
lzo-rle gives higher performance and similar compression ratios to lzo. Testing with 80 browser tabs showed a 27% reduction in total time spent (de)compressing data during swapping. Signed-off-by: Dave Rodgman --- drivers/block/zram/zram_drv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion

[PATCH 7/8] lib/lzo: separate lzo-rle from lzo

2018-11-30 Thread Dave Rodgman
To prevent any issues with persistent data, separate lzo-rle from lzo so that it is treated as a separate algorithm, and lzo is still available. Link: http://lkml.kernel.org/r/20181127161913.23863-8-dave.rodg...@arm.com Signed-off-by: Dave Rodgman Cc: David S. Miller Cc: Greg Kroah-Hartman Cc

[PATCH 3/8] lib/lzo: enable 64-bit CTZ on Arm

2018-11-30 Thread Dave Rodgman
nstruction usage. We do not bother enabling LZO_USE_CTZ64 support for ARMv5 as the builtin code path does the same thing as the LZO_USE_CTZ32 code, only with more register pressure. Link: http://lkml.kernel.org/r/20181127161913.23863-4-dave.rodg...@arm.com Signed-off-by: Matt Sealey Signed-of

[PATCH 6/8] lib/lzo: implement run-length encoding

2018-11-30 Thread Dave Rodgman
igned-off-by: Dave Rodgman Cc: David S. Miller Cc: Greg Kroah-Hartman Cc: Herbert Xu Cc: Markus F.X.J. Oberhumer Cc: Matt Sealey Cc: Minchan Kim Cc: Nitin Gupta Cc: Richard Purdie Cc: Sergey Senozhatsky Cc: Sonny Rao Signed-off-by: Andrew Morton Signed-off-by: Stephen Rot

[PATCH v4 0/7] lib/lzo: performance improvements

2018-11-30 Thread Dave Rodgman
This patch series introduces performance improvements for lzo. The previous version of this patchset is here: https://lkml.org/lkml/2018/11/30/807 This version of the patchset fixes a maybe-used-uninitialized warning (although the previous version was still safe). Dave

[PATCH 1/8] lib/lzo: tidy-up ifdefs

2018-11-30 Thread Dave Rodgman
Modify the ifdefs in lzodefs.h to be more consistent with normal kernel macros (e.g., change __aarch64__ to CONFIG_ARM64). Signed-off-by: Dave Rodgman Cc: Herbert Xu Cc: David S. Miller Cc: Nitin Gupta Cc: Richard Purdie Cc: Markus F.X.J. Oberhumer Cc: Minchan Kim Cc: Sergey Senozhatsky

[PATCH 2/8] lib/lzo: clean-up by introducing COPY16

2018-11-30 Thread Dave Rodgman
ns to do the same job. Link: http://lkml.kernel.org/r/20181127161913.23863-3-dave.rodg...@arm.com Signed-off-by: Matt Sealey Signed-off-by: Dave Rodgman Cc: David S. Miller Cc: Greg Kroah-Hartman Cc: Herbert Xu Cc: Markus F.X.J. Oberhumer Cc: Minchan Kim Cc: Nitin Gupta Cc: Richard Purdi

[PATCH 4/8] lib/lzo: 64-bit CTZ on arm64

2018-11-30 Thread Dave Rodgman
://lkml.kernel.org/r/20181127161913.23863-5-dave.rodg...@arm.com Signed-off-by: Matt Sealey Signed-off-by: Dave Rodgman Cc: David S. Miller Cc: Greg Kroah-Hartman Cc: Herbert Xu Cc: Markus F.X.J. Oberhumer Cc: Minchan Kim Cc: Nitin Gupta Cc: Richard Purdie Cc: Sergey Senozhatsky Cc: Sonny Rao

Re: [PATCH v2] lzo: fix ip overrun during compress.

2018-11-30 Thread Dave Rodgman
> On 2018/11/30 0:49, Dave Rodgman wrote: >> On 28/11/2018 1:52 pm, David Sterba wrote: >> >>> The fix is adding a few branches to code that's supposed to be as fast >>> as possible. The branches would be evaluated all the time while >>> protecting

[PATCH v3 0/7] lib/lzo: performance improvements

2018-11-30 Thread Dave Rodgman
This patch series introduces performance improvements for lzo. The previous version of this patchset is here: https://lkml.org/lkml/2018/11/27/1086 On 29/11/2018 8:32 pm, Andrew Morton wrote: > On Thu, 29 Nov 2018 10:21:53 +0000 Dave Rodgman wrote: >>> OK, so it's not just

Re: [PATCH 7/7] lib/lzo: separate lzo-rle from lzo

2018-11-30 Thread Dave Rodgman
On 29/11/2018 4:43 am, Sergey Senozhatsky wrote: > On (11/27/18 16:19), Dave Rodgman wrote:> >> +static struct crypto_alg alg = { >> +.cra_name = "lzo-rle", >> +.cra_flags = CRYPTO_ALG_TYPE_COMPRESS, >> +.cra_ctxsi
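The quoted hunk is cut off in the listing; a sketch of how such a compression crypto_alg registration typically looks, with field values beyond those quoted assumed from the existing crypto/lzo.c pattern rather than taken from the patch:

    static struct crypto_alg alg = {
            .cra_name       = "lzo-rle",
            .cra_flags      = CRYPTO_ALG_TYPE_COMPRESS,
            .cra_ctxsize    = sizeof(struct lzorle_ctx),       /* assumed */
            .cra_module     = THIS_MODULE,
            .cra_init       = lzorle_init,                     /* assumed */
            .cra_exit       = lzorle_exit,                     /* assumed */
            .cra_u          = { .compress = {
                    .coa_compress   = lzorle_compress,         /* assumed */
                    .coa_decompress = lzorle_decompress } },
    };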

Re: [PATCH v2] lzo: fix ip overrun during compress.

2018-11-29 Thread Dave Rodgman
On 28/11/2018 1:52 pm, David Sterba wrote: > The fix is adding a few branches to code that's supposed to be as fast > as possible. The branches would be evaluated all the time while > protecting against one single bad page address. This does not look like > a good performance tradeoff. As an alte

Re: [PATCH 7/7] lib/lzo: separate lzo-rle from lzo

2018-11-29 Thread Dave Rodgman
On 29/11/2018 4:43 am, Sergey Senozhatsky wrote: > On (11/27/18 16:19), Dave Rodgman wrote: >> Documentation/lzo.txt | 12 ++- >> crypto/Makefile | 2 +- >> crypto/lzo-rle.c | 175 >>

[PATCH 5/7] lib/lzo: fast 8-byte copy on arm64

2018-11-27 Thread Dave Rodgman
From: Matt Sealey Enable faster 8-byte copies on arm64. Signed-off-by: Dave Rodgman Signed-off-by: Matt Sealey --- lib/lzo/lzodefs.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/lzo/lzodefs.h b/lib/lzo/lzodefs.h index c8965dc181df..06fa83a38e0a 100644 --- a/lib/lzo

[PATCH 7/7] lib/lzo: separate lzo-rle from lzo

2018-11-27 Thread Dave Rodgman
To prevent any issues with persistent data, separate lzo-rle from lzo so that it is treated as a separate algorithm, and lzo is still available. Use lzo-rle as the default algorithm for zram. Signed-off-by: Dave Rodgman --- Documentation/lzo.txt | 12 ++- crypto/Makefile

[PATCH 4/7] lib/lzo: 64-bit CTZ on arm64

2018-11-27 Thread Dave Rodgman
Sealey Signed-off-by: Dave Rodgman --- lib/lzo/lzodefs.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/lzo/lzodefs.h b/lib/lzo/lzodefs.h index c0193f726db0..c8965dc181df 100644 --- a/lib/lzo/lzodefs.h +++ b/lib/lzo/lzodefs.h @@ -28,7 +28,7 @@ #if defined(__BIG_ENDIAN

[PATCH 3/7] lib/lzo: enable 64-bit CTZ on Arm

2018-11-27 Thread Dave Rodgman
From: Matt Sealey ARMv6 Thumb state introduced an RBIT instruction which, combined with CLZ as present in ARMv5, introduces an extremely fast path for counting trailing zeroes. Enable the use of the GCC builtin for this on ARMv6+ with CONFIG_THUMB2_KERNEL to ensure we get the 'new' instruction u
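For reference, RBIT reverses bit order, so a trailing-zero count becomes a leading-zero count: ctz(x) == clz(rbit(x)). "Use of the GCC builtin" amounts to something like the sketch below (function name illustrative):

    /* Where RBIT is available, GCC lowers __builtin_ctz() to RBIT + CLZ.
     * Undefined for v == 0, so callers must guarantee a non-zero argument. */
    static inline unsigned int ctz32(u32 v)
    {
            return __builtin_ctz(v);
    }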

[PATCH 6/7] lib/lzo: implement run-length encoding

2018-11-27 Thread Dave Rodgman
nimal. Compression ratio is within a few percent in all cases. This modifies the bitstream in a way which is backwards compatible (i.e., we can decompress old bitstreams, but old versions of lzo cannot decompress new bitstreams). Signed-off-by: Dave Rodgman --- Documentation/lzo.txt

[PATCH 1/7] lib/lzo: tidy-up ifdefs

2018-11-27 Thread Dave Rodgman
Modify the ifdefs in lzodefs.h to be more consistent with normal kernel macros (e.g., change __aarch64__ to CONFIG_ARM64). Signed-off-by: Dave Rodgman --- lib/lzo/lzodefs.h | 8 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/lib/lzo/lzodefs.h b/lib/lzo/lzodefs.h index

[PATCH 2/7] lib/lzo: clean-up by introducing COPY16

2018-11-27 Thread Dave Rodgman
From: Matt Sealey Most compilers should be able to merge adjacent loads/stores of sizes which are less than but effect a multiple of a machine word size (in effect a memcpy() of a constant amount). However the semantics of the macro are that it just does the copy, the pointer increment is in the
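A sketch of the shape such a macro takes: two adjacent 8-byte copies that the compiler is free to merge, with pointer advancement left to the caller. COPY8 is assumed to be the existing 8-byte helper; this is not the verbatim patch:

    #define COPY16(dst, src)                        \
            do {                                    \
                    COPY8(dst, src);                \
                    COPY8((dst) + 8, (src) + 8);    \
            } while (0)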

[PATCH v2 0/7] lib/lzo: performance improvements

2018-11-27 Thread Dave Rodgman
This patch series introduces performance improvements for lzo. The previous version of this patchset is here: https://lkml.org/lkml/2018/11/21/625 This version tidies up the ifdefs as per Christoph's comment (although certainly more could be done, this is at least a bit more consistent with norma

Re: [PATCH 0/6] lib/lzo: performance improvements

2018-11-21 Thread Dave Rodgman
come from if you have 0% zeros. The chart does indeed include the other improvements, so this is where the performance uplift on the left hand side of the chart (i.e., random data) comes from. Thanks for taking a look at this. Dave > > Cheers, > Markus > > > > On

[PATCH 0/6] lib/lzo: performance improvements

2018-11-21 Thread Dave Rodgman
google.com/file/d/18GU4pgRVCLNN7wXxynz-8R2ygrY2IdyE/view Contributors: Dave Rodgman Matt Sealey

[PATCH 3/6] lib/lzo: 64-bit CTZ on Arm aarch64

2018-11-21 Thread Dave Rodgman
Sealey Signed-off-by: Dave Rodgman --- lib/lzo/lzodefs.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/lzo/lzodefs.h b/lib/lzo/lzodefs.h index a9927bbce21f..d167f8fc6795 100644 --- a/lib/lzo/lzodefs.h +++ b/lib/lzo/lzodefs.h @@ -28,7 +28,7 @@ #if defined(__BIG_ENDIAN

[PATCH 6/6] lib/lzo: separate lzo-rle from lzo

2018-11-21 Thread Dave Rodgman
To prevent any issues with persistent data, separate lzo-rle from lzo so that it is treated as a separate algorithm, and lzo is still available. Use lzo-rle as the default algorithm for zram. Signed-off-by: Dave Rodgman --- Documentation/lzo.txt | 12 ++- crypto/Makefile

[PATCH 5/6] lib/lzo: implement run-length encoding

2018-11-21 Thread Dave Rodgman
nimal. Compression ratio is within a few percent in all cases. This modifies the bitstream in a way which is backwards compatible (i.e., we can decompress old bitstreams, but old versions of lzo cannot decompress new bitstreams). Signed-off-by: Dave Rodgman --- Documentation/lzo.txt

[PATCH 2/6] lib/lzo: enable 64-bit CTZ on Arm

2018-11-21 Thread Dave Rodgman
From: Matt Sealey ARMv6 Thumb state introduced an RBIT instruction which, combined with CLZ as present in ARMv5, introduces an extremely fast path for counting trailing zeroes. Enable the use of the GCC builtin for this on ARMv6+ with CONFIG_THUMB2_KERNEL to ensure we get the 'new' instruction u

[PATCH 4/6] lib/lzo: fast 8-byte copy on arm64

2018-11-21 Thread Dave Rodgman
From: Matt Sealey Enable faster 8-byte copies on arm64. Signed-off-by: Dave Rodgman Signed-off-by: Matt Sealey --- lib/lzo/lzodefs.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/lzo/lzodefs.h b/lib/lzo/lzodefs.h index d167f8fc6795..341d0f6095ab 100644 --- a/lib/lzo

[PATCH 1/6] lib/lzo: clean-up by introducing COPY16

2018-11-21 Thread Dave Rodgman
From: Matt Sealey Most compilers should be able to merge adjacent loads/stores of sizes which are less than but effect a multiple of a machine word size (in effect a memcpy() of a constant amount). However the semantics of the macro are that it just does the copy, the pointer increment is in the