Looks good to me, thanks
Dave
On 22/05/2020, 15:11, "C. Masloch" wrote:
The encoded distance bits are zero, but the distance that is
calculated from this is actually equal to 16384. So correct
this statement to read that the 0001HLLL instruction means
EOS when a distance of 16384 is used.
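For reference, a minimal sketch (not the kernel's actual decompressor code) of how the distance for a 0001HLLL instruction works out to 16384 even though the encoded distance bits are zero; the instruction byte is followed by one LE16 carrying the D and S fields as described in Documentation/lzo.txt:

/* Sketch only, following the lzo.txt description of 0001HLLL:
 * distance = 16384 + (H << 14) + D, where D is the upper 14 bits of
 * the LE16 that follows the instruction byte.  With H == 0 and D == 0
 * the distance is exactly 16384, which marks end of stream. */
static unsigned int m4_distance(unsigned char insn, unsigned short le16)
{
	unsigned int h = (insn >> 3) & 1;	/* H bit of 0001HLLL */
	unsigned int d = le16 >> 2;		/* 14 distance bits  */

	return 16384 + (h << 14) + d;		/* == 16384 => EOS   */
}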
Your update looks correct to me, thanks.
Dave
On 22/05/2020, 15:11, "C. Masloch" wrote:
There was an error in the description of the initial byte's
interpretation. While "18..21" was listed as "copy 0..3 literals",
it should actually be interpreted as "copy 1..4 literals".
Th
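A minimal sketch of the corrected first-byte rule quoted above (illustrative only, not the kernel decompressor; the helper name is made up):

/* Illustrative helper, not kernel code: for a first byte of 18..21 the
 * state is (byte - 17), i.e. 1..4 literals are copied, not 0..3. */
static unsigned int initial_literal_count(unsigned char first_byte)
{
	if (first_byte >= 18 && first_byte <= 21)
		return first_byte - 17;	/* 1..4 literals */
	return 0;	/* other first bytes are handled elsewhere */
}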
input
sequences; for all of these cases, updated lzo-rle worked correctly.
There is no significant impact to performance or compression ratio.
Cc: sta...@vger.kernel.org
Signed-off-by: Dave Rodgman
---
Documentation/lzo.txt    | 8 ++--
lib/lzo/lzo1x_compress.c | 13 +
2 files
Fix an unaligned access which breaks on platforms where this is not
permitted (e.g., Sparc).
Signed-off-by: Dave Rodgman
---
lib/lzo/lzo1x_compress.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/lib/lzo/lzo1x_compress.c b/lib/lzo/lzo1x_compress.c
index
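The general pattern for avoiding such unaligned accesses in kernel code looks roughly like this (a sketch, not the exact hunk from this patch):

#include <asm/unaligned.h>
#include <linux/types.h>

/* Sketch only: read a 32-bit little-endian value from a pointer that
 * may not be 4-byte aligned.  A direct  *(const u32 *)ip  dereference
 * would fault on strict-alignment architectures such as Sparc. */
static inline u32 read_le32_unaligned(const unsigned char *ip)
{
	return get_unaligned_le32(ip);
}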
Thanks Anatoly, I'll take a look at this. Could you please let me know the
exact hardware you're running on?
thanks
Dave
From: linux-kernel-ow...@vger.kernel.org
on behalf of Anatoly Pugachev
Sent: 08 September 2019 11:58:08
To: Sparc kernel list
Cc:
Bitstream versions > 0 require a minimum stream length of 5.
Also fixes a bug in handling the tail for very short inputs when a
bitstream version is present.
Change-Id: Ifcf7a1b9acc46a25cb3ef746eccfe26937209560
Signed-off-by: Dave Rodgman
---
Documentation/lzo.txt | 8 +---
lib/lzo/lzo1x_compress.c
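A hedged illustration of the documented rule above (the constant, helper name and error handling are hypothetical, not taken from the patch):

#include <linux/types.h>

/* Illustrative only: a stream carrying a bitstream version > 0 is
 * required to be at least 5 bytes long, so anything shorter can be
 * rejected before parsing. */
#define LZO_MIN_VERSIONED_STREAM_LEN 5

static int stream_length_ok(size_t in_len, int bitstream_version)
{
	if (bitstream_version > 0 && in_len < LZO_MIN_VERSIONED_STREAM_LEN)
		return 0;	/* too short to be a valid versioned stream */
	return 1;
}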
On 19/03/2019 8:15 pm, Nick Terrell wrote:
> Hi Dave,
>
> I just saw you patches adding LZO-RLE, so I decided to fuzz the LZO
> compressor and decompressor. I didn't find any crashes, but I found some edge
> cases in the decompressor.
Hi Nick,
Thanks - I will take a look at this. These cases won'
To prevent any issues with persistent data, separate lzo-rle
from lzo so that it is treated as a separate algorithm, and
lzo is still available.
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: Markus F.X.J. Oberhumer
Cc: Matt Sealey
Cc: Minchan Kim
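A rough sketch of what registering lzo-rle as its own crypto algorithm looks like (illustrative, not the actual crypto/lzo-rle.c from the patch; the handler bodies are placeholders):

#include <linux/crypto.h>
#include <linux/errno.h>
#include <linux/module.h>

/* Placeholder handlers -- the real module wraps the lzo-rle library
 * calls here.  Registering under a new cra_name ("lzo-rle") leaves the
 * existing "lzo" algorithm untouched, so previously compressed data
 * remains readable. */
static int lzorle_compress(struct crypto_tfm *tfm, const u8 *src,
			   unsigned int slen, u8 *dst, unsigned int *dlen)
{
	return -ENOSYS;
}

static int lzorle_decompress(struct crypto_tfm *tfm, const u8 *src,
			     unsigned int slen, u8 *dst, unsigned int *dlen)
{
	return -ENOSYS;
}

static struct crypto_alg alg = {
	.cra_name	= "lzo-rle",
	.cra_flags	= CRYPTO_ALG_TYPE_COMPRESS,
	.cra_module	= THIS_MODULE,
	.cra_u		= { .compress = {
		.coa_compress	= lzorle_compress,
		.coa_decompress	= lzorle_decompress } },
};

static int __init lzorle_mod_init(void)
{
	return crypto_register_alg(&alg);
}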
minimal.
Compression ratio is within a few percent in all cases.
This modifies the bitstream in a way which is backwards compatible
(i.e., we can decompress old bitstreams, but old versions of lzo
cannot decompress new bitstreams).
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
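For context on the bitstream change (the lzo-rle run-length extension), a generic illustration of the idea; this is not the actual lzo-rle instruction encoding, just the kind of zero-run scan the compressor adds:

#include <linux/types.h>

/* Generic run-length illustration only -- not the lzo-rle bitstream
 * format.  The compressor looks for long runs of zero bytes and emits
 * them as a single length record instead of ordinary literal/match
 * instructions. */
static size_t count_zero_run(const unsigned char *ip, const unsigned char *in_end)
{
	const unsigned char *start = ip;

	while (ip < in_end && *ip == 0)
		ip++;
	return ip - start;
}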
lzo-rle gives higher performance and similar compression ratios to lzo.
Signed-off-by: Dave Rodgman
---
drivers/block/zram/zram_drv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 04ca65912638
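The single-line zram change this diffstat refers to is presumably the default-compressor switch described elsewhere in the thread; a sketch (the variable name follows drivers/block/zram/zram_drv.c, so treat it as an assumption):

/* zram picks lzo-rle by default; "lzo" and the other algorithms remain
 * selectable via /sys/block/zram0/comp_algorithm. */
static const char *default_compressor = "lzo-rle";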
Hi,
Following on from the previous lzo-rle patchset:
https://lkml.org/lkml/2018/11/30/972
This patchset contains only the RLE patches, and should be applied on top of
the non-RLE patches ( https://lkml.org/lkml/2019/2/5/366 ).
Previously, some questions were raised around the RLE patches. I've
Modify the ifdefs in lzodefs.h to be more consistent with normal kernel
macros (e.g., change __aarch64__ to CONFIG_ARM64).
Signed-off-by: Dave Rodgman
Cc: Herbert Xu
Cc: David S. Miller
Cc: Nitin Gupta
Cc: Richard Purdie
Cc: Markus F.X.J. Oberhumer
Cc: Minchan Kim
Cc: Sergey Senozhatsky
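An example of the style of change this describes (illustrative, not a verbatim lzodefs.h hunk):

/* before: keyed off the compiler's own architecture define */
#if defined(__aarch64__)
#define LZO_USE_CTZ64	1
#endif

/* after: keyed off the kernel's Kconfig symbol */
#if defined(CONFIG_ARM64)
#define LZO_USE_CTZ64	1
#endif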
Link: http://lkml.kernel.org/r/20181127161913.23863-5-dave.rodg...@arm.com
Signed-off-by: Matt Sealey
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: Markus F.X.J. Oberhumer
Cc: Minchan Kim
Cc: Nitin Gupta
Cc: Richard Purdie
Cc: Sergey Senozhatsky
Cc: Sonny Rao
From: Matt Sealey
Enable faster 8-byte copies on arm64.
Link: http://lkml.kernel.org/r/20181127161913.23863-6-dave.rodg...@arm.com
Signed-off-by: Matt Sealey
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: Markus F.X.J. Oberhumer
Cc: Minchan Kim
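For context, the 8-byte copy idea looks roughly like this (a sketch along the lines of lzodefs.h; the details may differ from the actual macros):

#include <asm/unaligned.h>
#include <linux/types.h>

/* On 64-bit targets with fast unaligned accesses, an 8-byte copy is a
 * single unaligned load plus store rather than two 4-byte copies. */
#define COPY4(dst, src) \
	put_unaligned(get_unaligned((const u32 *)(src)), (u32 *)(dst))

#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
#define COPY8(dst, src) \
	put_unaligned(get_unaligned((const u64 *)(src)), (u64 *)(dst))
#else
#define COPY8(dst, src) \
	do { COPY4(dst, src); COPY4((dst) + 4, (src) + 4); } while (0)
#endif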
Hi,
Following on from the previous lzo-rle patchset:
https://lkml.org/lkml/2018/11/30/972
This patchset contains only the patches which were ack'd by Markus (i.e., not
the RLE patches). I believe Markus was happy to land these (please shout if
that's not the case).
Regarding the RLE patches, I'
details on this please? I have not seen any crashes in my testing, so I'm
not able to look into this without more data.
On 07/12/2018 3:54 pm, Dave Rodgman wrote:
> Hi Markus,
>
> On 06/12/2018 3:47 pm, Markus F.X.J. Oberhumer wrote:
>> Request 3 - add lzo-rle; *NOT* acked by me
>
Hi Markus,
On 06/12/2018 3:47 pm, Markus F.X.J. Oberhumer wrote:
> Request 3 - add lzo-rle; *NOT* acked by me
>
> [PATCH 6/8] lib/lzo: implement run-length encoding
> [PATCH 7/8] lib/lzo: separate lzo-rle from lzo
> [PATCH 8/8] zram: default to lzo-rle instead of lzo
>
> It (1) s
On 05/12/2018 7:30 am, Sergey Senozhatsky wrote:
> Hi Dave,
>
> Noticed this warning today:
>
> lib/lzo/lzo1x_compress.c: In function ‘lzo1x_1_do_compress’:
> lib/lzo/lzo1x_compress.c:239:14: warning: ‘m_pos’ may be used uninitialized
> in this function [-Wmaybe-uninitialized]
> 239 | m_off =
On 30/11/2018 5:40 am, Stephen Rothwell wrote:
> After merging the akpm tree, today's linux-next build (arm
> multi_v7_defconfig) produced this warning:
>
> lib/lzo/lzo1x_compress.c: In function 'lzo1x_1_do_compress':
> lib/lzo/lzo1x_compress.c:239:14: warning: 'm_pos' may be used uninitialized
>
From: Matt Sealey
Enable faster 8-byte copies on arm64.
Link: http://lkml.kernel.org/r/20181127161913.23863-6-dave.rodg...@arm.com
Signed-off-by: Matt Sealey
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: Markus F.X.J. Oberhumer
Cc: Minchan Kim
lzo-rle gives higher performance and similar compression ratios to lzo.
Testing with 80 browser tabs showed a 27% reduction in total time spent
(de)compressing data during swapping.
Signed-off-by: Dave Rodgman
---
drivers/block/zram/zram_drv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
To prevent any issues with persistent data, separate lzo-rle
from lzo so that it is treated as a separate algorithm, and
lzo is still available.
Link: http://lkml.kernel.org/r/20181127161913.23863-8-dave.rodg...@arm.com
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
Cc
instruction usage.
We do not bother enabling LZO_USE_CTZ64 support for ARMv5 as the builtin
code path does the same thing as the LZO_USE_CTZ32 code, only with more
register pressure.
Link: http://lkml.kernel.org/r/20181127161913.23863-4-dave.rodg...@arm.com
Signed-off-by: Matt Sealey
Signed-off-by: Dave Rodgman
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: Markus F.X.J. Oberhumer
Cc: Matt Sealey
Cc: Minchan Kim
Cc: Nitin Gupta
Cc: Richard Purdie
Cc: Sergey Senozhatsky
Cc: Sonny Rao
Signed-off-by: Andrew Morton
Signed-off-by: Stephen Rothwell
This patch series introduces performance improvements for lzo.
The previous version of this patchset is here:
https://lkml.org/lkml/2018/11/30/807
This version of the patchset fixes a maybe-used-uninitialized warning
(although the previous version was still safe).
Dave
Modify the ifdefs in lzodefs.h to be more consistent with normal kernel
macros (e.g., change __aarch64__ to CONFIG_ARM64).
Signed-off-by: Dave Rodgman
Cc: Herbert Xu
Cc: David S. Miller
Cc: Nitin Gupta
Cc: Richard Purdie
Cc: Markus F.X.J. Oberhumer
Cc: Minchan Kim
Cc: Sergey Senozhatsky
instructions to do the same job.
Link: http://lkml.kernel.org/r/20181127161913.23863-3-dave.rodg...@arm.com
Signed-off-by: Matt Sealey
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: Markus F.X.J. Oberhumer
Cc: Minchan Kim
Cc: Nitin Gupta
Cc: Richard Purdie
Link: http://lkml.kernel.org/r/20181127161913.23863-5-dave.rodg...@arm.com
Signed-off-by: Matt Sealey
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: Markus F.X.J. Oberhumer
Cc: Minchan Kim
Cc: Nitin Gupta
Cc: Richard Purdie
Cc: Sergey Senozhatsky
Cc: Sonny Rao
> On 2018/11/30 0:49, Dave Rodgman wrote:
>> On 28/11/2018 1:52 pm, David Sterba wrote:
>>
>>> The fix is adding a few branches to code that's supposed to be as fast
>>> as possible. The branches would be evaluated all the time while
>>> protecting
instruction usage.
We do not bother enabling LZO_USE_CTZ64 support for ARMv5 as the builtin
code path does the same thing as the LZO_USE_CTZ32 code, only with more
register pressure.
Link: http://lkml.kernel.org/r/20181127161913.23863-4-dave.rodg...@arm.com
Signed-off-by: Matt Sealey
Signed-off-by: Dave Rodgman
lzo-rle gives higher performance and similar compression ratios to lzo.
Testing with 80 browser tabs showed a 27% reduction in total time spent
(de)compressing data during swapping.
Signed-off-by: Dave Rodgman
---
drivers/block/zram/zram_drv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
To prevent any issues with persistent data, separate lzo-rle
from lzo so that it is treated as a separate algorithm, and
lzo is still available.
Link: http://lkml.kernel.org/r/20181127161913.23863-8-dave.rodg...@arm.com
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
Cc
Modify the ifdefs in lzodefs.h to be more consistent with normal kernel
macros (e.g., change __aarch64__ to CONFIG_ARM64).
Signed-off-by: Dave Rodgman
Cc: Herbert Xu
Cc: David S. Miller
Cc: Nitin Gupta
Cc: Richard Purdie
Cc: Markus F.X.J. Oberhumer
Cc: Minchan Kim
Cc: Sergey Senozhatsky
From: Matt Sealey
Enable faster 8-byte copies on arm64.
Link: http://lkml.kernel.org/r/20181127161913.23863-6-dave.rodg...@arm.com
Signed-off-by: Matt Sealey
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: Markus F.X.J. Oberhumer
Cc: Minchan Kim
This patch series introduces performance improvements for lzo.
The previous version of this patchset is here:
https://lkml.org/lkml/2018/11/27/1086
On 29/11/2018 8:32 pm, Andrew Morton wrote:
> On Thu, 29 Nov 2018 10:21:53 +0000 Dave Rodgman wrote:
>>> OK, so it's not just
Link: http://lkml.kernel.org/r/20181127161913.23863-5-dave.rodg...@arm.com
Signed-off-by: Matt Sealey
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: Markus F.X.J. Oberhumer
Cc: Minchan Kim
Cc: Nitin Gupta
Cc: Richard Purdie
Cc: Sergey Senozhatsky
Cc: Sonny Rao
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: Markus F.X.J. Oberhumer
Cc: Matt Sealey
Cc: Minchan Kim
Cc: Nitin Gupta
Cc: Richard Purdie
Cc: Sergey Senozhatsky
Cc: Sonny Rao
Signed-off-by: Andrew Morton
Signed-off-by: Stephen Rothwell
instructions to do the same job.
Link: http://lkml.kernel.org/r/20181127161913.23863-3-dave.rodg...@arm.com
Signed-off-by: Matt Sealey
Signed-off-by: Dave Rodgman
Cc: David S. Miller
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: Markus F.X.J. Oberhumer
Cc: Minchan Kim
Cc: Nitin Gupta
Cc: Richard Purdie
On 29/11/2018 4:43 am, Sergey Senozhatsky wrote:
> On (11/27/18 16:19), Dave Rodgman wrote:
>>
>> +static struct crypto_alg alg = {
>> +        .cra_name    = "lzo-rle",
>> +        .cra_flags   = CRYPTO_ALG_TYPE_COMPRESS,
>> +        .cra_ctxsize
On 28/11/2018 1:52 pm, David Sterba wrote:
> The fix is adding a few branches to code that's supposed to be as fast
> as possible. The branches would be evaluated all the time while
> protecting against one single bad page address. This does not look like
> a good performance tradeoff.
As an alte
On 29/11/2018 4:43 am, Sergey Senozhatsky wrote:
> On (11/27/18 16:19), Dave Rodgman wrote:
>> Documentation/lzo.txt | 12 ++-
>> crypto/Makefile | 2 +-
>> crypto/lzo-rle.c | 175
>>
From: Matt Sealey
Enable faster 8-byte copies on arm64.
Signed-off-by: Dave Rodgman
Signed-off-by: Matt Sealey
---
lib/lzo/lzodefs.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/lzo/lzodefs.h b/lib/lzo/lzodefs.h
index c8965dc181df..06fa83a38e0a 100644
--- a/lib/lzo/lzodefs.h
To prevent any issues with persistent data, separate lzo-rle
from lzo so that it is treated as a separate algorithm, and
lzo is still available.
Use lzo-rle as the default algorithm for
zram.
Signed-off-by: Dave Rodgman
---
Documentation/lzo.txt | 12 ++-
crypto/Makefile
Signed-off-by: Matt Sealey
Signed-off-by: Dave Rodgman
---
lib/lzo/lzodefs.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/lzo/lzodefs.h b/lib/lzo/lzodefs.h
index c0193f726db0..c8965dc181df 100644
--- a/lib/lzo/lzodefs.h
+++ b/lib/lzo/lzodefs.h
@@ -28,7 +28,7 @@
#if defined(__BIG_ENDIAN
From: Matt Sealey
ARMv6 Thumb state introduced an RBIT instruction which, combined with CLZ
as present in ARMv5, introduces an extremely fast path for counting
trailing zeroes.
Enable the use of the GCC builtin for this on ARMv6+ with
CONFIG_THUMB2_KERNEL to ensure we get the 'new' instruction usage.
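A small illustration of the point (not taken from the patch): on ARMv6T2/Thumb-2 the GCC builtin compiles down to an rbit followed by clz, so counting trailing zeroes is a two-instruction operation.

/* Illustrative helper only: count trailing zero bits of a word.
 * With RBIT + CLZ available this lowers to two instructions, which is
 * what makes the builtin path attractive here. */
static inline unsigned int ctz32(unsigned int v)
{
	return v ? (unsigned int)__builtin_ctz(v) : 32;
}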
minimal.
Compression ratio is within a few percent in all cases.
This modifies the bitstream in a way which is backwards compatible
(i.e., we can decompress old bitstreams, but old versions of lzo
cannot decompress new bitstreams).
Signed-off-by: Dave Rodgman
---
Documentation/lzo.txt
Modify the ifdefs in lzodefs.h to be more consistent with normal kernel
macros (e.g., change __aarch64__ to CONFIG_ARM64).
Signed-off-by: Dave Rodgman
---
lib/lzo/lzodefs.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/lzo/lzodefs.h b/lib/lzo/lzodefs.h
index
From: Matt Sealey
Most compilers should be able to merge adjacent loads/stores whose sizes
are smaller than a machine word but together add up to a multiple of the
machine word size (in effect, a memcpy() of a constant amount). However,
the semantics of the macro are that it just does the copy; the pointer
increment is in the
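To illustrate the merging point (the macro name is mine, not claimed to be the actual lzodefs.h definition; dst and src are assumed to be byte pointers):

#include <linux/string.h>

/* Two adjacent constant-size 4-byte copies: a compiler targeting a
 * 64-bit machine will normally merge them into one 8-byte load/store
 * pair.  The macro only copies; advancing the pointers is left to the
 * caller. */
#define COPY8_ILLUSTRATIVE(dst, src) \
	do { memcpy(dst, src, 4); memcpy((dst) + 4, (src) + 4, 4); } while (0)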
This patch series introduces performance improvements for lzo.
The previous version of this patchset is here:
https://lkml.org/lkml/2018/11/21/625
This version tidies up the ifdefs as per Christoph's comment (although
certainly more could be done, this is at least a bit more consistent
with normal kernel macros).
come from if you have 0% zeros.
The chart does indeed include the other improvements, so this is where
the performance uplift on the left hand side of the chart (i.e., random
data) comes from.
Thanks for taking a look at this.
Dave
>
> Cheers,
> Markus
>
>
>
> On
google.com/file/d/18GU4pgRVCLNN7wXxynz-8R2ygrY2IdyE/view
Contributors:
Dave Rodgman
Matt Sealey
Signed-off-by: Matt Sealey
Signed-off-by: Dave Rodgman
---
lib/lzo/lzodefs.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/lzo/lzodefs.h b/lib/lzo/lzodefs.h
index a9927bbce21f..d167f8fc6795 100644
--- a/lib/lzo/lzodefs.h
+++ b/lib/lzo/lzodefs.h
@@ -28,7 +28,7 @@
#if defined(__BIG_ENDIAN
To prevent any issues with persistent data, separate lzo-rle
from lzo so that it is treated as a separate algorithm, and
lzo is still available.
Use lzo-rle as the default algorithm for
zram.
Signed-off-by: Dave Rodgman
---
Documentation/lzo.txt | 12 ++-
crypto/Makefile
minimal.
Compression ratio is within a few percent in all cases.
This modifies the bitstream in a way which is backwards compatible
(i.e., we can decompress old bitstreams, but old versions of lzo
cannot decompress new bitstreams).
Signed-off-by: Dave Rodgman
---
Documentation/lzo.txt
From: Matt Sealey
ARMv6 Thumb state introduced an RBIT instruction which, combined with CLZ
as present in ARMv5, introduces an extremely fast path for counting
trailing zeroes.
Enable the use of the GCC builtin for this on ARMv6+ with
CONFIG_THUMB2_KERNEL to ensure we get the 'new' instruction usage.
From: Matt Sealey
Enable faster 8-byte copies on arm64.
Signed-off-by: Dave Rodgman
Signed-off-by: Matt Sealey
---
lib/lzo/lzodefs.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/lzo/lzodefs.h b/lib/lzo/lzodefs.h
index d167f8fc6795..341d0f6095ab 100644
--- a/lib/lzo/lzodefs.h
From: Matt Sealey
Most compilers should be able to merge adjacent loads/stores whose sizes
are smaller than a machine word but together add up to a multiple of the
machine word size (in effect, a memcpy() of a constant amount). However,
the semantics of the macro are that it just does the copy; the pointer
increment is in the