On 10/18/21 11:01, Adrian Klaver wrote:
Not sure how much this applies to the Postgres usage of lz4. As I
understand it, this is only used internally for table compression.
When using pg_dump, gzip compression is used, unless you pipe the
plain-text output through some other program.
This appli
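Adrian's point about piping can be sketched as follows, assuming the `lz4` command-line tool is installed (the database and file names are placeholders):

```shell
# Plain-format dump piped through the lz4 CLI instead of pg_dump's built-in gzip
pg_dump -Fp mydb | lz4 > mydb.sql.lz4

# Restore: decompress and feed the SQL back into psql
lz4 -dc mydb.sql.lz4 | psql mydb
```

This requires a running server, so it is only an illustration of the pipe, not a runnable recipe.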
On Mon, Oct 18, 2021 at 08:01:04AM -0700, Adrian Klaver wrote:
> Not sure how much this applies to the Postgres usage of lz4. As I understand
> it, this is only used internally for table compression. When using pg_dump
> compression gzip is used. Unless you pipe plain text output through some
> other program.
On 10/18/21 06:41, Mladen Gogala wrote:
On 10/18/21 01:07, Michael Paquier wrote:
CPU-speaking, LZ4 is *much* faster than pglz when it comes to
compression or decompression with its default options. The
compression ratio is comparable between both, still LZ4 compresses on
average less than pglz.
--
Michael
LZ4 works much better wi
On Mon, Oct 18, 2021 at 7:18 AM Michael Paquier wrote:
> On Sun, Oct 17, 2021 at 10:13:48PM +0300, Florents Tselai wrote:
> > I did look into VACUUM(full) for its PROCESS_TOAST option which
> > makes sense, but the thing is I already had a cron-ed VACUUM (full)
> > which I ended up disabling a while back; exactly because of the
> > double-space requirement.
On Mon, Oct 18, 2021 at 09:57:11AM +0300, Florents Tselai wrote:
> Oh, that’s good to know then. So besides ALTER COMPRESSION for
> future inserts there’s not much one can do for pre-existing values
The posting style of the mailing list is to not top-post, so if you
could avoid breaking the logic
Oh, that’s good to know then. So besides ALTER COMPRESSION for future inserts
there’s not much one can do for pre-existing values
I think it makes sense to update/ add more info to the docs on this as well,
since other people in the thread expected this to work that way too.
Maybe at some point,
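For readers landing here, the PostgreSQL 14 commands being discussed look roughly like this (the table and column names are placeholders):

```sql
-- Newly-inserted values in this column are compressed with lz4;
-- pre-existing values keep whatever method they were stored with.
ALTER TABLE docs ALTER COLUMN body SET COMPRESSION lz4;

-- Make lz4 the default for columns that don't specify a method.
SET default_toast_compression = 'lz4';
```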
On Sun, 17 Oct 2021 at 21:04, Florents Tselai wrote:
> Yes, that COPY-delete-COPY sequence is what I ended up doing.
> Unfortunately I can't use ranges, as the PK is a text string.
Unless you have a really weird PK and have trouble calculating bounds,
text strings are sortable and fine to use as ranges.
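The batching idea above can be sketched like this (table, column, and bounds are made up for illustration):

```sql
-- Text PKs sort lexicographically, so slices like this work for batching
SELECT id, body
FROM docs
WHERE id >= 'm' AND id < 'n'
ORDER BY id;
```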
On Sun, Oct 17, 2021 at 10:13:48PM +0300, Florents Tselai wrote:
> I did look into VACUUM(full) for its PROCESS_TOAST option which
> makes sense, but the thing is I already had a cron-ed VACUUM (full)
> which I ended up disabling a while back; exactly because of the
> double-space requirement.
Pl
On Sun, Oct 17, 2021 at 10:33:52PM +0200, Daniel Verite wrote:
> However lz4 appears to be much faster to compress than pglz, so its
> benefit is clear in terms of CPU usage for future insertions.
CPU-speaking, LZ4 is *much* faster than pglz when it comes to
compression or decompression with its default options.
Florents Tselai wrote:
> I have a table storing mostly text data (40M+ rows) that has
> pg_total_relation_size ~670GB.
> I’ve just upgraded to postgres 14 and I’m now eager to try the new LZ4
> compression.
You could start experimenting with data samples rather than the
full contents.
FW
I did look into VACUUM(full) for its PROCESS_TOAST option which makes sense,
but the thing is I already had a cron-ed VACUUM (full) which I ended up
disabling a while back; exactly because of the double-space requirement.
The DB already has a 1TB size and occupying another 600GB would require so
Yes, that COPY-delete-COPY sequence is what I ended up doing.
Unfortunately I can't use ranges, as the PK is a text string.
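A minimal sketch of that COPY-delete-COPY rewrite, assuming a table `docs` and a server-writable path (both placeholders):

```sql
-- Dump the rows out, clear the table, and reload them;
-- reloaded values are compressed with the column's current setting.
COPY docs TO '/tmp/docs.copy';
TRUNCATE docs;  -- or DELETE in batches if the table must stay available
COPY docs FROM '/tmp/docs.copy';
```

Like any full-table rewrite, this needs roughly the table's size in free disk space for the dump file.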
> On 17 Oct 2021, at 7:36 PM, Ron wrote:
>
> On 10/17/21 10:12 AM, Florents Tselai wrote:
>> Hello,
>>
>> I have a table storing mostly text data (40M+ rows) that has
>> pg_total_relation_size ~670GB.
On 10/17/21 10:17, Magnus Hagander wrote:
On Sun, Oct 17, 2021 at 5:12 PM Florents Tselai wrote:
Is there a smarter way to do this ?
It should be enough to VACUUM FULL the table. (but it has to be VACUUM
FULL, not a regular vacuum). Or CLUSTER.
With
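One way to check which method stored values actually use is `pg_column_compression()` (table and column names are placeholders):

```sql
-- Counts stored values per compression method;
-- NULL means the value was not compressed at all.
SELECT pg_column_compression(body) AS method, count(*)
FROM docs
GROUP BY 1;
```

Running this before and after a rewrite shows whether existing values were actually recompressed.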
On Sun, Oct 17, 2021 at 5:12 PM Florents Tselai
wrote:
> Hello,
>
> I have a table storing mostly text data (40M+ rows) that has
> pg_total_relation_size ~670GB.
> I’ve just upgraded to postgres 14 and I’m now eager to try the new LZ4
> compression.
>
> I’ve altered the column to use the new lz4
On 10/17/21 11:36 AM, Ron wrote:
On 10/17/21 10:12 AM, Florents Tselai wrote:
Hello,
I have a table storing mostly text data (40M+ rows) that has
pg_total_relation_size ~670GB.
I’ve just upgraded to postgres 14 and I’m now eager to try the new LZ4
compression.
I’ve altered the column to use the new lz4 compression, but that