On Tue, Sep 21, 2021 at 9:35 AM Jeevan Ladhe <jeevan.la...@enterprisedb.com> wrote:
> Here is a patch for lz4 based on the v5 set of patches. The patch
> adapts with the bbsink changes, and is now able to make the provision
> for the required length for the output buffer using the new callback
> function bbsink_lz4_begin_backup().
>
> Sample command to take backup:
> pg_basebackup -t server:/tmp/data_lz4 -Xnone --server-compression=lz4
>
> Please let me know your thoughts.
This pretty much looks right, with the exception of the autoFlush
thing about which I sent a separate email. I need to write docs for
all of this, and ideally test cases. It might also be good if
pg_basebackup had an option to un-gzip or un-lz4 archives, but I
haven't thought too hard about what would be required to make that
work.

+ if (opt->compression == BACKUP_COMPRESSION_LZ4)

else if

+ /* First of all write the frame header to destination buffer. */
+ Assert(CHUNK_SIZE >= LZ4F_HEADER_SIZE_MAX);
+ headerSize = LZ4F_compressBegin(mysink->ctx,
+ mysink->base.bbs_next->bbs_buffer,
+ CHUNK_SIZE,
+ prefs);

I think this is wrong. I think you should be passing
bbs_buffer_length instead of CHUNK_SIZE, and I think you can just
delete CHUNK_SIZE. If you think otherwise, why?

+ * sink's bbs_buffer of length that can accomodate the compressed input

Spelling.

+ * Make it next multiple of BLCKSZ since the buffer length is expected so.

The buffer length is expected to be a multiple of BLCKSZ, so round up.

+ * If we are falling short of available bytes needed by
+ * LZ4F_compressUpdate() per the upper bound that is decided by
+ * LZ4F_compressBound(), send the archived contents to the next sink to
+ * process it further.

If the number of available bytes has fallen below the value computed
by LZ4F_compressBound(), ask the next sink to process the data so that
we can empty the buffer.

--
Robert Haas
EDB: http://www.enterprisedb.com