> Personally, my biggest gripe about the way we do compression is that
> it's easy to detoast the same object lots of times. More generally,
> our in-memory representation of user data values is pretty much a
> mirror of our on-disk representation, even when that leads to excess
> conversions. Be
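To make the first gripe concrete, here is a minimal sketch of the usual workaround at the C level: detoast a function argument once and reuse the pointer, rather than re-fetching it at every use site. The function itself is hypothetical; PG_GETARG_TEXT_PP, VARDATA_ANY and VARSIZE_ANY_EXHDR are the stock fmgr/varlena macros.

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(first_last_bytes);   /* hypothetical example function */

    Datum
    first_last_bytes(PG_FUNCTION_ARGS)
    {
        /* Detoast (decompress / fetch) the argument exactly once ... */
        text           *t   = PG_GETARG_TEXT_PP(0);
        unsigned char  *p   = (unsigned char *) VARDATA_ANY(t);
        int             len = VARSIZE_ANY_EXHDR(t);

        /*
         * ... and keep using 't' below; calling PG_GETARG_TEXT_PP(0) at every
         * use site would repeat the detoast work for the same value.
         */
        if (len == 0)
            PG_RETURN_NULL();

        PG_RETURN_INT32((p[0] << 8) | p[len - 1]);
    }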
On Tue, Jan 8, 2013 at 9:51 AM, Claudio Freire wrote:
> On Tue, Jan 8, 2013 at 10:20 AM, Robert Haas wrote:
>> On Tue, Jan 8, 2013 at 4:04 AM, Takeshi Yamamuro
>> wrote:
>>> Apart from my patch, what I care about is that the current one might
>>> be much too slow relative to I/O. For example, when compressing
On Tue, Jan 8, 2013 at 10:20 AM, Robert Haas wrote:
> On Tue, Jan 8, 2013 at 4:04 AM, Takeshi Yamamuro
> wrote:
>> Apart from my patch, what I care about is that the current one might
>> be much too slow relative to I/O. For example, when compressing
>> and writing large values, compressing data (20-40MiB/s) m
On Tue, Jan 8, 2013 at 4:04 AM, Takeshi Yamamuro
wrote:
> Apart from my patch, what I care about is that the current one might
> be much too slow relative to I/O. For example, when compressing
> and writing large values, compressing data (20-40MiB/s) might be
> a drag on writing data to disk (50-80MiB/
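To put rough numbers on this (midpoints of the figures above, and a 2:1 ratio assumed purely for illustration): a 30 MiB/s compressor feeding a 65 MiB/s disk halves the data, so the disk could absorb 2 x 65 = 130 MiB/s of raw input, and end-to-end ingest is min(30, 130) = 30 MiB/s. Written uncompressed, the same data would go in at 65 MiB/s; with these numbers the compressor, not the disk, is the bottleneck.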
On 01/08/2013 10:19 AM, Takeshi Yamamuro wrote:
Hi,
(2013/01/07 22:36), Greg Stark wrote:
On Mon, Jan 7, 2013 at 10:21 AM, John R Pierce
wrote:
On 1/7/2013 2:05 AM, Andres Freund wrote:
I think there should be enough bits available in the toast pointer to
indicate the type of compression. I
Hi,
(2013/01/07 22:36), Greg Stark wrote:
On Mon, Jan 7, 2013 at 10:21 AM, John R Pierce wrote:
On 1/7/2013 2:05 AM, Andres Freund wrote:
I think there should be enough bits available in the toast pointer to
indicate the type of compression. I seem to remember somebody even
posting a patch t
So why don't we use LZ4?
+1
Agree though, I think there're still patent issues there.
regards,
--
Takeshi Yamamuro
NTT Cyber Communications Laboratory Group
Software Innovation Center
(Open Source Software Center)
Tel: +81-3-5860-5057 Fax: +81-3-5463-5490
Mail:yamamuro.take...@lab.ntt.c
Hi,
Why would that be a good tradeoff to make? Larger stored values
require
more I/O, which is likely to swamp any CPU savings in the compression
step. Not to mention that a value once written may be read many times,
so the extra I/O cost could be multiplied many times over
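The multiplication is easy to quantify with made-up numbers: if a faster compressor leaves a value 25% larger on disk and that value is later read 1,000 times, every read moves 25% more bytes, so the one-time CPU saving at write time is traded for roughly 1,000 x 0.25 = 250 extra value-sizes of read I/O.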
On Mon, Jan 7, 2013 at 4:19 PM, Tom Lane wrote:
> Hm ... one of us is reading those results backwards, then.
*looks*
It's me.
Sorry for the noise.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 01/07/2013 04:19 PM, Tom Lane wrote:
Robert Haas writes:
On Mon, Jan 7, 2013 at 11:16 AM, Tom Lane wrote:
Why would that be a good tradeoff to make? Larger stored values require
more I/O, which is likely to swamp any CPU savings in the compression
step. Not to mention that a value once
On Mon, Jan 7, 2013 at 2:41 PM, Tom Lane wrote:
> Merlin Moncure writes:
>> On Mon, Jan 7, 2013 at 10:16 AM, Tom Lane wrote:
>>> Takeshi Yamamuro writes:
Attached is a patch to improve compression speed at the cost of
compression ratio in backend/utils/adt/pg_lzcompress.c.
>
>>> W
Robert Haas writes:
> On Mon, Jan 7, 2013 at 11:16 AM, Tom Lane wrote:
>> Why would that be a good tradeoff to make? Larger stored values require
>> more I/O, which is likely to swamp any CPU savings in the compression
>> step. Not to mention that a value once written may be read many times,
>>
On Mon, Jan 7, 2013 at 11:16 AM, Tom Lane wrote:
> Why would that be a good tradeoff to make? Larger stored values require
> more I/O, which is likely to swamp any CPU savings in the compression
> step. Not to mention that a value once written may be read many times,
> so the extra I/O cost coul
Merlin Moncure writes:
> On Mon, Jan 7, 2013 at 10:16 AM, Tom Lane wrote:
>> Takeshi Yamamuro writes:
>>> Attached is a patch to improve compression speed at the cost of
>>> compression ratio in backend/utils/adt/pg_lzcompress.c.
>> Why would that be a good tradeoff to make? Larger stored
On Mon, Jan 7, 2013 at 10:16 AM, Tom Lane wrote:
> Takeshi Yamamuro writes:
>> Attached is a patch to improve compression speed at the cost of
>> compression ratio in backend/utils/adt/pg_lzcompress.c.
>
> Why would that be a good tradeoff to make? Larger stored values require
> more I/O, wh
Takeshi Yamamuro writes:
> Attached is a patch to improve compression speed at the cost of
> compression ratio in backend/utils/adt/pg_lzcompress.c.
Why would that be a good tradeoff to make? Larger stored values require
more I/O, which is likely to swamp any CPU savings in the compression
s
Hi,
It seems worth rereading the thread around
http://archives.postgresql.org/message-id/CAAZKuFb59sABSa7gCG0vnVnGb-mJCUBBbrKiyPraNXHnis7KMw%40mail.gmail.com
for people wanting to work on this.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
Postgre
On Mon, Jan 07, 2013 at 01:36:33PM +, Greg Stark wrote:
> On Mon, Jan 7, 2013 at 10:21 AM, John R Pierce wrote:
> > On 1/7/2013 2:05 AM, Andres Freund wrote:
> >>
> >> I think there should be enough bits available in the toast pointer to
> >> indicate the type of compression. I seem to remembe
On Mon, Jan 07, 2013 at 09:10:31AM +, Simon Riggs wrote:
> On 7 January 2013 07:29, Takeshi Yamamuro
> wrote:
>
> > Anyway, the compression speed in lz4 is very fast, so in my
> > opinion, there is a room to improve the current implementation
> > in pg_lzcompress.
>
> So why don't we use LZ4
On 7 January 2013 13:36, Greg Stark wrote:
> On Mon, Jan 7, 2013 at 10:21 AM, John R Pierce wrote:
>> On 1/7/2013 2:05 AM, Andres Freund wrote:
>>>
>>> I think there should be enough bits available in the toast pointer to
>>> indicate the type of compression. I seem to remember somebody even
>>>
On Mon, Jan 7, 2013 at 10:21 AM, John R Pierce wrote:
> On 1/7/2013 2:05 AM, Andres Freund wrote:
>>
>> I think there should be enough bits available in the toast pointer to
>> indicate the type of compression. I seem to remember somebody even
>> posting a patch to that effect?
>> I agree that it'
On 2013-01-07 02:21:26 -0800, John R Pierce wrote:
> On 1/7/2013 2:05 AM, Andres Freund wrote:
> >I think there should be enough bits available in the toast pointer to
> >indicate the type of compression. I seem to remember somebody even
> >posting a patch to that effect?
> >I agree that it's proba
On 1/7/2013 2:05 AM, Andres Freund wrote:
I think there should be enough bits available in the toast pointer to
indicate the type of compression. I seem to remember somebody even
posting a patch to that effect?
I agree that it's probably too late in the 9.3 cycle to start with this.
so an upgra
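As a sketch of the spare-bits idea only (not the actual toast pointer layout, and all names below are invented for illustration): steal the two high bits of the 32-bit raw-size word to record which compressor produced the data, which still leaves 30 bits, i.e. 1 GiB, for the size itself; existing on-disk values all carry method 0 and keep working unchanged.

    #include <stdint.h>
    #include <assert.h>

    #define CMPR_SIZE_BITS  30
    #define CMPR_SIZE_MASK  ((UINT32_C(1) << CMPR_SIZE_BITS) - 1)

    typedef enum
    {
        CMPR_PGLZ = 0,          /* today's pg_lzcompress format */
        CMPR_LZ4  = 1,          /* hypothetical alternative methods */
        CMPR_SNAPPY = 2
    } cmpr_method;

    /* Pack raw size plus method id into one 32-bit word. */
    static inline uint32_t
    cmpr_pack(uint32_t rawsize, cmpr_method method)
    {
        assert(rawsize <= CMPR_SIZE_MASK && (unsigned) method <= 3);
        return ((uint32_t) method << CMPR_SIZE_BITS) | rawsize;
    }

    static inline cmpr_method
    cmpr_method_of(uint32_t word)
    {
        return (cmpr_method) (word >> CMPR_SIZE_BITS);
    }

    static inline uint32_t
    cmpr_rawsize_of(uint32_t word)
    {
        return word & CMPR_SIZE_MASK;
    }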
On 2013-01-07 09:57:58 +, Simon Riggs wrote:
> On 7 January 2013 09:19, John R Pierce wrote:
> > On 1/7/2013 1:10 AM, Simon Riggs wrote:
> >>
> >> On 7 January 2013 07:29, Takeshi Yamamuro
> >> wrote:
> >>
> >>> > Anyway, the compression speed in lz4 is very fast, so in my
> >>> > opinion, the
On 7 January 2013 09:19, John R Pierce wrote:
> On 1/7/2013 1:10 AM, Simon Riggs wrote:
>>
>> On 7 January 2013 07:29, Takeshi Yamamuro
>> wrote:
>>
>>> > Anyway, the compression speed in lz4 is very fast, so in my
>>> > opinion, there is a room to improve the current implementation
>>> > in pg_lzc
On 1/7/2013 1:10 AM, Simon Riggs wrote:
On 7 January 2013 07:29, Takeshi Yamamuro
wrote:
> Anyway, the compression speed in lz4 is very fast, so in my
> opinion, there is a room to improve the current implementation
> in pg_lzcompress.
So why don't we use LZ4?
what will changing compression
On 7 January 2013 07:29, Takeshi Yamamuro
wrote:
> Anyway, the compression speed in lz4 is very fast, so in my
> opinion, there is a room to improve the current implementation
> in pg_lzcompress.
So why don't we use LZ4?
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL
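For reference, this is the call sequence "use LZ4" implies at the library level, written against liblz4's current one-shot API (the entry points have been renamed since this thread); it is a standalone illustration only, not a proposed TOAST integration. Build with something like cc demo.c -llz4.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <lz4.h>

    int
    main(void)
    {
        const char *src = "repetitive repetitive repetitive repetitive payload";
        int     srclen = (int) strlen(src) + 1;
        int     bound  = LZ4_compressBound(srclen);   /* worst-case output size */
        char   *comp   = malloc(bound);
        char   *back   = malloc(srclen);

        int clen = LZ4_compress_default(src, comp, srclen, bound);
        if (clen <= 0)
            return 1;                                  /* compression failed */

        int dlen = LZ4_decompress_safe(comp, back, clen, srclen);
        printf("raw %d bytes -> lz4 %d bytes, round-trip %s\n",
               srclen, clen,
               (dlen == srclen && memcmp(src, back, srclen) == 0) ? "ok" : "broken");

        free(comp);
        free(back);
        return 0;
    }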
Hi, hackers,
Attached is a patch to improve compression speed at the cost of
compression ratio in backend/utils/adt/pg_lzcompress.c. Recent
modern compression techniques like LZ4 and Google's Snappy inspired
me to write this patch. There are two points to my patch:
1. Skip at most 255 literals t
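A guess at the shape of point 1, purely for illustration (the attached patch is not reproduced here, and the helper names are invented): instead of spending one control bit per literal byte as pg_lzcompress does today, unmatched data is framed as runs of up to 255 literals behind a single length byte, so it can be moved with memcpy rather than byte at a time.

    #include <stdint.h>
    #include <string.h>

    /* Emit one literal run: a length byte (1..255) followed by the raw bytes. */
    static uint8_t *
    emit_literal_run(uint8_t *out, const uint8_t *src, size_t n)
    {
        *out++ = (uint8_t) n;
        memcpy(out, src, n);
        return out + n;
    }

    /* Copy 'len' unmatched input bytes as a chain of <=255-byte literal runs. */
    static uint8_t *
    emit_literals(uint8_t *out, const uint8_t *src, size_t len)
    {
        while (len > 0)
        {
            size_t  n = (len > 255) ? 255 : len;

            out = emit_literal_run(out, src, n);
            src += n;
            len -= n;
        }
        return out;
    }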