On Thu, Nov 14, 2019 at 6:30 PM Tomas Vondra wrote:
> On Thu, Nov 14, 2019 at 03:27:42PM +0530, Rushabh Lathia wrote:
> >Today I noticed strange behaviour, consider the following test:
> >
> >postgres@126111=#create table foo ( a text );
> >CREATE TABLE
> >postgres@126111=#insert into foo values ( repeat('PostgreSQL is the
> >world''s best database and leading by an Open Source Community.', 8000));
> >INSERT 0 1
> >postgres@126111=#sele
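For readers skimming the thread, a quick standalone check (not part of the original report) of why this value is large enough to be compressed and stored out of line, which is the code path this patch changes. The ~2000-byte threshold below is an assumption (the usual TOAST threshold for 8 kB pages), not a number taken from the thread.

#include <stdio.h>
#include <string.h>

int
main(void)
{
    const char *s = "PostgreSQL is the world's best database and "
                    "leading by an Open Source Community.";
    size_t      raw = strlen(s) * 8000;   /* size built by repeat(s, 8000) */
    size_t      threshold = 2000;         /* assumed TOAST threshold */

    printf("raw datum size: %zu bytes (threshold ~%zu bytes) -> %s\n",
           raw, threshold,
           raw > threshold ? "compressed and stored out of line" : "inline");
    return 0;
}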
Tomas Vondra writes:
> On Tue, Oct 01, 2019 at 10:10:37AM -0400, Tom Lane wrote:
>> Maybe it accidentally seems to work on little-endian, thanks to the
>> different definitions of varlena headers?
> Maybe. Let's see if just using VARSIZE_ANY does the trick. If not, I'll
> investigate further.
FW
Tomas Vondra writes:
> Hmmm, this seems to trigger a failure on thorntail, which is a sparc64
> machine (and it seems to pass on all x86 machines, so far).
gharial's not happy either, and I bet if you wait a bit longer you'll
see the same on other big-endian machines.
> I wonder if that's wrong,
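As a side note on the VARSIZE_ANY suggestion above: the 1-byte and 4-byte varlena header encodings place the length bits differently on big- and little-endian machines, which is the thread's hypothesis for why the code looked fine on x86 and only failed on sparc64 and other big-endian animals. A minimal illustration of the distinction (a generic helper for illustration only, not the patch's code; total_varlena_size is a made-up name):

#include "postgres.h"

static inline Size
total_varlena_size(struct varlena *attr)
{
    /*
     * VARSIZE_ANY() inspects the header type first, so it is valid for
     * short (1-byte header), 4-byte header, and toasted datums on either
     * byte order.  Plain VARSIZE() assumes a 4-byte header and is only
     * safe when that is known, e.g. right after PG_DETOAST_DATUM().
     */
    return VARSIZE_ANY(attr);
}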
On Mon, Sep 30, 2019 at 09:20:22PM +0500, Andrey Borodin wrote:
> On 30 Sept 2019, at 20:56, Tomas Vondra wrote:
>
> I mean this:
>
> /*
>  * Use int64 to prevent overflow during calculation.
>  */
> compressed_size = (int32) ((int64) rawsize * 9 + 8) / 8;
>
> I'm not very familiar with pglz internals, but I'm a bit puzzled by
> this. My
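For reference, the quoted line is an upper bound on how much compressed input must be read to reconstruct rawsize bytes of output. A self-contained sketch of how such a helper is typically written (it mirrors the formula above and the pglz_maximum_compressed_size() idea that appears later in the thread, but treat the body as an illustration rather than the committed code):

#include <stdint.h>

/*
 * Upper bound on the number of compressed bytes that must be read to be
 * able to decompress at least rawsize bytes of output.  pglz emits one
 * control byte per 8 items, and in the worst case every item is a single
 * literal byte, so rawsize output bytes never need more than
 * (rawsize * 9 + 8) / 8 input bytes.  The int64 cast keeps rawsize * 9
 * from overflowing for large values.
 */
static int32_t
max_compressed_size(int32_t rawsize, int32_t total_compressed_size)
{
    int32_t     bound = (int32_t) (((int64_t) rawsize * 9 + 8) / 8);

    /* Never ask for more bytes than the compressed datum actually has. */
    return (bound < total_compressed_size) ? bound : total_compressed_size;
}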
On Wed, Sep 25, 2019 at 05:38:34PM -0300, Alvaro Herrera wrote:
Hello, can you please update this patch?
I'm not the patch author, but I've been looking at the patch recently
and I have a rebased version at hand - so attached.
FWIW I believe the patch is solid and in good shape, and it got st
Hello, can you please update this patch?
--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Sat, Jul 06, 2019 at 02:27:56AM +0800, Binguo Bao wrote:
Hi, Tomas!
Thanks for your testing and the suggestion.
That's quite bizarre behavior - it does work with a prefix, but not with
a suffix. And the exact ERROR changes after the prefix query.
I think the bug is caused by "#2 0x004c
Tomas Vondra wrote on Fri, Jul 5, 2019 at 1:46 AM:
> I've done a bit of testing and benchmarking on this patch today, and
> there's a bug somewhere, making it look like there are corrupted data.
>
> What I'm seeing is this:
>
> CREATE TABLE t (a text);
>
> -- attached is data for one row
> COPY t FROM '/tmp/t.da
Of course, I forgot to attach the files, so here they are.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
t.data.gz
Description: application/gzip
(gdb) bt
#0 toast_decompress_datum (attr=0x12572e0) at
Paul Ramsey wrote on Tue, Jul 2, 2019 at 10:46 PM:
> This looks good to me. A little commentary around why
> pglz_maximum_compressed_size() returns a universally correct answer
> (there's no way the compressed size can ever be larger than this
> because...) would be nice for peasants like myself.
>
> If you're
On Mon, Jul 1, 2019 at 6:46 AM Binguo Bao wrote:
> > Andrey Borodin wrote on Sat, Jun 29, 2019 at 9:48 PM:
> > I've taken a look into the code.
> > I think we should extract a function for computation of max_compressed_size
> > and put it somewhere along with the pglz code. Just in case something will
> > change somethi
Hi!
Please, do not use top-posting, i.e. the reply style where you quote the whole
message under your response. It makes reading the archives tedious.
> On 24 June 2019, at 7:53, Binguo Bao wrote:
>
>> This is not correct: L bytes of compressed data cannot always be decoded
>> into at least L byt
> This is not correct: L bytes of compressed data cannot always be decoded
> into at least L bytes of data. At worst we have one control byte per
> 8 bytes of literal bytes. This means at most we need (L*9 + 8) / 8
> bytes with the current pglz format.
Good catch! I've corrected the related code in the
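A standalone check of that bound (not from the thread): in the worst case pglz cannot compress anything, so the stream is one control byte followed by up to 8 literal bytes, repeated. Decoding L output bytes then consumes at most L + ceil(L/8) input bytes, and the (L*9 + 8) / 8 formula always covers that:

#include <assert.h>
#include <stdio.h>

int
main(void)
{
    for (long L = 1; L <= 1000000; L++)
    {
        long worst = L + (L + 7) / 8;   /* literal bytes + control bytes */
        long bound = (L * 9 + 8) / 8;   /* formula quoted above          */

        assert(bound >= worst);
    }
    printf("bound holds for every L up to 1e6\n");
    return 0;
}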
Hi, Binguo!
> On 2 June 2019, at 19:48, Binguo Bao wrote:
>
> Hi, hackers!
> This seems to have a 10x improvement. If the number of TOAST data chunks is
> larger, I believe the patch can play an even greater role; there are about
> 200 related TOAST data chunks for each entry in this case.
T
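To put that 10x figure in perspective, a rough back-of-the-envelope illustration (all numbers below are assumptions chosen to resemble the reported case, not measurements from the thread): once the fetch is bounded by the formula discussed above, decompressing only a small prefix of a datum split into roughly 200 TOAST chunks needs just a couple of chunks.

#include <stdio.h>

int
main(void)
{
    const long chunk_size = 1996;        /* assumed TOAST chunk size (8 kB pages) */
    const long compressed_size = 400000; /* assumed compressed datum, ~200 chunks */
    const long slice_len = 2000;         /* decompressed bytes actually wanted    */

    long need = (slice_len * 9 + 8) / 8; /* worst-case compressed bytes needed */
    if (need > compressed_size)
        need = compressed_size;

    long chunks_needed = (need + chunk_size - 1) / chunk_size;
    long chunks_total  = (compressed_size + chunk_size - 1) / chunk_size;

    printf("fetch %ld of %ld chunks\n", chunks_needed, chunks_total);
    return 0;
}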