On Wed, Oct 27, 2010 at 05:49:42PM -0400, Tom Lane wrote:
> Kenneth Marshall writes:
> > Just keeping the hope alive for faster compression.
>
> Is there any evidence that that's something we should worry about?
> I can't recall ever having seen a code profile that shows the
> pg_lzcompress.c code high enough to look like a bottleneck compared
> to other query costs.
"Pierre C" wrote:
> in-page compression
How would that be different from the in-page compression done by
TOAST now? Or are you just talking about being able to make it
more aggressive?
-Kevin
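To make that concrete: TOAST already runs its pglz compressor over large varlena values before (or instead of) pushing them out of line, and the per-column STORAGE setting is the existing knob for how aggressive that is. A minimal psql sketch, with made-up table and column names ("docs", "body"):

    \d+ docs
        -- the "Storage" column shows the current mode: plain, main, external or extended
    ALTER TABLE docs ALTER COLUMN body SET STORAGE MAIN;
        -- favor in-page compression; push out of line only as a last resort
    ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL;
        -- store out of line but skip compression entirely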
Well, I suppose lzo-style compression would be better used on data that is
written a few times maximum
Kenneth Marshall writes:
> Just keeping the hope alive for faster compression.
Is there any evidence that that's something we should worry about?
I can't recall ever having seen a code profile that shows the
pg_lzcompress.c code high enough to look like a bottleneck compared
to other query costs.
Kenneth Marshall, 27.10.2010 22:41:
Different algorithms have been discussed before. A quick search turned
up:
quicklz - GPL or commercial
fastlz - MIT, works with BSD okay
zippy - Google - no idea about the licensing
lzf - BSD-type
lzo - GPL or commercial
zlib - current algorithm
Of these lzf c
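Before swapping algorithms it may be worth measuring what the current compressor already saves on real data. A rough sketch (table "msg" and column "body" are hypothetical); pg_column_size() reports the possibly-compressed size of each stored value:

    SELECT pg_size_pretty(sum(octet_length(body)))   AS uncompressed_bytes,
           pg_size_pretty(sum(pg_column_size(body))) AS bytes_as_stored
    FROM msg;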
On Wed, Oct 27, 2010 at 09:52:49PM +0200, Pierre C wrote:
>> Even if somebody had a
>> great idea that would make things smaller without any other penalty,
>> which I'm not sure I believe either.
>
> I'd say that the only things likely to bring an improvement significant
> enough to warrant the (quite large) hassle of implementation would be:
"Pierre C" wrote:
> in-page compression
How would that be different from the in-page compression done by
TOAST now? Or are you just talking about being able to make it
more aggressive?
-Kevin
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to
> Even if somebody had a
> great idea that would make things smaller without any other penalty,
> which I'm not sure I believe either.
I'd say that the only things likely to bring an improvement significant
enough to warrant the (quite large) hassle of implementation would be:
- read-only / archive
On Tue, Oct 26, 2010 at 6:51 PM, Tom Lane wrote:
> Robert Haas writes:
>> I don't think this is due to fillfactor - the default fillfactor is
>> 100, and anyway we ARE larger on disk than Oracle. We really need to
>> do something about that, and the changes to NUMERIC in 9.1 are a step
>> in that direction, but I think a lot more work is needed.
Robert Haas writes:
> I don't think this is due to fillfactor - the default fillfactor is
> 100, and anyway we ARE larger on disk than Oracle. We really need to
> do something about that, and the changes to NUMERIC in 9.1 are a step
> in that direction, but I think a lot more work is needed.
Of c
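For anyone who wants to put numbers behind "larger on disk", the built-in size functions report the footprint directly; "big_table" below is a placeholder for the table being compared:

    SELECT pg_size_pretty(pg_relation_size('big_table'))       AS heap_only,
           pg_size_pretty(pg_total_relation_size('big_table')) AS incl_indexes_and_toast;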
On Sat, Oct 16, 2010 at 2:44 PM, Kenneth Marshall wrote:
> Interesting data points. The number of rows that you managed to
> insert into PostgreSQL before Oracle gave up the ghost is 95%
> of the rows in the Oracle version of the database. To count 5%
> fewer rows, it took PostgreSQL 24 seconds longer.
On 10/18/2010 3:58 AM, Vitalii Tymchyshyn wrote:
Hello.
Did you vacuum the postgresql DB before the count(*)? I ask this because
(unless the table was created & loaded in the same transaction) on the first
scan, postgresql has to write hint bits to the whole table. The second scan
may be way faster.
Best regards
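A quick way to see (or sidestep) the hint-bit effect described above, sketched with a made-up table name:

    VACUUM big_copy;                 -- writes the hint bits up front, after the load
    \timing
    SELECT count(*) FROM big_copy;   -- first scan
    SELECT count(*) FROM big_copy;   -- without the VACUUM, this second scan is the fast one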
16.10.10 19:51, Mladen Gogala wrote:
There was some doubt as to the speed of doing the select count(*) in
PostgreSQL and Oracle.
To that end, I copied most of the Oracle table I used before
to Postgres. Although the copy
wasn't complete, the resulting table is already significant
Hi,
Interesting data points. The number of rows that you managed to
insert into PostgreSQL before Oracle gave up the ghost is 95%
of the rows in the Oracle version of the database. To count 5%
fewer rows, it took PostgreSQL 24 seconds longer. Or adjusting
for the missing rows, 52 seconds longer fo