On Sun, Apr 19, 2015 at 10:38 PM, Jim Nasby wrote:
> On 4/19/15 9:09 PM, Jeff Janes wrote:
>
>> I did literally the simplest thing I could think of as a proof of
>> concept patch, to see if it would actually fix things. I just jumped
>> back a certain number of blocks occasionally and prefetched them …

On 04/19/2015 11:51 PM, Tomas Vondra wrote:
> Hi there,
> in the past we've repeatedly discussed the option of using a different
> compression algorithm (e.g. lz4), but every time the discussion died off
> because of fear of possible patent issues [1] [2] and many other
> threads. Have we decided it's not …

On 4/19/15 9:09 PM, Jeff Janes wrote:
I did literally the simplest thing I could think of as a proof of
concept patch, to see if it would actually fix things. I just jumped
back a certain number of blocks occasionally and prefetched them
forward, then resumed the regular backward scan. The patch …

On Tue, Apr 14, 2015 at 11:45 PM, Jeff Janes wrote:
> On Tue, Mar 31, 2015 at 12:02 PM, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote:
>
>> Hi all,
>>
>> attached is v4 of the patch implementing adaptive ndistinct estimator.
>>
>
> Hi Tomas,
>
> I have a case here where the adaptive algorithm …

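The estimator under discussion feeds the planner's ndistinct figure. As a rough way to compare an estimate against reality (the table and column names here are hypothetical, not from the thread), ordinary SQL against the stock statistics views suffices:

```sql
-- Hypothetical table t with column c; pg_stats is the standard statistics view.
ANALYZE t;

-- The planner's estimate. Negative values mean a fraction of the row count
-- (e.g. -1 means "all rows distinct").
SELECT n_distinct
FROM pg_stats
WHERE schemaname = 'public' AND tablename = 't' AND attname = 'c';

-- Ground truth, for comparison (expensive on large tables).
SELECT count(DISTINCT c) FROM t;
```
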
Hi,
On 2015-04-19 22:51:53 +0200, Tomas Vondra wrote:
> The reason why I'm asking about this is the multivariate statistics patch -
> while optimizing the planning overhead, I realized that considerable amount
> of time is spent decompressing the statistics (serialized as bytea), and
using an al…

On Thu, Apr 16, 2015 at 1:00 PM, Michael Paquier wrote:
> Visibly that's not the case for this test case, the timing issues that
> we saw happened not because of the standby not catching up, but
> because of the promotion not taking effect in a timely fashion. And
> that's as well something I saw o…

After a large bulk load aborted near the end, I decided to vacuum the main
table so as to not leave a huge chunk of free space in the middle of it,
before re-running the bulk load. This vacuum took a frustratingly long
time, as the backwards scan over the table to truncate the space did not
trigger …

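The scenario can be reproduced in outline with ordinary SQL; the table name and sizes below are made up for illustration:

```sql
-- An aborted bulk load leaves the table file at full size, full of dead tuples.
CREATE TABLE bulk_demo (id int, payload text);

BEGIN;
INSERT INTO bulk_demo
  SELECT g, repeat('x', 100) FROM generate_series(1, 1000000) g;
ROLLBACK;  -- the "aborted near the end" load

-- VACUUM's truncation pass scans the table backwards from the end looking for
-- the last non-empty page; that backward scan is the slow part described above.
VACUUM VERBOSE bulk_demo;
```
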
* Michael Paquier (michael.paqu...@gmail.com) wrote:
> On Mon, Apr 20, 2015 at 5:51 AM, Tomas Vondra wrote:
> > I'm a bit confused though, because I've noticed various other FOSS projects
> > adopting lz4 over the past few years and I'm yet to find a project voicing
> > the same concerns about patents …

On Mon, Apr 20, 2015 at 5:51 AM, Tomas Vondra wrote:
> I'm a bit confused though, because I've noticed various other FOSS projects
> adopting lz4 over the past few years and I'm yet to find a project voicing
> the same concerns about patents. So either they're reckless or we're
> excessively paranoid …

Jim Nasby writes:
> If we're going to magically create the schema (which at least for a
> fully non-relocatable extension is fine), then I think we should also
> mark the schema as being part of the extension.
If we do that, what happens to other objects that were added to the
containing schema …

On 4/19/15 5:08 PM, Tom Lane wrote:
> Jim Nasby writes:
>> CREATE EXTENSION creates the variant schema for me, but it leaves it
>> behind when I drop the extension. I assume this is a bug and not by design?
> No, it's intentional. If you want the schema to be considered part of the
> extension, then create it within the extension. …

Jim Nasby writes:
> CREATE EXTENSION creates the variant schema for me, but it leaves it
> behind when I drop the extension. I assume this is a bug and not by design?
No, it's intentional. If you want the schema to be considered part of the
extension, then create it within the extension.
We co…

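Tom's advice amounts to creating the schema in the extension's install script rather than pinning it in the control file. A minimal sketch — the file name matches the control file quoted elsewhere in this digest, but the object definitions are purely illustrative:

```sql
-- variant--1.0.0-beta3.sql (extension script; no schema = ... line in the
-- control file). Creating the schema inside the script makes it a member
-- object of the extension, so DROP EXTENSION removes it too.
CREATE SCHEMA variant;

-- ...then create the extension's objects inside that schema, e.g.:
CREATE TYPE variant.stored AS (original_type text, data text);  -- illustrative
```
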
Is there a fundamental reason SQL/plpgsql functions won't accept record
as an input type? If not, can someone point me at a patch that might
show how much work would be involved in adding support?
My particular use case is a generic function that will count how many
fields in a record are NULL.

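As a workaround under the current rules, a polymorphic parameter plus the JSON row functions can stand in for a true record argument. This is a hypothetical sketch, not anything proposed in the thread:

```sql
-- Accepts any composite value; counts fields whose value is NULL.
-- json_each_text() yields a NULL text value for JSON nulls, which is
-- what row_to_json() produces for NULL fields.
CREATE OR REPLACE FUNCTION count_nulls(r anyelement)
RETURNS integer LANGUAGE sql AS $$
  SELECT count(*)::integer
  FROM json_each_text(row_to_json(r)) AS f(name, value)
  WHERE f.value IS NULL;
$$;

-- Usage: pass the whole row of a (hypothetical) table t:
--   SELECT count_nulls(t.*) FROM t;
```
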
This happens on 9.4.1 and HEAD.
The variant extension is locked to a specific schema by its control file:
decibel@decina:[23:53]~/git/variant (master %=)$ cat variant.control
# variant extension
comment = 'Variant data type for PostgreSQL'
default_version = '1.0.0-beta3'
relocatable = false
schema = 'variant'

Hi there,
in the past we've repeatedly discussed the option of using a different
compression algorithm (e.g. lz4), but every time the discussion died off
because of fear of possible patent issues [1] [2] and many other
threads. Have we decided it's not worth the risks, making patches in
this …

Hi,
I would like to allow specifying multiple host names for libpq to try
connecting to. This is currently only supported if the host name resolves to
multiple addresses. Having support for this without a complex DNS setup would
be much easier.
Example:
psql -h dbslave,dbmaster -p 5432 dbname

On Fri, Apr 10, 2015 at 11:29 AM, Michael Paquier wrote:
> Here is a v2 with the following changes:
> - Use an environment variable to define where pg_regress is located.
> - Use SSPI to access a node in the tests, to secure the test environment.
> - Rebase on latest HEAD
> - SSL tests are run only …

Hi all,
I just noticed that --debug mode is missing for the remote mode of pg_rewind:
# Do rewind using a remote connection as source
my $result =
run(['pg_rewind',
+"--debug",

> On 27 Nov 2014, at 10:15, Dave Page wrote:
>
> On Thu, Nov 27, 2014 at 9:09 AM, Jakob Egger wrote:
> On 26.11.2014 at 17:46, Geoff Montee wrote:
> > This topic reminds me of a thread from a couple months ago:
> >
> > http://www.postgresql.org/message-id/f8268db6-b50f-429f-8289-da8ffa5f2..