Hello, I noticed that src/include/statistics is not installed by
make install.
Commit 7b504eb282ca2f5104b5c00b4f05a forgot to do that.
Both master and 10beta1 are affected.
regards,
--
Kyotaro Horiguchi
NTT Open Source Software Center
On Fri, Jun 02, 2017 at 11:51:26AM -0700, Andres Freund wrote:
> commit 7c4f52409a8c7d85ed169bbbc1f6092274d03920
> Author: Peter Eisentraut
> Date: 2017-03-23 08:36:36 -0400
>
> Logical replication support for initial data copy
>
> made walreceiver emit worse messages in v10 than before wh
On Fri, Jun 02, 2017 at 11:06:29PM -0700, Andres Freund wrote:
> On 2017-06-02 22:12:46 -0700, Noah Misch wrote:
> > On Fri, Jun 02, 2017 at 11:27:55PM -0400, Peter Eisentraut wrote:
> > > On 5/31/17 23:54, Peter Eisentraut wrote:
> > > > On 5/29/17 22:01, Noah Misch wrote:
> > > >> On Tue, May 23,
> From: "Kevin Grittner"
> wrote:
>
> > "vmstat 1" output is as follow. Because I used only 30 cores (1/4 of all),
> > cpu user time should be about 12*4 = 48.
> > There seems to be no process blocked by IO.
> >
> > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
On Thu, Jun 8, 2017 at 3:31 AM, Robert Haas wrote:
> I think if you're going to fix it so that we take spinlocks on
> MyWalSnd in a bunch of places that we didn't take them before, it
> would make sense to fix all the places where we're accessing those
> fields without a spinlock instead of onl
src/bin/pg_upgrade/TESTING claims (much further down in the file
than I'd like):
The shell script test.sh in this directory performs more or less this
procedure. You can invoke it by running
make check
or by running
make installcheck
if "mak
On Thu, Jun 8, 2017 at 2:17 AM, Andres Freund wrote:
> On 2017-05-08 09:12:13 -0400, Tom Lane wrote:
>> Simon Riggs writes:
>> > So rearranged code a little to keep it lean.
>>
>> Didn't you break it with that? As it now stands, the memcpy will
>> copy the nonzero value.
>
> I've not seen a fix
"Regina Obe" writes:
> I'm not a fan of either solution, but I think what Tom proposes of throwing
> an error sounds like least invasive and confusing.
> I'd much prefer an error thrown than silent behavior change. Given that we
> ran into this in 3 places in PostGIS code, I'm not convinced the i
On 08/06/17 03:50, Josh Berkus wrote:
> On 06/07/2017 06:25 PM, Petr Jelinek wrote:
>> On 08/06/17 03:19, Josh Berkus wrote:
>>>
>>> Peter and Petr:
>>>
>>> On 06/07/2017 05:24 PM, Peter Eisentraut wrote:
On 6/7/17 01:01, Josh Berkus wrote:
> * Having defaults on the various _workers all d
On 06/07/2017 06:25 PM, Petr Jelinek wrote:
> On 08/06/17 03:19, Josh Berkus wrote:
>>
>> Peter and Petr:
>>
>> On 06/07/2017 05:24 PM, Peter Eisentraut wrote:
>>> On 6/7/17 01:01, Josh Berkus wrote:
>>>> * Having defaults on the various _workers all devolve from max_workers
>>>> is also great.
>>>
On 06/02/17 15:51, Chapman Flack wrote:
> But what it buys you is then if your MyExtraPGNode has PostgresNode
> as a base, the familiar idiom
>
> MyExtraPGNode->get_new_node('foo');
>
> works, as it inserts the class as the first argument.
>
> As a bonus, you then don't need to complicate get_
On 6/7/17 21:19, Josh Berkus wrote:
> The user's first thought is going to be a network issue, or a bug, or
> some other problem, not a missing PK. Yeah, they can find that
> information in the logs, but only if they think to look for it in the
> first place, and in some environments (AWS, contain
On 08/06/17 03:19, Josh Berkus wrote:
>
> Peter and Petr:
>
> On 06/07/2017 05:24 PM, Peter Eisentraut wrote:
>> On 6/7/17 01:01, Josh Berkus wrote:
>>> * Having defaults on the various _workers all devolve from max_workers
>>> is also great.
>>
>> I'm not aware of anything like that happening.
>
Peter and Petr:
On 06/07/2017 05:24 PM, Peter Eisentraut wrote:
> On 6/7/17 01:01, Josh Berkus wrote:
>> * Having defaults on the various _workers all devolve from max_workers
>> is also great.
>
> I'm not aware of anything like that happening.
>
>> P1. On the publishing node, logical replicati
On 2017-06-08 03:14:55 +0200, Petr Jelinek wrote:
> On 08/06/17 03:08, Craig Ringer wrote:
> > On 7 June 2017 at 18:16, sanyam jain wrote:
> >> Hi,
> >>
> >> Can someone explain the usage of exporting snapshot when a logical
> >> replication slot is created?
> >
> > It's used to pg_dump the schem
On 08/06/17 03:08, Craig Ringer wrote:
> On 7 June 2017 at 18:16, sanyam jain wrote:
>> Hi,
>>
>> Can someone explain the usage of exporting snapshot when a logical
>> replication slot is created?
>
> It's used to pg_dump the schema at a consistent point in history where
> all xacts are known to
Hi,
On 07/06/17 07:01, Josh Berkus wrote:
> Folks,
>
> I've put together some demos on PostgreSQL 10beta1. Here's a few
> feedback notes based on my experience with it.
> [...snip...]
>
> Problems
>
>
> P1. On the publishing node, logical replication relies on the *implied*
> correspo
On 7 June 2017 at 13:39, Michael Paquier wrote:
> On Thu, Jun 1, 2017 at 10:48 PM, Tom Lane wrote:
>> Andres Freund writes:
>>> when using
>>> $ cat ~/.proverc
>>> -j9
>>> some tests fail for me in 9.4 and 9.5.
>>
>> Weren't there fixes specifically intended to make that safe, awhile ago?
>
> 60
On 7 June 2017 at 18:16, sanyam jain wrote:
> Hi,
>
> Can someone explain the usage of exporting snapshot when a logical
> replication slot is created?
It's used to pg_dump the schema at a consistent point in history where
all xacts are known to be in the snapshot (and thus dumped) or known
not t
Hi,
On 07/06/17 22:49, Erik Rijkers wrote:
> I am not sure whether what I found here amounts to a bug, I might be
> doing something dumb.
>
> During the last few months I did tests by running pgbench over logical
> replication. Earlier emails have details.
>
> The basic form of that now works w
On 6/7/17 01:01, Josh Berkus wrote:
> * Having defaults on the various _workers all devolve from max_workers
> is also great.
I'm not aware of anything like that happening.
> P1. On the publishing node, logical replication relies on the *implied*
> correspondence of the application_name and the r
Dear Meskes,
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Michael Meskes
> Done.
Thanks, I confirmed the commit messages.
> My standard workflow is to wait a couple days to see if everything works
> nicely before backporting. Obviously this
On 2017/06/08 2:07, Robert Haas wrote:
> On Wed, Jun 7, 2017 at 1:23 AM, Amit Langote
> wrote:
>> On 2017/06/07 11:57, Amit Langote wrote:
>>> How about we export ExecPartitionCheck() out of execMain.c and call it
>>> just before ExecFindPartition() using the root table's ResultRelInfo?
>>
>> Turn
On Thu, Jun 8, 2017 at 11:05 AM, Thomas Munro
wrote:
> 1. Keep the current behaviour. [...]
>
> 2. Make a code change that would split the 'new table' tuplestore in
> two: an insert tuplestore and an update tuplestore (for new images;
> old images could remain in the old tuplestore that is also
On Thu, Jun 8, 2017 at 10:21 AM, Kevin Grittner wrote:
> On Wed, Jun 7, 2017 at 5:00 PM, Peter Geoghegan wrote:
>
>> My assumption about how transition tables ought to behave here is
>> based on the simple fact that we already fire both AFTER
>> statement-level triggers, plus my sense of aestheti
Good day Robert, Jim, and everyone.
On 2017-06-08 00:06, Jim Van Fleet wrote:
> Robert Haas wrote on 06/07/2017 12:12:02 PM:
> > > OK -- would love the feedback and any suggestions on how to
> > > mitigate the low end problems.
> > Did you intend to attach a patch?
> Yes I do -- tomorrow or Thursday -- n
On Wed, Jun 7, 2017 at 3:00 PM, Peter Geoghegan wrote:
> My assumption would be that since you have as many as two
> statement-level triggers firing that could reference transition tables
> when ON CONFLICT DO UPDATE is used (one AFTER UPDATE statement level
> trigger, and another AFTER INSERT sta
On Wed, Jun 7, 2017 at 5:00 PM, Peter Geoghegan wrote:
> My assumption about how transition tables ought to behave here is
> based on the simple fact that we already fire both AFTER
> statement-level triggers, plus my sense of aesthetics, or bias. I
> admit that I might be missing the point, but
On Wed, Jun 7, 2017 at 4:48 PM, Thomas Munro
wrote:
> Is there anything about that semantics that is incompatible with the
> incremental matview use case?
Nothing incompatible at all. If we had separate "new" tables for
UPDATE and DELETE we would logically need to do a "counting"-style
UNION of
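The "counting"-style union mentioned above (a bag union that tracks per-row multiplicities, as used by counting algorithms for incremental matview maintenance) can be sketched with multisets. The relation contents below are made-up examples, not data from the thread.

```python
from collections import Counter

# Two hypothetical delta relations (bags of rows), e.g. the "new" rows
# produced by the UPDATE part and the INSERT part of a single statement.
updated_new = Counter({("alice", 2): 1, ("bob", 5): 2})
inserted_new = Counter({("bob", 5): 1, ("carol", 7): 1})

# A "counting"-style UNION keeps multiplicities instead of deduplicating,
# so a row contributed by both branches is counted in both.
combined = updated_new + inserted_new

print(combined[("bob", 5)])  # → 3
```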
On 2017-06-07 23:18, Alvaro Herrera wrote:
> Erik Rijkers wrote:
> > Now, looking at the script again I am thinking that it would be reasonable
> > to expect that after issuing
> >    delete from pg_subscription;
> > the other 2 tables are /also/ cleaned, automatically, as a consequence. (Is
> > this reasonable
On Wed, Jun 7, 2017 at 2:19 PM, Kevin Grittner wrote:
> The idea of transition tables is that you see all changes to the
> target of a single statement in the form of delta relations -- with
> and "old" table for any rows affected by a delete or update and a
> "new" table for any rows affected by
On Thu, Jun 8, 2017 at 9:19 AM, Kevin Grittner wrote:
> On Wed, Jun 7, 2017 at 3:42 AM, Thomas Munro
> wrote:
>> On Wed, Jun 7, 2017 at 7:27 PM, Thomas Munro
>> wrote:
>>> On Wed, Jun 7, 2017 at 10:47 AM, Peter Geoghegan wrote:
>>>> I suppose you'll need two tuplestores for the ON CONFLICT DO U
On Tue, Jun 6, 2017 at 4:42 PM, Robert Haas wrote:
> So, are you willing and able to put any effort into this, like say
> reviewing the patch Thomas posted, and if so when and how much? If
> you're just done and you aren't going to put any more work into
> maintaining it (for whatever reasons),
On Wed, Jun 7, 2017 at 3:42 AM, Thomas Munro
wrote:
> On Wed, Jun 7, 2017 at 7:27 PM, Thomas Munro
> wrote:
>> On Wed, Jun 7, 2017 at 10:47 AM, Peter Geoghegan wrote:
>>> I suppose you'll need two tuplestores for the ON CONFLICT DO UPDATE
>>> case -- one for updated tuples, and the other for ins
Erik Rijkers wrote:
> Now, looking at the script again I am thinking that it would be reasonable
> to expect that after issuing
>delete from pg_subscription;
>
> the other 2 tables are /also/ cleaned, automatically, as a consequence. (Is
> this reasonable? this is really the main question of
Robert Haas wrote on 06/07/2017 12:12:02 PM:
> > OK -- would love the feedback and any suggestions on how to mitigate
> > the low end problems.
>
> Did you intend to attach a patch?
Yes I do -- tomorrow or Thursday -- needs a little cleaning up ...
> > Sokolov Yura has a patch which, to me, l
> > Sokolov Yura has a patch which, to me, l
> After chewing on this for awhile, I'm starting to come to the conclusion
> that we'd be best off to throw an error for SRF-inside-CASE (or COALESCE).
Mark is correct that the simplest case of
> SELECT x, CASE WHEN y THEN generate_series(1,z) ELSE 5 END
> FROM table_with_columns_x_and_
On 2017-06-07 20:31, Robert Haas wrote:
[...]
[ Side note: Erik's report on this thread initially seemed to suggest
that we needed this patch to make logical decoding stable. But my
impression is that this is belied by subsequent developments on other
threads, so my theory is that this patch w
I am not sure whether what I found here amounts to a bug, I might be
doing something dumb.
During the last few months I did tests by running pgbench over logical
replication. Earlier emails have details.
The basic form of that now works well (and the fix has been committed)
but as I looked o
On 5/30/17 13:25, Masahiko Sawada wrote:
> I think this cause is that the relation status entry could be deleted
> by ALTER SUBSCRIPTION REFRESH before corresponding table sync worker
> starting. Attached patch fixes issues reported on this thread so far.
I have committed the part of the patch tha
On Wed, Jun 7, 2017 at 5:46 AM, Amit Kapila wrote:
> As far as I understand, it is to ensure that for deleted rows, nothing
> more needs to be done. For example, see the below check in
> ExecUpdate/ExecDelete.
> if (!ItemPointerEquals(tupleid, &hufd.ctid))
> {
> ..
> }
> ..
>
> Also a similar che
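The quoted check can be modeled abstractly: in ExecUpdate/ExecDelete, if the ctid returned for a concurrently modified row differs from the one we tried to change, the row was updated to a new version; if it is the same, the row was deleted. This sketch uses a made-up (block, offset) pair, not PostgreSQL's real ItemPointerData.

```python
from typing import NamedTuple

class TupleId(NamedTuple):
    """Made-up stand-in for an ItemPointer: a heap (block, offset) location."""
    block: int
    offset: int

def row_was_updated(tupleid: TupleId, returned_ctid: TupleId) -> bool:
    # Mirrors the quoted ItemPointerEquals() test: a differing ctid means a
    # concurrent UPDATE moved the row to a new version; an equal ctid means
    # the row was deleted in place, so nothing more needs to be done.
    return tupleid != returned_ctid

print(row_was_updated(TupleId(7, 3), TupleId(7, 3)))  # → False (deleted)
print(row_was_updated(TupleId(7, 3), TupleId(8, 1)))  # → True (updated)
```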
On Wed, Jun 7, 2017 at 11:20 AM, Amit Kapila wrote:
> On Sat, Jun 3, 2017 at 1:03 AM, Robert Haas wrote:
>> On Fri, Jun 2, 2017 at 3:48 AM, Rafia Sabih
>> wrote:
>> I don't see how to do that. It could possibly be done with the TAP
>> framework, but that exceeds my abilities.
>>
>> Here's an up
Hi pgsql-hackers,
Thank you again for all these replies. I have started working on this
project and learnt a lot of new stuff last month, so here are some new
thoughts about error handling in COPY. I decided to stick to the same
thread, since it has a neutral subject.
(1) One of my mentors--A
On Wed, Jun 7, 2017 at 11:57 AM, Tom Lane wrote:
> If people are on board with throwing an error, I'll go see about
> writing a patch.
>
+1 from me.
David J.
Mark Dilger writes:
>> On Jun 4, 2017, at 2:19 PM, Andres Freund wrote:
>> Seems very unlikely that we'd ever want to do that. The right way to do
>> this is to simply move the SRF into the from list. Having the executor
>> support arbitrary sources of tuples would just complicate and slow down
On Wed, Jun 7, 2017 at 3:30 PM, Andres Freund wrote:
>
>
>
> On June 7, 2017 11:29:28 AM PDT, "Fabrízio de Royes Mello" <
fabriziome...@gmail.com> wrote:
> >On Fri, Jun 2, 2017 at 6:37 PM, Fabrízio de Royes Mello <
> >fabriziome...@gmail.com> wrote:
> >>
> >>
> >> On Fri, Jun 2, 2017 at 6:32 PM, F
On Sat, May 20, 2017 at 8:40 AM, Michael Paquier
wrote:
> On Fri, May 19, 2017 at 3:01 PM, Masahiko Sawada
> wrote:
>> Also, as Horiguchi-san pointed out earlier, walreceiver seems need the
>> similar fix.
>
> Actually, now that I look at it, ready_to_display should as well be
> protected by the
On June 7, 2017 11:29:28 AM PDT, "Fabrízio de Royes Mello"
wrote:
>On Fri, Jun 2, 2017 at 6:37 PM, Fabrízio de Royes Mello <
>fabriziome...@gmail.com> wrote:
>>
>>
>> On Fri, Jun 2, 2017 at 6:32 PM, Fabrízio de Royes Mello <
>fabriziome...@gmail.com> wrote:
>> >
>> > Hi all,
>> >
>> > This week
On Fri, Jun 2, 2017 at 6:37 PM, Fabrízio de Royes Mello <fabriziome...@gmail.com> wrote:
>
>
> On Fri, Jun 2, 2017 at 6:32 PM, Fabrízio de Royes Mello <fabriziome...@gmail.com> wrote:
> >
> > Hi all,
> >
> > This week I faced an out-of-disk-space problem in an 8TB production
> > cluster. During investiga
On Wed, Jun 7, 2017 at 9:49 AM, Mike Palmiotto
wrote:
> One thing that concerns me is the first EXPLAIN plan from regress_rls_dave:
> +EXPLAIN (COSTS OFF) SELECT * FROM part_document WHERE f_leak(dtitle);
> + QUERY PLAN
> +---
On Sun, Jun 4, 2017 at 11:27 AM, Mengxing Liu
wrote:
> "vmstat 1" output is as follow. Because I used only 30 cores (1/4 of all),
> cpu user time should be about 12*4 = 48.
> There seems to be no process blocked by IO.
>
> procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
On Fri, Jun 2, 2017 at 9:15 AM, Amit Kapila wrote:
> On Fri, Jun 2, 2017 at 6:38 PM, Robert Haas wrote:
>> On Fri, Jun 2, 2017 at 9:01 AM, Amit Kapila wrote:
>>> Your reasoning sounds sensible to me. I think the other way to attack
>>> this problem is that we can maintain some local queue in ea
On Tue, Jun 6, 2017 at 12:16 PM, Mengxing Liu
wrote:
> I think disk I/O is not the bottleneck in our experiment, but the global lock
> is.
A handy way to figure this kind of thing out is to run a query like
this repeatedly during the benchmark:
SELECT wait_event_type, wait_event FROM pg_stat_ac
On Tue, Jun 6, 2017 at 3:23 PM, David Fetter wrote:
> I'd bet on lack of tuits.
I expect that was part of it. Another thing to consider is that, for
numeric aggregates, the transition values don't generally get larger
as you aggregate, but for something like string_agg(), they will.
It's not cle
On 2017-05-08 09:12:13 -0400, Tom Lane wrote:
> Simon Riggs writes:
> > So rearranged code a little to keep it lean.
>
> Didn't you break it with that? As it now stands, the memcpy will
> copy the nonzero value.
I've not seen a fix and/or alleviating comment about this so far. Did I
miss somet
On Wed, Jun 7, 2017 at 12:29 PM, Jim Van Fleet wrote:
>> The basic idea is clear from your description, but it will be better
>> if you share the patch as well. It will not only help people to
>> review and provide you feedback but also allow them to test and see if
>> they can reproduce the numb
On Wed, Jun 7, 2017 at 12:49 PM, Andres Freund wrote:
> On 2017-06-07 07:49:00 -0300, Alvaro Herrera wrote:
>> Instead of adding a second 64 bit counter for multixacts, how about
>> first implementing something like TED which gets rid of multixacts (and
>> freezing thereof) altogether?
>
> -1 - th
On Wed, Jun 7, 2017 at 1:23 AM, Amit Langote
wrote:
> On 2017/06/07 11:57, Amit Langote wrote:
>> How about we export ExecPartitionCheck() out of execMain.c and call it
>> just before ExecFindPartition() using the root table's ResultRelInfo?
>
> Turns out there wasn't a need to export ExecPartitio
In a message of 30 May 2017 17:24:26, you wrote:
> > I still have three more questions. A new one:
> >
> >
> >
> >    my_command->line = expr_scanner_get_substring(sstate,
> >                                                   start_offset,
> > -
On 2017-06-07 07:49:00 -0300, Alvaro Herrera wrote:
> Instead of adding a second 64 bit counter for multixacts, how about
> first implementing something like TED which gets rid of multixacts (and
> freezing thereof) altogether?
-1 - that seems like too high a barrier. We've punted on improvements
On Wed, Jun 7, 2017 at 7:47 AM, Ashutosh Bapat
wrote:
> In ATExecAttachPartition() there's following code
>
> 13715 partnatts = get_partition_natts(key);
> 13716 for (i = 0; i < partnatts; i++)
> 13717 {
> 13718 AttrNumber partattno;
> 13719
> 13720
Hi
I got a strange error message - a false message - that max_connections is
lower on the standby than on the master, although these numbers were the
same. The issue was a wrong connection string in recovery.conf: the standby
could not check the master and used some defaults.
Regards
Pavel
Amit Kapila wrote on 06/07/2017 07:34:06 AM:
...
> > The down side is that on smaller configurations (single socket) where there
> > is less "lock thrashing" in the storage subsystem and there are multiple
> > Lwlocks to take for an exclusive acquire, there is a decided downturn in
> > perfor
On Wed, Jun 7, 2017 at 4:47 PM, Heikki Linnakangas wrote:
> On 06/06/2017 07:24 AM, Ashutosh Bapat wrote:
>>
>> On Tue, Jun 6, 2017 at 9:48 AM, Craig Ringer
>> wrote:
>>>
>>> On 6 June 2017 at 12:13, Ashutosh Bapat
>>> wrote:
>>>
>>>> What happens when the epoch is so low that the rest of the XI
On 06/07/2017 06:49 AM, Mike Palmiotto wrote:
> I ended up narrowing it down to 4 tables (one parent and 3 partitions)
> in order to demonstrate policy sorting and order of RLS/partition
> constraint checking. It should be much more straight-forward now, but
> let me know if there are any further r
On Sat, Jun 3, 2017 at 1:03 AM, Robert Haas wrote:
> On Fri, Jun 2, 2017 at 3:48 AM, Rafia Sabih
> wrote:
>
> I don't see how to do that. It could possibly be done with the TAP
> framework, but that exceeds my abilities.
>
> Here's an updated patch with a bunch of cosmetic fixes, plus I
> adjust
On Tue, Jun 6, 2017 at 9:24 PM, Jim Finnerty wrote:
> In some MPP systems, networking costs are modeled separately from I/O costs,
> processor costs, or memory access costs. I think this is what Ashutosh may
> have been getting at with /per-packet/ costs: in a more sophisticated fdw
> cost model
On 7 June 2017 at 16:42, Amit Khandekar wrote:
> The column bitmap set returned by GetUpdatedColumns() refer to
> attribute numbers w.r.t. to the root partition. And the
> mstate->resultRelInfo[] have attnos w.r.t. to the leaf partitions. So
> we need to do something similar to map_partition_varat
On Tue, Jun 6, 2017 at 3:39 AM, Alvaro Herrera wrote:
> FWIW I don't think calling these tablespaces "temporary" is the right
> word. It's not the tablespaces that are temporary. Maybe "evanescent".
While I would personally find it pretty hilarious to see the
EVANESCENT in kwlist.h, I think it'
On Wed, Jun 07, 2017 at 03:45:23AM +, Tsunakawa, Takayuki wrote:
> Could you also apply it to past versions if you don't mind? The oldest
> supported version 9.2 is already thread-aware.
Done.
My standard workflow is to wait a couple days to see if everything works nicely
before backportin
Amit Kapila writes:
> On Tue, Jun 6, 2017 at 10:14 PM, Tom Lane wrote:
>> By definition, the address range we're trying to reuse worked successfully
>> in the postmaster process. I don't see how forcing a specific address
>> could do anything but create an additional risk of postmaster startup
>
On Tue, Jun 6, 2017 at 9:12 PM, Michael Paquier
wrote:
> On Wed, Jun 7, 2017 at 9:52 AM, Joe Conway wrote:
>> Thanks Mike. I'll take a close look to verify output correctnes, but I
>> am concerned that the new tests are unnecessarily complex. Any other
>> opinions on that?
>
> Some tests would be
Robert Haas writes:
> On Wed, Jun 7, 2017 at 6:36 AM, Amit Kapila wrote:
>> I don't think so because this problem has been reported previously as
>> well [1][2] even before the commit in question.
>>
>> [1] -
>> https://www.postgresql.org/message-id/1ce5a19f-3b1d-bb1c-4561-0158176f65f1%40dunsla
On Tue, Jun 6, 2017 at 10:58 PM, Peter Eisentraut
wrote:
> The decision was made to add background workers to pg_stat_activity, but
> no facility was provided to tell the background workers apart. Is it
> now the job of every background worker to invent a hack to populate some
> other pg_stat_act
On Tue, Jun 6, 2017 at 1:00 AM, Jim Van Fleet wrote:
> Hi,
>
> I have been experimenting with splitting the ProcArrayLock into parts.
> That is, to Acquire the ProcArrayLock in shared mode, it is only necessary
> to acquire one of the parts in shared mode; to acquire the lock in exclusive
> mode,
On Wed, Jun 7, 2017 at 4:58 AM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
> On 6/6/17 15:58, Robert Haas wrote:
> > The problem with the status quo (after Peter's commit) is that there's
> > now nothing at all to identify the logical replication launcher, apart
> > from the wait_
On Tue, Jun 6, 2017 at 6:29 AM, Rafia Sabih
wrote:
> On Mon, Jun 5, 2017 at 8:06 PM, Robert Haas wrote:
>> Many of these seem worse, like these ones:
>>
>> - * Quit if we've reached records for another database. Unless the
>> + * Quit if we've reached records of another database.
On Wed, Jun 7, 2017 at 8:27 PM, Heikki Linnakangas wrote:
> Ok, I committed your patch, with some minor changes.
Thanks for the commit.
--
Michael
On Wed, Jun 7, 2017 at 6:36 AM, Amit Kapila wrote:
> I don't think so because this problem has been reported previously as
> well [1][2] even before the commit in question.
>
> [1] -
> https://www.postgresql.org/message-id/1ce5a19f-3b1d-bb1c-4561-0158176f65f1%40dunslane.net
> [2] - https://www.po
On Wed, Jun 7, 2017 at 03:18:49PM +1000, Neha Khatri wrote:
>
> On Mon, May 15, 2017 at 12:45 PM, Bruce Momjian wrote:
>
> On Thu, May 11, 2017 at 11:50:03PM -0400, Tom Lane wrote:
> > Michael Paquier writes:
> > > Bruce, the release notes do not mention yet that support for cleart
In ATExecAttachPartition() there's the following code:
13715 partnatts = get_partition_natts(key);
13716 for (i = 0; i < partnatts; i++)
13717 {
13718 AttrNumber partattno;
13719
13720 partattno = get_partition_col_attnum(key, i);
13721
13722
Vladimir Borodin writes:
> > On 6 June 2017 at 23:30, Sergey Burladyan wrote:
> >
> > Dmitriy Sarafannikov writes:
> >
> >> Starting and stopping master after running pg_upgrade but before rsync to
> >> collect statistics
> >> was a bad idea.
> >
> > But, starting and stopping master a
On 06/06/2017 06:09 AM, Michael Paquier wrote:
> On Thu, Jun 1, 2017 at 4:58 AM, Heikki Linnakangas wrote:
>> To fix, I suppose we can do what you did for SASL in your patch, and move
>> the cleanup of conn->gctx from closePGconn to pgDropConnection. And I
>> presume we need to do the same for the SSPI st
On 6 June 2017 at 23:52, Robert Haas wrote:
> On Fri, Jun 2, 2017 at 7:07 AM, Amit Khandekar wrote:
>> So, according to that, below would be the logic :
>>
>> Run partition constraint check on the original NEW row.
>> If it succeeds :
>> {
>> Fire BR UPDATE trigger on the original partition.
Alexander Korotkov wrote:
> Right. I used the term "64-bit epoch" during developer unconference, but
> that was ambiguous. It would be more correct to call it a "64-bit base".
> BTW, we will have to store two 64-bit bases: for xids and for multixacts,
> because they are completely independent co
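As a rough model of the "64-bit base" idea described above (all names here are hypothetical, not PostgreSQL's): a page would carry one 64-bit base per counter kind, tuple headers would keep only a 32-bit value, and the full value is the base plus the stored offset. Two independent bases are kept because xids and multixacts advance separately.

```python
XID32_MAX = 2**32 - 1

def full_value(base64: int, stored32: int) -> int:
    """Reconstruct a full 64-bit counter value from a page's 64-bit base
    plus the 32-bit value kept in the tuple header (hypothetical layout)."""
    assert 0 <= stored32 <= XID32_MAX
    return base64 + stored32

# Independent bases for the two counter kinds, per the quoted message.
xid_base = 2**33
multixact_base = 2**40

print(full_value(xid_base, 100))  # → 8589934692
```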
On Wed, Jun 7, 2017 at 10:47 AM, Heikki Linnakangas wrote:
> On 06/06/2017 07:24 AM, Ashutosh Bapat wrote:
>
>> On Tue, Jun 6, 2017 at 9:48 AM, Craig Ringer
>> wrote:
>>
>>> On 6 June 2017 at 12:13, Ashutosh Bapat
>>> wrote:
>>>
>>>> What happens when the epoch is so low that the rest of the XID
On Wed, Jun 7, 2017 at 12:37 AM, Robert Haas wrote:
> On Tue, Jun 6, 2017 at 2:21 PM, Tom Lane wrote:
>>> One thought is that the only places where shm_mq_set_sender() should
>>> be getting invoked during the main regression tests are
>>> ParallelWorkerMain() and ExecParallelGetReceiver, and both
Hi,
Can someone explain the usage of exporting snapshot when a logical replication
slot is created?
Thanks,
Sanyam Jain
On Tue, Jun 6, 2017 at 10:14 PM, Tom Lane wrote:
> Robert Haas writes:
>> I think the idea of retrying process creation (and I definitely agree
>> with Tom and Magnus that we have to retry process creation, not just
>> individual mappings) is a good place to start. Now if we find that we
>> are
On 07/06/17 03:00, Andres Freund wrote:
> On 2017-06-06 19:36:13 +0200, Petr Jelinek wrote:
>
>> As a side note, we are starting to have several IsSomeTypeOfProcess
>> functions for these kind of things. I wonder if bgworker infrastructure
>> should somehow provide this type of stuff (the proposed
On Sat, Jun 3, 2017 at 2:11 AM, Robert Haas wrote:
>
> + errmsg("default partition contains row(s) that would overlap with partition being created")));
>
> It doesn't really sound right to talk about rows overlapping with a
> partition. Partitions can overlap with each o
On Tue, Jun 6, 2017 at 11:54 PM, Robert Haas wrote:
> On Mon, Jun 5, 2017 at 2:51 AM, Amit Kapila wrote:
>>> Greg/Amit's idea of using the CTID field rather than an infomask bit
>>> seems like a possibly promising approach. Not everything that needs
>>> bit-space can use the CTID field, so using
On Wed, Jun 7, 2017 at 7:27 PM, Thomas Munro
wrote:
> On Wed, Jun 7, 2017 at 10:47 AM, Peter Geoghegan wrote:
>> I suppose you'll need two tuplestores for the ON CONFLICT DO UPDATE
>> case -- one for updated tuples, and the other for inserted tuples.
>
> Hmm. Right. INSERT ... ON CONFLICT DO UP
On Tue, Jun 6, 2017 at 4:05 PM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
> On 6/6/17 08:29, Bruce Momjian wrote:
> > On Tue, Jun 6, 2017 at 06:00:54PM +0800, Craig Ringer wrote:
> >> Tom's point is, I think, that we'll want to stay pg_upgrade
> >> compatible. So when we see a p
On Fri, Jun 02, 2017 at 05:58:59AM +, Noah Misch wrote:
> On Mon, May 29, 2017 at 01:43:26PM -0700, Michael Paquier wrote:
> > On Mon, May 29, 2017 at 1:38 PM, Daniele Varrazzo
> > wrote:
> > > Patch attached
> >
> > Right. I am adding that to the list of open items, and Heikki in CC
> > will
On Wed, May 31, 2017 at 09:14:17AM -0500, Kevin Grittner wrote:
> On Wed, May 31, 2017 at 1:44 AM, Noah Misch wrote:
>
> > IMMEDIATE ATTENTION REQUIRED.
>
> I should be able to complete review and testing by Friday. If there
> are problems I might not take action until Monday; otherwise I
> sho
On 06/06/2017 07:24 AM, Ashutosh Bapat wrote:
> On Tue, Jun 6, 2017 at 9:48 AM, Craig Ringer wrote:
>> On 6 June 2017 at 12:13, Ashutosh Bapat wrote:
>>> What happens when the epoch is so low that the rest of the XID does
>>> not fit in 32bits of tuple header? Or such a case should never arise?
>>> Storing
On Wed, Jun 7, 2017 at 10:47 AM, Peter Geoghegan wrote:
> On Mon, Jun 5, 2017 at 6:40 PM, Thomas Munro
> wrote:
>> After sleeping on it, I don't think we need to make that decision here
>> though. I think it's better to just move the tuplestores into
>> ModifyTableState so that each embedded DML
> On 6 June 2017 at 23:30, Sergey Burladyan wrote:
>
> Dmitriy Sarafannikov writes:
>
>> Starting and stopping master after running pg_upgrade but before rsync to
>> collect statistics
>> was a bad idea.
>
> But, starting and stopping master after running pg_upgrade is *required*
> by d