On Thu, Mar 14, 2019 at 6:12 AM David Rowley
wrote:
> On Wed, 6 Mar 2019 at 10:13, Robert Haas wrote:
> > Would anyone like to argue that there is any other reason not to
> > commit these patches?
>
> Hi Robert,
>
> Thanks for working on this. I'm sorry I didn't get a chance to
> dedicate some ti
On Wed, 6 Mar 2019 at 10:13, Robert Haas wrote:
> Would anyone like to argue that there is any other reason not to
> commit these patches?
Hi Robert,
Thanks for working on this. I'm sorry I didn't get a chance to
dedicate some time to look at it.
It looks like you've pushed all of this now. Ca
On Thu, Feb 28, 2019 at 3:27 PM Robert Haas wrote:
> I'm not currently aware of any remaining correctness issues with this
> code, although certainly there may be some. There has been a certain
> dearth of volunteers to review any of this. I do plan to poke at it a
> bit to see whether it has an
On Tue, Feb 26, 2019 at 5:10 PM Robert Haas wrote:
> Aside from these problems, I think I have spotted a subtle problem in
> 0001. I'll think about that some more and post another update.
0001 turned out to be guarding against the wrong problem. It supposed
that if we didn't get a coherent view
On Thu, Jan 31, 2019 at 1:02 PM Robert Haas wrote:
> New patch series attached.
And here's yet another new patch series, rebased over today's commit
and with a couple of other fixes:
1. I realized that the PartitionDirectory for the planner ought to be
attached to the PlannerGlobal, not the Plan
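(For reference, a minimal sketch of where such a directory ends up hanging off the planner and executor state; field names as in the eventual PostgreSQL 12 tree, shown here from memory, so treat the details as illustrative:)

    typedef struct PlannerGlobal
    {
        /* ... existing fields elided ... */
        PartitionDirectory partition_directory; /* one per planning cycle */
    } PlannerGlobal;

    typedef struct EState
    {
        /* ... existing fields elided ... */
        PartitionDirectory es_partition_directory; /* one per execution */
    } EState;

With one directory per planning cycle and one per query execution, every PartitionDesc lookup within that scope returns the same descriptor, even if the relcache entry is rebuilt in the meantime.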
On Fri, Dec 21, 2018 at 6:04 PM David Rowley
wrote:
> On Fri, 21 Dec 2018 at 09:43, Robert Haas wrote:
> > - I refactored expand_inherited_rtentry() to drive partition expansion
> > entirely off of PartitionDescs. The reason why this is necessary is
> > that it clearly will not work to have find_
On Tue, 5 Feb 2019 at 01:54, Robert Haas wrote:
>
> On Mon, Feb 4, 2019 at 12:02 AM David Rowley
> wrote:
> > If the PartitionDesc from the parallel worker has one more partition
> > than was there when the plan was built, then the partition index
> > to subplan index translation will be inco
On Mon, Feb 4, 2019 at 12:54 PM Robert Haas wrote:
> On Mon, Feb 4, 2019 at 12:02 AM David Rowley
> wrote:
> > If the PartitionDesc from the parallel worker has one more partition
> > than was there when the plan was built, then the partition index
> > to subplan index translation will be inc
On Mon, Feb 4, 2019 at 12:02 AM David Rowley
wrote:
> If the PartitionDesc from the parallel worker has one more partition
> than was there when the plan was built, then the partition index
> to subplan index translation will be incorrect as the
> find_matching_subplans_recurse() will call get
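(The remedy eventually adopted is to stop trusting plan-time partition indexes and re-derive the translation at executor startup. A sketch of the shape, not the committed code; relid_map is assumed to hold the partition OIDs recorded at plan time, in PartitionDesc order:)

    /*
     * Rebuild subplan_map against the PartitionDesc seen at execution
     * time.  Both OID arrays are in partition bound order, so they can
     * be merged in a single pass; a partition that did not exist at
     * plan time gets -1, meaning no subplan, prune it unconditionally.
     */
    int     pp_idx = 0;         /* walks the plan-time arrays */

    for (int ep_idx = 0; ep_idx < partdesc->nparts; ep_idx++)
    {
        if (pp_idx < pinfo->nparts &&
            partdesc->oids[ep_idx] == pinfo->relid_map[pp_idx])
            subplan_map[ep_idx] = pinfo->subplan_map[pp_idx++];
        else
            subplan_map[ep_idx] = -1;   /* concurrently added partition */
    }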
On Mon, 4 Feb 2019 at 16:45, Robert Haas wrote:
>
> On Sat, Feb 2, 2019 at 7:19 PM David Rowley
> wrote:
> > I think we do need to ensure that the PartitionDesc matches between
> > worker and leader. Have a look at choose_next_subplan_for_worker() in
> > nodeAppend.c. Notice that a call is made t
On Sat, Feb 2, 2019 at 7:19 PM David Rowley
wrote:
> I think we do need to ensure that the PartitionDesc matches between
> worker and leader. Have a look at choose_next_subplan_for_worker() in
> nodeAppend.c. Notice that a call is made to
> ExecFindMatchingSubPlans().
Thanks for the tip. I see t
On Sat, 2 Feb 2019 at 09:31, Robert Haas wrote:
> After having written this code, I'm still torn about whether to go
> further with this design. On the one hand, this is such boilerplate
> code that it's kinda hard to imagine it having too many more bugs; on
> the other hand, as you can see, it's
On Fri, Feb 1, 2019 at 9:00 AM Robert Haas wrote:
> I don't think we'd be using pqmq here, or shm_mq either, but I think
> the bigger issues is that starting a parallel query is already a
> pretty heavy operation, and so the added overhead of this is probably
> not very noticeable. I agree that i
On Thu, Jan 31, 2019 at 6:00 PM Alvaro Herrera wrote:
> > - 0003 doesn't have any handling for parallel query at this point, so
> > even though within a single backend a single query execution will
> > always get the same PartitionDesc for the same relation, the answers
> > might not be consistent
On 2019-Jan-31, Robert Haas wrote:
> OK, that seems to be pretty easy. New patch series attached. The
> patch with that new logic is 0004. I've consolidated some of the
> things I had as separate patches in my last post and rewritten the
> commit messages to explain more clearly the purpose of
On Tue, Jan 29, 2019 at 1:59 PM Robert Haas wrote:
> I don't know how to reduce this to something reliable enough to
> include it in the regression tests, and maybe we don't really need
> that, but it's good to know that this is not a purely theoretical
> problem. I think next I'll try to write s
On Fri, Jan 25, 2019 at 4:18 PM Alvaro Herrera wrote:
> > I wrote a little patch that stores the relation OIDs of the partitions
> > into the PartitionedPruneRelInfo and then, at execution time, does an
> > Assert() that what it gets matches what existed at plan time. I
> > figured that a good st
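(In code form, the cross-check being described is only a few lines; a sketch using names from the discussion rather than any final patch:)

    /* Plan time: remember which partitions the pruning info was built on. */
    pinfo->nparts = partdesc->nparts;
    pinfo->relid_map = (Oid *) palloc(sizeof(Oid) * partdesc->nparts);
    memcpy(pinfo->relid_map, partdesc->oids, sizeof(Oid) * partdesc->nparts);

    /* Execution time: verify nothing shifted underneath the plan. */
    Assert(partdesc->nparts == pinfo->nparts);
    for (int i = 0; i < partdesc->nparts; i++)
        Assert(partdesc->oids[i] == pinfo->relid_map[i]);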
On Tue, Jan 29, 2019 at 12:29 AM Simon Riggs wrote:
>> But I kind of wonder whether we're really gaining as much as you think
>> by trying to support concurrent DETACH in the first place. If there
>> are queries running against the table, there's probably at least
>> AccessShareLock on the partit
On Mon, 28 Jan 2019 at 20:15, Robert Haas wrote:
> On Fri, Jan 25, 2019 at 4:18 PM Alvaro Herrera
> wrote:
> > Right, the planner/executor "disconnect" is one of the challenges, and
> > why I was trying to keep the old copy of the PartitionDesc around
> > instead of building updated ones as need
On Fri, Jan 25, 2019 at 4:18 PM Alvaro Herrera wrote:
> Right, the planner/executor "disconnect" is one of the challenges, and
> why I was trying to keep the old copy of the PartitionDesc around
> instead of building updated ones as needed.
I agree that would be simpler, but I just don't see how
On 2019-Jan-25, Robert Haas wrote:
> I finally gotten a little more time to work on this. It took me a
> while to understand that a PartitionedRelPruneInfos assumes that the
> indexes of partitions in the PartitionDesc don't change between
> planning and execution, because subplan_map[] and subpl
On Thu, Dec 20, 2018 at 3:58 PM Alvaro Herrera wrote:
> Namely: how does this handle the case of partition pruning structure
> being passed from planner to executor, if an attach happens in the
> middle of it and puts a partition in between existing partitions? Array
> indexes of any partitions t
On Tue, 25 Dec 2018 at 08:15, Robert Haas wrote:
>
> On Fri, Dec 21, 2018 at 6:04 PM David Rowley
> wrote:
> > I don't think you need to qsort() the Oids before locking. What the
> > qsort() does today is ensure we get a consistent locking order. Any
> > other order would surely do, providing we
On Fri, Dec 21, 2018 at 6:06 PM David Rowley
wrote:
> > I didn't handle that. If partition pruning relies on nothing changing
> > between planning and execution, isn't that broken regardless of any of
> > this? It's true that with the simple query protocol we'll hold locks
> > continuously from
On Fri, Dec 21, 2018 at 6:04 PM David Rowley
wrote:
> I don't think you need to qsort() the Oids before locking. What the
> qsort() does today is ensure we get a consistent locking order. Any
> other order would surely do, providing we stick to it consistently. I
> think PartitionDesc order is fin
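(The pattern in question is short enough to show; a sketch of the status quo, using the stock oid_cmp comparator:)

    /*
     * Sort the partition OIDs so that every backend locks them in the
     * same order; any consistent order prevents deadlocks between
     * sessions locking the same set of partitions.
     */
    qsort(oids, noids, sizeof(Oid), oid_cmp);
    for (int i = 0; i < noids; i++)
        LockRelationOid(oids[i], lockmode);

David's point is that PartitionDesc (bound) order is just as consistent as OID order, so sticking to it would make the sort unnecessary.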
On Fri, 21 Dec 2018 at 10:05, Robert Haas wrote:
> On Thu, Dec 20, 2018 at 3:58 PM Alvaro Herrera
> wrote:
> > Namely: how does this handle the case of partition pruning structure
> > being passed from planner to executor, if an attach happens in the
> > middle of it and puts a partition in betw
On Fri, 21 Dec 2018 at 09:43, Robert Haas wrote:
> - I refactored expand_inherited_rtentry() to drive partition expansion
> entirely off of PartitionDescs. The reason why this is necessary is
> that it clearly will not work to have find_all_inheritors() use a
> current snapshot to decide what chil
On Thu, Dec 20, 2018 at 4:38 PM Robert Haas wrote:
> Lowering the lock level might also make something that was previously
> safe into something unsafe, because now there's no longer a guarantee
> that invalidation messages are received soon enough. With
> AccessExclusiveLock, we'll send invalidat
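(The guarantee at stake lives in lock acquisition itself; roughly, paraphrasing lmgr.c's LockRelationOid from memory:)

    void
    LockRelationOid(Oid relid, LOCKMODE lockmode)
    {
        LOCKTAG     tag;
        LockAcquireResult res;

        SET_LOCKTAG_RELATION(tag, MyDatabaseId, relid);
        res = LockAcquire(&tag, lockmode, false, false);

        /*
         * On a freshly acquired lock, catch up on pending invalidation
         * messages.  A backend that blocked behind AccessExclusiveLock
         * DDL therefore sees its effects as soon as it gets the lock;
         * with weaker DDL locks nobody blocks, so there is no such
         * well-defined catch-up point.
         */
        if (res != LOCKACQUIRE_ALREADY_HELD)
            AcceptInvalidationMessages();
    }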
On Thu, Dec 20, 2018 at 4:11 PM Alvaro Herrera wrote:
> Oh, so maybe this case is already handled by plan invalidation -- I
> mean, if we run DDL, the stored plan is thrown away and a new one
> recomputed. IOW this was already a solved problem and I didn't need to
> spend effort on it. /me slaps
On 2018-Dec-20, Robert Haas wrote:
> I didn't handle that. If partition pruning relies on nothing changing
> between planning and execution, isn't that broken regardless of any of
> this? It's true that with the simple query protocol we'll hold locks
> continuously from planning into execution,
On Thu, Dec 20, 2018 at 3:58 PM Alvaro Herrera wrote:
> > Introduce the concept of a partition directory.
> >
> > Teach the optimizer and executor to use it, so that a single planning
> > cycle or query execution gets the same PartitionDesc for the same table
> > every time it looks it up. This d
Thanks for this work! I like the name "partition directory".
On 2018-Dec-20, Robert Haas wrote:
> 0002 introduces the concept of a partition directory. The idea is
> that the planner will create a partition directory, and so will the
> executor, and all calls which occur in those places to
> Re
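(The resulting interface is small; roughly, as it later landed in partdesc.h, quoted from memory and so best treated as a sketch:)

    PartitionDirectory CreatePartitionDirectory(MemoryContext mcxt);
    PartitionDesc PartitionDirectoryLookup(PartitionDirectory pdir,
                                           Relation rel);
    void DestroyPartitionDirectory(PartitionDirectory pdir);

The directory memoizes the first PartitionDesc it sees for each relation and keeps handing that one back, so a single planning cycle or execution gets a stable answer even if the relcache entry is rebuilt concurrently.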
On Tue, Dec 18, 2018 at 8:04 PM Michael Paquier wrote:
> On Tue, Dec 18, 2018 at 01:41:06PM -0500, Robert Haas wrote:
> > OK. I'll post what I have by the end of the week.
>
> Thanks, Robert.
OK, so I got slightly delayed here by utterly destroying my laptop,
but I've mostly reconstructed what I
On Tue, Dec 18, 2018 at 01:41:06PM -0500, Robert Haas wrote:
> OK. I'll post what I have by the end of the week.
Thanks, Robert.
On Mon, Dec 17, 2018 at 6:44 PM Michael Paquier wrote:
> Agreed. This patch has value, and somebody else could always take it
> from the point where you were.
OK. I'll post what I have by the end of the week.
On Mon, Dec 17, 2018 at 06:52:51PM -0300, Alvaro Herrera wrote:
> On 2018-Dec-17, Robert Haas wrote:
> This patch missing the CF deadline would not be a happy way for me to
> begin the new year.
>
> I'm not sure what's the best way to move forward with this patch, but I
> encourage you to post wha
On 2018-Dec-17, Robert Haas wrote:
> I have done a bit more work on this, but need to spend some more time
> on it before I have something that is worth posting. Not sure whether
> I'll get to that before the New Year at this point.
This patch missing the CF deadline would not be a happy way for
On Sun, Dec 16, 2018 at 6:43 AM Michael Paquier wrote:
> On Sat, Dec 15, 2018 at 01:04:00PM +0300, Sergei Kornilov wrote:
> > Seems we erroneously moved this thread to next CF:
> > https://commitfest.postgresql.org/21/1842/
> > Can you close this entry?
>
> Robert has committed a patch to refactor
On Sat, Dec 15, 2018 at 01:04:00PM +0300, Sergei Kornilov wrote:
> Seems we erroneously moved this thread to next CF:
> https://commitfest.postgresql.org/21/1842/
> Can you close this entry?
Robert has committed a patch to slightly refactor the list construction of
RelationBuildPartitionDesc thanks to
Hello
> OK. Rebased again, and committed (although I forgot to include a link
> to this discussion - sorry about that).
Seems we erroneously moved this thread to next CF:
https://commitfest.postgresql.org/21/1842/
Can you close this entry?
regards, Sergei
On Wed, Nov 14, 2018 at 9:03 PM Amit Langote
wrote:
> On 2018/11/15 4:27, Robert Haas wrote:
> > RelationBuildPartitionDesc doesn't lock the children
> > whose relpartbounds it is fetching (!), so unless we're guaranteed to
> > have already locked those children earlier for some other reason, we
>
On Sun, Nov 18, 2018 at 9:43 PM Amit Langote
wrote:
> On 2018/11/17 9:06, Michael Paquier wrote:
> > On Fri, Nov 16, 2018 at 09:38:40AM -0500, Robert Haas wrote:
> >> OK, but it seems to me that your version of my patch rearranges the
> >> code more than necessary.
> >>
> >> How about the attached
On 2018/11/17 9:06, Michael Paquier wrote:
> On Fri, Nov 16, 2018 at 09:38:40AM -0500, Robert Haas wrote:
>> OK, but it seems to me that your version of my patch rearranges the
>> code more than necessary.
>>
>> How about the attached?
>
> What you are proposing here looks good to me. Thanks!
Me
On Fri, Nov 16, 2018 at 09:38:40AM -0500, Robert Haas wrote:
> OK, but it seems to me that your version of my patch rearranges the
> code more than necessary.
>
> How about the attached?
What you are proposing here looks good to me. Thanks!
On Thu, Nov 15, 2018 at 8:58 PM Amit Langote
wrote:
> The partition_bounds_copy() is not because of your changes, it's there in
> HEAD.
OK, but it seems to me that your version of my patch rearranges the
code more than necessary.
How about the attached?
On Fri, Nov 16, 2018 at 1:00 PM Michael Paquier wrote:
> On Fri, Nov 16, 2018 at 10:57:57AM +0900, Amit Langote wrote:
> > Maybe partition_bounds_create() should've had a MemoryContext argument to
> > pass it the context we want it to create the PartitionBoundInfo in. That
> > way, we can simply
On Fri, Nov 16, 2018 at 10:57:57AM +0900, Amit Langote wrote:
> Maybe partition_bounds_create() should've had a MemoryContext argument to
> pass it the context we want it to create the PartitionBoundInfo in. That
> way, we can simply pass rd_pdcxt to it and avoid making a copy. As is,
> we're now
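(For illustration, the proposed shape; a hypothetical signature that does not exist in the tree at this point:)

    /*
     * Hypothetical: the result is allocated in resultcxt, while scratch
     * allocations stay in the caller's current context, so nothing
     * transient leaks into rd_pdcxt and no copy is needed afterwards.
     */
    PartitionBoundInfo
    partition_bounds_create(PartitionBoundSpec **boundspecs, int nparts,
                            PartitionKey key, int **mapping,
                            MemoryContext resultcxt);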
On 2018/11/15 22:57, Robert Haas wrote:
> On Thu, Nov 15, 2018 at 12:38 AM Michael Paquier wrote:
>> On Thu, Nov 15, 2018 at 01:38:55PM +0900, Amit Langote wrote:
>>> I've fixed 0001 again to re-order the code so that allocations happen in the
>>> correct context and now tests pass with the rebased p
On Thu, Nov 15, 2018 at 12:38 AM Michael Paquier wrote:
> On Thu, Nov 15, 2018 at 01:38:55PM +0900, Amit Langote wrote:
> > I've fixed 0001 again to re-order the code so that allocations happen in the
> > correct context and now tests pass with the rebased patches.
>
> I have been looking at 0001, an
On 2018/11/15 15:22, Michael Paquier wrote:
>> If there are no partitions, nparts is 0, and other fields are NULL, though
>> rd_partdesc itself is never NULL.
>
> I find it a bit confusing that both concepts have the same meaning, aka
> that a relation has no partition, and that it is actually relkin
On Thu, Nov 15, 2018 at 02:53:47PM +0900, Amit Langote wrote:
> As things stand today, rd_partdesc of a partitioned table must always be
> non-NULL. In fact, there are many places in the backend code that Assert it:
>
> [...]
I have noticed those, and they actually would not care much if
rd_partd
On 2018/11/15 14:38, Michael Paquier wrote:
> On Thu, Nov 15, 2018 at 01:38:55PM +0900, Amit Langote wrote:
>> I've fixed 0001 again to re-order the code so that allocations happen in the
>> correct context and now tests pass with the rebased patches.
>
> I have been looking at 0001, and it seems to
On Thu, Nov 15, 2018 at 01:38:55PM +0900, Amit Langote wrote:
> I've fixed 0001 again to re-order the code so that allocations happen in the
> correct context and now tests pass with the rebased patches.
I have been looking at 0001, and it seems to me that you make even more
messy the current situati
On 2018/11/15 11:03, Amit Langote wrote:
> As Michael pointed out, the first cleanup patch needs to be rebased due to
> a recent commit [1]. I did that to see if something we did in that commit
> made things worse for your patch, but seems fine. I had to go and change
> things outside RelationBui
On 2018/11/15 4:27, Robert Haas wrote:
> RelationBuildPartitionDesc doesn't lock the children
> whose relpartbounds it is fetching (!), so unless we're guaranteed to
> have already locked those children earlier for some other reason, we
> could grab the partition bound at this point and then it coul
On Wed, Nov 14, 2018 at 02:27:31PM -0500, Robert Haas wrote:
> Here are a couple of patches to illustrate this approach to this part
> of the overall problem. 0001 is, I think, a good cleanup that may as
> well be applied in isolation; it makes the code in
> RelationBuildPartitionDesc both cleaner
On Fri, Nov 9, 2018 at 9:50 AM Robert Haas wrote:
> I had another idea, too. I think we might be able to reuse the
> technique Noah invented in 4240e429d0c2d889d0cda23c618f94e12c13ade7.
> That is:
>
> - make a note of SharedInvalidMessageCounter before doing any of the
> relevant catalog lookups
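(The full loop from that technique is short; a sketch:)

    for (;;)
    {
        uint64  inval_count = SharedInvalidMessageCounter;

        /* ... perform the relevant catalog lookups ... */

        AcceptInvalidationMessages();
        if (inval_count == SharedInvalidMessageCounter)
            break;      /* nothing changed meanwhile; results are coherent */
        /* otherwise discard what we computed and start over */
    }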
On Thu, Nov 8, 2018 at 3:59 PM David Rowley
wrote:
> On 9 November 2018 at 05:34, Robert Haas wrote:
> > I suspect the only good way of fixing this problem is using a single
> > snapshot to perform both the scan of pg_inherits and the subsequent
> > pg_class lookups. That way, you know that you
On 9 November 2018 at 05:34, Robert Haas wrote:
> I suspect the only good way of fixing this problem is using a single
> snapshot to perform both the scan of pg_inherits and the subsequent
> pg_class lookups. That way, you know that you are seeing the state of
> the whole partitioning hierarchy a
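(Concretely, that would mean registering one snapshot up front and using it for both the pg_inherits scan and the per-child pg_class reads, bypassing the syscache for the latter. A sketch under those assumptions:)

    Snapshot    snap = RegisterSnapshot(GetLatestSnapshot());

    scan = systable_beginscan(inhrel, InheritsParentIndexId, true,
                              snap, 1, &key);
    /* ... collect child OIDs from pg_inherits ... */

    /* ... then fetch each child's pg_class row with the same snap,
     * rather than via the syscache, so both reflect one instant ... */

    UnregisterSnapshot(snap);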
On Wed, Nov 7, 2018 at 1:37 PM Robert Haas wrote:
> > Maybe you could give my patch a look.
>
> I have, a bit.
While thinking about this problem a bit more, I realized that what is
called RelationBuildPartitionDesc in master and BuildPartitionDesc in
Alvaro's patch has a synchronization problem a
On 2018/11/08 11:01, Robert Haas wrote:
> On Wed, Nov 7, 2018 at 7:06 PM David Rowley
> wrote:
>> While the find_all_inheritors() call is something I'd like to see
>> gone, I assume it was done that way since an UPDATE might route a
>> tuple to a partition that there is no subplan for and due to I
On 8 November 2018 at 15:01, Robert Haas wrote:
> Honestly, I *think* that the reason that find_all_inheritors() call is
> there is because I had the idea that it was important to try to lock
> partition hierarchies in the same order in all cases so as to avoid
> spurious deadlocks. However, I do
On Wed, Nov 7, 2018 at 7:06 PM David Rowley
wrote:
> While the find_all_inheritors() call is something I'd like to see
> gone, I assume it was done that way since an UPDATE might route a
> tuple to a partition that there is no subplan for and due to INSERT
> with VALUES not having any RangeTblEntr
On 8 November 2018 at 05:05, Robert Haas wrote:
> That seems OK at present, because no new partitions can have appeared
> since ExecSetupPartitionTupleRouting() acquired locks. But if we
> allow new partitions to be added with only ShareUpdateExclusiveLock,
> then I think there would be a problem
On Wed, Nov 7, 2018 at 12:58 PM Alvaro Herrera wrote:
> On 2018-Nov-06, Robert Haas wrote:
> > - get_partition_dispatch_recurse and ExecCreatePartitionPruneState
> > both call RelationGetPartitionDesc.
>
> My patch deals with this by caching descriptors in the active snapshot.
> So those two thing
On 2018-Nov-06, Robert Haas wrote:
> - get_partition_dispatch_recurse and ExecCreatePartitionPruneState
> both call RelationGetPartitionDesc.
My patch deals with this by caching descriptors in the active snapshot.
So those two things would get the same partition descriptor. There's no
RelationGe
On Tue, Nov 6, 2018 at 5:09 PM Robert Haas wrote:
> - get_partition_dispatch_recurse and ExecCreatePartitionPruneState
> both call RelationGetPartitionDesc. Presumably, this means that if
> the partition descriptor gets updated on the fly, the tuple routing
> and partition dispatch code could end
On Tue, Nov 6, 2018 at 10:18 PM Alvaro Herrera wrote:
> > > Throw tuples destined for that partition away?
> > Surely not. (/me doesn't beat straw men anyway.)
>
> Hmm, apparently this can indeed happen with my patch :-(
D'oh. This is a hard problem, especially the part of it that involves
hand
On 2018-Nov-06, Alvaro Herrera wrote:
> On 2018-Nov-06, Robert Haas wrote:
> > Throw tuples destined for that partition away?
>
> Surely not. (/me doesn't beat straw men anyway.)
Hmm, apparently this can indeed happen with my patch :-(
On Tue, Aug 7, 2018 at 9:29 AM Andres Freund wrote:
> One approach would be to make sure that everything relying on
> rt_partdesc staying the same stores its value in a local variable, and
> then *not* free the old version of rt_partdesc (etc) when the refcount >
> 0, but delay that to the Relatio
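(One way to get that effect without per-caller bookkeeping, and roughly where later patches went, is to give each PartitionDesc its own memory context and, instead of freeing the old one while it may still be referenced, reparent it beneath the new one; a sketch:)

    /*
     * After building the new descriptor in new_pdcxt: an old rd_pdcxt
     * may hold a PartitionDesc that running code still points at.
     * Keep it alive, without leaking it, by reattaching it as a child
     * of the new context; it is freed whenever the whole relcache
     * entry is finally rebuilt at refcount zero.
     */
    if (rel->rd_pdcxt != NULL)
        MemoryContextSetParent(rel->rd_pdcxt, new_pdcxt);
    rel->rd_pdcxt = new_pdcxt;
    rel->rd_partdesc = partdesc;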
On Tue, 6 Nov 2018 at 11:06, Robert Haas wrote:
> On Tue, Nov 6, 2018 at 2:01 PM Simon Riggs wrote:
> > If you can remove the ERROR without any other adverse effects, that
> sounds great.
> >
> > Please let us know what, if any, adverse effects would be caused so we
> can discuss. Thanks
>
> Wel
On Tue, Nov 6, 2018 at 2:10 PM Alvaro Herrera wrote:
> On 2018-Nov-06, Robert Haas wrote:
> > If you don't throw an error when a partition is concurrently detached
> > and then someone routes a tuple to that portion of the key space, what
> > DO you do? Continue inserting tuples into the table ev
On 2018-Nov-06, Robert Haas wrote:
> If you don't throw an error when a partition is concurrently detached
> and then someone routes a tuple to that portion of the key space, what
> DO you do? Continue inserting tuples into the table even though it's
> no longer a partition?
Yes -- the table was
On Tue, Nov 6, 2018 at 2:01 PM Simon Riggs wrote:
> If you can remove the ERROR without any other adverse effects, that sounds
> great.
>
> Please let us know what, if any, adverse effects would be caused so we can
> discuss. Thanks
Well, I've already written about this in two previous emails o
On Tue, 6 Nov 2018 at 10:56, Robert Haas wrote:
> On Tue, Nov 6, 2018 at 1:54 PM Simon Riggs wrote:
> > Error in the COPY or in the DDL? COPY preferred. Somebody with insert
> rights shouldn't be able to prevent a table-owner level action. People
> normally drop partitions to save space, so it c
On Tue, Nov 6, 2018 at 1:54 PM Simon Riggs wrote:
> Error in the COPY or in the DDL? COPY preferred. Somebody with insert rights
> shouldn't be able to prevent a table-owner level action. People normally drop
> partitions to save space, so it could be annoying if that was interrupted.
Yeah, the
On Tue, 6 Nov 2018 at 10:10, Robert Haas wrote:
> With this
> approach, already-running queries won't take into account the fact
> that new partitions have been added, but that seems at least tolerable
> and perhaps desirable.
>
Desirable, imho. No data added after a query starts would be visib
On 2018-Nov-06, Robert Haas wrote:
> If you're not hacking on this patch set too actively right at the
> moment, I'd like to spend some time hacking on the CREATE/ATTACH side
> of things and see if I can produce something committable for that
> portion of the problem.
I'm not -- feel free to hack
On Thu, Oct 25, 2018 at 4:26 PM Alvaro Herrera wrote:
> Firstly, I took Robert's advice and removed the CONCURRENTLY keyword
> from the syntax. We just do it that way always. When there's a default
> partition, only that partition is locked with an AEL; all the rest is
> locked with ShareUpdateE
Hello
Here's my take on this feature, building on David Rowley's version.
Firstly, I took Robert's advice and removed the CONCURRENTLY keyword
from the syntax. We just do it that way always. When there's a default
partition, only that partition is locked with an AEL; all the rest is
locked with Sh
On 21 August 2018 at 13:59, Robert Haas wrote:
> On Mon, Aug 20, 2018 at 4:21 PM, Alvaro Herrera
> wrote:
>> I wonder if this all stems from a misunderstanding of what I suggested
>> to David offlist. My suggestion was that the catalog scans would
>> continue to use the catalog MVCC snapshot, an
On Mon, Aug 20, 2018 at 4:21 PM, Alvaro Herrera
wrote:
> I wonder if this all stems from a misunderstanding of what I suggested
> to David offlist. My suggestion was that the catalog scans would
> continue to use the catalog MVCC snapshot, and that the relcache entries
> would contain all the par
On 2018-Aug-13, Robert Haas wrote:
> I think this is a somewhat confused analysis. We don't use
> SnapshotAny for catalog scans, and we never have. We used to use
> SnapshotNow, and we now use a current MVCC snapshot. What you're
> talking about, I think, is possibly using the transaction snaps
On 14 August 2018 at 04:00, Robert Haas wrote:
> I've thought about similar things, but I think there's a pretty deep
> can of worms. For instance, if you built a relcache entry using the
> transaction snapshot, you might end up building a seemingly-valid
> relcache entry for a relation that has
On Sun, Aug 12, 2018 at 9:05 AM, David Rowley
wrote:
> This is not a fully baked idea, but I'm wondering if a better way to
> do this, instead of having this PartitionIsValid macro to determine if
> the partition should be visible to the current transaction ID, we
> could, when we invalidate a rel
On 8 August 2018 at 01:29, Andres Freund wrote:
> One approach would be to make sure that everything relying on
> rt_partdesc staying the same stores its value in a local variable, and
> then *not* free the old version of rt_partdesc (etc) when the refcount >
> 0, but delay that to the RelationClo
Hi,
On 2018-08-09 20:57:35 +0200, Peter Eisentraut wrote:
> On 07/08/2018 15:29, Andres Freund wrote:
> > I don't think that solves the problem that an arriving relcache
> > invalidation would trigger a rebuild of rd_partdesc, while it actually
> > is referenced by running code.
>
> The problem i
On 07/08/2018 15:29, Andres Freund wrote:
> I don't think that solves the problem that an arriving relcache
> invalidation would trigger a rebuild of rd_partdesc, while it actually
> is referenced by running code.
The problem is more generally that a relcache invalidation changes all
pointers that
On 8 August 2018 at 01:29, Andres Freund wrote:
> On 2018-08-08 01:23:51 +1200, David Rowley wrote:
>> I'm not proposing that sessions running older snapshots can't see that
>> there's a new partition. The code I have uses PartitionIsValid() to
>> test if the partition should be visible to the sna
On 2018-08-08 01:23:51 +1200, David Rowley wrote:
> On 8 August 2018 at 00:47, Andres Freund wrote:
> > On 2018-08-08 00:40:12 +1200, David Rowley wrote:
> >> 1. Obtain a ShareUpdateExclusiveLock on the partitioned table rather
> >> than an AccessExclusiveLock.
> >> 2. Do all the normal partition
On 8 August 2018 at 00:47, Andres Freund wrote:
> On 2018-08-08 00:40:12 +1200, David Rowley wrote:
>> 1. Obtain a ShareUpdateExclusiveLock on the partitioned table rather
>> than an AccessExclusiveLock.
>> 2. Do all the normal partition attach partition validation.
>> 3. Insert pg_partition recor
On 7 August 2018 at 13:47, Andres Freund wrote:
> Hi,
>
> On 2018-08-08 00:40:12 +1200, David Rowley wrote:
>> 1. Obtain a ShareUpdateExclusiveLock on the partitioned table rather
>> than an AccessExclusiveLock.
>> 2. Do all the normal partition attach partition validation.
>> 3. Insert pg_partiti
Hi,
On 2018-08-08 00:40:12 +1200, David Rowley wrote:
> 1. Obtain a ShareUpdateExclusiveLock on the partitioned table rather
> than an AccessExclusiveLock.
> 2. Do all the normal partition attach partition validation.
> 3. Insert pg_partition record with partvalid = true.
> 4. Invalidate relcache
On 3 August 2018 at 01:25, David Rowley wrote:
> 1. Do all the normal partition attach partition validation.
> 2. Insert a record into pg_partition with partisvalid=false
> 3. Obtain a session-level ShareUpdateExclusiveLock on the partitioned table.
> 4. Obtain a session-level AccessExclusiveLock