On Sun, Feb 10, 2019 at 12:31 AM Tom Lane wrote:
>
> Julien Rouhaud writes:
> > I just hit one of the asserts (in relation_open()) added in
> > b04aeb0a053. Here's a simple reproducer:
>
> Yeah, I think this is the same issue being discussed in
>
> https://www.postgresql.org/message-id/flat/1946
Julien Rouhaud writes:
> I just hit one of the asserts (in relation_open()) added in
> b04aeb0a053. Here's a simple reproducer:
Yeah, I think this is the same issue being discussed in
https://www.postgresql.org/message-id/flat/19465.1541636036%40sss.pgh.pa.us
I imagine the patch David recently
Hi,
On Sun, Sep 30, 2018 at 7:18 PM Tom Lane wrote:
>
> I think that the call sites should ultimately look like
>
> Assert(CheckRelationLockedByMe(...));
>
> but for hunting down the places where the assertion currently fails,
> it's more convenient if it's just an elog(WARNING).
I just
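To illustrate the two call-site styles Tom describes above, here is a minimal sketch in C; CheckRelationLockedByMe() is the real lmgr function, but the helper name, the expected_lockmode argument, and the FIND_OFFENDING_CALLERS switch are hypothetical and not taken from the actual patches:

#include "postgres.h"
#include "storage/lmgr.h"
#include "utils/rel.h"

/*
 * Sketch only: check that the current backend already holds at least the
 * expected lock on "rel".
 */
static void
check_relation_lock(Relation rel, LOCKMODE expected_lockmode)
{
#ifdef FIND_OFFENDING_CALLERS		/* hypothetical switch */
	/* Temporary form: just warn, so failing call sites can be hunted down. */
	if (!CheckRelationLockedByMe(rel, expected_lockmode, true))
		elog(WARNING, "relation \"%s\" is not locked as expected",
			 RelationGetRelationName(rel));
#else
	/* Eventual form, once all call sites are known to take the lock. */
	Assert(CheckRelationLockedByMe(rel, expected_lockmode, true));
#endif
}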
On Tue, Oct 9, 2018 at 2:35 PM Tom Lane wrote:
> > That last part could *easily* change in a future release. We've
> > already started to allow CTAS with parallel query, and there have
> > already been multiple people wanting to allow more. It would be a
> > shame if we threw up additional obsta
Robert Haas writes:
> On Sat, Oct 6, 2018 at 2:59 PM Tom Lane wrote:
>> The reasons why we need locks on tables not physically accessed by the
>> query are (a) to ensure that we've blocked, or received sinval messages
>> for, any DDL related to views or partition parent tables, in case that
>> wo
On Sat, Oct 6, 2018 at 2:59 PM Tom Lane wrote:
> The reasons why we need locks on tables not physically accessed by the
> query are (a) to ensure that we've blocked, or received sinval messages
> for, any DDL related to views or partition parent tables, in case that
> would invalidate the plan; (b
On Tue, Oct 9, 2018 at 11:07 PM Tom Lane wrote:
>
> Amit Langote writes:
> > On 2018/10/08 3:55, Tom Lane wrote:
> >> I didn't like the idea of unifying ModifyTable.nominalRelation with
> >> the partition root info. Those fields serve different masters ---
> >> nominalRelation, at least in its o
Amit Langote writes:
> On 2018/10/08 3:55, Tom Lane wrote:
>> I didn't like the idea of unifying ModifyTable.nominalRelation with
>> the partition root info. Those fields serve different masters ---
>> nominalRelation, at least in its original intent, is only meant for
>> use of EXPLAIN and might
On 2018/10/08 8:18, Tom Lane wrote:
> I wrote:
>> Still need to think a bit more about whether we want 0005 in
>> anything like its current form.
>
> So I poked at that for a bit, and soon realized that the *main* problem
> there is that ExecFindRowMark() eats O(N^2) time due to repeated searches
On 2018/10/08 9:29, David Rowley wrote:
> On 8 October 2018 at 13:13, Tom Lane wrote:
>> The idea I had in mind was to allow hard pruning of any leaf that's
>> been excluded *at plan time* based on partition constraints seen in
>> its parent rel(s). That should be safe enough as long as we take
>
On 2018/10/09 0:38, Tom Lane wrote:
> I wrote:
>> Keeping that comparison in mind, I'm inclined to think that 0001
>> is the best thing to do for now. The incremental win from 0002
>> is not big enough to justify the API break it creates, while your
>> 0005 is not really attacking the problem the
On 2018/10/08 3:55, Tom Lane wrote:
> I didn't like the idea of unifying ModifyTable.nominalRelation with
> the partition root info. Those fields serve different masters ---
> nominalRelation, at least in its original intent, is only meant for
> use of EXPLAIN and might have nothing to do with wha
On 2018/10/07 3:59, Tom Lane wrote:
> Amit Langote writes:
>> On 2018/10/05 5:59, Tom Lane wrote:
>>> So I'm inclined to just omit 0003. AFAICS this would only mean that
>>> we couldn't drop the global PlanRowMarks list from PlannedStmt, which
>>> does not bother me much.
>
>> To be honest, I to
I wrote:
> Keeping that comparison in mind, I'm inclined to think that 0001
> is the best thing to do for now. The incremental win from 0002
> is not big enough to justify the API break it creates, while your
> 0005 is not really attacking the problem the right way.
I've pushed 0001 now. I belie
On 8 October 2018 at 13:13, Tom Lane wrote:
> The idea I had in mind was to allow hard pruning of any leaf that's
> been excluded *at plan time* based on partition constraints seen in
> its parent rel(s). That should be safe enough as long as we take
> locks on all the non-leaf rels --- then we'd
David Rowley writes:
> On 8 October 2018 at 12:18, Tom Lane wrote:
>> So ISTM that the *real* win for this scenario is going to come from
>> teaching the system to prune unwanted relations from the query
>> altogether, not just from the PlanRowMark list.
> Idle thought: I wonder if we could add
On 8 October 2018 at 12:18, Tom Lane wrote:
> However, we should keep in mind that without partitioning overhead
> (ie "select * from lt_999 where b = 999 for share"), the TPS rate
> is over 25800 tps. Most of the overhead in the partitioned case seems
> to be from acquiring locks on rangetable e
I wrote:
> Still need to think a bit more about whether we want 0005 in
> anything like its current form.
So I poked at that for a bit, and soon realized that the *main* problem
there is that ExecFindRowMark() eats O(N^2) time due to repeated searches
of the es_rowMarks list. While the patch as s
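For context on the O(N^2) behaviour: ExecFindRowMark() was walking the es_rowMarks List once per lookup, so N lookups cost O(N^2). A minimal sketch of the array-based alternative under discussion, using hypothetical field names (es_rowmarks_array, es_range_table_size), would look roughly like:

#include "postgres.h"
#include "nodes/execnodes.h"

/*
 * Sketch only: look up the ExecRowMark for a range-table index in O(1)
 * by indexing an array instead of scanning a List.
 */
static ExecRowMark *
find_rowmark_by_rti(EState *estate, Index rti)
{
	Assert(rti > 0 && rti <= estate->es_range_table_size);

	if (estate->es_rowmarks_array == NULL)
		return NULL;			/* no row marks in this plan */

	return estate->es_rowmarks_array[rti - 1];
}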
Amit Langote writes:
> 0004: removes useless fields from certain planner nodes whose only purpose
> has been to assist the executor in locking relations in the proper order
I've pushed most of 0004 now; obviously, not the parts removing
PlannedStmt.rowMarks, since that's not possible without rearrangement
Amit Langote writes:
> On 2018/10/05 5:59, Tom Lane wrote:
>> So I'm inclined to just omit 0003. AFAICS this would only mean that
>> we couldn't drop the global PlanRowMarks list from PlannedStmt, which
>> does not bother me much.
> To be honest, I too had begun to fail to see the point of this
On 2018/10/05 5:59, Tom Lane wrote:
> Amit Langote writes:
>> I've rebased the remaining patches. I broke down one of the patches into
>> 2 and re-ordered the patches as follows:
>
>> 0001: introduces a function that opens range table relations and maintains
> them in an array indexed by RT ind
Amit Langote writes:
> I've rebased the remaining patches. I broke down one of the patches into
> 2 and re-ordered the patches as follows:
> 0001: introduces a function that opens range table relations and maintains
> them in an array indexed by RT index
> 0002: introduces a new field in EState
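As a rough sketch of what "a function that opens range table relations and maintains them in an array indexed by RT index" could look like (the function and field names below are hypothetical; the actual patch may differ):

#include "postgres.h"
#include "access/heapam.h"
#include "nodes/execnodes.h"
#include "parser/parsetree.h"

/*
 * Sketch only: lazily open the relation for range-table index "rti" and
 * cache the Relation pointer in an EState array, so later lookups by RT
 * index are cheap.  "es_relations" is a hypothetical array of length
 * es_range_table_size, zeroed when the EState is created.
 */
static Relation
get_rangetable_relation(EState *estate, Index rti)
{
	Relation	rel = estate->es_relations[rti - 1];

	if (rel == NULL)
	{
		RangeTblEntry *rte = rt_fetch(rti, estate->es_range_table);

		/* assume the required lock was already taken, e.g. by the plancache */
		rel = heap_open(rte->relid, NoLock);
		estate->es_relations[rti - 1] = rel;
	}
	return rel;
}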
Robert Haas writes:
> On Thu, Oct 4, 2018 at 3:28 PM Tom Lane wrote:
>> What we've determined so far in this thread is that workers *do* get
>> their own locks (or did before yesterday), but I'd been supposing that
>> that was accidental not intentional.
> Nope, that was intentional.
Fair enoug
On Thu, Oct 4, 2018 at 3:28 PM Tom Lane wrote:
> I'm possibly confused, but I thought that the design of parallel query
> involved an expectation that workers didn't need to get their own locks.
You are, indeed, confused. A heck of a lot of effort went into making
sure that the workers COULD tak
On 2018-10-04 12:34:44 -0700, Andres Freund wrote:
> Hi,
>
> On 2018-10-04 15:27:59 -0400, Tom Lane wrote:
> > Andres Freund writes:
> > > I've not really followed this thread, and just caught up to here. It
> > > seems entirely unacceptable to not acquire locks on workers to me.
> > > Maybe I'm
Hi,
On 2018-10-04 15:27:59 -0400, Tom Lane wrote:
> Andres Freund writes:
> > I've not really followed this thread, and just caught up to here. It
> > seems entirely unacceptable to not acquire locks on workers to me.
> > Maybe I'm missing something, but why do/did the patches in this thread
> >
Andres Freund writes:
> I've not really followed this thread, and just caught up to here. It
> seems entirely unacceptable to not acquire locks on workers to me.
> Maybe I'm missing something, but why do/did the patches in this thread
> require that / introduce that? We didn't have that kind of c
Hi,
On 2018-10-03 16:16:11 -0400, Tom Lane wrote:
> I wrote:
> > Amit Langote writes:
> >> Should this check that we're not in a parallel worker process?
>
> > Hmm. I've not seen any failures in the parallel parts of the regular
> > regression tests, but maybe I'd better do a force_parallel_mod
Amit Langote writes:
> On 2018/10/04 5:16, Tom Lane wrote:
>> I think that we ought to adjust parallel query to insist that children
>> do take locks, and then revert the IsParallelWorker() exceptions I made
>> here.
> Maybe I'm missing something here, but isn't the necessary adjustment just
> th
On 2018/10/04 5:16, Tom Lane wrote:
> I wrote:
>> Amit Langote writes:
>>> Should this check that we're not in a parallel worker process?
>
>> Hmm. I've not seen any failures in the parallel parts of the regular
>> regression tests, but maybe I'd better do a force_parallel_mode
>> run before com
I wrote:
> Amit Langote writes:
>> Should this check that we're not in a parallel worker process?
> Hmm. I've not seen any failures in the parallel parts of the regular
> regression tests, but maybe I'd better do a force_parallel_mode
> run before committing.
> In general, I'm not on board with
David Rowley writes:
> On 1 October 2018 at 19:39, Amit Langote wrote:
>> For this and the other cases (AcquireRewriteLocks, AcquireExecutorLocks,
>> etc.), I wonder whether we couldn't just *not* recalculate the lock mode
>> based on inspecting the query tree to cross-check with rellockmode?
On 1 October 2018 at 19:39, Amit Langote wrote:
> For this and the other cases (AcquireRewriteLocks, AcquireExecutorLocks,
> etc.), I wonder whether we couldn't just *not* recalculate the lock mode
> based on inspecting the query tree to cross-check with rellockmode? Why
> not just use rellockmod
Amit Langote writes:
> On 2018/10/01 2:18, Tom Lane wrote:
>> I think that the call sites should ultimately look like
>> Assert(CheckRelationLockedByMe(...));
>> but for hunting down the places where the assertion currently fails,
>> it's more convenient if it's just an elog(WARNING).
> Should th
Amit Langote writes:
> On 2018/09/30 5:04, Tom Lane wrote:
>> 3. There remain some cases where the RTE says RowExclusiveLock but
>> the executor calculation indicates we only need AccessShareLock.
>> AFAICT, this happens only when we have a DO ALSO rule that results
>> in an added query that merel
On 2018/10/01 2:18, Tom Lane wrote:
> I wrote:
>> 1. You set up transformRuleStmt to insert AccessExclusiveLock into
>> the "OLD" and "NEW" RTEs for a view. This is surely wrong; we do
>> not want to take exclusive lock on a view just to run a query using
>> the view. It should (usually, anyway)
On 2018/09/30 5:04, Tom Lane wrote:
> David Rowley writes:
>> I've attached v10 which fixes this and aligns the WARNING text in
>> ExecInitRangeTable() and addRangeTableEntryForRelation().
>
> I started poking at this.
Thanks a lot for looking at this.
> I thought that it would be a good cross-
David Rowley writes:
> On 1 October 2018 at 06:18, Tom Lane wrote:
>> + for (slockmode = lockmode + 1;
>> + slockmode <= AccessExclusiveLock;
>> + slockmode++)
> So would it not be better to add the following to lockdefs.h?
> #define MaxLockLevel 8
> then use that to terminate the loop
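A minimal sketch of that suggestion, assuming a MaxLockLevel macro were added to lockdefs.h next to the existing lock-mode definitions (the name comes from David's mail; whether it was adopted in this exact form is not shown here):

/* Hypothetical addition to lockdefs.h: */
#define MaxLockLevel		8	/* same value as AccessExclusiveLock */

/* ... and the loop quoted above would then terminate on it: */
for (slockmode = lockmode + 1;
	 slockmode <= MaxLockLevel;
	 slockmode++)
{
	/* per-mode check as in the original code (not shown in the snippet) */
}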
On 1 October 2018 at 06:18, Tom Lane wrote:
> It occurred to me that it'd be reasonable to insist that the caller
> holds a lock *at least as strong* as the one being recorded in the RTE,
> and that there's also been discussions about verifying that some lock
> is held when something like heap_ope
I wrote:
> 1. You set up transformRuleStmt to insert AccessExclusiveLock into
> the "OLD" and "NEW" RTEs for a view. This is surely wrong; we do
> not want to take exclusive lock on a view just to run a query using
> the view. It should (usually, anyway) just be AccessShareLock.
> However, becaus
I wrote:
> I started poking at this. I thought that it would be a good cross-check
> to apply just the "front half" of 0001 (i.e., creation and population of
> the RTE lockmode field), and then to insert checks in each of the
> "back half" places (executor, plancache, etc) that the lockmodes they
David Rowley writes:
> I've attached v10 which fixes this and aligns the WARNING text in
> ExecInitRangeTable() and addRangeTableEntryForRelation().
I started poking at this. I thought that it would be a good cross-check
to apply just the "front half" of 0001 (i.e., creation and population of
th
On 2018-Sep-28, Amit Langote wrote:
> On 2018/09/28 17:48, David Rowley wrote:
> > Meh, I just noticed that the WARNING text claims "InitPlan" is the
> > function name. I think it's best to get rid of that. It's pretty much
> > redundant anyway if you do: \set VERBOSITY verbose
>
> Oops, good cat
Hi,
On 9/28/18 4:58 AM, Amit Langote wrote:
Okay, I've revised the text in the attached updated patch.
Meh, I just noticed that the WARNING text claims "InitPlan" is the
function name. I think it's best to get rid of that. It's pretty much
redundant anyway if you do: \set VERBOSITY verbose
O
On 28 September 2018 at 20:28, Amit Langote wrote:
> On 2018/09/28 17:21, David Rowley wrote:
>> I think we maybe should switch the word "assert" for "verifies". The
>> Assert is just checking we didn't get a NoLock and I don't think
>> you're using "assert" meaning the Assert() macro, so likely s
On 28 September 2018 at 20:00, Amit Langote wrote:
> I've made minor tweaks, which you can find in
> the attached updated patches (a .diff file containing changes from v6 to
> v7 is also attached).
Thanks for looking over the changes.
I've looked at the v6 to v7 diff and it seems all good, apart from:
Hi,
On 9/27/18 5:15 AM, David Rowley wrote:
I've just completed a review of the v5 patch set. I ended up just
making the changes myself since Amit mentioned he was on leave for a
few weeks.
Summary of changes:
1. Changed the way we verify the lock already exists with debug
builds. I reverted s
On 2018/09/27 18:15, David Rowley wrote:
> I've just completed a review of the v5 patch set. I ended up just
> making the changes myself since Amit mentioned he was on leave for a
> few weeks.
Thanks David. I'm back today and will look at the updated patches tomorrow.
Regards,
Amit
Hi Amit,
On 9/13/18 12:58 AM, Amit Langote wrote:
Attached updated patches.
Besides the issue that caused the eval-plan-qual isolation test to crash, I
also spotted and fixed an oversight in the 0002 patch which would lead to
EState.es_output_cid being set to the wrong value and causing unexpected error
On Wed, Sep 12, 2018 at 9:23 PM, Jesper Pedersen wrote:
> Hi Amit,
>
> On 9/12/18 1:23 AM, Amit Langote wrote:
>>
>> Please find attached revised patches.
>>
>
> After applying 0004 I'm getting a crash in 'eval-plan-qual' during
> check-world using
>
> export CFLAGS="-DCOPY_PARSE_PLAN_TREES -O0 -f
Hi Amit,
On 9/12/18 1:23 AM, Amit Langote wrote:
Please find attached revised patches.
After applying 0004 I'm getting a crash in 'eval-plan-qual' during
check-world using
export CFLAGS="-DCOPY_PARSE_PLAN_TREES -O0 -fno-omit-frame-pointer" &&
./configure --enable-dtrace --with-openssl --w
On 4 September 2018 at 20:53, Amit Langote wrote:
> Updated patches attached; 0001-0003 are same as v1.
I've looked at these. Here's my review so far:
0001:
1. The following does not seem to be true any longer:
+ /*
+ * If you change the conditions under which rel locks are acquired
+ * here,
On 2018/08/16 17:22, Amit Langote wrote:
> 0004-Revise-executor-range-table-relation-opening-closing.patch
>
> This adds two arrays to EState indexed by RT indexes, one for
> RangeTblEntry's and another for Relation pointers. The former reduces the
> cost of fetching RangeTblEntry by RT index. T