On Tue, Mar 5, 2019 at 8:04 PM David Rowley wrote:
> Actually, I'm not sure it could work at all. It does not seem very
> safe to lookup a partition's parent without actually holding a lock on
> the partition and we can't lock the partition and then lock each
> parent in turn as that's the exact
On Wed, 6 Mar 2019 at 04:46, Tomas Vondra wrote:
>
> On 3/5/19 6:55 AM, David Rowley wrote:
> > The only way I can think to fix this is to just never lock partitions
> > at all, and if a lock is to be obtained on a partition, it must be
> > instead obtained on the top-level partitioned table. Tha
On 3/5/19 6:55 AM, David Rowley wrote:
> On Sat, 2 Feb 2019 at 02:52, Robert Haas wrote:
>> I think the key question here is whether or not you can cope with
>> someone having done arbitrary AEL-requiring modifications to the
>> delaylocked partitions. If you can, the fact that the plan might
On Sat, 2 Feb 2019 at 02:52, Robert Haas wrote:
> I think the key question here is whether or not you can cope with
> someone having done arbitrary AEL-requiring modifications to the
> delaylocked partitions. If you can, the fact that the plan might
> sometimes be out-of-date is an inevitable con
On Tue, 19 Feb 2019 at 12:50, Tom Lane wrote:
>
> [ reposting some questions asked in the wrong thread ]
>
> What I'd like to understand about this patch is how it relates
> to Amit L.'s work on making the planner faster for partitioned
> UPDATE/DELETE cases (https://commitfest.postgresql.org/22/1
[ reposting some questions asked in the wrong thread ]
What I'd like to understand about this patch is how it relates
to Amit L.'s work on making the planner faster for partitioned
UPDATE/DELETE cases (https://commitfest.postgresql.org/22/1778/).
I think that that might render this moot? Amit's a
On Sat, 2 Feb 2019 at 13:43, Tomas Vondra wrote:
>
> On 2/1/19 2:51 PM, Robert Haas wrote:
> >> (I admit to not having the best grasp on how all this works, so feel
> >> free to shoot this down)
> >>
> > I think the key question here is whether or not you can cope with
> > someone having done arbi
On 2/1/19 2:51 PM, Robert Haas wrote:
>> (I admit to not having the best grasp on how all this works, so feel
>> free to shoot this down)
>>
> I think the key question here is whether or not you can cope with
> someone having done arbitrary AEL-requiring modifications to the
> delaylocked partition
On Thu, Jan 31, 2019 at 8:29 PM David Rowley wrote:
> I think perhaps accepting invalidations at the start of the statement
> might appear to fix the problem in master, but I think there's still a
> race condition around CheckCachedPlan() since we'll ignore
> invalidation messages when we perform
On Fri, 1 Feb 2019 at 10:32, Robert Haas wrote:
>
> On Sun, Jan 27, 2019 at 8:26 PM David Rowley wrote:
> > One way around this would be to always perform an invalidation on the
> > partition's parent when performing a relcache invalidation on the
> > partition. We could perhaps invalidate all
On 1/31/19 10:31 PM, Robert Haas wrote:
> On Sun, Jan 27, 2019 at 8:26 PM David Rowley wrote:
>> One way around this would be to always perform an invalidation on the
>> partition's parent when performing a relcache invalidation on the
>> partition. We could perhaps invalidate all the way up
On Sun, Jan 27, 2019 at 8:26 PM David Rowley wrote:
> One way around this would be to always perform an invalidation on the
> partition's parent when performing a relcache invalidation on the
> partition. We could perhaps invalidate all the way up to the top
> level partitioned table, that way we
On Tue, 29 Jan 2019 at 19:42, Amit Langote wrote:
> However, I tried the example as you described and the plan *doesn't*
> change due to concurrent update of reloptions with master (without the
> patch) either.
Well, I didn't think to try that. I just assumed I had broken it.
Could well be related
On 2019/01/28 20:27, David Rowley wrote:
> On Mon, 28 Jan 2019 at 20:45, Amit Langote wrote:
>> It seems to me that plancache.c doesn't really need to perform
>> AcquireExecutorLocks()/LockRelationOid() to learn that a partition's
>> reloptions property has changed to discard a generic plan and
On Mon, 28 Jan 2019 at 20:45, Amit Langote wrote:
> It seems to me that plancache.c doesn't really need to perform
> AcquireExecutorLocks()/LockRelationOid() to learn that a partition's
> reloptions property has changed to discard a generic plan and build a new
> one. AFAICT, PlanCacheRelCallback
On 2019/01/28 10:26, David Rowley wrote:
> On Tue, 4 Dec 2018 at 00:42, David Rowley wrote:
>> Over here and along similar lines to the above, but this time I'd like
>> to take this even further and change things so we don't lock *any*
>> partitions during AcquireExecutorLocks() and instead jus
On Tue, 4 Dec 2018 at 00:42, David Rowley wrote:
> Over here and along similar lines to the above, but this time I'd like
> to take this even further and change things so we don't lock *any*
> partitions during AcquireExecutorLocks() and instead just lock them
> when we first access them with Exec
On Sat, 12 Jan 2019 at 23:42, David Rowley wrote:
> I've attached a rebase version of this. The previous version
> conflicted with some changes made in b60c397599.
I've attached another rebased version. This one fixes up the conflict
with e0c4ec07284.
On Thu, 17 Jan 2019 at 17:18, Amit Langote wrote:
>
> On 2019/01/04 9:53, David Rowley wrote:
> > Without PREPAREd statements, if the planner itself was unable to prune
> > the partitions it would already have obtained the lock during
> > planning, so AcquireExecutorLocks(), in this case, would bu
On 2019/01/04 9:53, David Rowley wrote:
> Without PREPAREd statements, if the planner itself was unable to prune
> the partitions it would already have obtained the lock during
> planning, so AcquireExecutorLocks(), in this case, would bump into the
> local lock hash table entry and forego trying t
On Tue, 4 Dec 2018 at 00:42, David Rowley wrote:
> Over here and along similar lines to the above, but this time I'd like
> to take this even further and change things so we don't lock *any*
> partitions during AcquireExecutorLocks() and instead just lock them
> when we first access them with Exec
On Sat, 5 Jan 2019 at 03:12, Tomas Vondra wrote:
> >>
> >> partitions     0    100   1000   10000
> >>
> >> master        19   1590   2090     128
> >> patched       18   1780   6820    1130
> >>
> >> So, that's nice. I wonde
On 1/4/19 1:53 AM, David Rowley wrote:
> On Fri, 4 Jan 2019 at 13:01, Tomas Vondra wrote:
>> On 1/3/19 11:57 PM, David Rowley wrote:
>>> You'll know you're getting a generic plan when you see "Filter (a =
>>> $1)" and see "Subplans Removed: " below the Append.
>>>
>>
>> Indeed, with prep
On Fri, 4 Jan 2019 at 13:01, Tomas Vondra wrote:
> On 1/3/19 11:57 PM, David Rowley wrote:
> > You'll know you're getting a generic plan when you see "Filter (a =
> > $1)" and see "Subplans Removed: " below the Append.
> >
>
> Indeed, with prepared statements I now see some improvements:
>
>
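For readers trying to reproduce the behaviour discussed above, a minimal session looks roughly like the following sketch. The table shape and the test value are illustrative assumptions, not the thread's actual scripts (Tomas's table was also named hashp, but its partition count differed):

```sql
CREATE TABLE hashp (a int) PARTITION BY HASH (a);
CREATE TABLE hashp0 PARTITION OF hashp FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE hashp1 PARTITION OF hashp FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE hashp2 PARTITION OF hashp FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE hashp3 PARTITION OF hashp FOR VALUES WITH (MODULUS 4, REMAINDER 3);

PREPARE q (int) AS SELECT * FROM hashp WHERE a = $1;

-- plancache.c only considers switching to a generic plan after the first
-- five executions, so run the statement six or more times first:
EXECUTE q (13442);

-- If the generic plan is chosen, the filter shows the parameter symbol
-- ("Filter: (a = $1)") and run-time pruning reports "Subplans Removed: 3"
-- under the Append node:
EXPLAIN ANALYZE EXECUTE q (13442);
```

Whether the generic plan actually wins depends on the plancache's cost comparison, which is exactly why Tomas initially saw no difference without prepared statements.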
On 1/3/19 11:57 PM, David Rowley wrote:
> On Fri, 4 Jan 2019 at 11:48, Tomas Vondra wrote:
>> Nope, that doesn't seem to make any difference :-( In all cases the
>> resulting plan (with 10k partitions) looks like this:
>>
>> test=# explain analyze select * from hashp where a = 13442;
>>
>>
On Fri, 4 Jan 2019 at 11:48, Tomas Vondra wrote:
> Nope, that doesn't seem to make any difference :-( In all cases the
> resulting plan (with 10k partitions) looks like this:
>
> test=# explain analyze select * from hashp where a = 13442;
>
> QUERY PLAN
>
On 1/3/19 10:50 PM, David Rowley wrote:
> On Fri, 4 Jan 2019 at 02:40, Tomas Vondra wrote:
>> I'm a bit confused, because I can't reproduce any such speedup. I've
>> used the attached script that varies the number of partitions (which
>> worked quite nicely in the INSERT thread), but I'm gettin
On Fri, 4 Jan 2019 at 02:40, Tomas Vondra wrote:
> I'm a bit confused, because I can't reproduce any such speedup. I've
> used the attached script that varies the number of partitions (which
> worked quite nicely in the INSERT thread), but I'm getting results like
> this:
>
> partitions 0
On 12/3/18 12:42 PM, David Rowley wrote:
> ...
>
> Master: 1 parts
>
> $ pgbench -n -f bench.sql -M prepared -T 60 postgres
> tps = 108.882749 (excluding connections establishing)
> tps = 108.245437 (excluding connections establishing)
>
> delaylock: 1 parts
>
> $ pgbench -n -f bench.sql
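The bench.sql file itself is cut off in the preview. A workload of the kind being measured — a point query that prunes to a single partition, run via `-M prepared` — would look something like this; the table name and value range are assumptions, not David's actual script:

```sql
-- Hypothetical bench.sql for:
--   pgbench -n -f bench.sql -M prepared -T 60 postgres
\set p random(1, 100000)
SELECT * FROM hashp WHERE a = :p;
```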
Over on [1] I'm proposing to delay locking partitions of a partitioned
table that's the target of an INSERT or UPDATE command until we first
route a tuple to the partition. Currently, we go and lock all
partitions, even if we just insert a single tuple to a single
partition. The patch in [1] impr
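The behaviour being described can be observed by inspecting pg_locks inside a transaction. This sketch (object names are illustrative) shows the point of the patch: without it, every leaf partition is locked even though only one receives the tuple:

```sql
CREATE TABLE listp (a int) PARTITION BY LIST (a);
CREATE TABLE listp1 PARTITION OF listp FOR VALUES IN (1);
CREATE TABLE listp2 PARTITION OF listp FOR VALUES IN (2);

BEGIN;
INSERT INTO listp VALUES (1);  -- the tuple is routed to listp1 only

-- Without the patch, expect listp, listp1 AND listp2 to all appear here
-- with RowExclusiveLock; the goal is for listp2 to be absent:
SELECT relation::regclass AS rel, mode
FROM pg_locks
WHERE locktype = 'relation'
  AND relation::regclass::text LIKE 'listp%';
COMMIT;
```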