Heikki Linnakangas wrote:
> (sorry for repeatedly replying to self. I'll go for a coffee after
> this...)
That's so nice of you to try to make me feel better for the serious
brain fade I suffered yesterday. ;-)
> On 08.06.2011 14:18, Heikki Linnakangas wrote:
>> Committed after adjusting that comment.
(sorry for repeatedly replying to self. I'll go for a coffee after this...)
On 08.06.2011 14:18, Heikki Linnakangas wrote:
Committed after adjusting that comment. I did a lot of other cosmetic
changes too, please double-check that I didn't screw up anything.
Also, it would be nice to have some
On 08.06.2011 14:18, Heikki Linnakangas wrote:
I just looked back
at your old email where you listed the different DDL operations, and
notice that we missed VACUUM FULL as well
(http://archives.postgresql.org/message-id/4dbd7e9102250003d...@gw.wicourts.gov).
I'll look into that.
Never mind
On 08.06.2011 03:16, Kevin Grittner wrote:
+ /*
+  * It's OK to remove the old lock first because of the ACCESS
+  * EXCLUSIVE lock on the heap relation when this is called. It is
+  * desirable to do so be
Heikki Linnakangas wrote:
> On 07.06.2011 21:10, Kevin Grittner wrote:
>> I think that leaves me with all the answers I need to get a new
>> patch out this evening (U.S. Central Time).
>
> Great, I'll review it in my morning (in about 12h)
Attached. Passes all the usual regression tests I ru
On 07.06.2011 21:10, Kevin Grittner wrote:
I think that leaves me
with all the answers I need to get a new patch out this evening
(U.S. Central Time).
Great, I'll review it in my morning (in about 12h)
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Tom Lane wrote:
> Just to answer the question (independently of Heikki's concern
> about whether this is needed at all): it depends on the
> information you have. If all you have is the index OID, then yeah
> a catcache lookup in pg_index is probably the best thing. If you
> have an open Relati
Heikki Linnakangas wrote:
> Predicate locks on indexes are only needed to lock key ranges, to
> notice later insertions into the range, right? For locks on tuples
> that do exist, we have locks on the heap. If we're just about to
> delete every tuple in the heap, that doesn't need to conflict with
"Kevin Grittner" writes:
> I think I've caught up with the rest of the class on why this isn't
> sane in DropAllPredicateLocksFromTableImpl, but I wonder about
> CheckTableForSerializableConflictIn. We *do* expect to be throwing
> errors in here, and we need some way to tell whether an index is
On 07.06.2011 20:42, Kevin Grittner wrote:
Heikki Linnakangas wrote:
> It makes me a bit uncomfortable to do catalog cache lookups while
> holding all the lwlocks.
I think I've caught up with the rest of the class on why this isn't
sane in DropAllPredicateLocksFromTableImpl, but I wonder about
CheckTableForSerializableConflictIn.
Heikki Linnakangas wrote:
> It makes me a bit uncomfortable to do catalog cache lookups while
> holding all the lwlocks.
I think I've caught up with the rest of the class on why this isn't
sane in DropAllPredicateLocksFromTableImpl, but I wonder about
CheckTableForSerializableConflictIn. We
On 07.06.2011 20:03, Kevin Grittner wrote:
Heikki Linnakangas wrote:
> We've also already removed the reserved entry for scratch space
This and Tom's concerns have me wondering if we should bracket the
two sections of code where we use the reserved lock target entry
with HOLD_INTERRUPTS() and RESUME_INTERRUPTS().
Heikki Linnakangas wrote:
> We've also already removed the reserved entry for scratch space
This and Tom's concerns have me wondering if we should bracket the
two sections of code where we use the reserved lock target entry
with HOLD_INTERRUPTS() and RESUME_INTERRUPTS(). In an assert-enabled
build
Tom Lane wrote:
> "Kevin Grittner" writes:
>> What am I missing?
>
> Out-of-memory. Query cancel. The attempted catalog access
> failing because it results in a detected deadlock. I could
> probably think of several more if I spent ten minutes on it; and
> that's not even considering genuine
"Kevin Grittner" writes:
> Tom Lane wrote:
>> If you don't believe that a catcache lookup will ever fail, I will
>> contract to break the patch.
> As you probably know by now by reaching the end of the thread, this
> code is going away based on Heikki's arguments; but for my
> understanding, so that I don't make a bad assumption
Tom Lane wrote:
> If you don't believe that a catcache lookup will ever fail, I will
> contract to break the patch.
As you probably know by now by reaching the end of the thread, this
code is going away based on Heikki's arguments; but for my
understanding, so that I don't make a bad assumption
Heikki Linnakangas writes:
> It makes me a bit uncomfortable to do catalog cache lookups while
> holding all the lwlocks. We've also already removed the reserved entry
> for scratch space while we do that - if a cache lookup errors out, we'll
> leave behind quite a mess. I guess it shouldn't fail
"Kevin Grittner" wrote:
> Heikki Linnakangas wrote:
>
>> I think the logic is actually wrong at the moment: When you
>> reindex a single index, DropAllPredicateLocksFromTableImpl() will
>> transfer all locks belonging to any index on the same table, and
>> any finer-granularity locks on the heap.
Heikki Linnakangas wrote:
> It makes me a bit uncomfortable to do catalog cache lookups while
> holding all the lwlocks. We've also already removed the reserved
> entry for scratch space while we do that - if a cache lookup errors
> out, we'll leave behind quite a mess. I guess it shouldn't fail
On 06.06.2011 05:13, Kevin Grittner wrote:
"Kevin Grittner" wrote:
Maybe I should submit a patch without added complexity of the
scheduled cleanup and we can discuss that as a separate patch?
Here's a patch which adds the missing support for DDL.
It makes me a bit uncomfortable to do catalog cache lookups
On Sun, Jun 05, 2011 at 12:45:41PM -0500, Kevin Grittner wrote:
> Is this possible? If a transaction gets its snapshot while OID of N
> is assigned to relation X, can that transaction wind up seeing an OID
> of N as a reference to relation Y? If not, there aren't any false
> positives possible.
"Kevin Grittner" wrote:
> Maybe I should submit a patch without added complexity of the
> scheduled cleanup and we can discuss that as a separate patch?
Here's a patch which adds the missing support for DDL. Cleanup of
predicate locks at commit time for transactions which ran DROP TABLE
or TRUNCATE TABLE
Heikki Linnakangas wrote:
> On 03.06.2011 23:44, Kevin Grittner wrote:
>> Heikki Linnakangas wrote:
>>
>>> I think you'll need to just memorize the lock deletion command in
>>> a backend-local list, and perform the deletion in a post-commit
>>> function.
>>
>> Hmm. As mentioned earlier in the thread, cleaning these up
Heikki Linnakangas wrote:
> On 04.06.2011 19:19, Tom Lane wrote:
>> Heikki Linnakangas writes:
>>> On 03.06.2011 21:04, Kevin Grittner wrote:
>>>> Also, if anyone has comments or hints about the placement of
>>>> those calls, I'd be happy to receive them.
>>> heap_drop_with_catalog() schedules
On 04.06.2011 19:19, Tom Lane wrote:
Heikki Linnakangas writes:
> On 03.06.2011 21:04, Kevin Grittner wrote:
>> Also, if anyone has comments or hints about the placement of those
>> calls, I'd be happy to receive them.
> heap_drop_with_catalog() schedules the relation for deletion at the end
> of transaction
Heikki Linnakangas writes:
> On 03.06.2011 21:04, Kevin Grittner wrote:
>> Also, if anyone has comments or hints about the placement of those
>> calls, I'd be happy to receive them.
> heap_drop_with_catalog() schedules the relation for deletion at the end
> of transaction, but it's still possible
"Kevin Grittner" wrote:
> Tuple locks should be safe from that because we use the tuple xmin
> as part of the target key, but page and heap locks
That should have read "page and relation locks".
> I guess that tips the scales in favor of it being worth the extra
> code. I think it's still i
Heikki Linnakangas wrote:
> On 03.06.2011 23:44, Kevin Grittner wrote:
>> Hmm. As mentioned earlier in the thread, cleaning these up
>> doesn't actually have any benefit beyond freeing space in the
>> predicate locking collections. I'm not sure that benefit is
>> enough to justify this much ne
On 03.06.2011 23:44, Kevin Grittner wrote:
Heikki Linnakangas wrote:
> I think you'll need to just memorize the lock deletion command in
> a backend-local list, and perform the deletion in a post-commit
> function.
Hmm. As mentioned earlier in the thread, cleaning these up doesn't
actually have any benefit beyond freeing space in the predicate
locking collections.
Heikki Linnakangas wrote:
> I think you'll need to just memorize the lock deletion command in
> a backend-local list, and perform the deletion in a post-commit
> function.
Hmm. As mentioned earlier in the thread, cleaning these up doesn't
actually have any benefit beyond freeing space in the predicate
locking collections.
Tom Lane wrote:
> "Kevin Grittner" writes:
>> Heikki Linnakangas wrote:
>>> I think you'll need to just memorize the lock deletion command
>>> in a backend-local list, and perform the deletion in a
>>> post-commit function. Something similar to the PendingRelDelete
>>> stuff in storage.c. In fac
"Kevin Grittner" writes:
> Heikki Linnakangas wrote:
>> I think you'll need to just memorize the lock deletion command in
>> a backend-local list, and perform the deletion in a post-commit
>> function. Something similar to the PendingRelDelete stuff in
>> storage.c. In fact, hooking into smgrDoPe
Heikki Linnakangas wrote:
> On 03.06.2011 21:04, Kevin Grittner wrote:
>> Also, if anyone has comments or hints about the placement of
>> those calls, I'd be happy to receive them.
>
> heap_drop_with_catalog() schedules the relation for deletion at
> the end of transaction, but it's still possible
On 03.06.2011 21:04, Kevin Grittner wrote:
Also, if anyone has comments or hints about the placement of those
calls, I'd be happy to receive them.
heap_drop_with_catalog() schedules the relation for deletion at the end
of transaction, but it's still possible that the transaction aborts and
th
On 1 May 2011 I wrote:
> Consider this a WIP patch
Just so people know where this stands...
By 8 May 2011 I had the attached. I didn't post it because I was
not confident I had placed the calls to the SSI functions for DROP
TABLE and TRUNCATE TABLE correctly. Then came PGCon and also the
r
Heikki Linnakangas wrote:
> On 30.04.2011 01:04, Kevin Grittner wrote:
>> TRUNCATE TABLE and DROP TABLE should generate a rw-conflict *in*
>> to the enclosing transaction (if it is serializable) from all
>> transactions holding predicate locks on the table or its indexes.
>> Note that this could
On 30.04.2011 01:04, Kevin Grittner wrote:
TRUNCATE TABLE and DROP TABLE should generate a rw-conflict *in* to
the enclosing transaction (if it is serializable) from all
transactions holding predicate locks on the table or its indexes.
Note that this could cause a transaction which is running on
Just a quick status update.
I wrote:
> Consider this a WIP patch
The serializable branch on my git repo has a modified form of this
which has been tested successfully with:
DROP INDEX
REINDEX
VACUUM FULL
CLUSTER
ALTER TABLE
I'm holding off on posting another version of the patch until I
c
"Kevin Grittner" wrote:
> I haven't dug into ALTER INDEX enough to know whether it can ever
> cause an index to be rebuilt. If so, we need to treat it like
> DROP INDEX and REINDEX -- which should change all predicate locks
> of any granularity on the index into relation locks on the
> associated
"Kevin Grittner" wrote:
> This'll take some study.
I've gone through the list of commands in the development docs with
an eye toward exposing anything else we might have missed in dealing
with the SSI predicate locking. Some of this needs further
research, but I'm posting what I have so far s
Simon Riggs wrote:
> On Wed, Apr 27, 2011 at 8:59 PM, Kevin Grittner wrote:
>
>> For correct serializable behavior in the face of concurrent DDL
>> execution, I think that a request for a heavyweight ACCESS
>> EXCLUSIVE lock might need to block until all SIREAD locks on the
>> relation have been released.
On Wed, Apr 27, 2011 at 8:59 PM, Kevin Grittner wrote:
> For correct serializable behavior in the face of concurrent DDL
> execution, I think that a request for a heavyweight ACCESS EXCLUSIVE
> lock might need to block until all SIREAD locks on the relation have
> been released. Picture, for example,
Dan Ports wrote:
> On Wed, Apr 27, 2011 at 04:09:38PM -0500, Kevin Grittner wrote:
>> Heikki Linnakangas wrote:
>>> Hmm, could we upgrade all predicate locks to relation-level
>>> predicate locks instead?
>>
>> Tied to what backend?
> I think Heikki was suggesting to upgrade to relation-level
On Wed, Apr 27, 2011 at 02:59:19PM -0500, Kevin Grittner wrote:
> For correct serializable behavior in the face of concurrent DDL
> execution, I think that a request for a heavyweight ACCESS EXCLUSIVE
> lock might need to block until all SIREAD locks on the relation have
> been released. Picture
Heikki Linnakangas wrote:
> On 27.04.2011 22:59, Kevin Grittner wrote:
>> For correct serializable behavior in the face of concurrent DDL
>> execution, I think that a request for a heavyweight ACCESS
>> EXCLUSIVE lock might need to block until all SIREAD locks on the
>> relation have been released
On 27.04.2011 22:59, Kevin Grittner wrote:
For correct serializable behavior in the face of concurrent DDL
execution, I think that a request for a heavyweight ACCESS EXCLUSIVE
lock might need to block until all SIREAD locks on the relation have
been released. Picture, for example, what might happen