On 2021-11-11 09:38:07 -0300, Alvaro Herrera wrote:
> > I just noticed that this commit (dcfff74fb16) didn't revert the change of
> > lock level in ReplicationSlotRelease(). Was that intentional?
>
> Hmm, no, that seems to have been a mistake. I'll restore it.
Thanks
On 2021-Nov-10, Andres Freund wrote:
> > Reverts 27838981be9d (some comments are kept). Per discussion, it does
> > not seem safe to relax the lock level used for this; in order for it to
> > be safe, there would have to be memory barriers between the point we set
> > the flag and the point we se
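For context, a minimal sketch of the ordering hazard being described
(illustrative only, not the committed code; pg_write_barrier() is the real
barrier primitive from port/atomics.h, but its placement here is an
assumption based on the discussion):

    /*
     * Sketch: if the flag were set under a mere shared lock, a write
     * barrier would be needed so that any backend that already sees our
     * advertised xmin/xid is also guaranteed to see the flag.
     */
    MyProc->statusFlags |= PROC_IN_SAFE_IC;
    ProcGlobal->statusFlags[MyProc->pgxactoff] = MyProc->statusFlags;
    pg_write_barrier();     /* hypothetical: publish flag before xid */
    /* ... only now make the transaction's xid visible to others ... */

Absent such barriers, restoring the exclusive lock is the simple fix.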
Hi,
On 2020-11-25 17:03:58 -0300, Alvaro Herrera wrote:
> On 2020-Nov-23, Andres Freund wrote:
>
> > On 2020-11-23 12:30:05 -0300, Alvaro Herrera wrote:
>
> > > In other words, my conclusion is that there definitely is a bug here and
> > > I am going to restore the use of exclusive lock for sett
On Mon, Nov 30, 2020 at 04:15:27PM -0300, Alvaro Herrera wrote:
> In the interest of showing progress, I'm going to mark this CF item as
> committed, and I'll submit the remaining pieces in a new thread.
Thanks for splitting!
--
Michael
In the interest of showing progress, I'm going to mark this CF item as
committed, and I'll submit the remaining pieces in a new thread.
Thanks!
Actually, I noticed two things. The first of them, addressed in this
new version of the patch, is that REINDEX CONCURRENTLY is doing a lot of
repetitive work by reopening each index and table in the build/validate
loops, so that they can report progress. This is easy to remedy by
adding a couple
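Roughly, the shape of that remedy (a sketch under assumptions: idx->tableId
is a hypothetical field caching the heap OID next to each index OID; the
real patch may structure this differently):

    /* Before (sketch): reopen relations inside the loop just to get
     * OIDs for progress reporting. */
    heapId = IndexGetRelation(indexId, false);
    pgstat_progress_start_command(PROGRESS_COMMAND_CREATE_INDEX, heapId);

    /* After (sketch): record the heap OID when the index list is first
     * built, so the loop reports progress without any extra open. */
    pgstat_progress_start_command(PROGRESS_COMMAND_CREATE_INDEX,
                                  idx->tableId);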
On 2020-Nov-26, Alvaro Herrera wrote:
> So let's discuss the next step in this series: what to do about REINDEX
> CONCURRENTLY.
> [...]
... as in the attached.
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index ca24620fd0..9c1c0aad3e 100644
--- a/src/backend/
So let's discuss the next step in this series: what to do about REINDEX
CONCURRENTLY.
I started with Dmitry's patch (an updated version of which I already
posted in [1]). However, Dmitry missed (and I hadn't noticed) that some
of the per-index loops are starting transactions also, and that we nee
On 2020-Nov-25, Alvaro Herrera wrote:
> On 2020-Nov-23, Andres Freund wrote:
>
> > On 2020-11-23 12:30:05 -0300, Alvaro Herrera wrote:
>
> > > In other words, my conclusion is that there definitely is a bug here and
> > > I am going to restore the use of exclusive lock for setting the
> > > stat
On 2020-Nov-23, Andres Freund wrote:
> On 2020-11-23 12:30:05 -0300, Alvaro Herrera wrote:
> > In other words, my conclusion is that there definitely is a bug here and
> > I am going to restore the use of exclusive lock for setting the
> > statusFlags.
>
> Cool.
Here's a patch.
Note it also mo
On 2020-Nov-23, Tom Lane wrote:
> Alvaro Herrera writes:
> > GetCurrentVirtualXIDs, ComputeXidHorizons, GetSnapshotData:
>
> > In these cases, what we need is that the code computes some xmin (or
> > equivalent computation) based on a set of transactions that exclude
> > those marked with the f
On 2020-Nov-23, Andres Freund wrote:
> On 2020-11-23 12:30:05 -0300, Alvaro Herrera wrote:
> > PROC_IN_LOGICAL_DECODING:
> > Oddly enough, I think the reset of PROC_IN_LOGICAL_DECODING in
> > ReplicationSlotRelease might be the most problematic one of the lot.
> > That's because a proc's xmin tha
On Mon, Nov 23, 2020 at 06:13:17PM -0500, Tom Lane wrote:
> Alvaro Herrera writes:
>> Please feel free to go ahead, including the change to ProcSleep.
>
> Will do.
Thank you both for 450c823 and 789b938.
--
Michael
On 2020-Nov-23, Tom Lane wrote:
> I never cared that much for "is_log_level_output" either. Thinking
> about renaming it to "should_output_to_log()", and then the new function
> would be "should_output_to_client()".
Ah, that sounds nicely symmetric and grammatical.
Thanks!
Hi,
On 2020-11-23 12:30:05 -0300, Alvaro Herrera wrote:
> ProcSleep: no bug here.
> The flags are checked to see if we should kill() the process (used when
> autovac blocks some other process). The lock here appears to be
> inconsequential, since we release it before we do kill(); so strictly
>
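The code path being analyzed boils down to a fragment like this
(illustrative reconstruction of the pattern, not a verbatim excerpt of
ProcSleep; autovac_pid stands in for the blocking process's pid):

    /* Read the flags while holding the lock, then release it; the
     * kill() itself happens unlocked, which is the point above. */
    LWLockAcquire(ProcArrayLock, LW_SHARED);
    statusFlags = ProcGlobal->statusFlags[autovac->pgxactoff];
    LWLockRelease(ProcArrayLock);

    if ((statusFlags & PROC_IS_AUTOVACUUM) &&
        !(statusFlags & PROC_VACUUM_FOR_WRAPAROUND))
        kill(autovac_pid, SIGINT);  /* cancel the blocking autovacuum */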
Alvaro Herrera writes:
> On 2020-Nov-23, Tom Lane wrote:
>> I'm not too fussed about whether we invent is_log_level_output_client,
>> although that name doesn't seem well-chosen compared to
>> is_log_level_output.
> Just replacing "log" with "client" in that seemed strictly worse, and I
> didn't (
On 2020-Nov-23, Tom Lane wrote:
> Ah, I see I didn't cover the case in ProcSleep that you were originally on
> about ... I'd just looked for existing references to log_min_messages
> and client_min_messages.
Yeah, it seemed bad form to add that when you had just argued against it
:-)
> I think i
Alvaro Herrera writes:
> Your version has the advantage that errstart() doesn't get a new
> function call. I'm +1 for going with that ... we could avoid the
> duplicate code with some additional contortions but this changes so
> rarely that it's probably not worth the trouble.
I was considering
Alvaro Herrera writes:
> On 2020-Nov-23, Tom Lane wrote:
>> Here's a draft patch.
> Here's another of my own. Outside of elog.c it seems identical.
Ah, I see I didn't cover the case in ProcSleep that you were originally on
about ... I'd just looked for existing references to log_min_messages
an
On 2020-Nov-23, Alvaro Herrera wrote:
> On 2020-Nov-23, Tom Lane wrote:
>
> > Here's a draft patch.
>
> Here's another of my own. Outside of elog.c it seems identical.
Your version has the advantage that errstart() doesn't get a new
function call. I'm +1 for going with that ... we could avoid
On 2020-Nov-23, Tom Lane wrote:
> Here's a draft patch.
Here's another of my own. Outside of elog.c it seems identical.
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 03c553e7ea..a4ab9090f9 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backen
Here's a draft patch.
regards, tom lane
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 03c553e7ea..9cd0b7c11b 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -5344,7 +5344,7 @@ static void
On 2020-Nov-23, Tom Lane wrote:
> Alvaro Herrera writes:
> > Well, we already do this in a number of places. But I can get behind
> > this:
>
> >> Maybe it'd be a good idea to have elog.c expose a new function
> >> along the lines of "bool message_level_is_interesting(int elevel)"
> >> to suppo
Alvaro Herrera writes:
> Well, we already do this in a number of places. But I can get behind
> this:
>> Maybe it'd be a good idea to have elog.c expose a new function
>> along the lines of "bool message_level_is_interesting(int elevel)"
>> to support this and similar future optimizations in a l
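A sketch of what that helper could look like (hedged: the committed version
may differ; the should_output_to_log()/should_output_to_client() names are
the ones floated later in this thread):

    /*
     * Sketch: would a message of this level be emitted anywhere at all?
     * Lets callers skip building expensive message strings.
     */
    bool
    message_level_is_interesting(int elevel)
    {
        /* Keep in sync with the equivalent decisions in errstart(). */
        if (elevel >= ERROR)
            return true;
        return should_output_to_log(elevel) ||
               should_output_to_client(elevel);
    }

Typical use: test message_level_is_interesting(DEBUG1) first, and only then
build the lock-tag description and elog() it.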
On 2020-Nov-23, Tom Lane wrote:
> Alvaro Herrera writes:
> > On 2020-Nov-19, Michael Paquier wrote:
> >> By the way, it strikes me that you could just do nothing as long as
> >> (log_min_messages > DEBUG1), so you could encapsulate most of the
> >> logic that plays with the lock tag using that.
>
Alvaro Herrera writes:
> On 2020-Nov-19, Michael Paquier wrote:
>> By the way, it strikes me that you could just do nothing as long as
>> (log_min_messages > DEBUG1), so you could encapsulate most of the
>> logic that plays with the lock tag using that.
> Good idea, done.
I'm less sure that that
On 2020-Nov-19, Michael Paquier wrote:
> On Thu, Nov 19, 2020 at 12:13:44PM +0900, Michael Paquier wrote:
> > That still looks useful for debugging, so DEBUG1 sounds fine to me.
>
> By the way, it strikes me that you could just do nothing as long as
> (log_min_messages > DEBUG1), so you could enc
Alvaro Herrera writes:
> PROC_IN_LOGICAL_DECODING:
> Oddly enough, I think the reset of PROC_IN_LOGICAL_DECODING in
> ReplicationSlotRelease might be the most problematic one of the lot.
> That's because a proc's xmin that had been ignored all along by
> ComputeXidHorizons, will now be included in
On 2020-Nov-18, Andres Freund wrote:
> In 13 this is:
> 	LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
> 	MyPgXact->vacuumFlags |= PROC_IN_VACUUM;
> 	if (params->is_wraparound)
> 		MyPgXact->vacuumFlags |= PROC_VACUUM_FOR_WRAPAROUND;
>
On Thu, Nov 19, 2020 at 12:13:44PM +0900, Michael Paquier wrote:
> That still looks useful for debugging, so DEBUG1 sounds fine to me.
By the way, it strikes me that you could just do nothing as long as
(log_min_messages > DEBUG1), so you could encapsulate most of the
logic that plays with the loc
On Wed, Nov 18, 2020 at 02:48:40PM -0800, Andres Freund wrote:
> On 2020-11-18 18:41:27 -0300, Alvaro Herrera wrote:
>> We could make this more concurrent by copying lock->tag to a local
>> variable, releasing the lock, then doing all the string formatting and
>> printing. See attached quickly.pat
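Sketched out, the suggestion looks like this (illustrative; initStringInfo()
and DescribeLockTag() are real helpers, but the surrounding shape is an
assumption, not the attached patch):

    LOCKTAG         locktag = lock->tag;    /* copy while still locked */
    StringInfoData  buf;

    LWLockRelease(partitionLock);

    /* All allocation and string formatting now happens unlocked. */
    initStringInfo(&buf);
    DescribeLockTag(&buf, &locktag);
    elog(DEBUG1, "process blocked on %s", buf.data);
    pfree(buf.data);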
On Wed, Nov 18, 2020 at 11:09:28AM -0800, Andres Freund wrote:
> Uh, wait a second. The acquisition of this lock hasn't been affected by
> the snapshot scalability changes, and therefore is unrelated to
> ->pgxactoff changing or not.
>
> In 13 this is:
> LWLockAcquire(ProcArrayLock,
Hi,
On 2020-11-18 18:41:27 -0300, Alvaro Herrera wrote:
> The amount of stuff that this is doing with ProcArrayLock held
> exclusively seems a bit excessive; it sounds like we could copy the
> values we need first, release the lock, and *then* do all that memory
> allocation and string printing --
On 2020-Nov-18, Michael Paquier wrote:
> On Tue, Nov 17, 2020 at 02:14:53PM -0300, Alvaro Herrera wrote:
> > diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
> > index f1f4df7d70..4324e32656 100644
> > --- a/src/backend/replication/logical/log
> On 2020-11-17 12:55:01 -0300, Alvaro Herrera wrote:
> > ... ah, but I realize now that this means that we can use shared lock
> > here, not exclusive, which is already an enormous improvement. That's
> > because ->pgxactoff can only be changed with exclusive lock held; so as
> > long as we hold
Hi,
On 2020-11-17 12:55:01 -0300, Alvaro Herrera wrote:
> ... ah, but I realize now that this means that we can use shared lock
> here, not exclusive, which is already an enormous improvement. That's
> because ->pgxactoff can only be changed with exclusive lock held; so as
> long as we hold share
On Tue, Nov 17, 2020 at 02:14:53PM -0300, Alvaro Herrera wrote:
> diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
> index f1f4df7d70..4324e32656 100644
> --- a/src/backend/replication/logical/logical.c
> +++ b/src/backend/replication/logical/log
So I made the change to set the statusFlags with only LW_SHARED -- both
in existing uses (see 0002) and in the new CIC code (0003). I don't
think 0002 is going to have a tremendous performance impact, because it
changes operations that are very infrequent. But even so, it would be
weird to leave
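For reference, the resulting pattern is along these lines (a sketch
consistent with the snippets quoted elsewhere in the thread; the key
invariant is that ->pgxactoff cannot move while any share lock is held):

    /*
     * LW_SHARED suffices: only this backend writes its own flags, and
     * pgxactoff can only change under ProcArrayLock held exclusively,
     * so our slot in the mirrored array is stable for the duration.
     */
    LWLockAcquire(ProcArrayLock, LW_SHARED);
    MyProc->statusFlags |= PROC_IN_SAFE_IC;
    ProcGlobal->statusFlags[MyProc->pgxactoff] = MyProc->statusFlags;
    LWLockRelease(ProcArrayLock);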
On 2020-Nov-16, Alvaro Herrera wrote:
> On 2020-Nov-09, Tom Lane wrote:
>
> > Michael Paquier writes:
> > >> +	LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
> > >> +	MyProc->vacuumFlags |= PROC_IN_SAFE_IC;
> > >> +	ProcGlobal->vacuumFlags[MyProc->pgxact
On Mon, Nov 16, 2020 at 09:23:41PM -0300, Alvaro Herrera wrote:
> I've been wondering if it would be sane/safe to do the WaitForFoo stuff
> outside of any transaction.
This needs to remain within a transaction as CIC holds a session lock
on the parent table, which would not be cleaned up without a
I am really unsure about the REINDEX CONCURRENTLY part of this, for two
reasons:
1. It is not as good when reindexing multiple indexes, because we can
only apply the flag if *all* indexes are "safe". Any unsafe index means
we forgo the optimization for the whole operation. This is probably not worth
worr
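The all-or-nothing check described in point 1 amounts to something like
this (sketch; index_is_safe() is a hypothetical helper standing in for the
real expression/predicate checks, and the flag assignment would of course
happen under ProcArrayLock as discussed upthread):

    bool        safe = true;

    foreach(lc, indexIds)
    {
        if (!index_is_safe(lfirst_oid(lc)))
        {
            safe = false;       /* one unsafe index spoils the lot */
            break;
        }
    }

    if (safe)
        MyProc->statusFlags |= PROC_IN_SAFE_IC;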
On 2020-Nov-09, Tom Lane wrote:
> Michael Paquier writes:
> >> +	LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
> >> +	MyProc->vacuumFlags |= PROC_IN_SAFE_IC;
> >> +	ProcGlobal->vacuumFlags[MyProc->pgxactoff] = MyProc->vacuumFlags;
> >> +	LWLockRelease(ProcA
On 2020-Nov-16, Dmitry Dolgov wrote:
> The same with reindex without locks:
>
>   nsecs          : count     distribution
>    512 -> 1023   : 0         |        |
>   1024 -> 2047   : 111345    |        |
>
On Tue, 10 Nov 2020 at 03:02, Tom Lane wrote:
>
> Alvaro Herrera writes:
> > Yeah ... it would be much better if we can make it use atomics instead.
>
> I was thinking more like "do we need any locking at all".
>
> Assuming that a proc's vacuumFlags can be set by only the process itself,
> there'
> On Fri, Nov 13, 2020 at 09:25:40AM +0900, Michael Paquier wrote:
> On Thu, Nov 12, 2020 at 04:36:32PM +0100, Dmitry Dolgov wrote:
> > Interestingly enough, a similar discussion happened about vacuumFlags before
> > with the same conclusion that theoretically it's fine to update without
> > holding th
On Thu, Nov 12, 2020 at 04:36:32PM +0100, Dmitry Dolgov wrote:
> Interestingly enough, a similar discussion happened about vacuumFlags before
> with the same conclusion that theoretically it's fine to update without
> holding the lock, but this assumption could change one day and it's
> better to avoid
> On Mon, Nov 09, 2020 at 10:02:27PM -0500, Tom Lane wrote:
>
> Alvaro Herrera writes:
> > Yeah ... it would be much better if we can make it use atomics instead.
>
> I was thinking more like "do we need any locking at all".
>
> Assuming that a proc's vacuumFlags can be set by only the process its
Alvaro Herrera writes:
> Yeah ... it would be much better if we can make it use atomics instead.
I was thinking more like "do we need any locking at all".
Assuming that a proc's vacuumFlags can be set by only the process itself,
there are no write conflicts to worry about. On the read side, there
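If the locking were dropped in favor of atomics (the alternative alluded to
above), the write side might be as simple as this sketch (assumption: the
flags field would first have to become a pg_atomic_uint32, here given the
hypothetical name atomicStatusFlags):

    #include "port/atomics.h"

    /* Lock-free flag set; readers pair with pg_atomic_read_u32(). */
    pg_atomic_fetch_or_u32(&ProcGlobal->atomicStatusFlags[MyProc->pgxactoff],
                           PROC_IN_SAFE_IC);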
On Mon, Nov 09, 2020 at 11:31:15PM -0300, Alvaro Herrera wrote:
> Yeah ... it would be much better if we can make it use atomics instead.
> Currently it's an uint8, and in PGPROC itself it's probably not a big
> deal to enlarge that, but I fear that quadrupling the size of the
> mirroring array in
On 2020-Nov-09, Tom Lane wrote:
> Michael Paquier writes:
> > On Mon, Nov 09, 2020 at 08:32:13PM -0500, Tom Lane wrote:
> >> Do we really need exclusive lock on the ProcArray to make this flag
> >> change? That seems pretty bad from a concurrency standpoint.
>
> > Any place where we update vacu
Michael Paquier writes:
> On Mon, Nov 09, 2020 at 08:32:13PM -0500, Tom Lane wrote:
>> Do we really need exclusive lock on the ProcArray to make this flag
>> change? That seems pretty bad from a concurrency standpoint.
> Any place where we update vacuumFlags acquires an exclusive LWLock on
> Pro
On Mon, Nov 09, 2020 at 08:32:13PM -0500, Tom Lane wrote:
> Do we really need exclusive lock on the ProcArray to make this flag
> change? That seems pretty bad from a concurrency standpoint.
Any place where we update vacuumFlags acquires an exclusive LWLock on
ProcArrayLock. That's held for a ve
Michael Paquier writes:
>> +	LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
>> +	MyProc->vacuumFlags |= PROC_IN_SAFE_IC;
>> +	ProcGlobal->vacuumFlags[MyProc->pgxactoff] = MyProc->vacuumFlags;
>> +	LWLockRelease(ProcArrayLock);
> I can't help noticing t
On Mon, Nov 09, 2020 at 04:47:43PM +0100, Dmitry Dolgov wrote:
> > On Tue, Nov 03, 2020 at 07:14:47PM +0100, Dmitry Dolgov wrote:
> > > On Thu, Aug 20, 2020 at 03:11:19PM +0900, Michael Paquier wrote:
> > > On Wed, Aug 19, 2020 at 02:16:46PM -0400, Alvaro Herrera wrote:
> > > > I did not set the fl
> On Tue, Nov 03, 2020 at 07:14:47PM +0100, Dmitry Dolgov wrote:
> > On Thu, Aug 20, 2020 at 03:11:19PM +0900, Michael Paquier wrote:
> > On Wed, Aug 19, 2020 at 02:16:46PM -0400, Alvaro Herrera wrote:
> > > I did not set the flag in REINDEX CONCURRENTLY, but as I understand it
> > > can be done to
> On Thu, Aug 20, 2020 at 03:11:19PM +0900, Michael Paquier wrote:
> On Wed, Aug 19, 2020 at 02:16:46PM -0400, Alvaro Herrera wrote:
> > I did not set the flag in REINDEX CONCURRENTLY, but as I understand it
> > can be done too, since in essence it's the same thing as a CIC from a
> > snapshot mana
On Wed, Aug 19, 2020 at 02:16:46PM -0400, Alvaro Herrera wrote:
> I did not set the flag in REINDEX CONCURRENTLY, but as I understand it
> can be done too, since in essence it's the same thing as a CIC from a
> snapshot management point of view.
Yes, I see no problems for REINDEX CONCURRENTLY as w
On 2020-Aug-11, James Coleman wrote:
> In [3] I'd brought up that a function index can do arbitrary
> operations (including, terribly, a query of another table), and Andres
> (in [4]) noted that:
>
> > Well, even if we consider this an actual problem, we could still use
> > PROC_IN_CIC for non-ex
On Tue, Aug 11, 2020 at 11:14 AM Tom Lane wrote:
>
> James Coleman writes:
> > Why is a CIC in active index-building something we need to wait for?
> > Wouldn't it fall under a similar kind of logic to the other snapshot
> > types we can explicitly ignore? CIC can't be run in a manual
> > transac
James Coleman writes:
> Why is a CIC in active index-building something we need to wait for?
> Wouldn't it fall under a similar kind of logic to the other snapshot
> types we can explicitly ignore? CIC can't be run in a manual
> transaction, so the snapshot it holds won't be used to perform
> arbi
On Mon, Aug 10, 2020 at 8:37 PM Tom Lane wrote:
>
> Alvaro Herrera writes:
> > To recap: currently, any CREATE INDEX CONCURRENTLY will wait for all
> > other CICs running concurrently to finish, because they can't be
> > distinguished amidst other old snapshots. We can change things by
> > havin
Alvaro Herrera writes:
> To recap: currently, any CREATE INDEX CONCURRENTLY will wait for all
> other CICs running concurrently to finish, because they can't be
> distinguished amidst other old snapshots. We can change things by
> having CIC set a special flag in PGPROC (like PROC_IN_VACUUM) indi
+ James Coleman
--
Álvaro Herrera        https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
I previously[1] posted a patch to have multiple CREATE INDEX CONCURRENTLY
not wait for the slowest of them. This is an update of that, with minor
conflicts fixed and a fresh thread.
To recap: currently, any CREATE INDEX CONCURRENTLY will wait for all
other CICs running concurrently to finish, bec
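In outline, the mechanism works like this (sketch; GetCurrentVirtualXIDs()
is the existing routine used to collect the transactions that must be
waited for, and passing the flag asks it to skip backends advertising it):

    /* Backends that set PROC_IN_SAFE_IC are excluded from the set of
     * snapshots a concurrent CIC must wait out. */
    VirtualTransactionId *old_snapshots;
    int         n_old_snapshots;

    old_snapshots = GetCurrentVirtualXIDs(limitXmin, true, false,
                                          PROC_IN_SAFE_IC,
                                          &n_old_snapshots);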