On Thu, Dec 12, 2024 at 04:29:47PM +0100, Alvaro Herrera wrote:
> On 2024-Nov-14, Michael Paquier wrote:
>
>> On Wed, Nov 13, 2024 at 02:52:31PM -0500, Robert Haas wrote:
>> > On Wed, Nov 13, 2024 at 11:05 AM Alvaro Herrera
>> > wrote:
>> >> So, my question now is, would there be much opposition
On 2024-Nov-14, Michael Paquier wrote:
> On Wed, Nov 13, 2024 at 02:52:31PM -0500, Robert Haas wrote:
> > On Wed, Nov 13, 2024 at 11:05 AM Alvaro Herrera
> > wrote:
> >> So, my question now is, would there be much opposition to backpatching
> >> beb4e9ba1652 + 1fb17b190341 to REL_14_STABLE?
> >
On Wed, Nov 13, 2024 at 02:52:31PM -0500, Robert Haas wrote:
> On Wed, Nov 13, 2024 at 11:05 AM Alvaro Herrera
> wrote:
>> So, my question now is, would there be much opposition to backpatching
>> beb4e9ba1652 + 1fb17b190341 to REL_14_STABLE?
>
> It seems like it's been long enough now that if t
On Wed, Nov 13, 2024 at 11:05 AM Alvaro Herrera wrote:
> So, my question now is, would there be much opposition to backpatching
> beb4e9ba1652 + 1fb17b190341 to REL_14_STABLE?
It seems like it's been long enough now that if the new logic had
major problems we probably would have found them by now
Hello, sorry for necro-posting here:
On 2021-May-03, Robert Haas wrote:
> I and various colleagues of mine have from time to time encountered
> systems that got a bit behind on WAL archiving, because the
> archive_command started failing and nobody noticed right away.
We've recently had a couple
On Thu, Nov 11, 2021 at 3:58 PM Bossart, Nathan wrote:
> Thanks! I figured it was something like that. Sorry if I caused the
> thread breakage.
I think it was actually that the thread went over 100 emails ... which
usually causes Google to break it, but I don't know why it broke it
into three p
On 11/11/21, 12:23 PM, "Robert Haas" wrote:
> On Thu, Nov 11, 2021 at 2:49 PM Robert Haas wrote:
>> Somehow I didn't see your October 19th response previously. The
>> threading in gmail seems to have gotten broken, which may have
>> contributed.
>
> And actually I also missed the September 27th e
On Thu, Nov 11, 2021 at 2:49 PM Robert Haas wrote:
> Somehow I didn't see your October 19th response previously. The
> threading in gmail seems to have gotten broken, which may have
> contributed.
And actually I also missed the September 27th email where you sent v3. Oops.
Committed now.
--
Ro
On Thu, Nov 11, 2021 at 10:37 AM Bossart, Nathan wrote:
> On 10/19/21, 7:53 AM, "Bossart, Nathan" wrote:
> > On 10/19/21, 5:59 AM, "Robert Haas" wrote:
> >> Nathan, I just realized we never closed the loop on this. Do you have
> >> any thoughts?
> >
> > IMO the patch is in decent shape. Happy t
On 10/19/21, 7:53 AM, "Bossart, Nathan" wrote:
> On 10/19/21, 5:59 AM, "Robert Haas" wrote:
>> Nathan, I just realized we never closed the loop on this. Do you have
>> any thoughts?
>
> IMO the patch is in decent shape. Happy to address any feedback you
> might have on the latest patch [0].
Thi
On 10/19/21, 5:59 AM, "Robert Haas" wrote:
> Nathan, I just realized we never closed the loop on this. Do you have
> any thoughts?
IMO the patch is in decent shape. Happy to address any feedback you
might have on the latest patch [0].
Nathan
[0]
https://www.postgresql.org/message-id/attachmen
On Fri, Sep 24, 2021 at 12:28 PM Robert Haas wrote:
> On Thu, Sep 16, 2021 at 7:26 PM Bossart, Nathan wrote:
> > What do you think?
>
> I think this is committable. I also went back and looked at your
> previous proposal to do files in batches, and I think that's also
> committable. After some re
On 9/27/21, 11:06 AM, "Bossart, Nathan" wrote:
> On 9/24/21, 9:29 AM, "Robert Haas" wrote:
>> So what I am inclined to do is commit
>> v1-0001-Improve-performance-of-pgarch_readyXlog-with-many.patch.
>> However, v6-0001-Do-fewer-directory-scans-of-archive_status.patch has
>> perhaps evolved a bit
On 9/24/21 12:28 PM, Robert Haas wrote:
On Thu, Sep 16, 2021 at 7:26 PM Bossart, Nathan wrote:
What do you think?
I think this is committable. I also went back and looked at your
previous proposal to do files in batches, and I think that's also
committable. After some reflection, I think I ha
On Thu, Sep 16, 2021 at 7:26 PM Bossart, Nathan wrote:
> What do you think?
I think this is committable. I also went back and looked at your
previous proposal to do files in batches, and I think that's also
committable. After some reflection, I think I have a slight preference
for the batching ap
On Mon, Sep 20, 2021 at 4:42 PM Alvaro Herrera wrote:
> I was going to say that perhaps we can avoid repeated scans by having a
> bitmap of future files that were found by a scan; so if we need to do
> one scan, we keep track of the presence of the next (say) 64 files in
> our timeline, and then w
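To make Alvaro's idea concrete, here is a minimal sketch of such a bitmap, assuming a 64-bit window anchored at the segment where the last directory scan started; all names are hypothetical and not taken from any posted patch.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical state: a 64-segment window of .ready files seen by one scan. */
static uint64_t ready_bitmap;     /* bit i set => (scan_base_segno + i) is ready */
static uint64_t scan_base_segno;  /* segment number the window starts at */

/* Record a .ready file found during a directory scan. */
static void
note_ready_segment(uint64_t segno)
{
    if (segno >= scan_base_segno && segno < scan_base_segno + 64)
        ready_bitmap |= (uint64_t) 1 << (segno - scan_base_segno);
}

/* Ask whether the next segment is already known to be ready; anything
 * outside the window means we fall back to another scan. */
static bool
next_segment_known_ready(uint64_t segno)
{
    if (segno < scan_base_segno || segno >= scan_base_segno + 64)
        return false;
    return (ready_bitmap & ((uint64_t) 1 << (segno - scan_base_segno))) != 0;
}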
On 2021-Sep-20, Robert Haas wrote:
> I was thinking that this might increase the number of directory scans
> by a pretty large amount when we repeatedly catch up, then 1 new file
> gets added, then we catch up, etc.
I was going to say that perhaps we can avoid repeated scans by having a
bitmap of
On Thu, Sep 16, 2021 at 7:26 PM Bossart, Nathan wrote:
> 1. I've removed several calls to PgArchForceDirScan() in favor of
> calling it at the top of pgarch_ArchiverCopyLoop(). I believe
> there is some disagreement about this change, but I don't think
> we gain enough to justify
Hi,
> 1. I've removed several calls to PgArchForceDirScan() in favor of
> calling it at the top of pgarch_ArchiverCopyLoop(). I believe
> there is some disagreement about this change, but I don't think
> we gain enough to justify the complexity. The main reason we
> exit pgarch_A
Hi,
Thanks for the feedback.
> I wonder if this can be simplified even further. If we don't bother
> trying to catch out-of-order .ready files in XLogArchiveNotify() and
> just depend on the per-checkpoint/restartpoint directory scans, we can
> probably remove lastReadySegNo from archiver state
At Tue, 14 Sep 2021 18:07:31 +, "Bossart, Nathan"
wrote in
> On 9/14/21, 9:18 AM, "Bossart, Nathan" wrote:
> > This is an interesting idea, but the "else" block here seems prone to
> > race conditions. I think we'd have to hold arch_lck to prevent that.
> > But as I mentioned above, if we
On 9/14/21, 9:18 AM, "Bossart, Nathan" wrote:
> This is an interesting idea, but the "else" block here seems prone to
> race conditions. I think we'd have to hold arch_lck to prevent that.
> But as I mentioned above, if we are okay with depending on the
> fallback directory scans, I think we can
On 9/14/21, 7:23 AM, "Dipesh Pandit" wrote:
> I agree that when we are creating a .ready file we should compare
> the current .ready file with the last .ready file to check if this file is
> created out of order. We can store the state of the last .ready file
> in shared memory and compare it w
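A rough sketch of the comparison being described, assuming a single shared counter recording the last segment for which a .ready file was created; the variable and function names are illustrative only.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: in the approach described above this would live in
 * archiver shared memory and be protected by a lock. */
static uint64_t last_ready_segno;

/* Called when a .ready file is created for segment 'segno'.  Returns true
 * if the file appeared out of order, i.e. the archiver must be told to do
 * a full directory scan so the file is not skipped. */
static bool
ready_file_out_of_order(uint64_t segno)
{
    bool out_of_order = (last_ready_segno != 0 && segno <= last_ready_segno);

    if (segno > last_ready_segno)
        last_ready_segno = segno;
    return out_of_order;
}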
Thanks for the feedback.
> The latest post on this thread contained a link to this one, and it
> made me want to rewind to this point in the discussion. Suppose we
> have the following alternative scenario:
>
> Let's say step 1 looks for WAL file 10, but 10.ready doesn't exist
> yet. The followin
On 9/13/21, 1:14 PM, "Robert Haas" wrote:
> On Thu, Sep 2, 2021 at 5:52 PM Bossart, Nathan wrote:
>> Let's say step 1 looks for WAL file 10, but 10.ready doesn't exist
>> yet. The following directory scan ends up finding 11.ready. Just
>> before we update the PgArch state, XLogArchiveNotify() i
On Thu, Sep 2, 2021 at 5:52 PM Bossart, Nathan wrote:
> The pgarch_readyXlog() logic looks a bit like this:
>
> 1. Try to skip directory scan. If that succeeds, we're done.
> 2. Do a directory scan.
> 3. If we found a regular WAL file, update PgArch and return
>wha
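In outline, that flow could look like the sketch below; the helper names are placeholders standing in for the real pgarch.c machinery, not actual PostgreSQL functions.

#include <stdbool.h>

/* Placeholder helpers; the real logic lives in pgarch.c. */
extern bool try_skip_directory_scan(char *xlog);               /* step 1 */
extern bool scan_archive_status(char *xlog, bool *is_history); /* step 2 */
extern void remember_next_anticipated(const char *xlog);       /* step 3 */

static bool
ready_xlog_sketch(char *xlog)
{
    bool is_history;

    /* 1. Fast path: skip the directory scan if we already know the
     *    next segment to archive. */
    if (try_skip_directory_scan(xlog))
        return true;

    /* 2. Otherwise fall back to a full scan of archive_status. */
    if (!scan_archive_status(xlog, &is_history))
        return false;

    /* 3. For a regular WAL file, remember what we found so that the next
     *    call can take the fast path; history files do not advance that
     *    state. */
    if (!is_history)
        remember_next_anticipated(xlog);
    return true;
}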
Hi,
Thanks for the feedback.
> + * by checking the availability of next WAL file. "xlogState" specifies the
> + * segment number and timeline ID corresponding to the next WAL file.
>
> "xlogState" probably needs to be updated here.
Yes, I updated the comment.
> As noted before [0], I think we n
On 9/8/21, 10:49 AM, "Dipesh Pandit" wrote:
> Updated log level to DEBUG3 and rebased the patch. PFA patch.
Thanks for the new patch.
+ * by checking the availability of next WAL file. "xlogState" specifies the
+ * segment number and timeline ID corresponding to the next WAL file.
"xlogState" p
> > I guess we still have to pick one or the other, but I don't really
> > know how to do that, since both methods seem to be relatively fine,
> > and the scenarios where one is better than the other all feel a little
> > bit contrived. I guess if no clear consensus emerges in the next week
> > or
At Tue, 7 Sep 2021 18:40:24 +, "Bossart, Nathan"
wrote in
> On 9/7/21, 11:31 AM, "Robert Haas" wrote:
> > I guess we still have to pick one or the other, but I don't really
> > know how to do that, since both methods seem to be relatively fine,
> > and the scenarios where one is better than
On 9/7/21, 11:31 AM, "Robert Haas" wrote:
> I guess we still have to pick one or the other, but I don't really
> know how to do that, since both methods seem to be relatively fine,
> and the scenarios where one is better than the other all feel a little
> bit contrived. I guess if no clear consens
On Tue, Sep 7, 2021 at 2:13 PM Bossart, Nathan wrote:
> Right. The latest patch for that approach [0] does just that. In
> fact, I think timeline files are the only files for which we need to
> force an immediate directory scan in the multiple-files-per-scan
> approach. For the keep-trying-the-
On 9/7/21, 10:54 AM, "Robert Haas" wrote:
> I guess what I don't understand about the multiple-files-per-directory
> scan implementation is what happens when something happens that would
> require the keep-trying-the-next-file approach to perform a forced
> scan. It seems to me that you still need
On Tue, Sep 7, 2021 at 1:28 PM Bossart, Nathan wrote:
> Thanks for chiming in. The limit of 64 in the multiple-files-per-
> directory-scan approach was mostly arbitrary. My earlier testing [0]
> with different limits didn't reveal any significant difference, but
> using a higher limit might yiel
On 9/7/21, 1:42 AM, "Kyotaro Horiguchi" wrote:
> I was thinking that the multiple-files approach would work efficiently
> but the patch still runs directory scans every 64 files. As
> Robert mentioned it is still O(N^2). I'm not sure the reason for the
> limit, but if it were to lower memory c
At Fri, 3 Sep 2021 18:31:46 +0530, Dipesh Pandit
wrote in
> Hi,
>
> Thanks for the feedback.
>
> > Which approach do you think we should use? I think we have decent
> > patches for both approaches at this point, so perhaps we should see if
> > we can get some additional feedback from the comm
Hi,
Thanks for the feedback.
> Which approach do you think we should use? I think we have decent
> patches for both approaches at this point, so perhaps we should see if
> we can get some additional feedback from the community on which one we
> should pursue further.
In my opinion both the appr
On 9/2/21, 6:22 AM, "Dipesh Pandit" wrote:
> I agree that multiple-files-per-readdir is cleaner and has the resilience of
> the current implementation. However, I have a few suggestions on the
> keep-trying-the-next-file approach patch shared in the previous thread.
Which approach do you think we shou
Hi,
Thanks for the feedback.
> I attached two patches that demonstrate what I'm thinking this change
> should look like. One is my take on the keep-trying-the-next-file
> approach, and the other is a new version of the multiple-files-per-
> readdir approach (with handling for "cheating" archive
On 8/25/21, 4:11 AM, "Dipesh Pandit" wrote:
> Please find attached patch v11.
Apologies for the delay. I still intend to review this.
Nathan
> If a .ready file is created out of order, the directory scan logic
> will pick it up about as soon as possible based on its priority. If
> the archiver is keeping up relatively well, there's a good chance such
> a file will have the highest archival priority and will be picked up
> the next time
On 8/24/21, 12:09 PM, "Robert Haas" wrote:
> I can't quite decide whether the problems we're worrying about here
> are real issues or just kind of hypothetical. I mean, today, it seems
> to be possible that we fail to mark some file ready for archiving,
> emit a log message, and then a huge amount
On Tue, Aug 24, 2021 at 1:26 PM Bossart, Nathan wrote:
> I think Horiguchi-san made a good point that the .ready file creators
> should ideally not need to understand archiving details. However, I
> think this approach requires them to be inextricably linked. In the
> happy case, the archiver wi
On 8/24/21, 5:31 AM, "Dipesh Pandit" wrote:
>> > I've been looking at the v9 patch with fresh eyes, and I still think
>> > we should be able to force the directory scan as needed in
>> > XLogArchiveNotify(). Unless the file to archive is a regular WAL file
>> > that is > our stored location in ar
Thanks for the feedback.
> > > IIUC partial WAL files are handled because the next file in the
> > > sequence with the given TimeLineID won't be there, so we will fall
> > > back to a directory scan and pick it up. Timeline history files are
> > > handled by forcing a directory scan, which should
(sigh..)
At Tue, 24 Aug 2021 11:35:06 +0900 (JST), Kyotaro Horiguchi
wrote in
> > IIUC partial WAL files are handled because the next file in the
> > sequence with the given TimeLineID won't be there, so we will fall
> > back to a directory scan and pick it up. Timeline history files are
> > h
At Tue, 24 Aug 2021 00:03:37 +, "Bossart, Nathan"
wrote in
> On 8/23/21, 10:49 AM, "Robert Haas" wrote:
> > On Mon, Aug 23, 2021 at 11:50 AM Bossart, Nathan
> > wrote:
> >> To handle a "cheating" archive command, I'd probably need to add a
> >> stat() for every time pgarch_readyXLog() ret
On 8/23/21, 10:49 AM, "Robert Haas" wrote:
> On Mon, Aug 23, 2021 at 11:50 AM Bossart, Nathan wrote:
>> To handle a "cheating" archive command, I'd probably need to add a
>> stat() for every time pgarch_readyXLog() returned something from
>> arch_files. I suspect something similar might be neede
On Mon, Aug 23, 2021 at 11:50 AM Bossart, Nathan wrote:
> To handle a "cheating" archive command, I'd probably need to add a
> stat() for every time pgarch_readyXLog() returned something from
> arch_files. I suspect something similar might be needed in Dipesh's
> patch to handle backup history fi
On 8/23/21, 6:42 AM, "Robert Haas" wrote:
> On Sun, Aug 22, 2021 at 10:31 PM Bossart, Nathan wrote:
>> I ran this again on a bigger machine with 200K WAL files pending
>> archive. The v9 patch took ~5.5 minutes, the patch I sent took ~8
>> minutes, and the existing logic took just under 3 hour
On Sun, Aug 22, 2021 at 10:31 PM Bossart, Nathan wrote:
> I ran this again on a bigger machine with 200K WAL files pending
> archive. The v9 patch took ~5.5 minutes, the patch I sent took ~8
> minutes, and the existing logic took just under 3 hours.
Hmm. On the one hand, 8 minutes > 5.5 minutes,
On 8/21/21, 9:29 PM, "Bossart, Nathan" wrote:
> I was curious about this, so I wrote a patch (attached) to store
> multiple files per directory scan and tested it against the latest
> patch in this thread (v9) [0]. Specifically, I set archive_command to
> 'false', created ~20K WAL segments, then
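For illustration, a simplified sketch of caching several .ready names per directory scan, assuming a fixed 64-entry array; this is not the posted patch (which also orders the batch by archival priority), just the shape of the idea.

#include <dirent.h>
#include <stdio.h>
#include <string.h>

#define ARCH_FILES_PER_SCAN 64

static char arch_files[ARCH_FILES_PER_SCAN][64];
static int  arch_files_count;
static int  arch_files_next;

/* Refill the cache with up to 64 ".ready" names from one directory scan,
 * then hand them back one at a time until the cache is exhausted. */
static const char *
next_ready_file(const char *status_dir)
{
    if (arch_files_next >= arch_files_count)
    {
        DIR        *dir = opendir(status_dir);
        struct dirent *de;

        arch_files_count = 0;
        arch_files_next = 0;
        if (dir == NULL)
            return NULL;
        while (arch_files_count < ARCH_FILES_PER_SCAN &&
               (de = readdir(dir)) != NULL)
        {
            size_t      len = strlen(de->d_name);

            if (len > 6 && strcmp(de->d_name + len - 6, ".ready") == 0)
                snprintf(arch_files[arch_files_count++], 64, "%s", de->d_name);
        }
        closedir(dir);
    }

    if (arch_files_next < arch_files_count)
        return arch_files[arch_files_next++];
    return NULL;
}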
On 8/19/21, 5:42 AM, "Dipesh Pandit" wrote:
>> Should we have XLogArchiveNotify(), writeTimeLineHistory(), and
>> writeTimeLineHistoryFile() enable the directory scan instead? Else,
>> we have to exhaustively cover all such code paths, which may be
>> difficult to maintain. Another reason I am b
Hi,
Thanks for the feedback.
> Should we have XLogArchiveNotify(), writeTimeLineHistory(), and
> writeTimeLineHistoryFile() enable the directory scan instead? Else,
> we have to exhaustively cover all such code paths, which may be
> difficult to maintain. Another reason I am bringing this up is
Thanks for the new version of the patch. Overall, I think it is on
the right track.
+ /*
+ * This .ready file is created out of order, notify archiver to perform
+ * a full directory scan to archive corresponding WAL file.
+ */
+ StatusFilePath(archiveStatusPath, xlog, ".ready");
On Tue, Aug 17, 2021 at 4:19 PM Bossart, Nathan wrote:
> Thinking further, I think the most important thing to ensure is that
> resetting the flag happens before we begin the directory scan.
> Consider the following scenario in which a timeline history file would
> potentially be lost:
>
>
Hi,
Thanks for the feedback. I have incorporated the suggestion
to use an unsynchronized boolean flag to force directory scan.
This flag is set if there is a timeline switch or a .ready file
is created out of order. The archiver resets this flag, if it is
set, before it begins director
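A rough sketch of that flag protocol, assuming a boolean kept in archiver shared memory; the spinlock calls are shown as comments and all names are illustrative rather than taken from the patch.

#include <stdbool.h>

/* Illustrative shared flag; in PostgreSQL it would live in archiver
 * shared memory, protected by a spinlock or LWLock. */
static volatile bool force_dir_scan;

/* Set by backends on a timeline switch or an out-of-order .ready file. */
static void
request_dir_scan(void)
{
    /* SpinLockAcquire(&arch_lck); */
    force_dir_scan = true;
    /* SpinLockRelease(&arch_lck); */
    /* ...then wake the archiver so it notices promptly. */
}

/* The archiver reads and clears the flag *before* starting its scan, so a
 * request that arrives mid-scan is not lost: it simply triggers the next
 * scan instead. */
static bool
consume_dir_scan_request(void)
{
    bool        requested;

    /* SpinLockAcquire(&arch_lck); */
    requested = force_dir_scan;
    force_dir_scan = false;
    /* SpinLockRelease(&arch_lck); */
    return requested;
}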
On 8/17/21, 12:11 PM, "Bossart, Nathan" wrote:
> On 8/17/21, 11:28 AM, "Robert Haas" wrote:
>> I can't actually see that there's any kind of hard synchronization
>> requirement here at all. What we're trying to do is guarantee that if
>> the timeline changes, we'll pick up the timeline history fo
On 8/17/21, 11:28 AM, "Robert Haas" wrote:
> I can't actually see that there's any kind of hard synchronization
> requirement here at all. What we're trying to do is guarantee that if
> the timeline changes, we'll pick up the timeline history for the new
> timeline next, and that if files are arch
On Tue, Aug 17, 2021 at 12:33 PM Bossart, Nathan wrote:
> Sorry, I think my note was not very clear. I agree that a flag should
> be used for this purpose, but I think we should just use a regular
> bool protected by a spinlock or LWLock instead of an atomic. The file
> atomics.h has the followi
On 8/17/21, 5:53 AM, "Dipesh Pandit" wrote:
>> I personally don't think it's necessary to use an atomic here. A
>> spinlock or LWLock would probably work just fine, as contention seems
>> unlikely. If we use a lock, we also don't have to worry about memory
>> barriers.
>
> History file should be
Thanks for the feedback.
> + StatusFilePath(archiveStatusPath, xlog, ".ready");
> + if (stat(archiveStatusPath, &stat_buf) == 0)
> + PgArchEnableDirScan();
> We may want to call PgArchWakeup() after setting the flag.
Yes, added a call to wake up archiver.
> > + *
On 8/15/21, 9:52 PM, "Bossart, Nathan" wrote:
> + * Perform a full directory scan to identify the next log segment. There
> + * may be one of the following scenarios which may require us to perform a
> + * full directory scan.
> ...
> + * - The next anticipated log segment i
+ * This .ready file is created out of order, notify archiver to perform
+ * a full directory scan to archive corresponding WAL file.
+ */
+ StatusFilePath(archiveStatusPath, xlog, ".ready");
+ if (stat(archiveStatusPath, &stat_buf) == 0)
+ PgArchEnableDirScan();
Hi,
Thanks for the feedback.
The possible path that the archiver can take for each cycle is either a fast
path or a fall-back path. The fast path involves checking the availability of
the next anticipated log segment and deciding the next target for archival;
the fall-back path involves a full directory s
At Fri, 6 Aug 2021 02:34:24 +, "Bossart, Nathan"
wrote in
> On 8/5/21, 6:26 PM, "Kyotaro Horiguchi" wrote:
> It works the current way always at the first iteration of
> pgarch_ArchiverCopyLoop() because in the last iteration of
> pgarch_ArchiverCopyLoop(), pgarch_readyXlog() erases the l
At Thu, 5 Aug 2021 21:53:30 +0530, Dipesh Pandit
wrote in
> > I'm not sure. I think we need the value to be accurate during
> > recovery, so I'm not sure whether replayEndTLI would get us there.
> > Another approach might be to set ThisTimeLineID on standbys also.
> > Actually just taking a fast
On 8/5/21, 6:26 PM, "Kyotaro Horiguchi" wrote:
> It works the current way always at the first iteration of
> pgarch_ArchiverCopyLoop() because in the last iteration of
> pgarch_ArchiverCopyLoop(), pgarch_readyXlog() erases the last
> anticipated segment. The shortcut works only when
> pgarch_Archive
At Tue, 3 Aug 2021 20:46:57 +, "Bossart, Nathan"
wrote in
> + /*
> + * Perform a full directory scan to identify the next log segment. There
> + * may be one of the following scenarios which may require us to perform a
> + * full directory scan.
> + *
> + * 1.
> I'm not sure. I think we need the value to be accurate during
> recovery, so I'm not sure whether replayEndTLI would get us there.
> Another approach might be to set ThisTimeLineID on standbys also.
> Actually just taking a fast look at the code I'm not quite sure why
> that isn't happening alrea
On Thu, Aug 5, 2021 at 7:39 AM Dipesh Pandit wrote:
> Yes, we can avoid storing another copy of information. We can
> use XLogCtl's ThisTimeLineID on Primary. However,
> XLogCtl's ThisTimeLineID is not set to the current timeline ID on
> Standby server. Its value is set to '0'. Can we use XLogCtl
Hi,
> I don't really understand why you are storing something in shared
> memory specifically for the archiver. Can't we use XLogCtl's
> ThisTimeLineID instead of storing another copy of the information?
Yes, we can avoid storing another copy of information. We can
use XLogCtl's ThisTimeLineID on
+ /*
+ * Perform a full directory scan to identify the next log segment. There
+ * may be one of the following scenarios which may require us to perform a
+ * full directory scan.
+ *
+ * 1. This is the first cycle since archiver has started and there is no
On Mon, Aug 2, 2021 at 9:06 AM Dipesh Pandit wrote:
> We can maintain the current timeline ID in archiver specific shared memory.
> If we switch to a new timeline then the backend process can update the new
> timeline ID in shared memory. Archiver can keep track of current timeline ID
> and if i
Hi,
> I think what you are saying is true before v14, but not in v14 and master.
Yes, we can use archiver specific shared memory. Thanks.
> I don't think it's great that we're using up SIGINT for this purpose.
> There aren't that many signals available at the O/S level that we can
> use for our p
On Wed, Jul 28, 2021 at 6:48 AM Dipesh Pandit wrote:
> As of now shared memory is not attached to the archiver. Archiver cannot
> access ThisTimeLineID or a flag available in shared memory.
If that is true, why are there functions PgArchShmemSize() and
PgArchShmemInit(), and how does this stateme
Hi,
> I don't think it's great that we're using up SIGINT for this purpose.
> There aren't that many signals available at the O/S level that we can
> use for our purposes, and we generally try to multiplex them at the
> application layer, e.g. by setting a latch or a flag in shared memory,
> rathe
On Tue, Jul 27, 2021 at 3:43 AM Dipesh Pandit wrote:
> and updated a new patch. Please find the attached patch v4.
Some review:
+ /*
+ * If archiver is active, send notification that timeline has switched.
+ */
+ if (XLogArchivingActive() && ArchiveRecoveryRequested &&
> Some minor suggestions:
Thanks for your comments. I have incorporated the changes
and updated a new patch. Please find the attached patch v4.
Thanks,
Dipesh
On Mon, Jul 26, 2021 at 9:44 PM Bossart, Nathan wrote:
> On 7/26/21, 6:31 AM, "Robert Haas" wrote:
> > In terms of immediate next steps
On 7/26/21, 6:31 AM, "Robert Haas" wrote:
> In terms of immediate next steps, I think we should focus on
> eliminating the O(n^2) problem and not get sucked into a bigger
> redesign. The patch on the table aims to do just that much and I think
> that's a good thing.
I agree. I'll leave further d
On Fri, Jul 23, 2021 at 5:46 PM Bossart, Nathan wrote:
> My apologies for chiming in so late to this thread, but a similar idea
> crossed my mind while working on a bug where .ready files get created
> too early [0]. Specifically, instead of maintaining a status file per
> WAL segment, I was thin
On 5/6/21, 1:01 PM, "Andres Freund" wrote:
> If we leave history files and gaps in the .ready sequence aside for a
> second, we really only need an LSN or segment number describing the
> current "archive position". Then we can iterate over the segments
> between the "archive position" and the flus
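A minimal sketch of that iteration, assuming the "archive position" and the flush position are plain segment numbers and that a hypothetical helper runs archive_command for a single segment.

#include <stdbool.h>
#include <stdint.h>

extern bool archive_one_segment(uint64_t segno);    /* runs archive_command */

/* Walk from the last archived segment up to the last flushed one; in the
 * common case no directory scan and no per-segment .ready file is needed. */
static uint64_t
advance_archive_position(uint64_t archive_pos, uint64_t flush_pos)
{
    while (archive_pos < flush_pos)
    {
        if (!archive_one_segment(archive_pos + 1))
            break;              /* retry later from the same position */
        archive_pos++;
    }
    return archive_pos;
}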
Thanks, Dipesh. The patch LGTM.
Some minor suggestions:
+ *
+ * "nextLogSegNo" identifies the next log file to be archived in a log
+ * sequence and the flag "dirScan" specifies a full directory scan to find
+ * the next log file.
IMHO, this comment should go atop of pgarch_readyXlog() as a
Hi,
> some comments on v2.
Thanks for your comments. I have incorporated the changes
and updated a new patch. Please find the details below.
> On the timeline switch, setting a flag should be enough; I don't think
> that we need to wake up the archiver, because it will just waste the
> scan cycl
On Tue, Jul 6, 2021 at 9:34 AM Stephen Frost wrote:
> As was suggested on that subthread, it seems like it should be possible
> to just track the current timeline and adjust what we're doing if the
> timeline changes, and we should even know what the .history file is at
> that point and likely don
On Mon, Jul 19, 2021 at 5:43 PM Dipesh Pandit wrote:
>
> Hi,
>
> > I agree, I missed this part. The .history file should be given higher
> > preference.
> > I will take care of it in the next patch.
>
> Archiver does not have access to shared memory and the current timeline ID
> is not available
Hi,
> I agree, I missed this part. The .history file should be given higher
> preference.
> I will take care of it in the next patch.
Archiver does not have access to shared memory and the current timeline ID
is not available at archiver. In order to keep track of timeline switch we
have to push a
> I have a few suggestions on the patch
> 1.
> +
> + /*
> + * Found the oldest WAL, reset timeline ID and log segment number to generate
> + * the next WAL file in the sequence.
> + */
> + if (found && !historyFound)
> + {
> + XLogFromFileName(xlog, &curFileTLI, &nextLogSegNo, wal_segment_size);
> specifically about history files being given higher priority for
> archiving. If we go with this change then we'd at least want to rewrite
> or remove those comments, but I don't actually agree that we should
> remove that preference to archive history files ahead of WAL, for the
> reasons broug
Greetings,
* Dipesh Pandit (dipesh.pan...@gmail.com) wrote:
> We have addressed the O(n^2) problem which involves directory scan for
> archiving individual WAL files by maintaining a WAL counter to identify
> the next WAL file in a sequence.
This seems to have missed the concerns raised in
https:
On Tue, Jul 6, 2021 at 11:36 AM Dipesh Pandit wrote:
>
> Hi,
>
> We have addressed the O(n^2) problem which involves directory scan for
> archiving individual WAL files by maintaining a WAL counter to identify
> the next WAL file in a sequence.
>
> WAL archiver scans the status directory to identi
Hi,
We have addressed the O(n^2) problem which involves directory scan for
archiving individual WAL files by maintaining a WAL counter to identify
the next WAL file in a sequence.
WAL archiver scans the status directory to identify the next WAL file
which needs to be archived. This directory scan
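For reference, a small sketch of deriving the next expected WAL file name from such a counter instead of scanning the directory; the 24-character naming follows PostgreSQL's convention, but the helper itself is illustrative.

#include <stdint.h>
#include <stdio.h>

/* Build the expected WAL file name for a timeline and segment counter:
 * 8 hex digits each for the timeline, the "log id", and the segment
 * within that log id. */
static void
next_expected_wal_name(char *buf, size_t buflen,
                       uint32_t tli, uint64_t segno, uint64_t segs_per_xlogid)
{
    snprintf(buf, buflen, "%08X%08X%08X",
             (unsigned int) tli,
             (unsigned int) (segno / segs_per_xlogid),
             (unsigned int) (segno % segs_per_xlogid));
}

/* If "<name>.ready" exists in archive_status, archive that segment and
 * advance the counter; otherwise fall back to a full directory scan. */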
Hi,
On 2021-05-06 21:23:36 +0200, Hannu Krosing wrote:
> How are you envisioning the shared-memory signaling should work in the
> original sample case, where the archiver had been failing for half a
> year?
If we leave history files and gaps in the .ready sequence aside for a
second, we really o
How are you envisioning the shared-memory signaling should work in the
original sample case, where the archiver had been failing for half a
year?
Or should we perhaps have a system table for ready-to-archive WAL
files to get around limitations of the file system to return just the
needed files with O
On Thu, May 6, 2021 at 3:23 AM Kyotaro Horiguchi
wrote:
> FWIW It's already done for v14 individually.
>
> Author: Fujii Masao
> Date: Mon Mar 15 13:13:14 2021 +0900
>
> Make archiver process an auxiliary process.
Oh, I hadn't noticed. Thanks.
--
Robert Haas
EDB: http://www.enterprisedb.
At Tue, 4 May 2021 10:07:51 -0400, Robert Haas wrote in
> On Tue, May 4, 2021 at 12:27 AM Andres Freund wrote:
> > On 2021-05-03 16:49:16 -0400, Robert Haas wrote:
> > > But perhaps we could work around this by allowing pgarch.c to access
> > > shared memory, in which case it could examine the c
On Wed, May 5, 2021 at 4:53 PM Stephen Frost wrote:
> I do note that this comment in timeline.c is, ahem, perhaps over-stating
> things a bit:
>
> * Note: while this is somewhat heuristic, it does positively guarantee
> * that (result + 1) is not a known timeline, and therefore it should
> * be
Greetings,
* Robert Haas (robertmh...@gmail.com) wrote:
> On Wed, May 5, 2021 at 4:31 PM Andres Freund wrote:
> > On 2021-05-05 16:22:21 -0400, Robert Haas wrote:
> > > Huh, I had not thought about that problem. So, at the risk of getting
> > > sidetracked, what exactly are you asking for here? L
On Wed, May 5, 2021 at 4:31 PM Andres Freund wrote:
> On 2021-05-05 16:22:21 -0400, Robert Haas wrote:
> > Huh, I had not thought about that problem. So, at the risk of getting
> > sidetracked, what exactly are you asking for here? Let the extension
> > pick the timeline using an algorithm of its
Greetings,
* Robert Haas (robertmh...@gmail.com) wrote:
> On Wed, May 5, 2021 at 4:13 PM Stephen Frost wrote:
> > That said, in an ideal world, we'd have a way to get the new timeline to
> > switch to in a way that doesn't leave open race conditions, so as long
> > we're talking about big changes