Hi Fujii-san and Sawada-san,
Thank you very much for your replies.
> >> I noticed that this thread and its set of patches have been marked with
> "Returned with Feedback" by yourself.
> >> I find the feature (atomic commit for foreign transactions) very
> >> useful and it will pave the road for h
Hi Sawada-san,
I noticed that this thread and its set of patches have been marked with
"Returned with Feedback" by yourself.
I find the feature (atomic commit for foreign transactions) very useful,
and it will pave the road for distributed transaction management in
Postgres.
Although we
On Wed, June 30, 2021 10:06 (GMT+9), Masahiko Sawada wrote:
> I've attached the new version patch that incorporates the comments from
> Fujii-san and Ikeda-san I got so far. We launch a resolver process per foreign
> server, committing prepared foreign transactions on foreign servers in
> parallel
Hi Sawada-san,
I also tried to play a bit with the latest patches, similarly to Ikeda-san,
with the foreign 2PC parameter enabled/required.
> > >> b. about performance bottleneck (just share my simple benchmark
> > >> results)
> > >>
> > >> The resolver process can be performance bottleneck easily a
> From: Tsunakawa, Takayuki/綱川 貴之
> From: Kyotaro Horiguchi
> > About the patch, it would be better to change the type of
> > BUF_DROP_FULL_SCAN_THRESHOLD to uint64, even though the current
> value
> > doesn't harm.
>
> OK, attached, to be prepared for the distant future when NBuffers becomes
>
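As a minimal standalone sketch of the type change being discussed (not the patch itself; the NBuffers / 32 divisor follows a value mentioned elsewhere in this thread, and all other numbers are illustrative), casting the threshold to a 64-bit type keeps the comparison well-defined even if NBuffers grows very large someday:

#include <stdint.h>
#include <stdio.h>

/* Standalone sketch, not the patch: a 64-bit threshold keeps the
 * "nBlocksToInvalidate < threshold" comparison safe if NBuffers ever grows
 * beyond what narrower arithmetic handles comfortably.  NBuffers / 32 is the
 * divisor mentioned elsewhere in the thread; the values are examples only. */
static int NBuffers = 16384;        /* shared_buffers in 8 kB pages (example) */

#define BUF_DROP_FULL_SCAN_THRESHOLD ((uint64_t) (NBuffers / 32))

int
main(void)
{
    uint64_t    nBlocksToInvalidate = 100;      /* example value */

    if (nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)
        printf("take the optimized per-block lookup path\n");
    else
        printf("scan the whole buffer pool\n");
    return 0;
}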
Hi Alvaro-san and Horiguchi-san,
CC) Iwata-san, Tsunakawa-san
Attached is the V23 of the libpq trace patch.
(1)
From: Álvaro Herrera
> It appears that something is still wrong. I applied libpq pipeline v27 from
> [1]
> and ran src/test/modules/test_libpq/pipeline singlerow, after patching it to
On Wed, Feb 17, 2021 8:14 AM (JST), Alvaro Herrera wrote:
Hi Alvaro,
> Here's a new version, where I've renamed everything to "pipeline". I think
> the
> docs could use some additional tweaks now in order to make a coherent
> story on pipeline mode, how it can be used in a batched fashion, etc.
> From: alvhe...@alvh.no-ip.org
> I'll give this another look tomorrow, but I wanted to pass along that I prefer
> libpq-trace.{c,h} instead of libpq-logging. I also renamed variable "pin" and
> pgindented. I don't have any major reservations about this patch now, so I'll
> mark it ready-for-com
On Mon, January 25, 2021 4:13 PM (JST), Tsunakawa-san wrote:
> Also, please note this as:
>
> > Also, why don't you try running the regression tests with a temporary
> modification to PQtrace() to output the trace to a file? The sole purpose is
> to confirm that this patch doesn't make the test
On Mon, Jan 25, 2021 10:11 PM (JST), Alvaro Herrera wrote:
> On 2021-Jan-25, tsunakawa.ta...@fujitsu.com wrote:
>
> > Iwata-san,
>
> [...]
>
> > Considering these and the compilation error Kirk-san found, I'd like
> > you to do more self-review before I resume this review.
>
> Kindly note that
Hello,
I have not tested nor reviewed the latest patch changes yet, but I am reporting
the compiler errors.
I have been trying to compile the patch since V12 (Alvaro's version), but the
following also needs to be fixed because of the compiler complaints: doc changes and an undeclared INT_MAX
libpq.sgml:5893: parser
Hi Iwata-san,
In addition to Tsunakawa-san's comments, the compiler also complains:
fe-misc.c:678:20: error: ‘lenPos’ may be used uninitialized in this function
[-Werror=maybe-uninitialized]
conn->outMsgStart = lenPos;
There's no need for variable lenPos anymore since we have decided *not*
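A self-contained illustration of why gcc raises that warning (hypothetical code, not fe-misc.c): the variable is assigned on only some paths before being read, so either initializing it at declaration or removing it entirely, as decided here, silences -Wmaybe-uninitialized.

#include <stdio.h>

/* Hypothetical illustration of the warning pattern, not the libpq code:
 * msg_pos is assigned on only one branch, so reading it afterwards can
 * trigger -Wmaybe-uninitialized.  Initializing it at declaration (or removing
 * the variable, as chosen for lenPos above) makes every path well-defined. */
static int
start_message(int beginning_of_msg)
{
    int     msg_pos = 0;        /* initialized so all paths are defined */

    if (beginning_of_msg)
        msg_pos = 5;            /* previously the only assignment */

    return msg_pos;
}

int
main(void)
{
    printf("%d\n", start_message(0));
    return 0;
}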
ote:
> > > >
> > > > At Thu, 7 Jan 2021 09:25:22 +, "k.jami...@fujitsu.com"
> wrote in:
> > > > > > Thanks for the detailed tests. NBuffers/32 seems like an
> > > > > > appropriate value for the threshold based on thes
On Thu, January 7, 2021 5:36 PM (JST), Amit Kapila wrote:
>
> On Wed, Jan 6, 2021 at 6:43 PM k.jami...@fujitsu.com
> wrote:
> >
> > [Results for VACUUM on single relation]
> > Average of 5 runs.
> >
> > 1. % REGRESSION
> > % Regression: (patched - mas
On Wed, January 6, 2021 7:04 PM (JST), I wrote:
> I will resume the test similar to Tang, because she also executed the original
> failover test which I have been doing before.
> To avoid confusion and to check if the results from mine and Tang are
> consistent, I also did the recovery/failover tes
On Sunday, January 3, 2021 10:35 PM (JST), Amit Kapila wrote:
> On Sat, Jan 2, 2021 at 7:47 PM k.jami...@fujitsu.com
> wrote:
> >
> > Happy new year. The V38 LGTM.
> > Apologies for a bit of delay on posting the test results, but since
> > it's the start
On Wednesday, December 30, 2020 8:58 PM, Amit Kapila wrote:
> On Wed, Dec 30, 2020 at 11:28 AM Tang, Haiying
> wrote:
> >
> > Hi Amit,
> >
> > In last
> >
> mail(https://www.postgresql.org/message-id/66851e198f6b41eda59e6257182564b6%40G08CNEXMBPEKD05.g08.fujitsu.local),
> > I've sent you t
On Thu, December 24, 2020 6:02 PM JST, Tang, Haiying wrote:
> Hi Amit, Kirk
>
> >One idea could be to remove "nBlocksToInvalidate <
> >BUF_DROP_FULL_SCAN_THRESHOLD" part of check "if (cached &&
> >nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)" so that it
> always
> >use optimized path for th
On Wed, December 23, 2020 5:57 PM (GMT+9), Amit Kapila wrote:
> >
> > At Wed, 23 Dec 2020 04:22:19 +, "tsunakawa.ta...@fujitsu.com"
> > wrote in
> > > From: Amit Kapila
> > > > + /* Get the number of blocks for a relation's fork */
> > > > + block[i][j] = smgrnblocks(smgr_reln[i], j, &cached)
On Tuesday, December 22, 2020 9:11 PM, Amit Kapila wrote:
> On Tue, Dec 22, 2020 at 2:55 PM Amit Kapila
> wrote:
> > Next, I'll look into DropRelFileNodesAllBuffers()
> > optimization patch.
> >
>
> Review of v35-0004-Optimize-DropRelFileNodesAllBuffers-in-recovery [1]
> =
On Tuesday, December 22, 2020 6:25 PM, Amit Kapila wrote:
> Attached, please find the updated patch with the following modifications, (a)
> updated comments at various places especially to tell why this is a safe
> optimization, (b) merged the patch for extending the smgrnblocks and
> vacuum optim
On Monday, December 21, 2020 10:25 PM, Amit Kapila wrote:
> I have started doing minor edits to the patch especially planning to write a
> theory why is this optimization safe and here is what I can come up with:
> "To
> remove all the pages of the specified relation forks from the buffer pool, we
On Tuesday, December 15, 2020 8:12 PM, Iwata-san wrote:
> > There are some things still to do:
> I worked on some to do.
Hi Iwata-san,
Thank you for updating the patch.
I would recommend registering this patch in the upcoming commitfest
to help us keep track of it. I will follow the thread to pro
On Friday, December 11, 2020 10:27 AM, Amit Kapila wrote:
> On Fri, Dec 11, 2020 at 5:54 AM k.jami...@fujitsu.com
> wrote:
> > So should I still not include that information?
> >
>
> I think we can extend your existing comment like: "Otherwise if the size of a
>
On Thursday, December 10, 2020 8:12 PM, Amit Kapila wrote:
> On Thu, Dec 10, 2020 at 1:40 PM k.jami...@fujitsu.com
> wrote:
> >
> > Yes, I have tested that optimization works for index relations.
> >
> > I have attached the V34, following the conditions that we
On Thursday, December 10, 2020 12:27 PM, Amit Kapila wrote:
> On Thu, Dec 10, 2020 at 7:11 AM Kyotaro Horiguchi
> wrote:
> >
> > At Wed, 9 Dec 2020 16:27:30 +0530, Amit Kapila
> > wrote in
> > > On Wed, Dec 9, 2020 at 6:32 AM Kyotaro Horiguchi
> > > > Mmm. At least btree doesn't need to call smg
On Wednesday, December 9, 2020 10:58 AM, Tsunakawa, Takayuki wrote:
> From: Kyotaro Horiguchi
> > At Tue, 8 Dec 2020 16:28:41 +0530, Amit Kapila
> > wrote in
> > I also can't think of a way to use an optimized path for such cases
> > > but I don't agree with your comment on if it is common enou
On Tuesday, December 8, 2020 2:35 PM, Amit Kapila wrote:
> On Tue, Dec 8, 2020 at 10:41 AM Kyotaro Horiguchi
> wrote:
> >
> > At Tue, 8 Dec 2020 08:08:25 +0530, Amit Kapila
> > wrote in
> > > On Tue, Dec 8, 2020 at 7:24 AM Kyotaro Horiguchi
> > > wrote:
> > > > We drop
> > > > buffers for the o
On Friday, December 4, 2020 8:27 PM, Amit Kapila
wrote:
> On Fri, Nov 27, 2020 at 11:36 AM Kyotaro Horiguchi
> wrote:
> >
> > At Fri, 27 Nov 2020 02:19:57 +, "k.jami...@fujitsu.com"
> > wrote in
> > > > From: Kyotaro Horiguchi Hell
On Friday, December 4, 2020 12:42 PM, Tang, Haiying wrote:
> Hello, Kirk
>
> Thanks for providing the new patches.
> I did the recovery performance test on them, the results look good. I'd like
> to
> share them with you and everyone else.
> (I also record VACUUM and TRUNCATE execution time on ma
rks[][],
firstDelBlock[], nforks, as advised by Horiguchi-san. The memory
allocation for block[][] was also simplified.
So 0004 became simpler and more readable.
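As an aside, one way a block[][] allocation can be collapsed into a single allocation looks like the standalone sketch below; the names and the use of plain malloc are illustrative, not the patch's actual code, which works with palloc and PostgreSQL's BlockNumber.

#include <stdio.h>
#include <stdlib.h>

#define MAX_FORKNUM 3               /* PostgreSQL forks: MAIN, FSM, VM, INIT */

typedef unsigned int BlockNumber;

int
main(void)
{
    int         nrels = 4;          /* example number of relations */

    /* One allocation covering all relations: nrels rows of MAX_FORKNUM + 1
     * block counts, addressable as block[rel][fork]. */
    BlockNumber (*block)[MAX_FORKNUM + 1] =
        malloc(sizeof(BlockNumber[MAX_FORKNUM + 1]) * nrels);

    if (block == NULL)
        return 1;

    block[2][1] = 42;               /* block count of relation 2, fork 1 */
    printf("%u\n", block[2][1]);

    free(block);
    return 0;
}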
> At Thu, 26 Nov 2020 03:04:10 +0000, "k.jami...@fujitsu.com"
> wrote in
> > On Thursday, November 19, 2020 4:08 PM,
> From: Kyotaro Horiguchi
> Hello, Kirk. Thank you for the new version.
Hi, Horiguchi-san. Thank you for your very helpful feedback.
I'm updating the patches addressing those.
> + if (!smgrexists(rels[i], j))
> + continue;
> +
> +
> From: k.jami...@fujitsu.com
> On Thursday, November 19, 2020 4:08 PM, Tsunakawa, Takayuki wrote:
> > From: Andres Freund
> > > DropRelFileNodeBuffers() in recovery? The most common path is
> > > DropRelationFiles()->smgrdounlinkall()->DropRelFileNodesAl
On Thursday, November 19, 2020 4:08 PM, Tsunakawa, Takayuki wrote:
> From: Andres Freund
> > DropRelFileNodeBuffers() in recovery? The most common path is
> > DropRelationFiles()->smgrdounlinkall()->DropRelFileNodesAllBuffers(),
> > which 3/4 doesn't address and 4/4 doesn't mention.
> >
> > 4/4 se
On Thursday, November 12, 2020 1:14 PM, Tsunakawa-san wrote:
> The patch looks OK. I think as Thomas-san suggested, we can remove the
> modification to smgrnblocks() and don't care whether the size is cached or not.
> But I think the current patch is good too, so I'd like to leave it up to a
> comm
On Tuesday, November 17, 2020 9:40 AM, Tsunakawa-san wrote:
> From: Thomas Munro
> > On Mon, Nov 16, 2020 at 11:01 PM Konstantin Knizhnik
> > wrote:
> > > I will look at your implementation more precisely latter.
> >
> > Thanks! Warning: I thought about making a thing like this for a
> > while,
On Tuesday, November 10, 2020 3:10 PM, Tsunakawa-san wrote:
> From: Jamison, Kirk/ジャミソン カーク
> > So I proceeded to update the patches using the "cached" parameter and
> > updated the corresponding comments to it in 0002.
>
> OK, I'm in favor of the name "cached" now, although I first agreed with
>
g
> Subject: Re: [Patch] Optimize dropping of relation buffers using dlist
>
> At Tue, 10 Nov 2020 08:33:26 +0530, Amit Kapila
> wrote in
> > On Tue, Nov 10, 2020 at 8:19 AM k.jami...@fujitsu.com
> > wrote:
> > >
> > > I repeated the recovery performance test f
> From: k.jami...@fujitsu.com
> On Thursday, October 22, 2020 3:15 PM, Kyotaro Horiguchi
> wrote:
> > I'm not sure about the exact steps of the test, but it can be expected
> > if we have many small relations to truncate.
> >
> > Currently BUF_DROP_FULL_S
On Thursday, October 22, 2020 3:15 PM, Kyotaro Horiguchi
wrote:
> I'm not sure about the exact steps of the test, but it can be expected if we
> have many small relations to truncate.
>
> Currently BUF_DROP_FULL_SCAN_THRESHOLD is set to Nbuffers / 512,
> which is quite arbitrary that comes from
:59:17 +0530, Amit Kapila
> wrote in
> > On Wed, Nov 4, 2020 at 8:28 AM k.jami...@fujitsu.com
> > wrote:
> > >
> > > Hi,
> > >
> > > I've updated the patch 0004 (Truncate optimization) with the
> > > previous comments of Tsunakawa-san al
Hi,
I've updated patch 0004 (Truncate optimization); Tsunakawa-san's previous
comments are now addressed in the patch. (Thank you very much for the
review.)
The change here compared to the previous version is that in
DropRelFileNodesAllBuffers()
we don't check for the accurate fla
Hi everyone,
Attached are the updated set of patches (V28).
0004 - Truncate optimization is a new patch, while the rest are similar to V27.
This passes the build, regression and TAP tests.
Apologies for the delay.
I'll post the benchmark test results on SSD soon, considering the suggested
bench
On Thursday, October 22, 2020 10:34 AM, Tsunakawa-san wrote:
> > I have confirmed that the above comment (commenting out the lines in
> > RelationTruncate) solves the issue for non-recovery case.
> > The attached 0004 patch is just for non-recovery testing and is not
> > included in the final set of
On Wednesday, October 21, 2020 4:37 PM, Tsunakawa-san wrote:
> RelationTruncate() invalidates the cached fork sizes as follows. This causes
> smgrnblocks() to return accurate=false, resulting in not running the optimization.
> Try commenting it out for the non-recovery case.
>
> /*
> * Make sure smgr_
On Friday, October 9, 2020 11:12 AM, Horiguchi-san wrote:
> I have some comments on the latest patch.
Thank you for the feedback!
I've attached the latest patches.
> visibilitymap_prepare_truncate(Relation rel, BlockNumber nheapblocks)
> {
> 	BlockNumber newnblocks;
> +	bool		cached;
>
Hi,
> Attached are the updated patches.
Sorry, there was an error in the 3rd patch, so attached is a rebased one.
Regards,
Kirk Jamison
0001-v1-Prevent-invalidating-blocks-in-smgrextend-during-recovery.patch
Description: 0001-v1-Prevent-invalidating-blocks-in-smgrextend-during-recovery.patch
On Thursday, October 8, 2020 3:38 PM, Tsunakawa-san wrote:
> Hi Kirk san,
Thank you for looking into my patches!
> (1)
> + * This returns an InvalidBlockNumber when smgr_cached_nblocks is not
> + * available and when not in recovery path.
>
> + /*
> + * We cannot believe the result from
On Monday, October 5, 2020 8:50 PM, Amit Kapila wrote:
> On Mon, Oct 5, 2020 at 3:04 PM k.jami...@fujitsu.com
> > > 2. Also, the other thing is I have asked for some testing to avoid
> > > the small regression we have for a smaller number of shared buffers
> > > whi
On Monday, October 5, 2020 3:30 PM, Amit Kapila wrote:
> + for (i = 0; i < nforks; i++)
> + {
> +     /* Get the total nblocks for a relation's fork */
> +     nForkBlocks = smgrcachednblocks(smgr_reln, forkNum[i]);
> +
> +     if (nForkBlocks == InvalidBlockNumber)
> +     {
> +         nTotalBlocks = InvalidBlockNumber;
> +         br
On Friday, October 2, 2020 11:45 AM, Horiguchi-san wrote:
> Thanks for the new version.
Thank you for your thoughtful reviews!
I've attached an updated patch addressing the comments below.
1.
> The following description is found in the comment for FlushRelationBuffers.
>
> > * XXX curr
On Thursday, October 1, 2020 4:52 PM, Tsunakawa-san wrote:
> From: Kyotaro Horiguchi
> > I thought that the advantage of this optimization is that we don't
> > need to visit all buffers? If we need to run a full-scan for any
> > reason, there's no point in looking-up already-visited buffers aga
On Thursday, October 1, 2020 11:49 AM, Amit Kapila wrote:
> On Thu, Oct 1, 2020 at 8:11 AM tsunakawa.ta...@fujitsu.com
> wrote:
> >
> > From: Jamison, Kirk/ジャミソン カーク
> > > Recovery performance measurement results below.
> > > But it seems there are overhead even with large shared buffers.
> > >
>
On Tuesday, September 29, 2020 10:35 AM, Horiguchi-san wrote:
> FWIW, I (and maybe Amit) am thinking that the property we need here is not it
> is cached or not but the accuracy of the returned file length, and that the
> "cached" property should be hidden behind the API.
>
> Another reason for n
On Monday, September 28, 2020 5:08 PM, Tsunakawa-san wrote:
> From: Jamison, Kirk/ジャミソン カーク
> > Is my understanding above correct?
>
> No. I simply meant DropRelFileNodeBuffers() calls the following function,
> and avoids the optimization if it returns InvalidBlockNumber.
>
>
> BlockNum
On Monday, September 28, 2020 11:50 AM, Tsunakawa-san wrote:
> From: Amit Kapila
> > I agree with the above two points.
>
> Thank you. I'm relieved to know I didn't misunderstand.
>
>
> > > * Then, add a new function, say, smgrnblocks_cached() that simply
> > > returns
> > the cached block co
On Friday, September 25, 2020 6:02 PM, Tsunakawa-san wrote:
> From: Jamison, Kirk/ジャミソン カーク
> > [Results]
> > Recovery/Failover performance (in seconds). 3 trial runs.
> >
> > | shared_buffers | master | patch | %reg|
> > ||||-|
> > | 128MB |
Hi.
> I'll send performance measurement results in the next email. Thanks a lot for
> the reviews!
Below are the performance measurement results.
I was only able to use a low-spec machine:
CPU 4v, Memory 8GB, RHEL, xfs filesystem.
[Failover/Recovery Test]
1. (Master) Create table (ex. 10,000 tabl
On Thursday, September 24, 2020 1:27 PM, Tsunakawa-san wrote:
> (1)
> + for (cur_blk = firstDelBlock[j]; cur_blk <
> nblocks; cur_blk++)
>
> The right side of "cur_blk <" should not be nblocks, because nblocks is not
> the number of the relation fork anymore.
Right. F
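A standalone sketch of the corrected bound (hypothetical names, not the patch's code): the loop over blocks to drop should run up to the block count of that particular fork, not up to an aggregate nblocks value.

#include <stdio.h>

typedef unsigned int BlockNumber;

/* Hypothetical sketch: iterate from the first block to be dropped up to the
 * number of blocks in *this* fork.  Names are illustrative only. */
static void
drop_fork_blocks(int fork, BlockNumber firstDelBlock, BlockNumber forkNumBlocks)
{
    BlockNumber cur_blk;

    for (cur_blk = firstDelBlock; cur_blk < forkNumBlocks; cur_blk++)
        printf("drop fork %d block %u\n", fork, cur_blk);
}

int
main(void)
{
    drop_fork_blocks(0, 100, 103);  /* drops blocks 100, 101, 102 of fork 0 */
    return 0;
}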
On Wednesday, September 23, 2020 2:37 PM, Tsunakawa, Takayuki wrote:
> > I revised the patch based from my understanding of Horiguchi-san's
> > comment, but I could be wrong.
> > Quoting:
> >
> > "
> > + /* Get the number of blocks for the supplied
> relation's
> > fork */
> > +
On Wednesday, September 23, 2020 11:26 AM, Tsunakawa, Takayuki wrote:
> I looked at v14.
Thank you for checking it!
> (1)
> + /* Get the total number of blocks for the supplied relation's
> fork */
> + for (j = 0; j < nforks; j++)
> + {
> +
On Wednesday, September 16, 2020 5:32 PM, Kyotaro Horiguchi wrote:
> At Wed, 16 Sep 2020 10:05:32 +0530, Amit Kapila
> wrote in
> > On Wed, Sep 16, 2020 at 9:02 AM Kyotaro Horiguchi
> > wrote:
> > >
> > > At Wed, 16 Sep 2020 08:33:06 +0530, Amit Kapila
> > > wrote in
> > > > On Wed, Sep 16, 2020
Hi,
> BTW, I think I see one problem in the code:
> >
> > if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&
> > + bufHdr->tag.forkNum == forkNum[j] && tag.blockNum >=
> > + bufHdr->firstDelBlock[j])
> >
> > Here, I think you need to use 'i' not 'j' for forkNum and
> > firstDelBlock as those
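A self-contained sketch of the indexing point above (illustrative types and names, not PostgreSQL's buffer manager): the fork-number and first-block arrays belong to the loop over forks, so they must be indexed with the fork-loop variable rather than the relation-loop variable.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative types only; not the real buffer descriptor or tag. */
typedef struct
{
    int             forkNum;
    unsigned int    blockNum;
} BufTag;

/* Return true when the buffer's tag falls in the to-be-dropped range of any
 * fork.  Both per-fork arrays are indexed with the fork index "j". */
static bool
buffer_should_be_dropped(const BufTag *tag, const int *forkNums,
                         const unsigned int *firstDelBlock, int nforks)
{
    int     j;

    for (j = 0; j < nforks; j++)
    {
        if (tag->forkNum == forkNums[j] && tag->blockNum >= firstDelBlock[j])
            return true;
    }
    return false;
}

int
main(void)
{
    int             forkNums[] = {0, 2};
    unsigned int    firstDel[] = {100, 0};
    BufTag          tag = {0, 150};     /* fork 0, block 150 */

    printf("%s\n", buffer_should_be_dropped(&tag, forkNums, firstDel, 2)
           ? "drop" : "keep");
    return 0;
}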
On Tuesday, September 8, 2020 1:02 PM, Amit Kapila wrote:
Hello,
> On Mon, Sep 7, 2020 at 1:33 PM k.jami...@fujitsu.com
> wrote:
> >
> > On Wednesday, September 2, 2020 5:49 PM, Amit Kapila wrote:
> > > On Wed, Sep 2, 2020 at 9:17 AM Tom Lane wrote:
> >
On Wednesday, September 2, 2020 5:49 PM, Amit Kapila wrote:
> On Wed, Sep 2, 2020 at 9:17 AM Tom Lane wrote:
> >
> > Amit Kapila writes:
> > > Even if the relation is locked, background processes like
> > > checkpointer can still touch the relation which might cause
> > > problems. Consider a cas
On Wednesday, September 2, 2020 10:31 AM, Kyotaro Horiguchi wrote:
> Hello.
>
> At Tue, 1 Sep 2020 13:02:28 +, "k.jami...@fujitsu.com"
> wrote in
> > On Tuesday, August 18, 2020 3:05 PM (GMT+9), Amit Kapila wrote:
> > > Today, again thinking about thi
On Tuesday, August 18, 2020 3:05 PM (GMT+9), Amit Kapila wrote:
> On Fri, Aug 7, 2020 at 9:33 AM Tom Lane wrote:
> >
> > Amit Kapila writes:
> > > On Sat, Aug 1, 2020 at 1:53 AM Andres Freund
> wrote:
> > >> We could also just use pg_class.relpages. It'll probably mostly be
> > >> accurate enou
On Wednesday, August 19, 2020 7:53 AM (GMT+9), Justin Pryzby wrote:
Hi,
All the patches apply, although when applying them the following appears:
(Stripping trailing CRs from patch; use --binary to disable.)
> During crash recovery, the server writes this to log:
>
> < 2020-08-16 08:46:08.
On Friday, August 7, 2020 12:38 PM, Amit Kapila wrote:
Hi,
> On Thu, Aug 6, 2020 at 6:53 AM k.jami...@fujitsu.com
> wrote:
> >
> > On Saturday, August 1, 2020 5:24 AM, Andres Freund wrote:
> >
> > Hi,
> > Thank you for your constructive review and co
On Saturday, August 1, 2020 5:24 AM, Andres Freund wrote:
Hi,
Thank you for your constructive review and comments.
Sorry for the late reply.
> Hi,
>
> On 2020-07-31 15:50:04 -0400, Tom Lane wrote:
> > Andres Freund writes:
> > > Indeed. The buffer mapping hashtable already is visible as a major
On Friday, July 31, 2020 2:37 AM, Konstantin Knizhnik wrote:
> The following review has been posted through the commitfest application:
> make installcheck-world: tested, passed
> Implements feature: tested, passed
> Spec compliant: not tested
> Documentation: not teste
Hi,
Just found a minor error in a source code comment.
src/include/executor/instrument.h
Attached is the fix.
- long        local_blks_dirtied;    /* # of shared blocks dirtied */
+ long        local_blks_dirtied;    /* # of local blocks dirtied */
Regards,
Kirk Jamison
0001-Fix-
On Wednesday, July 29, 2020 4:55 PM, Konstantin Knizhnik wrote:
> On 17.06.2020 09:14, k.jami...@fujitsu.com wrote:
> > Hi,
> >
> > Since the last posted version of the patch fails, attached is a rebased
> > version.
> > Written upthread were performance results
On Wednesday, July 22, 2020 2:21 PM (GMT+9), David Rowley wrote:
> On Wed, 22 Jul 2020 at 16:40, k.jami...@fujitsu.com
> wrote:
> > I used the default max_parallel_workers & max_worker_processes which is 8 by
> default in postgresql.conf.
> > IOW, I ran all those tests wi
On Tuesday, July 21, 2020 7:33 PM, Amit Kapila wrote:
> On Tue, Jul 21, 2020 at 3:08 PM k.jami...@fujitsu.com
> wrote:
> >
> > On Tuesday, July 21, 2020 12:18 PM, Amit Kapila wrote:
> > > On Tue, Jul 21, 2020 at 8:06 AM k.jami...@fujitsu.com
> > >
> >
On Friday, July 17, 2020 6:18 PM (GMT+9), Amit Kapila wrote:
> On Fri, Jul 17, 2020 at 11:35 AM k.jami...@fujitsu.com
> wrote:
> >
> > On Wednesday, July 15, 2020 12:52 PM (GMT+9), David Rowley wrote:
> >
> > >On Wed, 15 Jul 2020 at 14:51, Amit Kapila wrote:
&
On Tuesday, July 21, 2020 12:18 PM, Amit Kapila wrote:
> On Tue, Jul 21, 2020 at 8:06 AM k.jami...@fujitsu.com
> wrote:
> >
> > Thank you for the advice. I repeated the test as per your advice and
> > average of 3 runs per worker/s planned.
> > It still shows the
On Wednesday, July 15, 2020 12:52 PM (GMT+9), David Rowley wrote:
>On Wed, 15 Jul 2020 at 14:51, Amit Kapila wrote:
>>
>> On Wed, Jul 15, 2020 at 5:55 AM David Rowley wrote:
>>> If we've not seen any performance regressions within 1 week, then I
>>> propose that we (pending final review) push t
On Tuesday, July 14, 2020 3:01 AM (GMT+9), Bossart, Nathan wrote:
Hi Nathan,
>On 7/13/20, 11:02 AM, "Justin Pryzby" wrote:
>> Should bin/vacuumdb support this?
>
>Yes, it should. I've added it in v5 of the patch.
Thank you for the updated patch. I've joined as a reviewer.
I've also noticed tha
Hi,
Since the last posted version of the patch fails, attached is a rebased version.
Written upthread were performance results and some benefits and challenges.
I'd appreciate your feedback/comments.
Regards,
Kirk Jamison
v8-Optimize-dropping-of-relation-buffers-using-dlist.patch
Description:
On Wednesday, March 25, 2020 3:25 PM, Kirk Jamison wrote:
> As for the performance and how it affects read-only workloads:
> Using pgbench, I've kept the overhead to a minimum, less than 1%.
> I'll post follow-up results.
Here's the follow-up results.
I executed similar tests from the top of t
Hi,
I know this might already be late at the end of the CommitFest, but attached
is the latest version of the patch. The previous version only included the buffer
invalidation improvement for VACUUM; the new patch adds the same
routine for TRUNCATE WAL replay.
In summary, this patch aims to improve the buffe
Hi,
I have rebased the patch to keep the CFbot happy.
Apparently, in the previous patch there was a possibility of an infinite loop
when allocating buffers, so I fixed that part and also removed some whitespace.
Kindly check the attached V6 patch.
Any thoughts on this?
Regards,
Kirk Jamison
v6-O
On Wednesday, January 29, 2020 3:56 AM (GMT+9), Mike Lissner wrote:
> Hi all, I didn't get any replies to this. Is this the right way to send in a
> patch to the
> docs?
Hello,
Yes, although your current patch does not apply when I try it on my machine.
But you can still rebase it.
For the revie
Hi Ibrar,
Are you still working on this patch?
Currently the patch does not apply, mainly because
recent commits for parallel vacuum have updated the files touched by this patch.
Kindly rebase it and change the status to "Needs Review" afterwards.
Upon a quick scan of another thread [1] mentioned above,
I bel
Hi,
I have updated the patch (v5).
I tried to reduce lock waiting times by using a spinlock
when inserting/deleting buffers in the new hash table, and
an exclusive lock when looking up buffers to be dropped.
In summary, instead of scanning the whole buffer pool in
shared buffers, we just tra
On Wed, Nov 13, 2019 4:20AM (GMT +9), Tomas Vondra wrote:
> On Tue, Nov 12, 2019 at 10:49:49AM +0000, k.jami...@fujitsu.com wrote:
> >On Thurs, November 7, 2019 1:27 AM (GMT+9), Robert Haas wrote:
> >> On Tue, Nov 5, 2019 at 10:34 AM Tomas Vondra
> >>
> >>
On Wed, Nov 13, 2019 5:34PM (GMT+9), Fujii Masao wrote:
> On Wed, Nov 13, 2019 at 3:57 PM k.jami...@fujitsu.com
> wrote:
> >
> > On Wed, Oct. 2, 2019 5:40 PM, Fujii Masao wrote:
> > > On Tue, Jul 10, 2018 at 3:04 PM Michael Paquier
> wrote:
> > > >
On Wed, Oct. 2, 2019 5:40 PM, Fujii Masao wrote:
> On Tue, Jul 10, 2018 at 3:04 PM Michael Paquier wrote:
> >
> > On Thu, Jul 05, 2018 at 01:42:20AM +0900, Fujii Masao wrote:
> > > TBH, I have no numbers measured by the test.
> > > One question about your test is; how did you measure the *recovery
On Thurs, November 7, 2019 1:27 AM (GMT+9), Robert Haas wrote:
> On Tue, Nov 5, 2019 at 10:34 AM Tomas Vondra
> wrote:
> > 2) This adds another hashtable maintenance to BufferAlloc etc. but
> > you've only done tests / benchmark for the case this optimizes. I
> > think we need to see a ben
Hi,
> Another one that I'd need feedback on is the use of new dlist operations
> for this cached buffer list. I did not use in this patch the existing
> Postgres dlist architecture (ilist.h) because I want to save memory space
> as much as possible especially when NBuffers become large. Both d
Hi,
Currently, we need to scan the WHOLE of shared buffers when VACUUM
truncates off any empty pages at the end of a transaction or when a relation
is TRUNCATEd.
As for our customer's case, we periodically truncate thousands of tables,
and it's possible to TRUNCATE a single table per transaction. This can be
pro