On Fri, Mar 12, 2021 at 12:07 PM tsunakawa.ta...@fujitsu.com
wrote:
>
> From: Kyotaro Horiguchi
> > About the patch, it would be better to change the type of
> > BUF_DROP_FULL_SCAN_THRESHOLD to uint64, even though the current
> > value
> > doesn't harm.
>
OK, attached, to be prepared for the distant future when NBuffers becomes 64-bit.
> From: Tsunakawa, Takayuki/綱川 貴之
> From: Kyotaro Horiguchi
> > About the patch, it would be better to change the type of
> > BUF_DROP_FULL_SCAN_THRESHOLD to uint64, even though the current
> value
> > doesn't harm.
>
> OK, attached, to be prepared for the distant future when NBuffers becomes 64-bit.
>
From: Kyotaro Horiguchi
> About the patch, it would be better to change the type of
> BUF_DROP_FULL_SCAN_THRESHOLD to uint64, even though the current
> value
> doesn't harm.
OK, attached, to be prepared for the distant future when NBuffers becomes
64-bit.
Regards
Takayuki Tsunakawa
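For readers skimming the thread: a minimal sketch of the kind of definition change being discussed. The NBuffers/32 divisor is taken from the threshold discussion later in the thread; the exact macro body is an assumption, not a quote from the attached patch.

    /*
     * Sketch only: widen the threshold type so comparisons and sums against
     * it stay correct even once NBuffers/block counts exceed 32 bits.
     */
    #define BUF_DROP_FULL_SCAN_THRESHOLD    ((uint64) (NBuffers / 32))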
At Fri, 12 Mar 2021 05:26:02 +, "tsunakawa.ta...@fujitsu.com"
wrote in
> From: Thomas Munro
> > > uint64
> >
> > +1
>
> Thank you, the patch is attached (we tend to forget how large our world is...
> 64-bit). We're sorry to cause you trouble.
BUF_DROP_FULL_SCAN_THRESHOLD cannot be large
From: Thomas Munro
> > uint64
>
> +1
Thank you, the patch is attached (we tend to forget how large our world is...
64-bit). We're sorry to cause you trouble.
Regards
Takayuki Tsunakawa
v1-0001-Fix-overflow-when-counting-the-number-of-buffers-.patch
Description: v1-0001-Fix-overflow-when-counting-the-number-of-buffers-.patch
From: Thomas Munro
> On Fri, Mar 12, 2021 at 5:20 PM Amit Kapila wrote:
> > uint64
>
> +1
+1
I'll send a patch later.
Regards
Takayuki Tsunakawa
On Fri, Mar 12, 2021 at 5:20 PM Amit Kapila wrote:
> uint64
+1
On Fri, Mar 12, 2021 at 4:58 AM Thomas Munro wrote:
>
> While rebasing CF #2933 (which drops the _cached stuff and makes this
> optimisation always available, woo), I happened to notice that we're
> summing the size of many relations and forks into a variable
> nBlocksToInvalidate of type BlockNum
While rebasing CF #2933 (which drops the _cached stuff and makes this
optimisation always available, woo), I happened to notice that we're
summing the size of many relations and forks into a variable
nBlocksToInvalidate of type BlockNumber. That could overflow.
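To illustrate the concern, here is a simplified sketch, not the patch itself: the helper and variable names follow the fragments quoted elsewhere in the thread, and the surrounding function body (n, smgr_reln, the cached flag) is assumed context.

    uint64      nBlocksToInvalidate = 0;    /* was BlockNumber (uint32) */
    int         i;
    ForkNumber  j;
    bool        cached;

    for (i = 0; i < n; i++)
        for (j = 0; j <= MAX_FORKNUM; j++)
        {
            /*
             * Each fork's size fits in a 32-bit BlockNumber, but the total
             * across many relations and forks can exceed UINT32_MAX, so the
             * accumulator must be 64-bit.
             */
            nBlocksToInvalidate += smgrnblocks(smgr_reln[i], j, &cached);
        }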
On Wed, January 13, 2021 2:15 PM (JST), Amit Kapila wrote:
> On Wed, Jan 13, 2021 at 7:39 AM Kyotaro Horiguchi
> wrote:
> >
> > At Tue, 12 Jan 2021 08:49:53 +0530, Amit Kapila
> > wrote in
> > > On Fri, Jan 8, 2021 at 7:03 AM Kyotaro Horiguchi
> > > wrote:
> > > >
> > > > At Thu, 7 Jan 2021 09:2
On Wed, Jan 13, 2021 at 7:39 AM Kyotaro Horiguchi
wrote:
>
> At Tue, 12 Jan 2021 08:49:53 +0530, Amit Kapila
> wrote in
> > On Fri, Jan 8, 2021 at 7:03 AM Kyotaro Horiguchi
> > wrote:
> > >
> > > At Thu, 7 Jan 2021 09:25:22 +, "k.jami...@fujitsu.com"
> > > wrote in:
> > > > > Thanks for t
At Tue, 12 Jan 2021 08:49:53 +0530, Amit Kapila wrote
in
> On Fri, Jan 8, 2021 at 7:03 AM Kyotaro Horiguchi
> wrote:
> >
> > At Thu, 7 Jan 2021 09:25:22 +, "k.jami...@fujitsu.com"
> > wrote in:
> > > > Thanks for the detailed tests. NBuffers/32 seems like an appropriate
> > > > value for
On Fri, Jan 8, 2021 at 7:03 AM Kyotaro Horiguchi
wrote:
>
> At Thu, 7 Jan 2021 09:25:22 +, "k.jami...@fujitsu.com"
> wrote in:
> > > Thanks for the detailed tests. NBuffers/32 seems like an appropriate
> > > value for the threshold based on these results. I would like to
> > > slightly modif
At Thu, 7 Jan 2021 09:25:22 +, "k.jami...@fujitsu.com"
wrote in
> On Thu, January 7, 2021 5:36 PM (JST), Amit Kapila wrote:
> >
> > On Wed, Jan 6, 2021 at 6:43 PM k.jami...@fujitsu.com
> > wrote:
> > >
> > > [Results for VACUUM on single relation]
> > > Average of 5 runs.
> > >
> > > 1. %
On Thu, January 7, 2021 5:36 PM (JST), Amit Kapila wrote:
>
> On Wed, Jan 6, 2021 at 6:43 PM k.jami...@fujitsu.com
> wrote:
> >
> > [Results for VACUUM on single relation]
> > Average of 5 runs.
> >
> > 1. % REGRESSION
> > % Regression: (patched - master)/master
> >
> > | rel_size | 128MB | 1GB
On Wed, Jan 6, 2021 at 6:43 PM k.jami...@fujitsu.com
wrote:
>
> [Results for VACUUM on single relation]
> Average of 5 runs.
>
> 1. % REGRESSION
> % Regression: (patched - master)/master
>
> | rel_size | 128MB | 1GB | 20GB | 100GB |
> |----------|-------|-----|------|-------|
> |
>I'd like to take a look at them and redo some of the tests using my machine. I'll
>send my test results in a separate email after this.
I did the same tests with Kirk's scripts using the latest patch on my own
machine. The results look pretty good and similar to Kirk's.
Average of 5 runs.
[VAC
Hi Kirk,
>And if you want to test, I have already indicated the detailed steps including
>the scripts I used. Have fun testing!
Thank you for sharing the test steps and scripts. I'd like to take a look at
them and redo some of the tests using my machine. I'll send my test results in a
separate
On Wed, January 6, 2021 7:04 PM (JST), I wrote:
> I will resume the test similar to Tang, because she also executed the original
> failover test which I have been doing before.
> To avoid confusion and to check if the results from mine and Tang are
> consistent, I also did the recovery/failover tes
On Sunday, January 3, 2021 10:35 PM (JST), Amit Kapila wrote:
> On Sat, Jan 2, 2021 at 7:47 PM k.jami...@fujitsu.com
> wrote:
> >
> > Happy new year. The V38 LGTM.
> > Apologies for a bit of delay on posting the test results, but since
> > it's the start of commitfest, here it goes and the results
Hi Amit,
Sorry for my late reply. Here are my answers for your earlier questions.
>BTW, it is not clear why the advantage for single table is not as big as
>multiple tables with the Truncate command
I guess it's the amount of table blocks that caused this difference. For a
single table I tested the am
On Sat, Jan 2, 2021 at 7:47 PM k.jami...@fujitsu.com
wrote:
>
> Happy new year. The V38 LGTM.
> Apologies for a bit of delay on posting the test results, but since it's the
> start of commitfest, here it goes and the results were interesting.
>
> I executed a VACUUM test using the same approach th
On Wednesday, December 30, 2020 8:58 PM, Amit Kapila wrote:
> On Wed, Dec 30, 2020 at 11:28 AM Tang, Haiying
> wrote:
> >
> > Hi Amit,
> >
> > In last
> > mail(https://www.postgresql.org/message-id/66851e198f6b41eda59e6257182564b6%40G08CNEXMBPEKD05.g08.fujitsu.local),
> > I've sent you t
On Wed, Dec 30, 2020 at 11:28 AM Tang, Haiying
wrote:
>
> Hi Amit,
>
> In last
> mail(https://www.postgresql.org/message-id/66851e198f6b41eda59e6257182564b6%40G08CNEXMBPEKD05.g08.fujitsu.local),
> I've sent you the performance test results(run only 1 time) on single table.
> Here is my the retes
Hi Amit,
In last
mail(https://www.postgresql.org/message-id/66851e198f6b41eda59e6257182564b6%40G08CNEXMBPEKD05.g08.fujitsu.local),
I've sent you the performance test results (run only once) on a single table.
Here are my retested results (averaged over 15 runs), which I think are more
accurate.
I
Hi Amit,
>I think one table with a varying amount of data is sufficient for the vacuum
>test.
>I think with more number of tables there is a greater chance of variation.
>We have previously used multiple tables in one of the tests because of
>the Truncate operation (which uses DropRelFileNodes
Hi Amit,
>I think one table with a varying amount of data is sufficient for the vacuum
>test.
>I think with more number of tables there is a greater chance of variation.
>We have previously used multiple tables in one of the tests because of the
>Truncate operation (which uses DropRelFileNodes
On Fri, Dec 25, 2020 at 9:28 AM Tang, Haiying
wrote:
>
> Hi Amit,
>
> >But how can we conclude NBuffers/128 is the maximum relation size?
> >Because the maximum size would be where the performance is worse than
> >the master, no? I guess we need to try by NBuffers/64, NBuffers/32,
> > till we
Cc: Tom Lane; Thomas Munro; Robert Haas; Tomas Vondra; pgsql-hackers
Subject: Re: [Patch] Optimize dropping of relation buffers using dlist
On Thu, Dec 24, 2020 at 2:31 PM Tang, Haiying
wrote:
>
> Hi Amit, Kirk
>
> >One idea could be to remove "nBlocksToInvalidate <
>
Hi Kirk,
>Perhaps there is a confusing part in the presented table where you indicated
>master(512), master(256), master(128).
>Because the master is not supposed to use the BUF_DROP_FULL_SCAN_THRESHOLD and
>just execute the existing default full scan of NBuffers.
>Or I may have misunderstood
On Wed, Dec 23, 2020 at 6:27 PM k.jami...@fujitsu.com
wrote:
>
>
> It compiles. Passes the regression tests too.
> Your feedbacks are definitely welcome.
>
Thanks, the patches look good to me now. I have slightly edited the
patches for comments, commit messages, and removed the duplicate
code/che
On Thu, December 24, 2020 6:02 PM JST, Tang, Haiying wrote:
> Hi Amit, Kirk
>
> >One idea could be to remove "nBlocksToInvalidate <
> >BUF_DROP_FULL_SCAN_THRESHOLD" part of check "if (cached &&
> >nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)" so that it
> always
> >use optimized path for th
On Thu, Dec 24, 2020 at 2:31 PM Tang, Haiying
wrote:
>
> Hi Amit, Kirk
>
> >One idea could be to remove "nBlocksToInvalidate <
> >BUF_DROP_FULL_SCAN_THRESHOLD" part of check "if (cached &&
> >nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)" so that it always
> >use optimized path for the tests
Hi Amit, Kirk
>One idea could be to remove "nBlocksToInvalidate <
>BUF_DROP_FULL_SCAN_THRESHOLD" part of check "if (cached &&
>nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)" so that it always
>use optimized path for the tests. Then use the relation size as
>NBuffers/128, NBuffers/256, NBuffe
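Spelled out, the test-only tweak being described would be roughly the following sketch; only the quoted condition is from the thread, the surrounding code is assumed.

    /*
     * Test-only: always take the optimized path when all sizes are cached,
     * regardless of how many blocks would have to be invalidated.
     */
    if (cached)     /* was: cached && nBlocksToInvalidate <
                     *      BUF_DROP_FULL_SCAN_THRESHOLD */
    {
        /* optimized per-block lookup path */
    }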
From: Jamison, Kirk/ジャミソン カーク
> It compiles. Passes the regression tests too.
> Your feedbacks are definitely welcome.
The code looks correct and has become more compact. It remains ready for
committer.
Regards
Takayuki Tsunakawa
On Wed, December 23, 2020 5:57 PM (GMT+9), Amit Kapila wrote:
> >
> > At Wed, 23 Dec 2020 04:22:19 +, "tsunakawa.ta...@fujitsu.com"
> > wrote in
> > > From: Amit Kapila
> > > > + /* Get the number of blocks for a relation's fork */
> > > > + block[i][j] = smgrnblocks(smgr_reln[i], j, &cached)
On Wed, Dec 23, 2020 at 10:42 AM Kyotaro Horiguchi
wrote:
>
> At Wed, 23 Dec 2020 04:22:19 +, "tsunakawa.ta...@fujitsu.com"
> wrote in
> > From: Amit Kapila
> > > + /* Get the number of blocks for a relation's fork */
> > > + block[i][j] = smgrnblocks(smgr_reln[i], j, &cached);
> > > +
> >
On Wed, Dec 23, 2020 at 1:07 PM k.jami...@fujitsu.com
wrote:
>
> On Tuesday, December 22, 2020 9:11 PM, Amit Kapila wrote:
>
> > In this code, I am slightly worried about the additional cost of each time
> > checking smgrexists. Consider a case where there are many relations and only
> > one or fe
On Tuesday, December 22, 2020 9:11 PM, Amit Kapila wrote:
> On Tue, Dec 22, 2020 at 2:55 PM Amit Kapila
> wrote:
> > Next, I'll look into DropRelFileNodesAllBuffers()
> > optimization patch.
> >
>
> Review of v35-0004-Optimize-DropRelFileNodesAllBuffers-in-recovery [1]
> =
Hi,
It is possible to come out of the nested loop without goto.
+ bool cached = true;
...
+* to that fork during recovery.
+*/
+ for (i = 0; i < n && cached; i++)
...
+ if (!cached)
+ break;
Here I changed the initial value for cached to true so that we
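Filled out from the fragments above, the suggested structure would look roughly like this; only the quoted lines come from the review, the rest (block, smgr_reln, n, nBlocksToInvalidate) is assumed context.

    bool        cached = true;

    /*
     * Check that the size of every fork is cached; leave both loops as soon
     * as one is not, without using goto.
     */
    for (i = 0; i < n && cached; i++)
    {
        for (j = 0; j <= MAX_FORKNUM; j++)
        {
            block[i][j] = smgrnblocks(smgr_reln[i], j, &cached);
            if (!cached)
                break;          /* the outer loop's "&& cached" ends it too */
            nBlocksToInvalidate += block[i][j];
        }
    }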
At Wed, 23 Dec 2020 04:22:19 +, "tsunakawa.ta...@fujitsu.com"
wrote in
> From: Amit Kapila
> > + /* Get the number of blocks for a relation's fork */
> > + block[i][j] = smgrnblocks(smgr_reln[i], j, &cached);
> > +
> > + if (!cached)
> > + goto buffer_full_scan;
> >
> > Why do we need to u
From: Amit Kapila
> + /* Get the number of blocks for a relation's fork */
> + block[i][j] = smgrnblocks(smgr_reln[i], j, &cached);
> +
> + if (!cached)
> + goto buffer_full_scan;
>
> Why do we need to use goto here? We can simply break from the loop and
> then check if (cached && nBlocksToInvali
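Completing the truncated condition from context elsewhere in the thread, the break-based alternative would then decide the path after the loops, roughly as sketched below (not the committed code).

    /* After the counting loops have finished or broken out early: */
    if (cached && nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)
    {
        /*
         * Optimized path: look up and drop only the known buffers, e.g. via
         * FindAndDropRelFileNodeBuffers() for each relation and fork.
         */
        return;
    }

    /* Otherwise fall through to the full scan of shared buffers. */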
On Tue, Dec 22, 2020 at 5:41 PM Amit Kapila wrote:
>
> On Tue, Dec 22, 2020 at 2:55 PM Amit Kapila wrote:
> >
> > Apart from tests, do let me know if you are happy with the changes in
> > the patch? Next, I'll look into DropRelFileNodesAllBuffers()
> > optimization patch.
> >
>
> Review of v35-00
On Wed, Dec 23, 2020 at 6:30 AM k.jami...@fujitsu.com
wrote:
>
> On Tuesday, December 22, 2020 6:25 PM, Amit Kapila wrote:
>
> > Apart from tests, do let me know if you are happy with the changes in the
> > patch? Next, I'll look into DropRelFileNodesAllBuffers() optimization patch.
>
> Thank you,
On Tuesday, December 22, 2020 6:25 PM, Amit Kapila wrote:
> Attached, please find the updated patch with the following modifications, (a)
> updated comments at various places especially to tell why this is a safe
> optimization, (b) merged the patch for extending the smgrnblocks and
> vacuum optim
On Tue, Dec 22, 2020 at 2:55 PM Amit Kapila wrote:
>
> Apart from tests, do let me know if you are happy with the changes in
> the patch? Next, I'll look into DropRelFileNodesAllBuffers()
> optimization patch.
>
Review of v35-0004-Optimize-DropRelFileNodesAllBuffers-in-recovery [1]
==
On Tue, Dec 22, 2020 at 8:30 AM Kyotaro Horiguchi
wrote:
>
> At Tue, 22 Dec 2020 02:48:22 +, "tsunakawa.ta...@fujitsu.com"
> wrote in
> > From: Amit Kapila
> > > Why would all client backends wait for AccessExclusive lock on this
> > > relation? Say, a client needs a buffer for some other r
On Monday, December 21, 2020 10:25 PM, Amit Kapila wrote:
> I have started doing minor edits to the patch especially planning to write a
> theory why is this optimization safe and here is what I can come up with:
> "To
> remove all the pages of the specified relation forks from the buffer pool, we
From: Kyotaro Horiguchi
> Mmm. If that is true, doesn't the unoptimized path also need the
> rechecking?
Yes, the traditional processing does the recheck after acquiring the buffer
header spinlock.
Regards
Takayuki Tsunakawa
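For reference, the recheck being referred to is roughly the long-standing pattern in bufmgr.c's drop path; this is a paraphrased sketch of that existing code, not a quote from the patch.

    uint32      buf_state = LockBufHdr(bufHdr);

    /*
     * Re-check the buffer tag under the header spinlock: the buffer could
     * have been evicted and reused for another page after the unlocked
     * pre-check.
     */
    if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&
        bufHdr->tag.forkNum == forkNum[j] &&
        bufHdr->tag.blockNum >= firstDelBlock[j])
        InvalidateBuffer(bufHdr);       /* releases the spinlock */
    else
        UnlockBufHdr(bufHdr, buf_state);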
At Tue, 22 Dec 2020 02:48:22 +, "tsunakawa.ta...@fujitsu.com"
wrote in
> From: Amit Kapila
> > Why would all client backends wait for AccessExclusive lock on this
> > relation? Say, a client needs a buffer for some other relation and
> > that might evict this buffer after we release the loc
On Tue, Dec 22, 2020 at 8:12 AM Kyotaro Horiguchi
wrote:
>
> At Tue, 22 Dec 2020 08:08:10 +0530, Amit Kapila
> wrote in
>
> > Why would all client backends wait for AccessExclusive lock on this
> > relation? Say, a client needs a buffer for some other relation and
> > that might evict this buffe
On Tue, Dec 22, 2020 at 8:18 AM tsunakawa.ta...@fujitsu.com
wrote:
>
> From: Amit Kapila
> > Why would all client backends wait for AccessExclusive lock on this
> > relation? Say, a client needs a buffer for some other relation and
> > that might evict this buffer after we release the lock on the
From: Amit Kapila
> Why would all client backends wait for AccessExclusive lock on this
> relation? Say, a client needs a buffer for some other relation and
> that might evict this buffer after we release the lock on the
> partition. In StrategyGetBuffer, it is important to either have a pin
> on
At Tue, 22 Dec 2020 08:08:10 +0530, Amit Kapila wrote
in
> On Tue, Dec 22, 2020 at 7:13 AM tsunakawa.ta...@fujitsu.com
> wrote:
> >
> > From: Amit Kapila
> > > This answers the second part of the question but what about the first
> > > part (We hold a buffer partition lock, and have done a loo
At Tue, 22 Dec 2020 01:42:55 +, "tsunakawa.ta...@fujitsu.com"
wrote in
> From: Amit Kapila
> > This answers the second part of the question but what about the first
> > part (We hold a buffer partition lock, and have done a lookup in th
> > mapping table. Why are we then rechecking the
> >
On Tue, Dec 22, 2020 at 7:13 AM tsunakawa.ta...@fujitsu.com
wrote:
>
> From: Amit Kapila
> > This answers the second part of the question but what about the first
> > part (We hold a buffer partition lock, and have done a lookup in th
> > mapping table. Why are we then rechecking the
> > relfilen
From: Amit Kapila
> This answers the second part of the question but what about the first
> part (We hold a buffer partition lock, and have done a lookup in th
> mapping table. Why are we then rechecking the
> relfilenode/fork/blocknum?)
>
> I think we don't need such a check, rather we can have
On Thu, Nov 19, 2020 at 12:37 PM tsunakawa.ta...@fujitsu.com
wrote:
>
> From: Andres Freund
>
> > Smaller comment:
> >
> > +static void
> > +FindAndDropRelFileNodeBuffers(RelFileNode rnode, ForkNumber *forkNum,
> > int nforks,
> > + BlockNumbe
Hello Kirk,
I noticed you have pushed a new version of your patch which has some changes
to TRUNCATE on TOAST relations.
Although you've done performance tests for the changed part, I'd like to do a
double check of your patch (hope you don't mind).
Below is the updated recovery performance test
From: Jamison, Kirk/ジャミソン カーク
> Attached are the final updated patches.
Looks good, and the patch remains ready for committer. (Personally, I wanted
the code comment to touch upon the TOAST and FSM/VM for the reader, because we
couldn't think of those possibilities and took some time to find w
On Friday, December 11, 2020 10:27 AM, Amit Kapila wrote:
> On Fri, Dec 11, 2020 at 5:54 AM k.jami...@fujitsu.com
> wrote:
> > So should I still not include that information?
> >
>
> I think we can extend your existing comment like: "Otherwise if the size of a
> relation fork is not cached, we pr
From: tsunakawa.ta...@fujitsu.com
> What's valuable as a code comment to describe the remaining issue is that the
You can attach XXX or FIXME in front of the issue description for easier
search. (XXX appears to be used much more often in Postgres.)
Regards
Takayuki Tsunakawa
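For example, an illustrative comment only, not text from the patch:

    /*
     * XXX: the optimization is skipped whenever a fork's size is not found
     * in the relation size cache; revisit once smgrnblocks() can always
     * answer from the cache during recovery.
     */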
On Fri, Dec 11, 2020 at 5:54 AM k.jami...@fujitsu.com
wrote:
>
> On Thursday, December 10, 2020 8:12 PM, Amit Kapila wrote:
> > On Thu, Dec 10, 2020 at 1:40 PM k.jami...@fujitsu.com
> > wrote:
> > >
> > > Yes, I have tested that optimization works for index relations.
> > >
> > > I have attached
From: Jamison, Kirk/ジャミソン カーク
> On Thursday, December 10, 2020 8:12 PM, Amit Kapila wrote:
> > AFAIU, it won't take optimization path only when we have TOAST relation but
> > there is no insertion corresponding to it. If so, then we don't need to
> > mention
> > it specifically because there are
On Thursday, December 10, 2020 8:12 PM, Amit Kapila wrote:
> On Thu, Dec 10, 2020 at 1:40 PM k.jami...@fujitsu.com
> wrote:
> >
> > Yes, I have tested that optimization works for index relations.
> >
> > I have attached the V34, following the conditions that we use "cached"
> > flag for both DropR
On Thu, Dec 10, 2020 at 1:40 PM k.jami...@fujitsu.com
wrote:
>
> Yes, I have tested that optimization works for index relations.
>
> I have attached the V34, following the conditions that we use "cached" flag
> for both DropRelFileNodesBuffers() and DropRelFileNodesBuffers() for
> consistency.
> I
From: Jamison, Kirk/ジャミソン カーク
> I added comment in 0004 the limitation of optimization when there are TOAST
> relations that use NON-PLAIN strategy. i.e. The optimization works if the data
> types used are integers, OID, bytea, etc. But for TOAST-able data types like
> text,
> the optimization wi
On Thursday, December 10, 2020 12:27 PM, Amit Kapila wrote:
> On Thu, Dec 10, 2020 at 7:11 AM Kyotaro Horiguchi
> wrote:
> >
> > At Wed, 9 Dec 2020 16:27:30 +0530, Amit Kapila
> > wrote in
> > > On Wed, Dec 9, 2020 at 6:32 AM Kyotaro Horiguchi
> > > > Mmm. At least btree doesn't need to call smg
On Thu, Dec 10, 2020 at 7:11 AM Kyotaro Horiguchi
wrote:
>
> At Wed, 9 Dec 2020 16:27:30 +0530, Amit Kapila
> wrote in
> > On Wed, Dec 9, 2020 at 6:32 AM Kyotaro Horiguchi
> > > Mmm. At least btree doesn't need to call smgrnblocks except at
> > > expansion, so we cannot get to the optimized path
From: Kyotaro Horiguchi
> Oh, sorry. I wrongly looked at the non-recovery path. smgrnblocks is
> called during buffer loading while in recovery. So, smgrnblocks is called
> for indexes if any update happens on the heap relation.
I misunderstood that you said there's no problem with the TOAST index becaus
At Wed, 9 Dec 2020 16:27:30 +0530, Amit Kapila wrote
in
> On Wed, Dec 9, 2020 at 6:32 AM Kyotaro Horiguchi
> wrote:
> >
> > At Tue, 8 Dec 2020 16:28:41 +0530, Amit Kapila
> > wrote in
> > > On Tue, Dec 8, 2020 at 12:13 PM tsunakawa.ta...@fujitsu.com
> > > wrote:
> > > >
> > > > From: Jamison
On Wed, Dec 9, 2020 at 6:32 AM Kyotaro Horiguchi
wrote:
>
> At Tue, 8 Dec 2020 16:28:41 +0530, Amit Kapila
> wrote in
> > On Tue, Dec 8, 2020 at 12:13 PM tsunakawa.ta...@fujitsu.com
> > wrote:
> > >
> > > From: Jamison, Kirk/ジャミソン カーク
> > > > Because one of the rel's cached value was false, it
On Wednesday, December 9, 2020 10:58 AM, Tsunakawa, Takayuki wrote:
> From: Kyotaro Horiguchi
> > At Tue, 8 Dec 2020 16:28:41 +0530, Amit Kapila
> > wrote in
> > I also can't think of a way to use an optimized path for such cases
> > > but I don't agree with your comment on if it is common enou
From: Kyotaro Horiguchi
> At Tue, 8 Dec 2020 16:28:41 +0530, Amit Kapila
> wrote in
> I also can't think of a way to use an optimized path for such cases
> > but I don't agree with your comment on if it is common enough that we
> > leave this optimization entirely for the truncate path.
>
> An
At Tue, 8 Dec 2020 16:28:41 +0530, Amit Kapila wrote
in
> On Tue, Dec 8, 2020 at 12:13 PM tsunakawa.ta...@fujitsu.com
> wrote:
> >
> > From: Jamison, Kirk/ジャミソン カーク
> > > Because one of the rel's cached value was false, it forced the
> > > full-scan path for TRUNCATE.
> > > Is there a possible
On Tue, Dec 8, 2020 at 12:13 PM tsunakawa.ta...@fujitsu.com
wrote:
>
> From: Jamison, Kirk/ジャミソン カーク
> > Because one of the rel's cached value was false, it forced the
> > full-scan path for TRUNCATE.
> > Is there a possible workaround for this?
>
> Hmm, the other two relfilenodes are for the TOA
From: Jamison, Kirk/ジャミソン カーク
> Because one of the rel's cached value was false, it forced the
> full-scan path for TRUNCATE.
> Is there a possible workaround for this?
Hmm, the other two relfilenodes are for the TOAST table and index of the target
table. I think the INSERT didn't access those
On Tuesday, December 8, 2020 2:35 PM, Amit Kapila wrote:
> On Tue, Dec 8, 2020 at 10:41 AM Kyotaro Horiguchi
> wrote:
> >
> > At Tue, 8 Dec 2020 08:08:25 +0530, Amit Kapila
> > wrote in
> > > On Tue, Dec 8, 2020 at 7:24 AM Kyotaro Horiguchi
> > > wrote:
> > > > We drop
> > > > buffers for the o
On Tue, Dec 8, 2020 at 10:41 AM Kyotaro Horiguchi
wrote:
>
> At Tue, 8 Dec 2020 08:08:25 +0530, Amit Kapila
> wrote in
> > On Tue, Dec 8, 2020 at 7:24 AM Kyotaro Horiguchi
> > wrote:
> > > We drop
> > > buffers for the old relfilenode on truncation anyway.
> > >
> > > What I did is:
> > >
> > >
At Tue, 8 Dec 2020 08:08:25 +0530, Amit Kapila wrote
in
> On Tue, Dec 8, 2020 at 7:24 AM Kyotaro Horiguchi
> wrote:
> > We drop
> > buffers for the old relfilenode on truncation anyway.
> >
> > What I did is:
> >
> > a: Create a physical replication pair.
> > b: On the master, create a table. (
On Tue, Dec 8, 2020 at 7:24 AM Kyotaro Horiguchi
wrote:
>
> I'm out of it more than usual..
>
> At Tue, 08 Dec 2020 09:45:53 +0900 (JST), Kyotaro Horiguchi
> wrote in
> > At Mon, 7 Dec 2020 17:18:31 +0530, Amit Kapila
> > wrote in
> > > On Mon, Dec 7, 2020 at 12:32 PM k.jami...@fujitsu.com
> >
I'm out of it more than usual..
At Tue, 08 Dec 2020 09:45:53 +0900 (JST), Kyotaro Horiguchi
wrote in
> At Mon, 7 Dec 2020 17:18:31 +0530, Amit Kapila
> wrote in
> > On Mon, Dec 7, 2020 at 12:32 PM k.jami...@fujitsu.com
> > wrote:
> > >
> > > On Friday, December 4, 2020 8:27 PM, Amit Kapila
On Tue, Dec 8, 2020 at 6:23 AM Kyotaro Horiguchi
wrote:
>
> At Tue, 08 Dec 2020 09:45:53 +0900 (JST), Kyotaro Horiguchi
> wrote in
> > At Mon, 7 Dec 2020 17:18:31 +0530, Amit Kapila
> > wrote in
> > > Hmm, how is it possible if Insert is done before Truncate? The insert
> > > should happen in
At Tue, 08 Dec 2020 09:45:53 +0900 (JST), Kyotaro Horiguchi
wrote in
> At Mon, 7 Dec 2020 17:18:31 +0530, Amit Kapila
> wrote in
> > Hmm, how is it possible if Insert is done before Truncate? The insert
> > should happen in old RelFileNode only. I have verified by adding a
> > break-in (while
At Mon, 7 Dec 2020 17:18:31 +0530, Amit Kapila wrote
in
> On Mon, Dec 7, 2020 at 12:32 PM k.jami...@fujitsu.com
> wrote:
> >
> > On Friday, December 4, 2020 8:27 PM, Amit Kapila
> > wrote:
> > Hi,
> > I have reported before that it is not always the case that the "cached"
> > flag of
> > srn
On Mon, Dec 7, 2020 at 12:32 PM k.jami...@fujitsu.com
wrote:
>
> On Friday, December 4, 2020 8:27 PM, Amit Kapila
> wrote:
> > On Fri, Nov 27, 2020 at 11:36 AM Kyotaro Horiguchi
> > wrote:
> > >
> > > At Fri, 27 Nov 2020 02:19:57 +, "k.jami...@fujitsu.com"
> > > wrote in
> > > > > From: Ky
On Friday, December 4, 2020 8:27 PM, Amit Kapila
wrote:
> On Fri, Nov 27, 2020 at 11:36 AM Kyotaro Horiguchi
> wrote:
> >
> > At Fri, 27 Nov 2020 02:19:57 +, "k.jami...@fujitsu.com"
> > wrote in
> > > > From: Kyotaro Horiguchi Hello, Kirk.
> > > > Thank you for the new version.
> > >
> > >
On Fri, Nov 27, 2020 at 11:36 AM Kyotaro Horiguchi
wrote:
>
> At Fri, 27 Nov 2020 02:19:57 +, "k.jami...@fujitsu.com"
> wrote in
> > > From: Kyotaro Horiguchi
> > > Hello, Kirk. Thank you for the new version.
> >
> > Hi, Horiguchi-san. Thank you for your very helpful feedback.
> > I'm updat
On Friday, December 4, 2020 12:42 PM, Tang, Haiying wrote:
> Hello, Kirk
>
> Thanks for providing the new patches.
> I did the recovery performance test on them, the results look good. I'd like
> to
> share them with you and everyone else.
> (I also record VACUUM and TRUNCATE execution time on ma
At Thu, 3 Dec 2020 07:18:16 +, "tsunakawa.ta...@fujitsu.com"
wrote in
> From: Jamison, Kirk/ジャミソン カーク
> > Apologies for the delay, but attached are the updated versions to simplify
> > the
> > patches.
>
> Looks good for me. Thanks to Horiguchi-san and Andres-san, the code became
> fu
Thanks for the new version.
This contains only replies. I'll send some further comments in another
mail later.
At Thu, 3 Dec 2020 03:49:27 +, "k.jami...@fujitsu.com"
wrote in
> On Thursday, November 26, 2020 4:19 PM, Horiguchi-san wrote:
> > Hello, Kirk. Thank you for the new ve
Hello, Kirk
Thanks for providing the new patches.
I did the recovery performance test on them, the results look good. I'd like to
share them with you and everyone else.
(I also record VACUUM and TRUNCATE execution time on master/primary in case you
want to have a look.)
1. VACUUM and Failove
From: Jamison, Kirk/ジャミソン カーク
> Apologies for the delay, but attached are the updated versions to simplify the
> patches.
Looks good for me. Thanks to Horiguchi-san and Andres-san, the code became
more compact and easier to read. I've marked this ready for committer.
To the committer:
I
On Thursday, November 26, 2020 4:19 PM, Horiguchi-san wrote:
> Hello, Kirk. Thank you for the new version.
Apologies for the delay, but attached are the updated versions to simplify the
patches.
The changes reflected most of your comments/suggestions.
Summary of changes in the latest versions.
1
From: Kyotaro Horiguchi
> We are relying on the "fact" that the first lseek() call of a
> (startup) process tells the truth. We added an assertion so that we
> make sure that the cached value won't be cleared during recovery. A
> possible remaining danger would be closing of an smgr object of a
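As quoted elsewhere in the thread, the patch has smgrnblocks() report whether its answer came from that cached size; a caller would use it roughly as in the sketch below (assumed signature and simplified control flow).

    bool        cached;
    BlockNumber nblocks;

    /*
     * The extra out-parameter says whether the returned size came from the
     * relation's cached value (trustworthy during recovery) or required a
     * fresh lseek().
     */
    nblocks = smgrnblocks(reln, forknum, &cached);

    if (!cached)
        return;                 /* fall back to the full buffer-pool scan */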
Hi!
I've found this patch is marked RFC in the commitfest application. I've quickly
checked whether it's really ready for commit. It seems there are still
unaddressed review notes. I'm going to switch it to WFA.
--
Regards,
Alexander Korotkov
At Fri, 27 Nov 2020 02:19:57 +, "k.jami...@fujitsu.com"
wrote in
> > From: Kyotaro Horiguchi
> > Hello, Kirk. Thank you for the new version.
>
> Hi, Horiguchi-san. Thank you for your very helpful feedback.
> I'm updating the patches addressing those.
>
> > + if (!smgrexi
> From: Kyotaro Horiguchi
> Hello, Kirk. Thank you for the new version.
Hi, Horiguchi-san. Thank you for your very helpful feedback.
I'm updating the patches addressing those.
> + if (!smgrexists(rels[i], j))
> + continue;
> +
> +
At Thu, 26 Nov 2020 16:18:55 +0900 (JST), Kyotaro Horiguchi
wrote in
> + /* Zero the array of blocks because these will all be dropped anyway */
> + MemSet(firstDelBlocks, 0, sizeof(BlockNumber) * n * (MAX_FORKNUM + 1));
>
> We don't need to prepare nforks, forks and firstDelBlocks for
Hello, Kirk. Thank you for the new version.
At Thu, 26 Nov 2020 03:04:10 +, "k.jami...@fujitsu.com"
wrote in
> On Thursday, November 19, 2020 4:08 PM, Tsunakawa, Takayuki wrote:
> > From: Andres Freund
> > > DropRelFileNodeBuffers() in recovery? The most common path is
> > > DropRelationFi