On Thu, Jun 24, 2021 at 11:04 AM Michael Paquier wrote:
>
> On Thu, Jun 24, 2021 at 12:25:15AM -0400, Tom Lane wrote:
> > Amit Kapila writes:
> >> Okay, I'll change this in back branches and HEAD to keep the code
> >> consistent, or do you think it is better to retain the order in HEAD
> >> as it is and just change it for back-branches?
On Thu, Jun 24, 2021 at 12:25:15AM -0400, Tom Lane wrote:
> Amit Kapila writes:
>> Okay, I'll change this in back branches and HEAD to keep the code
>> consistent, or do you think it is better to retain the order in HEAD
>> as it is and just change it for back-branches?
>
> As I said, I'd keep th
Amit Kapila writes:
>> I think it's OK in HEAD. I agree we shouldn't do it like that
>> in the back branches.
> Okay, I'll change this in back branches and HEAD to keep the code
> consistent, or do you think it is better to retain the order in HEAD
> as it is and just change it for back-branches
On Wed, Jun 23, 2021 at 8:21 PM Tom Lane wrote:
>
> Tomas Vondra writes:
> > While rebasing a patch broken by 4daa140a2f5, I noticed that the patch
> > does this:
>
> > @@ -63,6 +63,7 @@ enum ReorderBufferChangeType
> > REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID,
> > REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
Tomas Vondra writes:
> While rebasing a patch broken by 4daa140a2f5, I noticed that the patch
> does this:
> @@ -63,6 +63,7 @@ enum ReorderBufferChangeType
> REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID,
> REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
> REORDER_BUFFER_CHANGE_INTE
Hi,
On 6/18/21 5:50 AM, Amit Kapila wrote:
> On Thu, Jun 17, 2021 at 2:55 PM Amit Kapila wrote:
>>
>> Your patch looks good to me as well. I would like to retain the
>> comment as it is from master for now. I'll do some testing and push it
>> tomorrow unless there are additional comments.
>>
>
>
On Thu, Jun 17, 2021 at 2:55 PM Amit Kapila wrote:
>
> Your patch looks good to me as well. I would like to retain the
> comment as it is from master for now. I'll do some testing and push it
> tomorrow unless there are additional comments.
>
Pushed!
--
With Regards,
Amit Kapila.
On Thu, Jun 17, 2021 at 1:35 PM Amit Langote wrote:
>
> Hi Dilip,
>
> On Thu, Jun 17, 2021 at 4:45 PM Dilip Kumar wrote:
> > On Thu, Jun 17, 2021 at 12:52 PM Amit Langote
> > wrote:
> >
> > > Oh I missed that the problem report is for the PG13 branch.
> > >
> > > How about the attached patch then?
Hi Dilip,
On Thu, Jun 17, 2021 at 4:45 PM Dilip Kumar wrote:
> On Thu, Jun 17, 2021 at 12:52 PM Amit Langote wrote:
>
> > Oh I missed that the problem report is for the PG13 branch.
> >
> > How about the attached patch then?
> >
> Looks good,
Thanks for checking.
> one minor comment, how about
On Thu, Jun 17, 2021 at 12:52 PM Amit Langote wrote:
> Oh I missed that the problem report is for the PG13 branch.
>
> How about the attached patch then?
>
Looks good, one minor comment, how about making the below comment,
same as on the head?
- if (!found || !entry->replicate_valid)
+ if (!foun
On Thu, Jun 17, 2021 at 3:42 PM Amit Kapila wrote:
> On Thu, Jun 17, 2021 at 10:39 AM Amit Langote wrote:
> >
> > On Thu, Jun 17, 2021 at 12:56 PM Amit Kapila
> > wrote:
> > > On Wed, Jun 16, 2021 at 8:18 PM Tom Lane wrote:
> > > >
> > > > Amit Kapila writes:
> > > > > Pushed!
> > > >
> > > >
On Thu, Jun 17, 2021 at 10:39 AM Amit Langote wrote:
>
> On Thu, Jun 17, 2021 at 12:56 PM Amit Kapila wrote:
> > On Wed, Jun 16, 2021 at 8:18 PM Tom Lane wrote:
> > >
> > > Amit Kapila writes:
> > > > Pushed!
> > >
> > > skink reports that this has valgrind issues:
> > >
> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-06-15%2020%3A49%3A26
On Thu, Jun 17, 2021 at 12:56 PM Amit Kapila wrote:
> On Wed, Jun 16, 2021 at 8:18 PM Tom Lane wrote:
> >
> > Amit Kapila writes:
> > > Pushed!
> >
> > skink reports that this has valgrind issues:
> >
> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-06-15%2020%3A49%3A26
On Wed, Jun 16, 2021 at 8:18 PM Tom Lane wrote:
>
> Amit Kapila writes:
> > Pushed!
>
> skink reports that this has valgrind issues:
>
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-06-15%2020%3A49%3A26
>
The problem happens at line:
rel_sync_cache_relation_cb()
{
..
if
Amit Kapila writes:
> Pushed!
skink reports that this has valgrind issues:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-06-15%2020%3A49%3A26
2021-06-16 01:20:13.344 UTC [2198271][4/0:0] LOG: received replication
command: IDENTIFY_SYSTEM
2021-06-16 01:20:13.384 UTC [21
On Mon, Jun 14, 2021 at 12:06 PM Dilip Kumar wrote:
>
> On Mon, Jun 14, 2021 at 9:44 AM Dilip Kumar wrote:
> >
> > On Mon, Jun 14, 2021 at 8:34 AM Amit Kapila wrote:
> > >
> > > I think the test in this patch is quite similar to what Noah has
> > > pointed in the nearby thread [1] to be failing
On Mon, Jun 14, 2021 at 9:44 AM Dilip Kumar wrote:
>
> On Mon, Jun 14, 2021 at 8:34 AM Amit Kapila wrote:
> >
> > I think the test in this patch is quite similar to what Noah has
> > pointed in the nearby thread [1] to be failing at some intervals. Can
> > you also please once verify the same and
On Mon, Jun 14, 2021 at 8:34 AM Amit Kapila wrote:
>
> I think the test in this patch is quite similar to what Noah has
> pointed in the nearby thread [1] to be failing at some intervals. Can
> you also please once verify the same and if we can expect similar
> failures here then we might want to
On Fri, Jun 11, 2021 at 7:23 PM Amit Kapila wrote:
>
> On Fri, Jun 11, 2021 at 11:37 AM Dilip Kumar wrote:
> >
> > On Thu, Jun 10, 2021 at 7:15 PM Amit Kapila wrote:
> > >
> >
> > >
> > > Please find the patch for HEAD attached. Can you please prepare the
> > > patch for back-branches by doing a
On Fri, Jun 11, 2021 at 11:37 AM Dilip Kumar wrote:
>
> On Thu, Jun 10, 2021 at 7:15 PM Amit Kapila wrote:
> >
>
> >
> > Please find the patch for HEAD attached. Can you please prepare the
> > patch for back-branches by doing all the changes I have done in the
> > patch for HEAD?
>
> Done
>
Thanks
On Thu, Jun 10, 2021 at 2:12 PM Dilip Kumar wrote:
>
> On Wed, Jun 9, 2021 at 8:59 PM Alvaro Herrera wrote:
> >
> > May I suggest to use a different name in the blurt_and_lock_123()
> > function, so that it doesn't conflict with the one in
> > insert-conflict-specconflict? Thanks
>
> Renamed to
On Wed, Jun 9, 2021 at 8:59 PM Alvaro Herrera wrote:
>
> May I suggest to use a different name in the blurt_and_lock_123()
> function, so that it doesn't conflict with the one in
> insert-conflict-specconflict? Thanks
Renamed to blurt_and_lock(), is that fine?
I have fixed other comments and a
May I suggest to use a different name in the blurt_and_lock_123()
function, so that it doesn't conflict with the one in
insert-conflict-specconflict? Thanks
--
Álvaro Herrera  39°49'30"S 73°17'W
On Wed, Jun 9, 2021 at 4:22 PM Amit Kapila wrote:
> On Wed, Jun 9, 2021 at 4:12 PM Dilip Kumar wrote:
> >> Few comments:
> >> 1. The test has a lot of similarities and test duplication with what
> >> we are doing in insert-conflict-specconflict.spec. Can we move it to
> >> insert-conflict-specconflict.spec?
On Wed, Jun 9, 2021 at 4:12 PM Dilip Kumar wrote:
>
> On Wed, Jun 9, 2021 at 11:03 AM Amit Kapila wrote:
>>
>> On Tue, Jun 8, 2021 at 5:16 PM Dilip Kumar wrote:
>> >
>> > Based on the off list discussion, I have modified the test based on
>> > the idea showed in
>> > "isolation/specs/insert-conflict-specconflict.spec"
On Wed, Jun 9, 2021 at 11:03 AM Amit Kapila wrote:
> On Tue, Jun 8, 2021 at 5:16 PM Dilip Kumar wrote:
> >
> > Based on the off list discussion, I have modified the test based on
> > the idea showed in
> > "isolation/specs/insert-conflict-specconflict.spec", other open point
> > we had about the
On Tue, Jun 8, 2021 at 5:16 PM Dilip Kumar wrote:
>
> Based on the off list discussion, I have modified the test based on
> the idea showed in
> "isolation/specs/insert-conflict-specconflict.spec", other open point
> we had about the race condition that how to ensure that when we unlock
> any session
On Mon, Jun 7, 2021 at 6:45 PM Dilip Kumar wrote:
>>
>>
>> 2. In the test, there seems to be an assumption that we can unlock s2
>> and s3 one after another, and then both will start waiting on s-1 but
>> isn't it possible that before s2 start waiting on s1, s3 completes its
>> insertion and then
On Mon, Jun 7, 2021 at 6:34 PM Amit Kapila wrote:
> On Mon, Jun 7, 2021 at 6:04 PM Dilip Kumar wrote:
> >
> > I have fixed all pending review comments and also added a test case
> which is working fine.
> >
>
> Few observations and questions on testcase:
> 1.
> +step "s1_lock_s2" { SELECT pg_advisory_lock(2); }
On Mon, Jun 7, 2021 at 6:04 PM Dilip Kumar wrote:
>
> I have fixed all pending review comments and also added a test case which is
> working fine.
>
Few observations and questions on testcase:
1.
+step "s1_lock_s2" { SELECT pg_advisory_lock(2); }
+step "s1_lock_s3" { SELECT pg_advisory_lock(2);
On Mon, Jun 7, 2021 at 8:46 AM Dilip Kumar wrote:
> On Mon, 7 Jun 2021 at 8:30 AM, Amit Kapila
> wrote:
>
>> On Wed, Jun 2, 2021 at 11:52 AM Amit Kapila
>> wrote:
>> >
>> > On Wed, Jun 2, 2021 at 11:38 AM Dilip Kumar
>> wrote:
>> > >
>> > > On Wed, Jun 2, 2021 at 11:25 AM Amit Kapila
>> wrote
On Mon, 7 Jun 2021 at 8:30 AM, Amit Kapila wrote:
> On Wed, Jun 2, 2021 at 11:52 AM Amit Kapila
> wrote:
> >
> > On Wed, Jun 2, 2021 at 11:38 AM Dilip Kumar
> wrote:
> > >
> > > On Wed, Jun 2, 2021 at 11:25 AM Amit Kapila
> wrote:
> > > >
> > > > I think the same relation case might not create
On Wed, Jun 2, 2021 at 11:52 AM Amit Kapila wrote:
>
> On Wed, Jun 2, 2021 at 11:38 AM Dilip Kumar wrote:
> >
> > On Wed, Jun 2, 2021 at 11:25 AM Amit Kapila wrote:
> > >
> > > I think the same relation case might not create a problem because it
> > > won't find the entry for it in the toast_hash
On Wed, Jun 2, 2021 at 11:38 AM Dilip Kumar wrote:
>
> On Wed, Jun 2, 2021 at 11:25 AM Amit Kapila wrote:
> >
> > On Tue, Jun 1, 2021 at 5:23 PM Dilip Kumar wrote:
> > >
> > > On Tue, Jun 1, 2021 at 12:25 PM Amit Kapila
> > > wrote:
> > >
> > > > >
> > > > > IMHO, as I stated earlier one way t
On Wed, Jun 2, 2021 at 11:25 AM Amit Kapila wrote:
>
> On Tue, Jun 1, 2021 at 5:23 PM Dilip Kumar wrote:
> >
> > On Tue, Jun 1, 2021 at 12:25 PM Amit Kapila wrote:
> >
> > > >
> > > > IMHO, as I stated earlier one way to fix this problem is that we add
> > > > the spec abort operation (DELETE + XLH_DELETE_IS_SUPER flag) to the queue
On Tue, Jun 1, 2021 at 5:23 PM Dilip Kumar wrote:
>
> On Tue, Jun 1, 2021 at 12:25 PM Amit Kapila wrote:
>
> > >
> > > IMHO, as I stated earlier one way to fix this problem is that we add
> > > the spec abort operation (DELETE + XLH_DELETE_IS_SUPER flag) to the
> > > queue, maybe with action name "REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT"
On Tue, Jun 1, 2021 at 8:01 PM Dilip Kumar wrote:
>
>
> The attached patch fixes by queuing the spec abort change and cleaning
> up the toast hash on spec abort. Currently, in this patch I am
> queuing up all the spec abort changes, but as an optimization we can
> avoid
> queuing the spec abort f
On Tue, Jun 1, 2021 at 5:22 PM Dilip Kumar wrote:
>
> On Tue, Jun 1, 2021 at 12:25 PM Amit Kapila wrote:
>
> > >
> > > IMHO, as I stated earlier one way to fix this problem is that we add
> > > the spec abort operation (DELETE + XLH_DELETE_IS_SUPER flag) to the
> > > queue, maybe with action name "REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT"
On Tue, Jun 1, 2021 at 12:25 PM Amit Kapila wrote:
> >
> > IMHO, as I stated earlier one way to fix this problem is that we add
> > the spec abort operation (DELETE + XLH_DELETE_IS_SUPER flag) to the
> > queue, maybe with action name
> > "REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT" and as part of
On Tue, Jun 1, 2021 at 11:44 AM Dilip Kumar wrote:
>
> On Tue, Jun 1, 2021 at 11:00 AM Dilip Kumar wrote:
> >
> > On Tue, Jun 1, 2021 at 10:21 AM Amit Kapila wrote:
> > >
> > >
> > > Right, I think you can remove the change related to stream xact and
> > > probably write some comments on why we
On Tue, Jun 1, 2021 at 11:00 AM Dilip Kumar wrote:
>
> On Tue, Jun 1, 2021 at 10:21 AM Amit Kapila wrote:
> >
> > On Tue, Jun 1, 2021 at 9:59 AM Dilip Kumar wrote:
> > >
> > > On Tue, Jun 1, 2021 at 9:53 AM Amit Kapila
> > > wrote:
> > > >
> > > > On Mon, May 31, 2021 at 8:12 PM Dilip Kumar wrote:
On Tue, Jun 1, 2021 at 10:21 AM Amit Kapila wrote:
>
> On Tue, Jun 1, 2021 at 9:59 AM Dilip Kumar wrote:
> >
> > On Tue, Jun 1, 2021 at 9:53 AM Amit Kapila wrote:
> > >
> > > On Mon, May 31, 2021 at 8:12 PM Dilip Kumar wrote:
> > > >
> > > > On Mon, May 31, 2021 at 6:32 PM Dilip Kumar
> > > >
On Tue, Jun 1, 2021 at 9:59 AM Dilip Kumar wrote:
>
> On Tue, Jun 1, 2021 at 9:53 AM Amit Kapila wrote:
> >
> > On Mon, May 31, 2021 at 8:12 PM Dilip Kumar wrote:
> > >
> > > On Mon, May 31, 2021 at 6:32 PM Dilip Kumar wrote:
> > > >
> > > > I missed to do the test for streaming. I will to that tomorrow and reply back.
On Tue, Jun 1, 2021 at 9:53 AM Amit Kapila wrote:
>
> On Mon, May 31, 2021 at 8:12 PM Dilip Kumar wrote:
> >
> > On Mon, May 31, 2021 at 6:32 PM Dilip Kumar wrote:
> > >
> > > I missed to do the test for streaming. I will to that tomorrow and reply
> > > back.
> >
> > For streaming transaction
On Mon, May 31, 2021 at 8:12 PM Dilip Kumar wrote:
>
> On Mon, May 31, 2021 at 6:32 PM Dilip Kumar wrote:
> >
> > I missed to do the test for streaming. I will to that tomorrow and reply
> > back.
>
> For streaming transactions this issue is not there. Because this
> problem will only occur if
On Mon, May 31, 2021 at 6:32 PM Dilip Kumar wrote:
>
> On Mon, 31 May 2021 at 4:29 PM, Dilip Kumar wrote:
>>
>> On Mon, May 31, 2021 at 8:50 AM Dilip Kumar wrote:
>> >
>> > On Mon, 31 May 2021 at 8:21 AM, Amit Kapila
>> > wrote:
>> >>
>> >> Okay, I think it would be better if we can test this
On Mon, 31 May 2021 at 4:29 PM, Dilip Kumar wrote:
> On Mon, May 31, 2021 at 8:50 AM Dilip Kumar wrote:
> >
> > On Mon, 31 May 2021 at 8:21 AM, Amit Kapila
> wrote:
> >>
> >> Okay, I think it would be better if we can test this once for the
> >> streaming case as well. Dilip, would you like to
On Mon, May 31, 2021 at 8:50 AM Dilip Kumar wrote:
>
> On Mon, 31 May 2021 at 8:21 AM, Amit Kapila wrote:
>>
>> Okay, I think it would be better if we can test this once for the
>> streaming case as well. Dilip, would you like to do that and send the
>> updated patch as per one of the comments by
On Mon, 31 May 2021 at 8:21 AM, Amit Kapila wrote:
> On Sat, May 29, 2021 at 5:45 PM Tomas Vondra
> wrote:
> >
> > On 5/29/21 6:29 AM, Amit Kapila wrote:
> > > On Fri, May 28, 2021 at 5:16 PM Tomas Vondra
> > > wrote:
> > >>
> > >> I wonder if there's a way to free the TOASTed data earlier, ins
On Sat, May 29, 2021 at 5:45 PM Tomas Vondra
wrote:
>
> On 5/29/21 6:29 AM, Amit Kapila wrote:
> > On Fri, May 28, 2021 at 5:16 PM Tomas Vondra
> > wrote:
> >>
> >> I wonder if there's a way to free the TOASTed data earlier, instead of
> >> waiting until the end of the transaction (as this patch
On 5/29/21 6:29 AM, Amit Kapila wrote:
> On Fri, May 28, 2021 at 5:16 PM Tomas Vondra
> wrote:
>>
>> I wonder if there's a way to free the TOASTed data earlier, instead of
>> waiting until the end of the transaction (as this patch does).
>>
>
> IIUC we are anyway freeing the toasted data at the n
On Fri, May 28, 2021 at 5:16 PM Tomas Vondra
wrote:
>
> I wonder if there's a way to free the TOASTed data earlier, instead of
> waiting until the end of the transaction (as this patch does).
>
IIUC we are anyway freeing the toasted data at the next
insert/update/delete. We can try to free at oth
On Fri, May 28, 2021 at 6:01 PM Tomas Vondra
wrote:
>
> On 5/28/21 2:17 PM, Dilip Kumar wrote:
> > On Fri, May 28, 2021 at 5:16 PM Tomas Vondra
> > wrote:
> >> On 5/27/21 6:36 AM, Dilip Kumar wrote:
> >>> On Thu, May 27, 2021 at 9:47 AM Amit Kapila
> >>> wrote:
>
> On Thu, May 27, 202
On 5/28/21 2:17 PM, Dilip Kumar wrote:
> On Fri, May 28, 2021 at 5:16 PM Tomas Vondra
> wrote:
>> On 5/27/21 6:36 AM, Dilip Kumar wrote:
>>> On Thu, May 27, 2021 at 9:47 AM Amit Kapila wrote:
>>>> On Thu, May 27, 2021 at 9:40 AM Dilip Kumar wrote:
>>>> True, but if you do this clean-up
On Fri, May 28, 2021 at 5:16 PM Tomas Vondra
wrote:
> On 5/27/21 6:36 AM, Dilip Kumar wrote:
> > On Thu, May 27, 2021 at 9:47 AM Amit Kapila wrote:
> >>
> >> On Thu, May 27, 2021 at 9:40 AM Dilip Kumar wrote:
> >>
> >> True, but if you do this clean-up in ReorderBufferCleanupTXN then you
> >> do
On 5/27/21 6:36 AM, Dilip Kumar wrote:
> On Thu, May 27, 2021 at 9:47 AM Amit Kapila wrote:
>>
>> On Thu, May 27, 2021 at 9:40 AM Dilip Kumar wrote:
>>
>> True, but if you do this clean-up in ReorderBufferCleanupTXN then you
>> don't need to take care at separate places. Also, toast_hash is st
On Thu, May 27, 2021 at 9:47 AM Amit Kapila wrote:
>
> On Thu, May 27, 2021 at 9:40 AM Dilip Kumar wrote:
>
> True, but if you do this clean-up in ReorderBufferCleanupTXN then you
> don't need to take care at separate places. Also, toast_hash is stored
> in txn so it appears natural to clean it u
On Thu, May 27, 2021 at 9:40 AM Dilip Kumar wrote:
>
> On Thu, May 27, 2021 at 9:26 AM Amit Kapila wrote:
> >
> > >
> > > Can we consider the possibility to destroy the toast_hash in
> > > ReorderBufferCleanupTXN/ReorderBufferTruncateTXN? It will delay the
> > > clean up of memory till the end of
On Thu, May 27, 2021 at 9:26 AM Amit Kapila wrote:
>
> On Thu, May 27, 2021 at 9:02 AM Amit Kapila wrote:
> >
> > On Thu, Mar 25, 2021 at 11:04 AM Ashutosh Bapat
> > wrote:
> > >
> > > Hi All,
> > > We saw OOM in a system where WAL sender consumed Gigabytes of memory
> > > which was never released.
On Thu, May 27, 2021 at 9:03 AM Amit Kapila wrote:
>
> On Thu, Mar 25, 2021 at 11:04 AM Ashutosh Bapat
> wrote:
> >
> > Hi All,
> > We saw OOM in a system where WAL sender consumed Gigabytes of memory
> > which was never released. Upon investigation, we found out that there
> > were many ReorderBufferToastHash memory contexts linked to
> > ReorderBuffer context,
On Thu, May 27, 2021 at 9:02 AM Amit Kapila wrote:
>
> On Thu, Mar 25, 2021 at 11:04 AM Ashutosh Bapat
> wrote:
> >
> > Hi All,
> > We saw OOM in a system where WAL sender consumed Gigabytes of memory
> > which was never released. Upon investigation, we found out that there
> > were many ReorderBufferToastHash memory contexts linked to
> > ReorderBuffer context,
On Thu, May 27, 2021 at 8:27 AM Peter Geoghegan wrote:
>
> On Wed, Mar 24, 2021 at 10:34 PM Ashutosh Bapat
> wrote:
> > Hi All,
> > We saw OOM in a system where WAL sender consumed Gigabytes of memory
> > which was never released. Upon investigation, we found out that there
> > were many ReorderBufferToastHash memory contexts linked to
> > ReorderBuffer context,
On Thu, Mar 25, 2021 at 11:04 AM Ashutosh Bapat
wrote:
>
> Hi All,
> We saw OOM in a system where WAL sender consumed Gigabytes of memory
> which was never released. Upon investigation, we found out that there
> were many ReorderBufferToastHash memory contexts linked to
> ReorderBuffer context, to
On Wed, Mar 24, 2021 at 10:34 PM Ashutosh Bapat
wrote:
> Hi All,
> We saw OOM in a system where WAL sender consumed Gigabytes of memory
> which was never released. Upon investigation, we found out that there
> were many ReorderBufferToastHash memory contexts linked to
> ReorderBuffer context, toge