On Wed, Sep 08, 2021 at 03:08:28PM +0900, Michael Paquier wrote:
> On top of the tests needed for custom GUCs, this needs tests for
> the new int64 reloption. I would suggest adding something in
> dummy_index_am, where we test all the reloption APIs.
My review here was three weeks ago, and th
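
For context on the reloption API being referenced, here is a minimal, hedged
sketch in the style of dummy_index_am's _PG_init(), assuming a hypothetical
add_int64_reloption() analogous to the existing add_int_reloption(); the
option name, default and bounds are made up for illustration.

/* Sketch only: add_int64_reloption() is assumed by analogy with
 * add_int_reloption() and is not an existing core API; the option name
 * and bounds below are illustrative. */
#include "postgres.h"
#include "access/reloptions.h"
#include "storage/lockdefs.h"

static relopt_kind di_relopt_kind;

void
_PG_init(void)
{
    /* Register a new reloption kind, as dummy_index_am does. */
    di_relopt_kind = add_reloption_kind();

    /* Hypothetical 64-bit variant of add_int_reloption(). */
    add_int64_reloption(di_relopt_kind, "option_int64",
                        "Illustrative int64 option",
                        ((int64) 1) << 32,      /* default beyond int32 range */
                        0, PG_INT64_MAX,
                        AccessExclusiveLock);
}
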
On Tue, Sep 07, 2021 at 02:38:13PM +0800, Julien Rouhaud wrote:
> On Tue, Sep 7, 2021 at 12:20 PM Michael Paquier wrote:
>> And a couple of months later, the development cycle of 15 has begun.
>> While re-reading the thread, I got the impression that there is no
>> reason to not move on with at le
On Tue, Sep 7, 2021 at 12:20 PM Michael Paquier wrote:
>
> On Fri, Mar 26, 2021 at 09:54:21AM -0400, David Steele wrote:
> > I'm going to move this to the 2021-07 CF and leave it in the Waiting for
> > Author state. It would probably be best to wait until just before the CF to
> > rebase since anything you do now will likely be broken by the flurry of
> > commits
On Fri, Mar 26, 2021 at 09:54:21AM -0400, David Steele wrote:
> I'm going to move this to the 2021-07 CF and leave it in the Waiting for
> Author state. It would probably be best to wait until just before the CF to
> rebase since anything you do now will likely be broken by the flurry of
> commits
Hi Jim,
On 3/26/21 12:01 AM, Thomas Munro wrote:
On Fri, Mar 26, 2021 at 2:57 AM David Steele wrote:
On 1/22/21 6:46 PM, Finnerty, Jim wrote:
First 3 patches derived from the original 64-bit xid patch set by Alexander
Korotkov
The patches no longer apply
(http://cfbot.cputube.org/patch_32_2960.log), so marked Waiting on Author.
On Fri, Mar 26, 2021 at 2:57 AM David Steele wrote:
> On 1/22/21 6:46 PM, Finnerty, Jim wrote:
> > First 3 patches derived from the original 64-bit xid patch set by Alexander
> > Korotkov
>
> The patches no longer apply
> (http://cfbot.cputube.org/patch_32_2960.log), so marked Waiting on Author.
On 1/22/21 6:46 PM, Finnerty, Jim wrote:
First 3 patches derived from the original 64-bit xid patch set by Alexander
Korotkov
The patches no longer apply
(http://cfbot.cputube.org/patch_32_2960.log), so marked Waiting on Author.
I also removed the PG14 target since this is a fresh patch set
On Thu, Nov 7, 2019 at 10:28 AM Bruce Momjian wrote:
> The above is a very good summary of the constraints that have led to our
> current handling of XID wraparound. If we are concerned about excessive
> vacuum freeze overhead, why is the default autovacuum_freeze_max_age =
> 2 so low? T
On Tue, Nov 5, 2019 at 09:34:52AM +1300, Thomas Munro wrote:
> On Tue, Nov 5, 2019 at 8:45 AM Tomas Vondra
> > Agreed. I think code complexity is part of the trade-off. IMO it's fine
> > to hack existing heap AM initially, and only explore the separate AM if
> > that turns out to be promising.
>
On Tue, Nov 5, 2019 at 8:45 AM Tomas Vondra
wrote:
> On Mon, Nov 04, 2019 at 10:44:53AM -0800, Andres Freund wrote:
> >On 2019-11-04 19:39:18 +0100, Tomas Vondra wrote:
> >> On Mon, Nov 04, 2019 at 10:04:09AM -0800, Andres Freund wrote:
> >> > And "without causing significant issues elsewhere" unf
On Mon, Nov 04, 2019 at 10:44:53AM -0800, Andres Freund wrote:
Hi,
On 2019-11-04 19:39:18 +0100, Tomas Vondra wrote:
On Mon, Nov 04, 2019 at 10:04:09AM -0800, Andres Freund wrote:
> And "without causing significant issues elsewhere" unfortunately
> includes continuing to allow pg_upgrade to work.
Hi,
On 2019-11-04 19:39:18 +0100, Tomas Vondra wrote:
> On Mon, Nov 04, 2019 at 10:04:09AM -0800, Andres Freund wrote:
> > And "without causing significant issues elsewhere" unfortunately
> > includes continuing to allow pg_upgrade to work.
> Yeah. I suppose we could have a different AM implement
On Mon, Nov 04, 2019 at 10:04:09AM -0800, Andres Freund wrote:
Hi,
(I've not read the rest of this thread yet)
On 2019-11-04 16:07:23 +0100, Tomas Vondra wrote:
On Mon, Nov 04, 2019 at 04:39:44PM +0300, Павел Ерёмин wrote:
> And yet, if I try to implement a similar mechanism, if successful,
Hi,
(I've not read the rest of this thread yet)
On 2019-11-04 16:07:23 +0100, Tomas Vondra wrote:
> On Mon, Nov 04, 2019 at 04:39:44PM +0300, Павел Ерёмин wrote:
> > And yet, if I try to implement a similar mechanism, if successful, will my
> > revision be considered?
> >
>
> Why wouldn't it be considered?
On Mon, Nov 04, 2019 at 04:39:44PM +0300, Павел Ерёмин wrote:
And yet, if I try to implement a similar mechanism, if successful, will my
revision be considered?
Why wouldn't it be considered? If you submit a patch that demonstrably
improves the behavior (in this case reduces per-tuple o
And yet, if I try to implement a similar mechanism, if successful, will my
revision be considered?
regards
03.11.2019, 22:15, "Tomas Vondra":
On Sun, Nov 03, 2019 at 02:17:15PM +0300, Павел Ерёмин wrote:
I completely agree with all of the above. Therefore, the proposed
mechanism may entail larger
On Sat, Nov 2, 2019 at 6:15 AM Tomas Vondra
wrote:
> On Fri, Nov 01, 2019 at 12:05:12PM +0300, Павел Ерёмин wrote:
> > Hi.
> > sorry for my English.
> > I want to once again open the topic of 64 bit transaction id. I did not
> > manage to find in the archive o
On Sun, Nov 03, 2019 at 02:17:15PM +0300, Павел Ерёмин wrote:
I completely agree with all of the above. Therefore, the proposed
mechanism may entail larger improvements (and not only VACUUM).
I think the best thing you can do is try implementing this ...
I'm afraid the "improvements" essen
I completely agree with all of the above. Therefore, the proposed mechanism
may entail larger improvements (and not only VACUUM). I can offer the
following solution. For VACUUM, create a hash table. VACUUM scanning the
table sees that the version (tuple1) has t_ctid filled and refers to the
address tu
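
To make the chain-walking part of this idea concrete, here is a small
standalone sketch (not PostgreSQL code) that follows a row's versions through
a simplified next-version pointer; in the real heap that pointer is the
6-byte t_ctid field, and the structs below are deliberately reduced stand-ins.

/* Toy model of following an update chain from one tuple version to the
 * next, as the VACUUM idea above would.  ToyTuple stands in for a heap
 * tuple header; "next" models the t_ctid next-version pointer. */
#include <stdio.h>

#define NO_NEXT (-1)

typedef struct ToyTuple
{
    unsigned int xmin;  /* inserting transaction */
    unsigned int xmax;  /* deleting/updating transaction, 0 if none */
    int          next;  /* slot of the next version, NO_NEXT if live */
} ToyTuple;

int
main(void)
{
    /* One row, updated twice: version 0 -> version 1 -> version 2. */
    ToyTuple page[] = {
        { 100, 105, 1 },
        { 105, 110, 2 },
        { 110, 0, NO_NEXT },
    };

    /* Walk from the oldest version to the live one. */
    for (int i = 0; i != NO_NEXT; i = page[i].next)
        printf("slot %d: xmin=%u xmax=%u next=%d\n",
               i, page[i].xmin, page[i].xmax, page[i].next);

    return 0;
}
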
On Sat, Nov 02, 2019 at 11:35:09PM +0300, Павел Ерёмин wrote:
The proposed option is not much different from what it is now.
We are not trying to save some space - we will reuse the existing one. We
just work in 64 bit transaction counters. Correct me if I'm wrong - the
address of the next version of the row is stored in the 6-byte field t_ctid
in the tuple header
The proposed option is not much different from what it is now. We are not
trying to save some space - we will reuse the existing one. We just work in
64 bit transaction counters. Correct me if I'm wrong - the address of the
next version of the row is stored in the 6-byte field t_ctid in the tuple
header
Павел Ерёмин <shnoor111gm...@yandex.ru> wrote:
Hi. Sorry for my English. I want to once again open the topic of 64 bit
transaction id. I did not manage to find in the archive the option that I
want to discuss, so I write. If I searched poorly, then please forgive me.
The
On Sat, Nov 02, 2019 at 07:07:17PM +0300, Павел Ерёмин wrote:
The proposed option does not need to change the length of either the page
header or tuple header. Therefore, you will not need to physically change
the data.
So how do you link the tuple versions together? Clearly, that
On Fri, Nov 01, 2019 at 12:05:12PM +0300, Павел Ерёмин wrote:
Hi.
sorry for my English.
I want to once again open the topic of 64 bit transaction id. I did not
manage to find in the archive of the option that I want to discuss, so I
write. If I searched poorly, then please forgive me
On Fri, Nov 01, 2019 at 10:25:17AM +0100, Pavel Stehule wrote:
Hi
On Fri, Nov 1, 2019 at 10:11 Павел Ерёмин wrote:
Hi.
sorry for my English.
I want to once again open the topic of 64 bit transaction id. I did not
manage to find in the archive of the option that I want to discuss, so I
Hi
On Fri, Nov 1, 2019 at 10:11 Павел Ерёмин wrote:
> Hi.
> sorry for my English.
> I want to once again open the topic of 64 bit transaction id. I did not
> manage to find in the archive of the option that I want to discuss, so I
> write. If I searched poorly, then please forgive me.
Hi. Sorry for my English. I want to once again open the topic of 64 bit
transaction id. I did not manage to find in the archive the option that I
want to discuss, so I write. If I searched poorly, then please forgive me.
The idea is not very original and probably has already been considered, again
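
As background for the proposal, a 64-bit transaction id is usually thought of
as a 32-bit epoch combined with the familiar 32-bit xid; the standalone
sketch below shows only that arithmetic (PostgreSQL itself exposes a similar
notion as FullTransactionId in access/transam.h) and is not part of any patch.

/* Toy illustration: composing and decomposing a 64-bit xid as
 * (epoch << 32) | xid32.  For illustration only. */
#include <stdint.h>
#include <stdio.h>

static uint64_t
full_xid(uint32_t epoch, uint32_t xid)
{
    return ((uint64_t) epoch << 32) | xid;
}

int
main(void)
{
    uint64_t fx = full_xid(3, 42);              /* epoch 3, xid 42 */

    printf("full xid = %llu\n", (unsigned long long) fx);
    printf("epoch    = %u\n", (uint32_t) (fx >> 32));
    printf("xid      = %u\n", (uint32_t) fx);
    return 0;
}
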
Hi,
On 2019-02-13 12:16:33 +0100, Chris Travers wrote:
> As a note here, I have worked on projects where there could be 2-week-long
> idle-in-transaction states (no joke, we tuned things to only use virtual
> xids for most of that time), and having an ability to set
> idle-in-transaction timeouts
On Thu, Mar 1, 2018 at 11:48 PM Alexander Korotkov <a.korot...@postgrespro.ru> wrote:
> Hi!
>
> On Fri, Mar 2, 2018 at 1:41 AM, Andres Freund wrote:
>
>> On 2018-01-11 01:02:52 +0300, Alexander Korotkov wrote:
>> > As I get from cputube, patchset doesn't compile again. Please find
>> > revised version attached.
On Fri, Mar 2, 2018 at 12:07 AM Andres Freund wrote:
> On 2018-03-02 01:56:00 +0300, Alexander Korotkov wrote:
> > On Fri, Mar 2, 2018 at 1:51 AM, Andres Freund
> wrote:
> >
> > > On 2018-03-02 01:48:03 +0300, Alexander Korotkov wrote:
> > > > Also, the last commitfest is already too late for such big changes.
On 2018-03-02 01:56:00 +0300, Alexander Korotkov wrote:
> On Fri, Mar 2, 2018 at 1:51 AM, Andres Freund wrote:
>
> > On 2018-03-02 01:48:03 +0300, Alexander Korotkov wrote:
> > > Also, the last commitfest is already too late for such big changes.
> > > So, I'm marking this RWF.
> >
> > Agreed. Perhaps extract the 64bit GUC patch and track that separately?
On Fri, Mar 2, 2018 at 1:51 AM, Andres Freund wrote:
> On 2018-03-02 01:48:03 +0300, Alexander Korotkov wrote:
> > Also, the last commitfest is already too late for such big changes.
> > So, I'm marking this RWF.
>
> Agreed. Perhaps extract the 64bit GUC patch and track that separately?
> Seems like something we should just do...
On 2018-03-02 01:48:03 +0300, Alexander Korotkov wrote:
> Also, the last commitfest is already too late for such big changes.
> So, I'm marking this RWF.
Agreed. Perhaps extract the 64bit GUC patch and track that separately?
Seems like something we should just do...
Greetings,
Andres Freund
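
As a rough illustration of what a separately tracked 64-bit GUC facility
might look like from an extension's point of view, here is a hedged sketch
assuming a hypothetical DefineCustomInt64Variable() modeled on the existing
DefineCustomIntVariable(); the setting name is made up.

/* Sketch only: DefineCustomInt64Variable() does not exist in core and is
 * assumed here by analogy with DefineCustomIntVariable(). */
#include "postgres.h"
#include "utils/guc.h"

static int64 big_threshold = 0;

void
_PG_init(void)
{
    DefineCustomInt64Variable("my_ext.big_threshold",   /* made-up name */
                              "Illustrative 64-bit threshold.",
                              NULL,
                              &big_threshold,
                              0,                 /* boot value */
                              0, PG_INT64_MAX,   /* min, max */
                              PGC_SUSET,
                              0,                 /* flags */
                              NULL, NULL, NULL); /* hooks */
}
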
Hi!
On Fri, Mar 2, 2018 at 1:41 AM, Andres Freund wrote:
> On 2018-01-11 01:02:52 +0300, Alexander Korotkov wrote:
> > As I get from cputube, patchset doesn't compile again. Please find
> > revised version attached.
>
> It'd be good if you could maintain the patches as commits with some
> description of why you're doing these changes.
Hi,
On 2018-01-11 01:02:52 +0300, Alexander Korotkov wrote:
> As I get from cputube, patchset doesn't compile again. Please find
> revised version attached.
It'd be good if you could maintain the patches as commits with some
description of why you're doing these changes. It's a bit hard to
fig
Hi, Ryan!
On Sat, Jan 6, 2018 at 10:10 PM, Ryan Murphy wrote:
> Thanks for this contribution!
> I think it's a hard but important problem to upgrade these xids.
>
> Unfortunately, I've confirmed that this patch
> 0001-64bit-guc-relopt-3.patch doesn't apply correctly on my computer.
>
> Here's what I did:
Ryan Murphy writes:
> Alexander, what is the process you're using to create the patch? I've heard
> someone (maybe Tom Lane?) say that he sometimes uses "patch" directly instead
> of "git" to create the patch, with better results. I forget the exact
> command.
Nah, you've got that the other
Thanks for this contribution!
I think it's a hard but important problem to upgrade these xids.
Unfortunately, I've confirmed that this patch 0001-64bit-guc-relopt-3.patch
doesn't apply correctly on my computer.
Here's what I did:
I did a "git pull" to the current HEAD, which is
6271fceb8a4f07d
Since the Patch Tester (http://commitfest.cputube.org/) says this Patch will
not apply correctly, I am tempted to change the status to Waiting on Author.
However, I'm new to the CommitFest process, so I'm leaving the status as-is for
now and asking Stephen Frost whether he agrees.
I haven't tri
On Tue, Nov 28, 2017 at 6:41 AM, Alexander Korotkov
wrote:
> On Mon, Nov 27, 2017 at 10:56 PM, Robert Haas wrote:
>>
>> On Fri, Nov 24, 2017 at 5:33 AM, Alexander Korotkov
>> wrote:
>> > pg_prune_xid makes sense only for heap pages. Once we introduce special
>> > area for heap pages, we can move pg_prune_xid there and save some bytes
>> > in index pages.
On Tue, Nov 28, 2017 at 4:56 AM, Robert Haas wrote:
> On Fri, Nov 24, 2017 at 5:33 AM, Alexander Korotkov
> wrote:
>> pg_prune_xid makes sense only for heap pages. Once we introduce special
>> area for heap pages, we can move pg_prune_xid there and save some bytes in
>> index pages. However, this is an optimization not directly related to
>> 64-bit xids.
On Mon, Nov 27, 2017 at 10:56 PM, Robert Haas wrote:
> On Fri, Nov 24, 2017 at 5:33 AM, Alexander Korotkov
> wrote:
> > pg_prune_xid makes sense only for heap pages. Once we introduce special
> > area for heap pages, we can move pg_prune_xid there and save some bytes
> > in index pages. However, this is an optimization not directly related to
> > 64-bit xids.
On Mon, Nov 27, 2017 at 11:56 AM, Robert Haas wrote:
> On Fri, Nov 24, 2017 at 5:33 AM, Alexander Korotkov
> wrote:
>> pg_prune_xid makes sense only for heap pages. Once we introduce special
>> area for heap pages, we can move pg_prune_xid there and save some bytes in
>> index pages. However, this is an optimization not directly related to
>> 64-bit xids.
On Fri, Nov 24, 2017 at 5:33 AM, Alexander Korotkov
wrote:
> pg_prune_xid makes sense only for heap pages. Once we introduce special
> area for heap pages, we can move pg_prune_xid there and save some bytes in
> index pages. However, this is an optimization not directly related to
> 64-bit xids.
On Fri, Nov 24, 2017 at 4:03 PM, Alexander Korotkov
wrote:
>> > 0002-heap-page-special-1.patch
>> > Putting xid and multixact bases into PageHeaderData would take extra 16
>> > bytes on index pages too. That would be a waste of space for indexes.
>> > This is why I decided to put bases into special area
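
To visualize the special-area idea, here is a hedged sketch of a 16-byte
heap-page special area carrying 64-bit bases, plus the base-plus-offset
arithmetic that would recover a full xid from a tuple's 32-bit xmin; the
struct and field names are illustrative, not the patch's actual definitions.

/* Illustrative only: a minimal heap-page special area holding per-page
 * 64-bit bases, so per-tuple 32-bit xids can be read as offsets from them.
 * Names are not necessarily those used by the patch set. */
#include "postgres.h"
#include "storage/bufpage.h"

typedef struct HeapPageSpecialData
{
    uint64      pd_xid_base;    /* base for per-tuple 32-bit xmin/xmax */
    uint64      pd_multi_base;  /* base for per-tuple multixact ids */
} HeapPageSpecialData;          /* 16 bytes, matching the figure above */

/* Recover a full 64-bit xid from the page base and a tuple's 32-bit xmin. */
static inline uint64
toy_full_xmin(Page page, uint32 xmin)
{
    HeapPageSpecialData *special =
        (HeapPageSpecialData *) PageGetSpecialPointer(page);

    return special->pd_xid_base + xmin;
}
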
Dear Amit,
Thank you for your attention to this patch.
On Thu, Nov 23, 2017 at 4:39 PM, Amit Kapila
wrote:
> On Thu, Jun 22, 2017 at 9:06 PM, Alexander Korotkov
> wrote:
> > Work on this patch took longer than I expected. It is still in not so
> > good shape, but I decided to publish it anyway
On Thu, Jun 22, 2017 at 9:06 PM, Alexander Korotkov
wrote:
> On Wed, Jun 7, 2017 at 11:33 AM, Alexander Korotkov
> wrote:
>>
>> On Tue, Jun 6, 2017 at 4:05 PM, Peter Eisentraut
>> wrote:
>>>
>>> On 6/6/17 08:29, Bruce Momjian wrote:
>>> > On Tue, Jun 6, 2017 at 06:00:54PM +0800, Craig Ringer wrote: