filters or are we going to apply only the insert
> > filters?
> >
>
> AFAIK, currently, initial table sync doesn't respect publication
> actions so it should combine all the filters. What do you think?
Yeah, I have the same opinion.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Wed, Dec 1, 2021 at 12:07 PM Greg Nancarrow wrote:
>
> On Thu, Nov 25, 2021 at 10:17 PM Dilip Kumar wrote:
> >
> > Thanks for the review and many valuable comments, I have fixed all of
> > them except this comment (/* If we got a cancel signal during the copy
> >
On Fri, Mar 23, 2018 at 1:19 PM, Michael Paquier
wrote:
> On Tue, Mar 20, 2018 at 12:00:47PM +0530, Dilip Kumar wrote:
> > Yeah, you are right. Fixed.
>
> So I have been spending a couple of hours playing with your patch, and
> tested various configurations manually, like swi
On Tue, Sep 3, 2019 at 12:11 PM Dilip Kumar wrote:
>
> On Fri, Aug 16, 2019 at 3:54 PM Jeevan Chalke
> wrote:
> >
> 0003:
> +/*
> + * When to send the whole file, % blocks modified (90%)
> + */
> +#define WHOLE_FILE_THRESHOLD 0.9
>
> How this thresh
; > builds. More importantly, I'm not very convinced that it's impossible
> > to hit the no-mapping case. The original proposal was to fall back
> > to current behavior (test the child-table permissions) if we couldn't
> > match the var to the top parent, and I think that that is still a
> > sane proposal.
>
> OK, I've removed the Assert. For child Vars that can't be translated
> to root parent's, permissions are checked with the child relation,
> like before.
Instead of falling back to the child, doesn't it make more sense to
check the permissions on the parent up to which we could translate (it
may not be the root parent)?
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Thu, Sep 5, 2019 at 2:48 PM Dilip Kumar wrote:
>
> On Thu, Sep 5, 2019 at 2:12 PM Amit Langote wrote:
>
> Thanks for the patch; I was just about to press the send button on
> my own patch, but this looks similar to my version.
> >
> > On Wed, Sep 4, 2019
On Thu, Sep 5, 2019 at 3:26 PM Amit Langote wrote:
>
> On Thu, Sep 5, 2019 at 6:33 PM Dilip Kumar wrote:
> > /*
> > + * For inheritance child relations, we also need to remember
> > + * the root parent.
> > + */
> > + if (parent->rtekind == RTE_RELAT
e patch but ran the same case that you have given, and
I can see a similar improvement with the patch.
With the patch 8832.988 ms, without the patch 10252.701 ms (median of three readings)
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
l it be a good idea to expand it to support
multi-level partitioning?
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
-1) * part_size + 1); then it will also
give 1 as the minvalue for the first partition, and that will be the
right thing to do. Am I missing something here?
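Just to spell out the arithmetic I have in mind (a rough sketch only; part
and part_size are illustrative names for the 1-based partition number and
the per-partition row count, int64 as in pgbench's C code):

/* Illustrative only: lower bound for partition number "part" (1-based). */
int64 minvalue = (part - 1) * part_size + 1;   /* part = 1  ->  minvalue = 1 */
int64 next_minvalue = part * part_size + 1;    /* lower bound of the following partition */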
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
ys
within the defined range minvalue=1 and maxvalue=scale*10, right?
>
> It also probably reduces the cost of checking whether a value belongs to
> the first partition because one test is removed, so there is a small
> additional performance benefit beyond principle and cove
_path */
+ "where c.relname = 'pgbench_accounts' and o.n is not null "
+ "group by 1, 2 "
+ "order by 1 asc "
I have a question: wouldn't it be sufficient to just group by 1? Are
you expecting multiple pgbench_accounts tables partitioned by different
strategies under the same schema?
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
else following this thread, would
> like to comment? It is important to know others opinion on this
> because this will change how pgbench behaves with prior versions.
IMHO, we don't need to invent error handling in pgbench;
instead we can rely on the server's errors.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
c_members) <= 1)
+ continue;
+
I am wondering, isn't it possible to just process the missing join
clauses? I mean, 'generate_join_implied_equalities' has only skipped
the ECs which have a const, so
can't we create join clauses only for those ECs and append them to the
"Restri
if there is
no reason to do that, then it is better to keep it how it was earlier.
-extern bool CountOtherDBBackends(Oid databaseId,
- int *nbackends, int *nprepared);
+extern bool CountOtherDBBackends(Oid databaseId, int *nbackends, int
*nprepared);
Unrelated change
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Fri, Sep 20, 2019 at 2:33 PM Richard Guo wrote:
>
> Hi Dilip,
>
> Thank you for reviewing this patch.
>
> On Fri, Sep 20, 2019 at 12:48 PM Dilip Kumar wrote:
>>
>> On Thu, Aug 29, 2019 at 3:15 PM Richard Guo wrote:
>> >
>> >
>> > Att
e
set portal->createSubid to InvalidSubTransactionId. So it seems to me
that the second condition will never be reached. Am I missing something?
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
to extend the
infrastructure such that we can check the status of the transaction
for which we are decoding the change. Also, I think we need to handle
ERRCODE_TRANSACTION_ROLLBACK and ignore it.
I have attached a small patch to handle this which can be applied on
top of your patch set.
--
R
e a lot of duplicate code for whether for_cleanup is true
or false. The only difference is in the error message (whether we say
index cleanup or index vacuuming); otherwise the code is the same for
both cases. Can't we build a string based on the value of
for_cleanup and append it to the error message? That way we can
avoid duplicating this in many places.
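Something along these lines, perhaps (only a rough sketch; elevel,
nworkers_launched and nworkers_planned are illustrative variables, not the
actual patch code):

/* Sketch: pick the phase text once instead of duplicating the whole message. */
const char *phase = for_cleanup ? "index cleanup" : "index vacuuming";

ereport(elevel,
        (errmsg("launched %d parallel vacuum workers for %s (planned: %d)",
                nworkers_launched, phase, nworkers_planned)));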
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
vacuum
Spacing after the "." is not uniform; the previous comment uses 2
spaces and the newly
added one uses 1 space.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Fri, Oct 4, 2019 at 3:35 PM Dilip Kumar wrote:
>
> On Fri, Oct 4, 2019 at 11:01 AM Amit Kapila wrote:
> >
> > On Fri, Oct 4, 2019 at 10:28 AM Masahiko Sawada
> > wrote:
> >>
> Some more comments..
> 1.
> + for (idx = 0; idx <
m which controls more than one thing. If this is
> an index AM(GIN) specific issue we might rather want to control the
> memory limit of pending list cleanup by a separate GUC parameter like
> gin_pending_list_limit, say gin_pending_list_work_mem. And we can
> either s
On Wed, Oct 9, 2019 at 2:40 PM Amit Kapila wrote:
>
> On Wed, Oct 9, 2019 at 2:00 PM Dilip Kumar wrote:
> >
> > On Wed, Oct 9, 2019 at 10:22 AM Masahiko Sawada
> > wrote:
> > >
> > > On Tue, Oct 8, 2019 at 2:45 PM Amit Kapila
> > > wrote:
&
On Thu, Oct 3, 2019 at 1:18 PM Dilip Kumar wrote:
>
> I have attempted to test the performance of (Stream + Spill) vs
> (Stream + BGW pool) and I can see the similar gain what Alexey had
> shown[1].
>
> In addition to this, I have rebased the latest patchset [2] without
>
(txn->toast_hash == NULL)
> return;
>
> + /*
> + * We're going to modify the size of the change, so to make sure the
> + * accounting is correct we'll make it look like we're removing the
> + * change now (with the old size), and then re-add it at the end.
> + */
> + ReorderBufferChangeMemoryUpdate(rb, change, false);
>
> It is not very clear why this change is required. Basically, this is done at
> commit time after which actually we shouldn't attempt to spill these changes.
> This is mentioned in comments as well, but it is not clear if that is the
> case, then how and when accounting can create a problem. If possible, can
> you explain it with an example?
>
IIUC, we are keeping track of the memory in the ReorderBuffer, which is
common across transactions. So even if this transaction is
committing and will not spill to disk, we need to keep the memory
accounting correct for future changes in other transactions.
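In other words, the quoted hunk just follows this pattern (simplified sketch):

/*
 * Keep rb->size (which is shared across all transactions in the
 * ReorderBuffer) consistent while this change is being resized.
 */
ReorderBufferChangeMemoryUpdate(rb, change, false);   /* subtract the old size */

/* ... modify the change (e.g. replace the tuple with its de-toasted version) ... */

ReorderBufferChangeMemoryUpdate(rb, change, true);    /* add the new size back */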
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
> > prepare that atop of
> >0001-Add-logical_decoding_work_mem-to-limit-ReorderBuffer.
> >
>
> Sure, I wasn't really proposing to adding all stats from that patch,
> including those related to streaming. We need to extract just those
> related to spilling. And yes, it needs to be moved right after 0001.
>
I have extracted the spilling-related code into a separate patch on top
of 0001. I have also fixed some bugs and review comments and attached
that as a separate patch. Later I can merge it into the main patch if you
agree with the changes.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
0001-Add-logical_decoding_work_mem-to-limit-ReorderBuffer.patch
Description: Binary data
bugs_and_review_comments_fix.patch
Description: Binary data
0002-Track-statistics-for-spilling.patch
Description: Binary data
he cleanup phase,
> then we don't have the problem for gist indexes.
>
> This is not directly related to this patch, so we can discuss these
> observations in a separate thread as well, but before that, I wanted
> to check your opinion to see if this makes sense to you as this will
> help us in moving this patch forward.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
suggest "amestimatestat"
or "amstatsize" and for copy stat data we can add "amcopystat"? It
would be helpful to extend the parallel vacuum for the indexes which
has extended stats.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
f
> vacuum, in a multi-pass vacuum.
Here is a patch to fix this issue.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
user_correct_memorycontext_gist_stat.patch
Description: Binary data
> > compact and expandable, to make multi-pass vacuums even more rare. So I
> > don't think we need to jump through many hoops to optimize the multi-pass
> > case.
> >
>
> Yeah, that will be a good improvement.
+1
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas wrote:
>
> On 17/10/2019 05:31, Amit Kapila wrote:
> > On Wed, Oct 16, 2019 at 11:20 AM Dilip Kumar wrote:
> >>
> >> On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas wrote:
> >>>
> >>>
't work fine.
>
> So you meant that all workers share the cost count, and if a parallel
> vacuum worker increases the cost and it reaches the limit, does only
> that one worker sleep? Is that okay even though other parallel workers
> are still running, and then the sleep might not help
On Thu, 17 Oct 2019, 14:59 Amit Kapila, wrote:
> On Thu, Oct 17, 2019 at 1:47 PM Dilip Kumar wrote:
> >
> > On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas
> wrote:
> > >
> > > On 17/10/2019 05:31, Amit Kapila wrote:
> > > >
> > > &g
On Thu, Oct 17, 2019 at 4:00 PM Amit Kapila wrote:
>
> On Thu, Oct 17, 2019 at 3:25 PM Dilip Kumar wrote:
> >
> > On Thu, Oct 17, 2019 at 2:12 PM Masahiko Sawada
> > wrote:
> > >
> > > On Thu, Oct 17, 2019 at 5:30 PM Amit Kapila
> > > wro
On Thu, Oct 17, 2019 at 6:32 PM Dilip Kumar wrote:
>
> On Thu, 17 Oct 2019, 14:59 Amit Kapila, wrote:
>>
>> On Thu, Oct 17, 2019 at 1:47 PM Dilip Kumar wrote:
>> >
>> > On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas
>> > wrote:
>>
nly if it is called from parallel vacuum
workers or in general?
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Fri, Oct 18, 2019 at 11:25 AM Amit Kapila wrote:
>
> On Fri, Oct 18, 2019 at 8:45 AM Dilip Kumar wrote:
> >
> > On Thu, Oct 17, 2019 at 4:00 PM Amit Kapila wrote:
> > >
> > > On Thu, Oct 17, 2019 at 3:25 PM Dilip Kumar wrote:
> > > >
> &g
On Fri, Oct 18, 2019 at 10:55 AM Amit Kapila wrote:
>
> On Fri, Oct 18, 2019 at 9:41 AM Dilip Kumar wrote:
> >
> > On Wed, Oct 16, 2019 at 7:22 PM Heikki Linnakangas wrote:
> > >
> > > On 16 October 2019 12:57:03 CEST, Amit Kapila
> > > wrote:
rb->spillTxns and after that set txn->serialized to true. From next
time onwards, if we try to serialize the same transaction we will not
increment rb->spillTxns, so that we count each transaction only
once.
>
> Also, isn't spillTxns count bit confusing, because in some cases it
> will include subtransactions and other cases (where the largest picked
> transaction is a subtransaction) it won't include it?
I did not understand your comment completely. Basically, for every
transaction which we are serializing, we will increase the count the first
time, right, whether it is the main transaction or a sub-transaction?
Am I missing something?
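The logic I have in mind is roughly this (simplified sketch):

/* Count a transaction in spillTxns only the first time it is serialized. */
if (!txn->serialized)
{
    rb->spillTxns++;        /* each (sub)transaction is counted at most once */
    txn->serialized = true;
}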
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
er patch, there is no leak. I will commit the patch early next
> > week unless Heikki or someone wants some more tests.
> >
>
> Pushed.
Thanks.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Mon, Oct 21, 2019 at 2:30 PM Andrey Borodin wrote:
>
> Hi!
>
> > On 18 Oct 2019, at 13:21, Dilip Kumar wrote:
> >
> > On Fri, Oct 18, 2019 at 10:55 AM Amit Kapila
> > wrote:
> >>
> >>
> >> I think we can do it in general as ad
On Mon, Oct 21, 2019 at 2:50 PM Amit Kapila wrote:
>
> On Mon, Oct 21, 2019 at 10:48 AM Dilip Kumar wrote:
> >
> > On Fri, Oct 18, 2019 at 5:32 PM Amit Kapila wrote:
> >
> > > 3.
> > > @@ -2479,7 +2480,7 @@ ReorderBufferSerializeTXN(Reor
On Mon, Oct 21, 2019 at 2:58 PM Andrey Borodin wrote:
>
>
>
> > On 21 Oct 2019, at 11:12, Dilip Kumar wrote:
> >
> > On Mon, Oct 21, 2019 at 2:30 PM Andrey Borodin wrote:
> >>
> >> I've took a look into the patch, and cannot unde
On Fri, Oct 18, 2019 at 5:32 PM Amit Kapila wrote:
>
> On Mon, Oct 14, 2019 at 3:09 PM Dilip Kumar wrote:
> >
> > On Thu, Oct 3, 2019 at 4:03 AM Tomas Vondra
> > wrote:
> > >
> > >
> > > Sure, I wasn't really proposing to adding all
On Tue, Oct 22, 2019 at 9:10 AM Amit Kapila wrote:
>
> On Fri, Oct 18, 2019 at 4:51 PM Dilip Kumar wrote:
> >
> > I have prepared a first version of the patch. Currently, I am
> > performing an empty page deletion for all the cases.
> >
>
> Few co
On Tue, Oct 22, 2019 at 10:46 AM Amit Kapila wrote:
>
> On Thu, Oct 3, 2019 at 1:18 PM Dilip Kumar wrote:
> >
> > I have attempted to test the performance of (Stream + Spill) vs
> > (Stream + BGW pool) and I can see the similar gain what Alexey had
> > shown[1].
llow this convention. Is there any specific reason to do
this?
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Tue, Oct 22, 2019 at 10:53 AM Amit Kapila wrote:
>
> On Tue, Oct 22, 2019 at 10:50 AM Dilip Kumar wrote:
> >
> > On Tue, Oct 22, 2019 at 9:10 AM Amit Kapila wrote:
> > >
> > > On Fri, Oct 18, 2019 at 4:51 PM Dilip Kumar wrote:
> > > >
>
issible, moreover here
> I'm actually doing a refactoring patch, so it seems ok to include that.
>
I see. I was under the impression that we don't use this style in PG.
But since we are already using this style in other places, there is no
objection from my side on this particular point.
Sorry for the noise.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Fri, Oct 18, 2019 at 12:18 PM Dilip Kumar wrote:
>
> On Fri, Oct 18, 2019 at 11:25 AM Amit Kapila wrote:
> >
> > On Fri, Oct 18, 2019 at 8:45 AM Dilip Kumar wrote:
> > >
> > > On Thu, Oct 17, 2019 at 4:00 PM Amit Kapila
> > > wrote:
> >
e == DB_SHUTDOWNED)
    XLogCtl->unloggedLSN = ControlFile->unloggedLSN;
else
    XLogCtl->unloggedLSN = 1;
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Thu, Oct 24, 2019 at 4:21 PM Amit Kapila wrote:
>
> On Thu, Oct 24, 2019 at 11:51 AM Dilip Kumar wrote:
> >
> > On Fri, Oct 18, 2019 at 12:18 PM Dilip Kumar wrote:
> > >
> > > On Fri, Oct 18, 2019 at 11:25 AM Amit Kapila
> > > wrote:
> >
On Thu, Oct 24, 2019 at 8:12 PM Masahiko Sawada wrote:
>
> On Thu, Oct 24, 2019 at 3:21 PM Dilip Kumar wrote:
> >
> > On Fri, Oct 18, 2019 at 12:18 PM Dilip Kumar wrote:
> > >
> > > On Fri, Oct 18, 2019 at 11:25 AM Amit Kapila
> > > wrote:
> >
On Fri, Oct 25, 2019 at 10:22 AM Masahiko Sawada wrote:
>
> On Fri, Oct 25, 2019 at 12:44 PM Dilip Kumar wrote:
> >
> > On Thu, Oct 24, 2019 at 8:12 PM Masahiko Sawada
> > wrote:
> > >
> > > On Thu, Oct 24, 2019 at 3:21 PM Dilip Kumar wrote:
> &
On Fri, Oct 25, 2019 at 7:42 AM tsunakawa.ta...@fujitsu.com
wrote:
>
> From: Dilip Kumar
> > I have noticed that in StartupXLOG we also reset it to 1; you might
> > want to fix that as well?
> >
> > StartupXLOG
> > {
> > ...
> > /*
> >
On Fri, Oct 25, 2019 at 9:19 PM Masahiko Sawada wrote:
>
> On Fri, Oct 25, 2019 at 2:06 PM Dilip Kumar wrote:
> >
> > On Fri, Oct 25, 2019 at 10:22 AM Masahiko Sawada
> > wrote:
> > >
> > > For more detail of my idea it is that the first worker who en
On Mon, Jul 22, 2019 at 3:51 PM Dilip Kumar wrote:
>
Please find my review comments for
0013-Allow-foreground-transactions-to-perform-undo-action
+ /* initialize undo record locations for the transaction */
+ for (i = 0; i < UndoLogCategories; i++)
+ {
+ s->start_urec_ptr[i] = InvalidU
> * initialized. After these many attempts it will return error and the user
> * can retry the operation.
> */
> #define ROLLBACK_HT_INIT_WAIT_TRY 60
>
> %s/error/an error
>
This macro is also added under 0014.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
y_offset, size) <= slot->meta.end)
+ {
+ *need_xact_header = false;
+ return try_offset;
+ }
Here also you are returning try_offset directly instead of an UndoRecPtr
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Thu, Jul 25, 2019 at 11:25 AM Dilip Kumar wrote:
>
> Hi Thomas,
>
> I have started reviewing 0003-Add-undo-log-manager; I haven't yet
> finished, but in some places I noticed that instead of UndoRecPtr you are
> directly
> using UndoLogOffset, which seems like bugs
server log; as you have not enabled the logging collector,
the WARNING is printed on your console.
>
> I'm not sure if this is already known.
> I'm not sure if this is widely used scenario or not.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Wed, Jul 17, 2019 at 2:39 PM Amit Langote wrote:
>
> On Wed, Jul 10, 2019 at 2:43 PM Dilip Kumar wrote:
> > On Wed, Jul 10, 2019 at 10:15 AM Amit Langote
> > wrote:
> > > Thanks for checking. There has been a lot of churn in the inheritance
> > > plann
On Fri, Jul 19, 2019 at 2:28 PM Amit Kapila wrote:
>
> On Thu, Jul 11, 2019 at 9:17 AM Dilip Kumar wrote:
> >
> > On Thu, Jul 11, 2019 at 12:38 AM Robert Haas wrote:
> > >
> > > I don't like the fact that undoaccess.c has a new global,
> > &g
On Tue, Jul 30, 2019 at 12:21 PM Thomas Munro wrote:
>
> On Tue, Jul 30, 2019 at 5:03 PM Dilip Kumar wrote:
> >
> > I think this idea is good for the DO time but during REDO time it will
> > not work as we will not have the transaction state. Having said that
> >
when it's shared
> memory. Copying data in tiny increments, with a bunch of branches, is
> even more expensive, especially when it's shared memory, especially when
> all that shared memory is locked at once.
My idea is, instead of keeping all these fields duplicated in the
context, to just allocate a single memory segment equal to the expected
record size (maybe the payload data can be kept separate). Now, based on
uur_info, pack all the fields of UnpackedUndoRecord into that memory
segment. After that, in InsertUndoData we just need one call to
InsertUndoBytes for copying the complete header in one shot and another
call for copying the payload data. Does this sound reasonable to you?
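Roughly something like this (sketch only; PackUndoRecordHeader is a
hypothetical helper name, not part of the current patch):

/* Pack all header fields implied by urec->uur_info into one contiguous buffer ... */
Size    hdr_size = UndoRecordExpectedSize(urec);
char   *packed = palloc(hdr_size);

PackUndoRecordHeader(urec, packed);     /* hypothetical helper: serialize the header */

/*
 * ... so that InsertUndoData needs a single InsertUndoBytes() call for the
 * whole header (plus one for the payload), instead of one call per field.
 * Return-value / partial-write handling omitted here.
 */
InsertUndoBytes(packed, hdr_size, &writeptr, endptr,
                &ucontext->already_processed, &ucontext->partial_bytes);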
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
the last undo record.
I agree that in the undo-processing patch set we always need to fetch one
record, so instead of repeating this pattern everywhere we can write
one function and move this sequence of calls into that function.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Tue, Jul 30, 2019 at 1:54 PM Dilip Kumar wrote:
>
> On Tue, Jul 30, 2019 at 12:21 PM Thomas Munro wrote:
> >
> > One data structure that could perhaps hold this would be
> > UndoLogTableEntry (the per-backend cache, indexed by undo log number,
> > with pretty
On Thu, Jul 18, 2019 at 4:58 PM Amit Kapila wrote:
>
> On Tue, Jul 16, 2019 at 2:20 PM Dilip Kumar wrote:
> >
>
> Few comments on the new patch:
>
> 1.
> Additionally,
> +there is a mechanism for multi-insert, wherein multiple records are prepared
> +and insert
On Fri, Jul 19, 2019 at 2:28 PM Amit Kapila wrote:
>
> On Thu, Jul 11, 2019 at 9:17 AM Dilip Kumar wrote:
> >
> > On Thu, Jul 11, 2019 at 12:38 AM Robert Haas wrote:
> > >
> > > I don't like the fact that undoaccess.c has a new global,
> > &g
>
>
> %s/the page the/the page then
>
> 12)
>
> /*
> * If the transaction's undo records are split across the undo logs. So
> * we need to update our own transaction header in the previous log.
> */
>
> double space between "to" and &
On Tue, Jul 30, 2019 at 12:21 PM Thomas Munro wrote:
>
> Hi Dilip,
>
> > commit 2f3c127b9e8bc7d27cf7adebff0a355684dfb94e
> > Author: Dilip Kumar
> > Date: Thu May 2 11:28:13 2019 +0530
> >
> >Provide interfaces to store and fetch undo recor
we only find this out by allocating? This seems like an API
> deficiency of the storage layer to me. The information is in the und log
> slot's metadata, no?
I agree with this. I think, if Thomas agrees, we can provide an API in
the undo log layer which can give us this information before we do the
actual allocation.
>
>
> > + urec->uur_next = InvalidUndoRecPtr;
> > + UndoRecordSetInfo(urec);
> > + urec->uur_info |= UREC_INFO_TRANSACTION;
> > + urec->uur_info |= UREC_INFO_LOGSWITCH;
> > + size = UndoRecordExpectedSize(urec);
> > +
> > + /* Allocate space for the record. */
> > + if (InRecovery)
> > + {
> > + /*
> > + * We'll figure out where the space needs to be allocated by
> > + * inspecting the xlog_record.
> > + */
> > + Assert(context->alloc_context.persistence == UNDO_PERMANENT);
> > + urecptr = UndoLogAllocateInRecovery(&context->alloc_context,
> > +                                     XidFromFullTransactionId(txid),
> > +                                     size,
> > +                                     &need_xact_header,
> > +                                     &last_xact_start,
> > +                                     &prevlog_xact_start,
> > +                                     &prevlogurp);
> > + }
> > + else
> > + {
> > + /* Allocate space for writing the undo record. */
>
> That's basically the same comment as before the if.
Removed
>
>
> > + urecptr = UndoLogAllocate(&context->alloc_context,
> > +                           size,
> > +                           &need_xact_header, &last_xact_start,
> > +                           &prevlog_xact_start, &prevlog_insert_urp);
> > +
> > + /*
> > + * If prevlog_xact_start is a valid undo record pointer that
> > means
> > + * this transaction's undo records are split across undo logs.
> > + */
> > + if (UndoRecPtrIsValid(prevlog_xact_start))
> > + {
> > + uint16 prevlen;
> > +
> > + /*
> > + * If undo log is switch during transaction then we
> > must get a
>
> "is switch" is right.
This code is removed now.
>
> > +/*
> > + * Insert a previously-prepared undo records.
>
> s/a//
Fixed
>
>
> More tomorrow.
>
refer the latest patch at
https://www.postgresql.org/message-id/CAFiTN-uf4Bh0FHwec%2BJGbiLq%2Bj00V92W162SLd_JVvwW-jwREg%40mail.gmail.com
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
))
> > + {
> > + page = BufferGetPage(buffer);
> > + pagedata = (char *) page;
> > + }
> > +
> > + /*
> > + * Length if the previous undo record is store at the end of that
> > record
> > + * so just fe
+ {
> > + int len = ucontext->urec_payload.urec_tuple_len;
> > +
> > + if (len > 0)
> > + {
> > + /* Insert tuple data. */
> > + if (!InsertUndoBytes((char *) ucontext->urec_tupledata,
> > +                      len, &writeptr, endptr,
> > +                      &ucontext->already_processed,
> > +                      &ucontext->partial_bytes))
> > + return;
> > + }
> > + ucontext->stage = UNDO_PACK_STAGE_UNDO_LENGTH;
> > + }
> > + /* fall through */
> > +
> > + case UNDO_PACK_STAGE_UNDO_LENGTH:
> > + /* Insert undo length. */
> > + if (!InsertUndoBytes((char *) &ucontext->undo_len,
> > +                      sizeof(uint16), &writeptr, endptr,
> > +                      &ucontext->already_processed,
> > +                      &ucontext->partial_bytes))
> > + return;
> > +
> > + ucontext->stage = UNDO_PACK_STAGE_DONE;
> > + /* fall through */
> > +
> > + case UNDO_PACK_STAGE_DONE:
> > + /* Nothing to be done. */
> > + break;
> > +
> > + default:
> > + Assert(0); /* Invalid stage */
> > + }
> > +}
>
> I don't understand. The only purpose of this is that we can partially
> write a packed-but-not-actually-packed record onto a bunch of pages? And
> for that we have an endless chain of copy and pasted code calling
> InsertUndoBytes()? Copying data into shared buffers in tiny increments?
>
> If we need to do this, what is the whole packed record format good for?
> Except for adding a bunch of functions with 10++ ifs and nearly
> identical code?
>
> Copying data is expensive. Copying data in tiny increments is more
> expensive. Copying data in tiny increments, with a bunch of branches, is
> even more expensive. Copying data in tiny increments, with a bunch of
> branches, is even more expensive, especially when it's shared
> memory. Copying data in tiny increments, with a bunch of branches, is
> even more expensive, especially when it's shared memory, especially when
> all that shared memory is locked at once.
>
>
> > +/*
> > + * Read the undo record from the input page to the unpack undo context.
> > + *
> > + * Caller can call this function multiple times until desired stage is
> > reached.
> > + * This will read the undo record from the page and store the data into
> > unpack
> > + * undo context, which can be later copied to unpacked undo record by
> > calling
> > + * FinishUnpackUndo.
> > + */
> > +void
> > +UnpackUndoData(UndoPackContext *ucontext, Page page, int starting_byte)
> > +{
> > + char *readptr = (char *) page + starting_byte;
> > + char *endptr = (char *) page + BLCKSZ;
> > +
> > + switch (ucontext->stage)
> > + {
> > + case UNDO_PACK_STAGE_HEADER:
>
> You know roughly what I'm thinking.
I have expressed my thoughts on this in the last comment.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
ad due to a large context with a lot of small
> allocations that then get individually freed).
Ok, I got your idea. I will analyze it further and work on this if
there is no problem.
That will make the
> sorting of undo a bit more CPU inefficient, because individual records
> will need to be partially unpacked for comparison, but I think that's
> going to be a far smaller loss than the win.
Right.
>
>
> - My reading of the current xact.c integration is that it's not workable
> as is. Undo is executed outside of a valid transaction state,
> exceptions aren't properly undone, logic would need to be duplicated
> to a significant degree, new kind of critical section.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Wed, Aug 14, 2019 at 10:35 PM Andres Freund wrote:
>
> Hi,
>
> On 2019-08-14 14:48:07 +0530, Dilip Kumar wrote:
> > On Wed, Aug 14, 2019 at 12:27 PM Andres Freund wrote:
> > > - I think there's two fairly fundamental, and related, problems with
>
On Wed, Aug 14, 2019 at 2:48 PM Dilip Kumar wrote:
>
> On Wed, Aug 14, 2019 at 12:27 PM Andres Freund wrote:
> > I think that batch reading should just copy the underlying data into a
> > char* buffer. Only the records that currently are being used by
> > higher la
On Fri, Aug 16, 2019 at 10:56 AM Andres Freund wrote:
>
> Hi,
>
> On 2019-08-16 09:44:25 +0530, Dilip Kumar wrote:
> > On Wed, Aug 14, 2019 at 2:48 PM Dilip Kumar wrote:
> > >
> > > On Wed, Aug 14, 2019 at 12:27 PM Andres Freund wrote:
> >
> > &
ed to
somehow deal with the multi-insert code in zheap, because in that
code, for inserting into a single page, we write one undo record per range
if the tuples which we are inserting on a single page are
interleaved. But maybe we can handle that by just inserting one undo
record which can have multiple ranges.
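For example, something along these lines (purely illustrative; these are
not the actual zheap structures):

/* Sketch: one undo record describing several offset ranges on the same page. */
typedef struct UndoRecordRange
{
    OffsetNumber start_offset;  /* first item in the range */
    OffsetNumber end_offset;    /* last item in the range */
} UndoRecordRange;

/*
 * The record's payload would then carry the number of ranges followed by an
 * array of UndoRecordRange, instead of emitting one undo record per range.
 */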
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
e caller need not worry
about setting them. So now you are suggesting putting the other headers
also as structures in UnpackedUndoRecord. As such, I don't have much of a
problem doing that, but I think Robert initially designed the
UnpackedUndoRecord structure this way, so it will be good if Robert
provides his opinion on this.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Wed, Aug 14, 2019 at 10:35 PM Andres Freund wrote:
>
> Hi,
>
> On 2019-08-14 14:48:07 +0530, Dilip Kumar wrote:
> > On Wed, Aug 14, 2019 at 12:27 PM Andres Freund wrote:
> I don't think we can normally pin the undo buffers properly at that
> stage. Without knowin
On Tue, Aug 20, 2019 at 7:57 PM Robert Haas wrote:
>
> On Mon, Aug 19, 2019 at 2:04 AM Dilip Kumar wrote:
> > Currently, In UnpackedUndoRecord we store all members directly which
> > are set by the caller. We store pointers to some header which are
> > allocated internal
On Wed, Aug 21, 2019 at 9:04 PM Robert Haas wrote:
>
> On Wed, Aug 21, 2019 at 3:55 AM Dilip Kumar wrote:
> > I have already attempted that part and I feel it is not making code
> > any simpler than what we have today. For packing, it's fine because I
> > can pr
On Thu, Aug 22, 2019 at 9:58 AM Andres Freund wrote:
>
> Hi,
>
> On 2019-08-22 09:51:22 +0530, Dilip Kumar wrote:
> > We can not know the complete size of the record even by reading the
> > header because we have a payload that is variable part and payload
> > le
On Thu, Aug 22, 2019 at 10:24 AM Andres Freund wrote:
>
> Hi,
>
> On 2019-08-22 10:19:04 +0530, Dilip Kumar wrote:
> > On Thu, Aug 22, 2019 at 9:58 AM Andres Freund wrote:
> > >
> > > Hi,
> > >
> > > On 2019-08-22 09:51:22 +0530, Dilip Kum
On Thu, Aug 22, 2019 at 9:21 PM Dilip Kumar wrote:
>
> On Thu, Aug 22, 2019 at 7:34 PM Robert Haas wrote:
> >
> > On Thu, Aug 22, 2019 at 1:34 AM Dilip Kumar wrote:
> > > Yeah, we can handle the bulk fetch as you suggested and it will make
> > > it
On Thu, Aug 22, 2019 at 7:34 PM Robert Haas wrote:
>
> On Thu, Aug 22, 2019 at 1:34 AM Dilip Kumar wrote:
> > Yeah, we can handle the bulk fetch as you suggested and it will make
> > it a lot easier. But, currently while registering the undo request
> > (especially d
On Thu, Aug 22, 2019 at 9:55 PM Andres Freund wrote:
>
> Hi
>
> On August 22, 2019 9:14:10 AM PDT, Dilip Kumar wrote:
> > But, those requests will
> >ultimately be used for collecting the record by the bulk fetch. So if
> >we are planning to change the bulk fetch
function.
I have read the 0003 and 0004 patches and there are a few cosmetic comments.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Thu, Oct 24, 2019 at 4:33 PM Dilip Kumar wrote:
>
> On Thu, Oct 24, 2019 at 4:21 PM Amit Kapila wrote:
> >
> > On Thu, Oct 24, 2019 at 11:51 AM Dilip Kumar wrote:
> > >
> > > On Fri, Oct 18, 2019 at 12:18 PM Dilip Kumar
> > > wrote:
> >
On Mon, Oct 28, 2019 at 12:20 PM Amit Kapila wrote:
>
> On Sun, Oct 27, 2019 at 12:52 PM Dilip Kumar wrote:
> >
> > On Fri, Oct 25, 2019 at 9:19 PM Masahiko Sawada
> > wrote:
> > >
> > >
> > I haven't yet read the new set of the patch. Bu
On Fri, Oct 25, 2019 at 9:19 PM Masahiko Sawada wrote:
>
> On Fri, Oct 25, 2019 at 2:06 PM Dilip Kumar wrote:
> >
> > On Fri, Oct 25, 2019 at 10:22 AM Masahiko Sawada
> > wrote:
> > >
> > > For more detail of my idea it is that the first worker who en
"btree search"
2.
/* copy&pasted from .../src/backend/utils/adt/datetime.c
- * and changesd struct pg_tm to struct tm
+ * and changes struct pg_tm to struct tm
*/
It seems like this comment meant "Changed struct pg_tm to struct tm"
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Tue, Oct 29, 2019 at 10:01 AM Masahiko Sawada wrote:
>
> On Mon, Oct 28, 2019 at 6:08 PM Dilip Kumar wrote:
> >
> > On Fri, Oct 25, 2019 at 9:19 PM Masahiko Sawada
> > wrote:
> > >
> > > On Fri, Oct 25, 2019 at 2:06 PM Dilip Kumar wrote:
> &
On Tue, Oct 29, 2019 at 1:59 PM Masahiko Sawada wrote:
>
> On Tue, Oct 29, 2019 at 4:06 PM Masahiko Sawada wrote:
> >
> > On Mon, Oct 28, 2019 at 2:13 PM Dilip Kumar wrote:
> > >
> > > On Thu, Oct 24, 2019 at 4:33 PM Dilip Kumar wrote:
> > > &
On Tue, Oct 29, 2019 at 3:11 PM Dilip Kumar wrote:
>
> On Tue, Oct 29, 2019 at 1:59 PM Masahiko Sawada wrote:
> >
> > On Tue, Oct 29, 2019 at 4:06 PM Masahiko Sawada
> > wrote:
> > >
> > > On Mon, Oct 28, 2019 at 2:13 PM Dilip Kumar wrote:
> > &
ubworkmem;
Another problem is that there is no handling if the subform->subworkmem is NULL.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
7665, total 104149
[1]
https://www.postgresql.org/message-id/CAD21AoAqT17QwKJ_sWOqRxNvg66wMw1oZZzf9Rt-E-zD%2BXOh_Q%40mail.gmail.com
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Thu, Oct 31, 2019 at 11:33 AM Dilip Kumar wrote:
>
> On Tue, Oct 29, 2019 at 1:59 PM Masahiko Sawada wrote:
> > Actually after increased shared_buffer I got expected results:
> >
> > * Test1 (after increased shared_buffers)
> > normal : 2807 ms (hit 56295
On Fri, Nov 1, 2019 at 2:21 PM Masahiko Sawada wrote:
>
> On Thu, Oct 31, 2019 at 3:45 PM Dilip Kumar wrote:
> >
> > On Thu, Oct 31, 2019 at 11:33 AM Dilip Kumar wrote:
> > >
> > > On Tue, Oct 29, 2019 at 1:59 PM Masahiko Sawada
> > > wrote:
>