On Thu, Aug 26, 2021 at 9:51 AM Peter Smith <smithpb2...@gmail.com> wrote:
>
> On Thu, Aug 26, 2021 at 1:20 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
> >
> > On Thu, Aug 26, 2021 at 7:37 AM Peter Smith <smithpb2...@gmail.com> wrote:
> > >
> > > On Wed, Aug 25, 2021 at 3:28 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
> > > >
> > > ...
> > > >
> > > > Hmm, I think the gain via caching is not visible because we are using
> > > > simple expressions. It will be visible when we use somewhat complex
> > > > expressions where expression evaluation cost is significant.
> > > > Similarly, the impact of this change will magnify and it will also be
> > > > visible when a publication has many tables. Apart from performance,
> > > > this change is logically correct as well because it would be any way
> > > > better if we don't invalidate the cached expressions unless required.
> > >
> > > Please tell me what is your idea of a "complex" row filter expression.
> > > Do you just mean a filter that has multiple AND conditions in it? I
> > > don't really know if few complex expressions would amount to any
> > > significant evaluation costs, so I would like to run some timing tests
> > > with some real examples to see the results.
> > >
> >
> > I think this means you didn't even understand or are convinced why the
> > patch has cache in the first place. As per your theory, even if we
> > didn't have cache, it won't matter but that is not true otherwise, the
> > patch wouldn't have it.
>
> I have never said there should be no caching. On the contrary, my
> performance test results [1] already confirmed that caching ExprState
> is of benefit for the millions of times it may be used in the
> pgoutput_row_filter function. My only doubts are in regard to how much
> observable impact there would be re-evaluating the filter expression
> just a few extra times by the get_rel_sync_entry function.
>
I think it depends, but why allow re-evaluation in the first place when
there is a way to avoid it?

--
With Regards,
Amit Kapila.
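
For context, here is a minimal sketch of the kind of ExprState caching
being discussed: compile the row-filter expression once when the cache
entry is built (or rebuilt after invalidation), and have the per-row hot
path only reuse the already-compiled state. The RowFilterCacheEntry
struct and its field names are purely illustrative and are not the
patch's actual code; only the executor calls (ExecPrepareExpr,
ExecEvalExprSwitchContext, etc.) are standard PostgreSQL backend APIs.

/*
 * Illustrative sketch only -- not the patch code.  The cache-entry struct
 * and field names are hypothetical; the executor calls are the standard
 * PostgreSQL executor APIs.
 */
#include "postgres.h"

#include "executor/executor.h"
#include "nodes/execnodes.h"

/* Hypothetical per-relation cache entry holding the compiled filter. */
typedef struct RowFilterCacheEntry
{
	EState	   *estate;			/* executor state kept for the entry */
	ExprState  *exprstate;		/* compiled row-filter expression */
} RowFilterCacheEntry;

/*
 * Compile the filter expression once and remember it in the cache entry.
 * Called rarely (when the entry is built or invalidated), so the
 * comparatively expensive ExecPrepareExpr() happens only here.
 */
static void
cache_row_filter(RowFilterCacheEntry *entry, Expr *rfnode)
{
	entry->estate = CreateExecutorState();
	entry->exprstate = ExecPrepareExpr(rfnode, entry->estate);
}

/*
 * Evaluate the cached filter for one tuple.  This is the hot path that can
 * run millions of times, so it only reuses the compiled ExprState.
 */
static bool
eval_row_filter(RowFilterCacheEntry *entry, TupleTableSlot *slot)
{
	ExprContext *econtext = GetPerTupleExprContext(entry->estate);
	Datum		res;
	bool		isnull;

	econtext->ecxt_scantuple = slot;
	res = ExecEvalExprSwitchContext(entry->exprstate, econtext, &isnull);

	/* Reset per-tuple memory so repeated evaluations don't accumulate. */
	ResetPerTupleExprContext(entry->estate);

	return !isnull && DatumGetBool(res);
}

The point of contention above is only how often cache_row_filter()-style
recompilation may run (a few extra times in get_rel_sync_entry), not
whether the per-row reuse in eval_row_filter()-style code is worthwhile.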