On Tue, Sep 1, 2020 at 3:39 PM Greg Nancarrow <gregn4...@gmail.com> wrote:
>
> Hi Vignesh,
>
> > Can you share with me the script you used to generate the data & the
> > ddl of the table, so that it will help me check the scenario where you
> > faced the problem.
>
> Unfortunately I can't directly share it (considered company IP),
> though having said that it's only doing something that is relatively
> simple and unremarkable, so I'd expect it to be much like what you are
> currently doing. I can describe it in general.
>
> The table being used contains 100 columns (as I pointed out earlier),
> with the first column of "bigserial" type, and the others of different
> types like "character varying(255)", "numeric", "date" and "time
> without timezone". There's about 60 of the "character varying(255)"
> overall, with the other types interspersed.
>
Thanks Greg for executing the tests & sharing the results. I tried a
similar test case to the one you described, but I was not able to
reproduce the degradation scenario. If possible, can you run perf for
the 1-worker scenario & for non-parallel mode, and share the perf
results? By comparing the two perf reports we should be able to find
out which functions are consuming more time.

Steps for running perf (an example session is sketched in the P.S.
below):
1) Get the postgres backend pid.
2) perf record -a -g -p <above pid>
3) Run the COPY command.
4) Execute "perf report -g" once the COPY finishes.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
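
P.S. In case it helps, here is roughly what the whole sequence looks
like as a transcript of two sessions. The database name, table name,
data file path and pid below are just placeholders, not from any
actual test:

  -- Session 1 (psql): note the backend pid that will run the COPY
  testdb=# SELECT pg_backend_pid();
   pg_backend_pid
  ----------------
            12345
  (1 row)

  # Session 2 (shell): attach perf to that backend before the copy starts
  $ perf record -a -g -p 12345

  -- Session 1: run the copy while perf is recording (shown here in its
  -- plain non-parallel form; repeat with the patch's 1-worker option)
  testdb=# COPY test_tbl FROM '/tmp/test_data.csv' WITH (FORMAT csv);

  # Session 2: stop perf with Ctrl+C once the copy completes, then
  $ perf report -g

Comparing the "perf report -g" output of the 1-worker run with that of
the non-parallel run should show which functions account for the extra
time.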