Thanks, all, for your inputs. We will try to implement the inserts in a
single transaction. I feel that is the best approach.
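
Roughly along these lines (the table, columns, and batch size below are just
placeholders for our actual schema):

        BEGIN;
        INSERT INTO target_table (id, val) VALUES (1, 'row 1');
        INSERT INTO target_table (id, val) VALUES (2, 'row 2');
        -- ... and so on for each batch of, say, 1000 rows ...
        COMMIT;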

Thanks,
AD.

On Saturday, March 5, 2022, Bruce Momjian <br...@momjian.us> wrote:

> On Fri, Mar  4, 2022 at 01:42:39PM -0500, Tom Lane wrote:
> > aditya desai <admad...@gmail.com> writes:
> > > One of the service-layer apps is inserting millions of records into
> > > a table, but one row at a time. Although COPY is the fastest way to
> > > import a file into a table, the application has a requirement to
> > > process each row and insert it individually. Is there any way this
> > > INSERT can be tuned by increasing parameters? It is taking almost 10
> > > hours for just 2.2 million rows, and the table does not have any
> > > indexes or triggers.
> >
> > Using a prepared statement for the INSERT would help a little bit.
>
> Yeah, I thought about that, but it seems it would only help minimally.
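> For reference, the prepared form would be roughly this (table and columns
> here are just placeholders):
>
>         PREPARE ins_row (int, text) AS
>             INSERT INTO target_table (id, val) VALUES ($1, $2);
>         EXECUTE ins_row(1, 'row 1');
>         EXECUTE ins_row(2, 'row 2');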
>
> > What would help more, if you don't expect any insertion failures,
> > is to group multiple inserts per transaction (ie put BEGIN ... COMMIT
> > around each batch of 100 or 1000 or so insertions).  There's not
> > going to be any magic bullet that lets you get away without changing
> > the app, though.
>
> Yeah, they could also insert multiple rows per statement:
>
>         CREATE TABLE test (x int);
>         INSERT INTO test VALUES (1), (2), (3);
>
> > It's quite possible that network round trip costs are a big chunk of your
> > problem, in which case physically grouping multiple rows into each INSERT
> > command (... or COPY ...) is the only way to fix it.  But I'd start with
> > trying to reduce the transaction commit overhead.
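>
> For reference, the COPY route from psql would look something like this
> (the file names are just placeholders):
>
>         -- client-side file, streamed through psql:
>         \copy test FROM 'rows.csv' WITH (FORMAT csv)
>         -- or, for a file readable by the server itself:
>         COPY test FROM '/path/to/rows.csv' WITH (FORMAT csv);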
>
> Agreed, turning off synchronous_commit for those queries would be
> my first approach.
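> That could be as simple as running, in the loading session:
>
>         SET synchronous_commit = off;
>         -- run the batched inserts here
>
> If the server crashes, the most recently committed transactions could be
> lost, but the data that does make it to disk stays consistent.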
>
> --
>   Bruce Momjian  <br...@momjian.us>        https://momjian.us
>   EDB                                      https://enterprisedb.com
>
>   If only the physical world exists, free will is an illusion.
>
>
