Hi Longlong,

> > I think this is a better idea.
> from *NikhilS*
> http://archives.postgresql.org/pgsql-hackers/2007-12/msg00584.php
> But instead of using a per-insert or a batch-insert subtransaction, I am
> thinking that we can start off a subtransaction and continue it till we
> encounter a failure. The moment an error is encountered, since we have the
> offending (already in heap) tuple around, we can call a simple_heap_delete
> on the same and commit (instead of aborting) this subtransaction after doing
> some minor cleanup. This current input data row can also be logged into a
> bad file. Recall that we need to only handle those errors in which the
> simple_heap_insert is successful, but the index insertion or the after row
> insert trigger causes an error. The rest of the load then can go ahead with
> the start of a new subtransaction.
> The simplest things are often the best.
> I think it is hard to implement, or has some other deficiency, since you
> want a subtransaction for every "n" rows.
>


Yeah, simpler things are often the best, but as folks are mentioning, we need
a carefully thought-out approach here. The reply from Tom to my posting
there raises issues which need to be taken care of. Although I still think
that if we carry out *sanity* checks before starting the load for the presence
of triggers, constraints, fkey constraints etc., and if others do not have any
issues with the approach, the simple_heap_delete idea should work in some
cases. Although the term I used, "after some minor cleanup", might need some
more thought too, now that I think about it..
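
For what it's worth, the control flow I have in mind looks roughly like the
sketch below. It is only a sketch, not a working patch: BeginInternalSubTransaction,
ReleaseCurrentSubTransaction, simple_heap_insert, simple_heap_delete,
PG_TRY/PG_CATCH and FlushErrorState are the existing backend primitives, while
next_input_tuple(), insert_indexes_and_fire_triggers() and log_bad_row() are
just placeholders for the per-row work that COPY already does:

#include "postgres.h"

#include "access/heapam.h"
#include "access/xact.h"

/*
 * Sketch only: keep one subtransaction going until a row fails; on failure,
 * delete the already-inserted heap tuple, commit (not abort) the subxact,
 * log the bad row and start a new subxact for the rest of the load.
 */
static void
copy_rows_skipping_errors(Relation rel)
{
    MemoryContext oldcxt = CurrentMemoryContext;
    HeapTuple   tup;

    BeginInternalSubTransaction(NULL);

    while ((tup = next_input_tuple()) != NULL)   /* placeholder */
    {
        PG_TRY();
        {
            /* we only handle errors raised after the heap insert succeeds */
            simple_heap_insert(rel, tup);

            /* index insertions and after-row triggers may error out here */
            insert_indexes_and_fire_triggers(rel, tup);  /* placeholder */
        }
        PG_CATCH();
        {
            /* get out of the error state raised inside the subxact */
            MemoryContextSwitchTo(oldcxt);
            FlushErrorState();

            /* the offending tuple is already in the heap: remove it */
            simple_heap_delete(rel, &tup->t_self);

            /* commit (instead of aborting) this subxact after the cleanup */
            ReleaseCurrentSubTransaction();

            /* log the current input data row into the bad file */
            log_bad_row(tup);                    /* placeholder */

            /* the rest of the load goes ahead with a new subxact */
            BeginInternalSubTransaction(NULL);
        }
        PG_END_TRY();
    }

    ReleaseCurrentSubTransaction();
}

The PG_CATCH branch is where the "minor cleanup" happens, and it is the part
that needs more thought: whether it is really safe to run simple_heap_delete
and then *commit* the subtransaction after an arbitrary error is among the
issues Tom's reply raises.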

Also, if fkey checks or complex triggers are around, maybe we can fall back
to a subtransaction per row insert as a worst-case measure..
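
Roughly, that fallback would look like the fragment below (same caveats as the
sketch above; insert_one_row() stands in for the heap insert plus index
insertions, triggers and fkey checks for one row):

    MemoryContext oldcxt = CurrentMemoryContext;
    HeapTuple   tup;

    while ((tup = next_input_tuple()) != NULL)   /* placeholder */
    {
        BeginInternalSubTransaction(NULL);

        PG_TRY();
        {
            /* heap insert + index insertions + triggers + fkey checks */
            insert_one_row(rel, tup);            /* placeholder */

            ReleaseCurrentSubTransaction();      /* this row is in */
        }
        PG_CATCH();
        {
            MemoryContextSwitchTo(oldcxt);
            FlushErrorState();

            /*
             * Abort the subxact: undoes whatever this row managed to do
             * (heap tuple, index entries, trigger changes) in one go.
             */
            RollbackAndReleaseCurrentSubTransaction();

            log_bad_row(tup);                    /* placeholder */
        }
        PG_END_TRY();
    }

Rolling the subtransaction back means there is no cleanup to reason about,
but we pay for one subtransaction per row, which is why I would keep it only
as the worst-case path.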

Regards,
Nikhils
-- 
EnterpriseDB http://www.enterprisedb.com
