Stephan Szabo wrote:
> I don't know if there will be or not, but in one case it's a single table
> select with constant values, in the other it's probably some kind of scan
> and subselect. I'm just not going to rule out the possibility, so we
> should profile it in large transactions with, say, 100k single inserts and
> see.
You're talking about bulk operations, which also need to be handled
carefully. Loading all the data into a temporary table and then doing an
INSERT INTO xxx SELECT FROM tmptable usually gives better performance
where indexes and constraints are involved. PostgreSQL shouldn't be
expected to cope with the most abusive ways of operating, but it should
offer a reasonable set of tools that let the job be done in a convenient way.
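
As a rough sketch of that pattern (the names "orders", "tmp_orders" and the
file path are made up for illustration; COPY is just one common way to fill
the temporary table):

    -- "orders" stands in for the real target table xxx, "tmp_orders" for tmptable.
    CREATE TEMP TABLE tmp_orders AS SELECT * FROM orders WHERE false;

    -- Load the raw rows into the temporary table first (COPY here, but plain
    -- INSERTs would also do; the point is that the real table isn't touched yet).
    COPY tmp_orders FROM '/tmp/orders.dat';

    -- One set-oriented statement then moves everything over, so the index and
    -- constraint work happens inside a single statement instead of once per
    -- client round trip.
    INSERT INTO orders SELECT * FROM tmp_orders;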
The best situation is one where many small random transactions perform
well, for TPC-like loads, as well as bulk operations. Nobody should expect
that a database will smoothly convert a bunch of single transactions into
an optimized bulk one. That's the job of the programmer.
> Yeah, the 5 above are pretty easy to show that it's safe, but other cases
> and referential action cases won't necessarily be so easy.
So it's the programmer's responsibility to offer mass data to the
backend, not separate inserts that might, by chance, be handled in a
similar way. An RDBMS is not clairvoyant.
Regards,
Andreas