On Fri, 26 Oct 2007, Jean-David Beyer wrote:
I think it was Jon Louis Bentley who wrote (in his book, "Writing
Efficient Programs") something to the effect of "Premature optimization is
the root of all evil."
That quote originally comes from Tony Hoare; it was popularized by a paper
written by Donald Knuth.
Chris Browne wrote:
> Further, the Right Thing is to group related data together, and come
> up with a policy that is driven primarily by the need for data
> consistency. If things work well enough, then don't go off trying to
> optimize something that doesn't really need optimization, and perhap
Heikki Linnakangas wrote:
> Jean-David Beyer wrote:
>
>> My IO system has two Ultra/320 LVD SCSI controllers and 6 10,000rpm SCSI
>> hard drives. The dual SCSI controller is on its own PCI-X bus (the machine
>> has 5 independent PCI-X busses). Two hard drives are on one SCSI controller
>> and the
[EMAIL PROTECTED] (Jean-David Beyer) writes:
> But what is the limitation on such a thing? In this case, I am just
> populating the database and there are no other users at such a time. I am
> willing to lose the whole insert of a file if something goes wrong -- I
> would fix whatever went wrong an
Jean-David Beyer <[EMAIL PROTECTED]> writes:
> But what is the limitation on such a thing?
AFAIR, the only limit on the size of a transaction is 2^32 commands
(due to CommandCounter being 32 bits).
regards, tom lane
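
For a sense of the headroom that limit leaves, here is a back-of-the-envelope
check in Python. The thread later mentions "several thousand records" per file
and "a single INSERT or a few related INSERTs" per record; the concrete numbers
below are illustrative assumptions, not figures taken from the thread.

# Rough headroom check against the 2^32 CommandCounter ceiling mentioned
# above. The per-file numbers are illustrative guesses, not thread data.
COMMAND_COUNTER_LIMIT = 2 ** 32      # 4,294,967,296 commands per transaction

records_per_file = 5_000             # "several thousand records" per file
commands_per_record = 3              # "a single INSERT or a few related INSERTs"

commands_per_file = records_per_file * commands_per_record
print(commands_per_file)                           # 15000
print(COMMAND_COUNTER_LIMIT // commands_per_file)  # 286331 -- roughly how many
                                                   # files' worth of commands
                                                   # would fit in one
                                                   # transaction before nearing
                                                   # the limit

So a single transaction per file stays many orders of magnitude below the
command-counter limit.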
Jean-David Beyer wrote:
> This means, of course, that the things I think of as transactions have been
> bunched into a much smaller number of what PostgreSQL thinks of as large
> transactions, since there is only one per file rather than one per record.
> Now if a file has several thousand records,
I have just changed around some programs that ran too slowly (too much time
in io-wait), and they sped up greatly. This was not unexpected, but I
wonder about the limitations.
By transaction, I mean a single INSERT or a few related INSERTs.
What I used to do is roughly like this:
for each file
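
Based on the description here and in the follow-up message (originally one
transaction per record, changed to one transaction per file), the before/after
loop structure might look roughly like the sketch below. This is a
reconstruction under assumptions: psycopg2 as the client library, CSV input,
and the connection string, table, and column names are all hypothetical; the
original programs are not shown in the thread.

# Sketch of the before/after loop structure described in the thread.
# psycopg2, the DSN, the CSV format, and the table/column names are
# assumptions for illustration only.
import csv
import psycopg2

conn = psycopg2.connect("dbname=test")   # hypothetical connection string
cur = conn.cursor()

def load_file_one_txn_per_record(path):
    # Old shape: commit after every record, i.e. one transaction (and one
    # WAL flush) per record -- the pattern that produced the io-wait.
    with open(path, newline="") as f:
        for row in csv.reader(f):
            cur.execute("INSERT INTO records (a, b) VALUES (%s, %s)", row)
            conn.commit()

def load_file_one_txn_per_file(path):
    # New shape: every record in the file goes into a single transaction.
    # On any error the whole file's insert is rolled back, which the
    # original poster says he is willing to accept.
    try:
        with open(path, newline="") as f:
            for row in csv.reader(f):
                cur.execute("INSERT INTO records (a, b) VALUES (%s, %s)", row)
        conn.commit()
    except Exception:
        conn.rollback()
        raise

The only structural difference between the two loops is where conn.commit()
sits; moving it outside the per-record loop is what bunches each file's
INSERTs into one large transaction.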