> You might try running the ten thousand inserts as a single transaction
> (do "begin" and "end" around them).
A HUGE difference (I also removed the id field (SERIAL) completely, keeping
only name):
Database vacuumed
pg: Trying 25000 inserts on index_with...
Time taken: 12 seconds
Database vacuumed
Thanks Tom,
really appreciate it!
Daniel Akerud
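For reference, a minimal sketch of the batching Tom suggests, using libpq in
C (the connection string, table name, and row count are illustrative, and
error checking is omitted for brevity):

---
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* One transaction around all the inserts, so each INSERT does not
     * pay for its own commit. */
    PQclear(PQexec(conn, "BEGIN"));
    for (int i = 0; i < 10000; i++)
    {
        char sql[256];
        snprintf(sql, sizeof(sql),
                 "INSERT INTO index_with (name) VALUES ('name%d')", i);
        PQclear(PQexec(conn, sql));
    }
    PQclear(PQexec(conn, "END"));   /* END is an alias for COMMIT */

    PQfinish(conn);
    return 0;
}
---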
> [EMAIL PROTECTED] writes:
> > CREATE TABLE index_with (
> > id SERIAL,
> > name TEXT
> > );
> > CREATE INDEX name_index ON index_with(name);
>
> > CREATE TABLE index_without (
> > id SERIAL,
> > name TEXT
> > );
>
> Actually, what you
I just reran the application to confirm that it really behaves like that. So,
using the test environment previously described, I got the following output:
Database vacuumed
pg: Trying 1000 inserts with indexing on...
Time taken: 24 seconds
pg: Trying 1000 inserts with indexing off...
Time taken: 22 seconds
The test script that set up the tables is the following:
---
/* Cleanup */
DROP SEQUENCE index_with_id_seq;
DROP SEQUENCE index_without_id_seq;
DROP INDEX name_index;
DROP TABLE index_with;
DROP TABLE index_without;
/* Create a table with an index */
CREATE TABLE index_with (
    id SERIAL,
    name TEXT
);
CREATE INDEX name_index ON index_with(name);

/* Create a table without an index */
CREATE TABLE index_without (
    id SERIAL,
    name TEXT
);
---
I'm writing a pool manager to manage x connections to a pgsql postmaster.
The getConnection method has to check if the connection is still valid,
and try to reconnect if it isn't.
Do you guys have a good way of doing this?
Can I use PQstatus?
Should I make a little runCommand on it to try it out?
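One possible approach, sketched with libpq in C (ensure_connection is a
hypothetical helper name). PQstatus only reports the state libpq last
recorded, so a dead backend can go unnoticed until a command fails; running
a trivial command is the more reliable test, with PQreset to reconnect in
place:

---
#include <libpq-fe.h>

/* Hypothetical getConnection() helper: returns 1 if the connection is
 * usable, after trying one reconnect if it is not. */
static int ensure_connection(PGconn *conn)
{
    if (PQstatus(conn) == CONNECTION_OK)
    {
        /* PQstatus can be stale, so probe the server with a trivial
         * command and check that it succeeds. */
        PGresult *res = PQexec(conn, "SELECT 1");
        int alive = (res != NULL &&
                     PQresultStatus(res) == PGRES_TUPLES_OK);

        if (res != NULL)
            PQclear(res);
        if (alive)
            return 1;
    }

    /* Connection looks dead: re-establish it with the same parameters. */
    PQreset(conn);
    return PQstatus(conn) == CONNECTION_OK;
}
---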
I was testing out (using psql) transactions and locking in PostgreSQL, using
only BEGIN/UPDATE (on a specific table)/COMMIT & ROLLBACK, and noticed several
times that instead of waiting it went into *ABORT STATE*. Why is this?
Also, I noticed that COMMIT'ing a deadlocked transaction did nothing but a
ROLLBACK.
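The aborted-state behaviour can be reproduced from a single connection: once
any statement in a transaction fails (a detected deadlock included),
PostgreSQL puts the whole transaction into the aborted state, rejects further
commands until the block is closed, and a COMMIT then only rolls back. A
sketch with libpq in C (the division by zero stands in for the deadlock
error; connection error checking omitted for brevity):

---
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");
    PGresult *res;

    PQclear(PQexec(conn, "BEGIN"));

    /* Any failing statement aborts the whole transaction; a deadlock
     * detected during an UPDATE has the same effect as this error. */
    res = PQexec(conn, "SELECT 1/0");
    printf("error: %s", PQresultErrorMessage(res));
    PQclear(res);

    /* Further commands are rejected until the transaction is closed. */
    res = PQexec(conn, "SELECT 1");
    printf("in abort state: %s", PQresultErrorMessage(res));
    PQclear(res);

    /* COMMIT of an aborted transaction only rolls it back. */
    PQclear(PQexec(conn, "COMMIT"));

    PQfinish(conn);
    return 0;
}
---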