Re: [GENERAL] How to use Php connecting to pgsql

2006-12-29 Thread Ireneusz Pluta
马庆 wrote: Dear All: I have come across a problem that I can't solve. Can anyone help me? Thanks. My configuration is RedHat AS4 + PHP 5.0 + pgsql 8.1. When I finished installing PHP, I installed the php_pgsql module in order to connect to pgsql from PHP. To my surprise, I can't get ac

[GENERAL] How to use Php connecting to pgsql

2006-12-29 Thread 马庆
Dear All: I have come across a problem that I can't solve. Can anyone help me? Thanks. My configuration is RedHat AS4 + PHP 5.0 + pgsql 8.1. When I finished installing PHP, I installed the php_pgsql module in order to connect to pgsql from PHP. To my surprise, I can't get access to pgsql, th

Re: [GENERAL] psql script error handling

2006-12-29 Thread Michael Fuhr
On Fri, Dec 29, 2006 at 07:21:12PM -0500, James Neff wrote: > I have an sql script that I am trying to execute in psql client on the > database itself. The script is just a bunch (hundreds of thousands) of > INSERT statements. Hundreds of thousands? Is there a reason you're not using COPY inst
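The COPY approach suggested here can be sketched as follows. This is an illustrative helper, not code from the thread: it formats rows in PostgreSQL's default COPY text format (tab-separated fields, backslash escapes, NULL as `\N`), which could then be fed to `COPY ... FROM STDIN`, for instance through the JDBC driver's `CopyManager`.

```java
// Sketch: format rows for PostgreSQL's default COPY text format.
// Fields are tab-separated; tab, newline, CR and backslash are
// backslash-escaped; SQL NULL is written as \N.
public class CopyFormat {
    // Escape a single field value for COPY text format.
    static String escapeField(String value) {
        if (value == null) return "\\N";             // NULL marker
        StringBuilder sb = new StringBuilder(value.length());
        for (char c : value.toCharArray()) {
            switch (c) {
                case '\\': sb.append("\\\\"); break; // literal backslash
                case '\t': sb.append("\\t");  break; // field delimiter
                case '\n': sb.append("\\n");  break; // row delimiter
                case '\r': sb.append("\\r");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    // Join escaped fields into one COPY input row.
    static String toCopyRow(String... fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sb.append('\t');
            sb.append(escapeField(fields[i]));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A row whose raw value contains a tab, plus a NULL column.
        System.out.println(toCopyRow("42", "a\tb", null));
    }
}
```

Each formatted line, terminated by a newline, is one row of COPY input; loading this way avoids per-statement parse and transaction overhead entirely.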

Re: [GENERAL] slow speeds after 2 million rows inserted

2006-12-29 Thread Guy Rouillier
Frank Finner wrote: In Java, assuming you have a Connection c, you simply say "c.commit();" after doing some action on the database. After every commit, the transaction will be executed and closed and a new one opened, which runs until the next commit. Assuming, of course, you started with c.se

Re: [GENERAL] Geographical redundancy

2006-12-29 Thread Ben
If you're sure that data loss is unacceptable no matter what happens to either site, then I'm not aware of too many options. As I understand it, pgpool can be configured to send data-altering queries to multiple servers in order to simulate a multi-master cluster, but it's never been clear

[GENERAL] psql script error handling

2006-12-29 Thread James Neff
I have an sql script that I am trying to execute in psql client on the database itself. The script is just a bunch (hundreds of thousands) of INSERT statements. I don't know how, but I seem to have bad characters throughout my file and when I run the script it will of course error out complai

Re: [GENERAL] Autovacuum Improvements

2006-12-29 Thread Alvaro Herrera
Christopher Browne wrote: > Seems to me that you could get ~80% of the way by having the simplest > "2 queue" implementation, where tables with size < some threshold get > thrown at the "little table" queue, and tables above that size go to > the "big table" queue. > > That should keep any small

Re: [GENERAL] out of memory woes

2006-12-29 Thread Alvaro Herrera
Angva wrote: > Guess I'm about ready to wrap up this thread, but I was just wondering > if Alvaro might have confused work_mem with maintenance_work_mem. The > docs say that work_mem is used for internal sort operations, but they > also say maintenance_work_mem is used for create index. My tests s

Re: [GENERAL] slow speeds after 2 million rows inserted

2006-12-29 Thread Nikola Milutinovic
> The fastest way will be copy. > The second fastest will be multi value inserts in batches, e.g.: > > INSERT INTO data_archive values () () () (I don't know what the max is) > > but commit every 1000 inserts or so. Is this some empirical value? Can someone give heuristics as to how to calculate
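For reference, the multi-value form discussed above separates the tuples with commas — `INSERT ... VALUES (...), (...), (...)` — and requires PostgreSQL 8.2 or later. A minimal sketch of building such a statement with placeholders (the table and column names are illustrative, not from the thread):

```java
// Sketch: build a multi-row INSERT in PostgreSQL 8.2+ syntax. Note the
// placeholder groups are comma-separated: VALUES (?, ?), (?, ?), ...
// The resulting string is intended for a JDBC PreparedStatement.
public class MultiInsert {
    static String buildInsert(String table, String[] columns, int rows) {
        StringBuilder sb = new StringBuilder("INSERT INTO ").append(table)
                .append(" (").append(String.join(", ", columns)).append(") VALUES ");
        for (int r = 0; r < rows; r++) {
            if (r > 0) sb.append(", ");          // comma between tuples
            sb.append('(');
            for (int c = 0; c < columns.length; c++) {
                if (c > 0) sb.append(", ");
                sb.append('?');                   // one placeholder per column
            }
            sb.append(')');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildInsert("data_archive",
                new String[]{"batchid", "raw_data"}, 3));
    }
}
```

As for the maximum batch size, the thread leaves it open; in practice it is bounded by statement size and memory rather than a fixed row count, so the "commit every 1000 or so" figure is an empirical starting point to measure against, not a derived limit.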

Re: [GENERAL] slow speeds after 2 million rows inserted

2006-12-29 Thread James Neff
Frank Finner wrote: In Java, assuming you have a Connection c, you simply say "c.commit();" after doing some action on the database. After every commit, the transaction will be executed and closed and a new one opened, which runs until the next commit. Regards, Frank. That did it, thank

Re: [GENERAL] slow speeds after 2 million rows inserted

2006-12-29 Thread Frank Finner
In Java, assuming you have a Connection c, you simply say "c.commit();" after doing some action on the database. After every commit, the transaction will be executed and closed and a new one opened, which runs until the next commit. Regards, Frank. On Fri, 29 Dec 2006 13:23:37 -0500 James Neff

Re: [GENERAL] Backup Restore

2006-12-29 Thread Bob Pawley
Following is the error message on pg_restore:- "pg_restore: ERROR: duplicate key violates unique constraint "spatial_ref_sys_pkey" CONTEXT: COPY spatial_ref_sys, line 1: "2000 EPSG 2000 PROJCS["Anguilla 1957 / British West Indies Grid",GEOGCS["Anguilla 1957",DATUM["Angui..." pg_restore: [arch

Re: [GENERAL] Backup Restore

2006-12-29 Thread Dave Page
Bob Pawley wrote: Hi Dave I can get the restore working if I dump the project spelling out "*.backup" and not relying on the default. However the restore is being aborted due to a pk error for the spatial coordinates. I've removed the gis feature from both applications but still get the er

Re: [GENERAL] slow speeds after 2 million rows inserted

2006-12-29 Thread Rodrigo Gonzalez
Joshua D. Drake wrote: On Fri, 2006-12-29 at 13:21 -0500, James Neff wrote: Joshua D. Drake wrote: Also as you are running 8.2 you can use multi valued inserts... INSERT INTO data_archive values () () () Would this speed things up? Or is that just another way to do it? The fastest way wi

Re: [GENERAL] slow speeds after 2 million rows inserted

2006-12-29 Thread Joshua D. Drake
On Fri, 2006-12-29 at 13:21 -0500, James Neff wrote: > Joshua D. Drake wrote: > > Also as you are running 8.2 you can use multi valued inserts... > > > > INSERT INTO data_archive values () () () > > > > Would this speed things up? Or is that just another way to do it? The fastest way will be c

Re: [GENERAL] slow speeds after 2 million rows inserted

2006-12-29 Thread James Neff
Joshua D. Drake wrote: You need to vacuum during the inserts :) Joshua D. Drake I ran the vacuum during the INSERT and it seemed to help a little, but it's still relatively slow compared to the first 2 million records. Any other ideas? Thanks, James ---(end of br

Re: [GENERAL] slow speeds after 2 million rows inserted

2006-12-29 Thread Frank Finner
When do you commit these inserts? I occasionally found similar problems when I do heavy inserting/updating within one single transaction. First all runs fast, after some time everything slows down. If I commit the inserts every 1000 rows or so (large rows, small engine), this phenomenon does no
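The commit-every-1000-rows pattern described here might look like the outline below. The JDBC calls are left as comments because they need a live connection; the commit-point arithmetic itself is plain code. The batch size of 1000 is the thread's rule of thumb, not a tuned value.

```java
// Sketch of the "commit every N rows" loading pattern from the thread.
// JDBC calls are commented out (they require a live PostgreSQL
// connection); the batch bookkeeping is real and testable.
public class BatchCommit {
    // True when we've just finished a full batch of batchSize rows.
    static boolean isCommitPoint(long rowsInserted, int batchSize) {
        return rowsInserted % batchSize == 0;
    }

    public static void main(String[] args) {
        int batchSize = 1000;                       // rule of thumb from the thread
        long rows = 0;
        // Connection c = DriverManager.getConnection(url, user, pass);
        // c.setAutoCommit(false);                  // one explicit transaction
        for (String line : new String[]{"r1", "r2", "r3"}) { // stand-in for file rows
            // stmt.executeUpdate(...);             // one INSERT per line
            rows++;
            if (isCommitPoint(rows, batchSize)) {
                // c.commit();                      // close this batch, open the next
            }
        }
        // c.commit();                              // flush the final partial batch
        System.out.println(rows);
    }
}
```

Keeping transactions bounded this way avoids the slowdown the poster describes, where one enormous transaction degrades as it accumulates work.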

Re: [GENERAL] slow speeds after 2 million rows inserted

2006-12-29 Thread James Neff
Joshua D. Drake wrote: Also as you are running 8.2 you can use multi valued inserts... INSERT INTO data_archive values () () () Would this speed things up? Or is that just another way to do it? Thanks, James

Re: [GENERAL] slow speeds after 2 million rows inserted

2006-12-29 Thread Joshua D. Drake
> > there is also an index on batchid. > > > > The insert command is like so: > > > > "INSERT INTO data_archive (batchid, claimid, memberid, raw_data, status, > > line_number) VALUES ('" + commandBatchID + "', '', '', '" + raw_data + > > "', '1', '" + myFilter.claimLine + "');"; Also as you
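The quoted snippet builds its INSERT by string concatenation, which breaks as soon as raw_data contains a quote character. A hedged sketch of the parameterized alternative follows; the column list is taken from the quoted code, and the commented JDBC calls assume a live connection:

```java
// Sketch: parameterized form of the thread's concatenated INSERT. A
// PreparedStatement binds the values, so stray characters in raw_data
// cannot break the statement or require manual quoting.
public class ParamInsert {
    // Build "INSERT INTO t (a, b, ...) VALUES (?, ?, ...)".
    static String parameterizedSql(String table, String... columns) {
        StringBuilder sb = new StringBuilder("INSERT INTO ").append(table)
                .append(" (").append(String.join(", ", columns)).append(") VALUES (");
        for (int i = 0; i < columns.length; i++) {
            if (i > 0) sb.append(", ");
            sb.append('?');
        }
        return sb.append(')').toString();
    }

    public static void main(String[] args) {
        String sql = parameterizedSql("data_archive",
                "batchid", "claimid", "memberid", "raw_data", "status", "line_number");
        System.out.println(sql);
        // PreparedStatement ps = c.prepareStatement(sql);
        // ps.setString(1, commandBatchID);   // values bound safely by the driver
        // ps.setString(4, raw_data);
        // ps.executeUpdate();
    }
}
```

A prepared statement reused across the loop also skips re-parsing the SQL on every row, which helps at the volumes this thread is dealing with.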

Re: [GENERAL] could not open relation:no such file or directory

2006-12-29 Thread Ragnar
On Tue, 2006-12-26 at 02:43 -0800, karthik wrote: > I am facing a problem when trying to select values from a table in > postgresql. do you face this problem with any table or only from a particular table? > when I execute a query like "select title from itemsbytitle;" what do you mean by

Re: [GENERAL] slow speeds after 2 million rows inserted

2006-12-29 Thread Rodrigo Gonzalez
James Neff wrote: Greetings, I've got a Java application reading data from a flat file and inserting it into a table. The first 2 million rows (each file contained about 1 million lines) went pretty fast. Less than 40 mins to insert into the database. After that the insert speed is sl

Re: [GENERAL] slow speeds after 2 million rows inserted

2006-12-29 Thread Joshua D. Drake
On Fri, 2006-12-29 at 12:39 -0500, James Neff wrote: > Greetings, > > I've got a Java application reading data from a flat file and > inserting it into a table. The first 2 million rows (each file > contained about 1 million lines) went pretty fast. Less than 40 mins to > insert into the

[GENERAL] slow speeds after 2 million rows inserted

2006-12-29 Thread James Neff
Greetings, I've got a Java application reading data from a flat file and inserting it into a table. The first 2 million rows (each file contained about 1 million lines) went pretty fast. Less than 40 mins to insert into the database. After that the insert speed is slow. I think I may

Re: [GENERAL] LDAP configuration problem

2006-12-29 Thread Joshua D. Drake
> > The rpms for Fedora 6 from www.postgresql.com don't seem to have the > LDAP support built-in, it shows that Hmmm... that isn't good. Although, do we want to -require- ldap? Joshua D. Drake > > invalid entry in file "/pub/pgsql/data/pg_hba.conf" at line 79, token > "ldap" > > But afte

Re: [GENERAL] How to unlock a row

2006-12-29 Thread Jerry Sievers
"vinjvinj" <[EMAIL PROTECTED]> writes: > One of my updates is hanging on a postgres table. I'm guessing if the > table or row is locked. > > Questions: > > 1. What table can I select from to find the lock? > 2. How do I clear the lock? 1. pg_locks view, perhaps joined with pg_class and pg_stat_ac

Re: [GENERAL] LDAP configuration problem

2006-12-29 Thread Wenjian Yang
Magnus, You are absolutely correct. Sorry that I didn't see the last line since GMAIL hid it for me. The rpms for Fedora 6 from www.postgresql.com don't seem to have the LDAP support built-in, it shows that invalid entry in file "/pub/pgsql/data/pg_hba.conf" at line 79, token "ldap" But after

[GENERAL] Index vacuum improvements in 8.2

2006-12-29 Thread Wes
From the 8.2 release notes: Speed up vacuuming of B-Tree indexes (Heikki Linnakangas, Tom) From "2. Vacuum is now done in one phase, scanning the index in physical order. That significantly speeds up index vacuums of large inde

Re: [GENERAL] How to unlock a row

2006-12-29 Thread vinjvinj
> 1. What table can I select from to find the lock? pg_locks shows no rows returned. But the update still hangs. VJ

[GENERAL] How to unlock a row

2006-12-29 Thread vinjvinj
One of my updates is hanging on a postgres table. I'm guessing the table or row is locked. Questions: 1. What table can I select from to find the lock? 2. How do I clear the lock? Thanks for your help VJ

Re: [GENERAL] Why ContinueUpdateOnError is not implemented in npgsql

2006-12-29 Thread Andrus
>> There are only two ways to fix this issue. >> NpgsqlDataAdapter must invoke an automatic ROLLBACK after each error >> or use savepoints before each command. > > Yup, a savepoint before each command is required if that's the behavior > you want. Yes, that adds overhead. The reason it's not automatic

Re: [GENERAL] ERROR: could not access status of transaction

2006-12-29 Thread Tom Lane
"Stuart Grimshaw" <[EMAIL PROTECTED]> writes: > On 12/23/06, Stuart Grimshaw <[EMAIL PROTECTED]> wrote: >> berble=# select * from headlines ; >> ERROR: could not access status of transaction 1668180339 >> DETAIL: could not open file "pg_clog/0636": No such file or directory >> >> Using Postgres

Re: [GENERAL] could not open relation:no such file or directory

2006-12-29 Thread Adrian Klaver
On Tuesday 26 December 2006 2:43 am, karthik wrote: > hello, > > my name is karthik. > > I am facing a problem when trying to select values from a table in > postgresql. > > when I execute a query like "select title from itemsbytitle;" I > get an error: > > Error: Could not open relati

Re: [GENERAL] out of memory woes

2006-12-29 Thread Martijn van Oosterhout
On Wed, Dec 27, 2006 at 07:15:48AM -0800, Angva wrote: > Just wanted to post an update. Not going too well. Each time the > scripts were run over this holiday weekend, more statements failed with > out of memory errors, including more and more create index statements > (it had only been clusters pr

Re: [GENERAL] Backup Restore

2006-12-29 Thread Dave Page
Bob Pawley wrote: When I change it to view "all files" it's there - but it won't do anything. So I assume you've used a different extension than the one the dialogue is expecting by default? When you say "it won't do anything", do you mean you cannot select the file, or that nothing happens

Re: [GENERAL] LDAP configuration problem

2006-12-29 Thread Magnus Hagander
Wenjian Yang wrote: > > Sorry, below are the lines in the log file: > > LOG: invalid entry in file "/pub/pgsql/data/pg_hba.conf" at line 78, > token "ldap://dc.domain.com/dc=domain^Adc=com;DOMAIN\"; > FATAL: missing or erroneous pg_hba.conf file > HINT: See server log for details. > > And the