Re: [GENERAL] Group by bug?

2012-12-27 Thread wd
Oh, I see, thanks for your quick reply. On Fri, Dec 28, 2012 at 3:47 PM, Jov wrote: > 2012/12/28 wd >> hi, >> wd_test=# \d t1 >> Table "public.t1" >> Column | Type | Modifiers >> +-+---

Re: [GENERAL] Group by bug?

2012-12-27 Thread Jov
2012/12/28 wd > hi, > wd_test=# \d t1 > Table "public.t1" > Column | Type | Modifiers > +-+- > id | integer | not null default nextval('t1_id_seq'::regclass) > tag | text

Re: [GENERAL] Group by bug?

2012-12-27 Thread wd
Sorry, forgot to say: PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3), 64-bit psql (9.2.2) On Fri, Dec 28, 2012 at 3:24 PM, wd wrote: > hi, > > wd_test=# \d t1 > Table "public.t1" > Column | Type |

[GENERAL] Group by bug?

2012-12-27 Thread wd
hi, wd_test=# \d t1 Table "public.t1" Column | Type | Modifiers +-+- id | integer | not null default nextval('t1_id_seq'::regclass) tag | text | wd_test=# select * from t1;
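The preview cuts off before the query, but given that t1 has a serial id column and a tag column, the question is very likely the common "why does grouping by the primary key let me select other columns?" surprise. A hedged sketch of that behavior (the actual query in the thread is an assumption; t1's definition is taken from the \d output above):

```sql
-- Since PostgreSQL 9.1, columns functionally dependent on a
-- grouped primary key may appear unaggregated in the SELECT list:
SELECT id, tag
FROM t1
GROUP BY id;       -- legal: tag is determined by the PK id

-- Grouping by a non-key column still requires an aggregate
-- for everything else:
SELECT tag, count(*)
FROM t1
GROUP BY tag;
```

This often looks like a GROUP BY bug to people coming from releases before 9.1, where the first statement was rejected.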

Re: [GENERAL] update from a csv file?

2012-12-27 Thread François Beausoleil
On 2012-12-27, at 09:54, Kirk Wythers wrote: > I have been using COPY FROM to do a mass import of records from CSV files > into a new database. I have discovered, however, a small number of records (a > few thousand) in one of the files that contain new data that needs to be > added to the data

Re: [GENERAL] update table from csv file

2012-12-27 Thread Craig Ringer
On 12/28/2012 12:31 AM, Kirk Wythers wrote: > I have been using COPY FROM to do a mass import of records from CSV files > into a new database. I have discovered, however, a small number of records (a > few thousand) in one of the files that contain new data that needs to be > added to the database

Re: [GENERAL] Cursor fetch Problem.

2012-12-27 Thread Harry
Below is the Linux ps -ef | grep postgres output: 501 12163 5473 0 Dec19 ? 00:00:00 postgres: enterprisedb sampledb 192.168.0.231[53991] ? EDB-SPL Procedure successfully completed 501 12167 5473 0 Dec19 ? 00:00:00 postgres: enterprisedb sampledb 192.168.0.231[53995] ? E

[GENERAL] update from a csv file?

2012-12-27 Thread Kirk Wythers
I have been using COPY FROM to do a mass import of records from CSV files into a new database. I have discovered, however, a small number of records (a few thousand) in one of the files that contain new data that needs to be added to the database, but on rows that have a primary key and have alrea
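The usual answer to this question (COPY only appends, it cannot update) is to COPY the changed rows into a staging table and then UPDATE the target from it. A minimal sketch; the table and column names, and the CSV path, are illustrative, not taken from the thread:

```sql
-- Stage the CSV rows in a temp table with the target's shape.
CREATE TEMP TABLE staging (LIKE target INCLUDING DEFAULTS);

-- Load the file that contains the changed rows.
COPY staging FROM '/path/to/changed_rows.csv' WITH (FORMAT csv, HEADER true);

-- Overwrite the matching rows by primary key.
UPDATE target t
SET    col_a = s.col_a,
       col_b = s.col_b
FROM   staging s
WHERE  t.id = s.id;
```

Rows in staging that have no match in target are simply ignored by the UPDATE; a follow-up INSERT ... SELECT can add them if needed.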

[GENERAL] pg_dirtyread doesn't work

2012-12-27 Thread Alejandro Carrillo
Hi, after many attempts I managed to compile this PostgreSQL C function for Windows (with VS C++ 2008), but the function gets an error when it tries to read a deleted row. The example: CREATE FUNCTION pg_dirtyread(oid) RETURNS setof record AS E'$libdir/pg_dirtyread', 'pg_finfo_pg_dirtyread' LANG
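The declaration in the message is cut off, but its usual shape is worth sketching, because the second AS argument quoted above names the finfo symbol: with PG_FUNCTION_INFO_V1, pg_finfo_pg_dirtyread is the version-info record, not the function itself, which may be related to the error. A hedged sketch of the conventional declaration and call (column list is an assumption):

```sql
-- The entry point is normally the function symbol, not pg_finfo_*:
CREATE FUNCTION pg_dirtyread(oid)
RETURNS setof record
AS E'$libdir/pg_dirtyread', 'pg_dirtyread'
LANGUAGE C STRICT;

-- setof record requires the caller to supply a column definition list:
SELECT * FROM pg_dirtyread('t1'::regclass::oid)
    AS t(id integer, tag text);
```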

Re: [GENERAL] libpq thread safety

2012-12-27 Thread Mark Morgan Lloyd
Tom Lane wrote: Mark Morgan Lloyd writes: Do any special precautions need to be taken when PQNotifies is being called, to make sure that nothing else is referencing the handle? It's pretty much the same as any other operation on a PGconn: if there could be more than one thread touching the co

Re: [GENERAL] progress of long running operation

2012-12-27 Thread Scott Ribe
On Dec 27, 2012, at 12:46 PM, Tom Lane wrote: > Or you could run contrib/pgstattuple's pgstattuple() function every so > often --- it will report the uncommitted tuples as "dead", which is > inaccurate, but you'd be able to see how fast the number is increasing. That's exactly the kind of thing I

Re: [GENERAL] progress of long running operation

2012-12-27 Thread Tom Lane
Scott Ribe writes: > Is there any way to get some insight into the progress of: > insert into foo select distinct on (...) from bar where... Watching the physical size of the foo table might be close enough. Or you could run contrib/pgstattuple's pgstattuple() function every so often --- it will
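Both suggestions can be watched from a second session while the INSERT runs. A sketch, assuming the target table is named foo and the pgstattuple extension is available:

```sql
-- Option 1: watch the table's physical size grow.
SELECT pg_size_pretty(pg_relation_size('foo'));

-- Option 2: pgstattuple reports the not-yet-committed tuples
-- as "dead", so dead_tuple_count tracks the insert's progress.
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT dead_tuple_count, dead_tuple_len
FROM   pgstattuple('foo');
```

Re-running either query every so often gives a rough rows-per-minute rate; neither is exact, as Tom notes, but both are cheap enough to poll.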

[GENERAL] progress of long running operation

2012-12-27 Thread Scott Ribe
Is there any way to get some insight into the progress of: insert into foo select distinct on (...) from bar where... It's got to do with importing some legacy data, which has no proper primary key, and duplicates, and garbage that won't be accepted. And there's 30,000,000 rows, and I'm running on

Re: [GENERAL] libpq thread safety

2012-12-27 Thread Tom Lane
Mark Morgan Lloyd writes: > Do any special precautions need to be taken when PQNotifies is being > called, to make sure that nothing else is referencing the handle? It's pretty much the same as any other operation on a PGconn: if there could be more than one thread touching the connection object

[GENERAL] libpq thread safety

2012-12-27 Thread Mark Morgan Lloyd
Do any special precautions need to be taken when PQNotifies is being called, to make sure that nothing else is referencing the handle? The sort of nightmare scenario I'm thinking about is when a background thread is periodically pulling data from a table into a buffer, but a foreground (GUI) t

Re: [GENERAL] update table from a csv file

2012-12-27 Thread Adrian Klaver
On 12/27/2012 08:50 AM, Kirk Wythers wrote: On Dec 27, 2012, at 10:39 AM, Adrian Klaver wrote: No. Some questions though. Thanks for the reply Adrian. What version of Postgres? 9.1 Is that the actual UPDATE statement? I see no SET. I was reading the

Re: [GENERAL] update table from a csv file

2012-12-27 Thread Kirk Wythers
On Dec 27, 2012, at 10:39 AM, Adrian Klaver wrote: > No. Some questions though. Thanks for the reply Adrian. > > What version of Postgres? 9.1 > Is that the actual UPDATE statement? I see no SET. I was reading the docs but obviously don't understand the syntax of the update statement.

Re: [GENERAL] New Zealand Postgis DBA job vacancy

2012-12-27 Thread Bexley Hall
Hi Martin, On 12/27/2012 8:31 AM, Martin Gainty wrote: so...why doesn't Postgres port to embedded systems? IME, it requires lots of resources (the vast majority of embedded systems are resource starved -- resources == $$ and when you are selling things in volume, every penny saved adds up qui

Re: [GENERAL] update table from a csv file

2012-12-27 Thread Adrian Klaver
On 12/27/2012 08:27 AM, Kirk Wythers wrote: I have been using COPY FROM to do a mass import of records from CSV files into a new database. I have discovered, however, a small number of records (a few thousand) in one of the files that contain new data that needs to be added to the database, but

[GENERAL] update table from csv file

2012-12-27 Thread Kirk Wythers
I have been using COPY FROM to do a mass import of records from CSV files into a new database. I have discovered, however, a small number of records (a few thousand) in one of the files that contain new data that needs to be added to the database, but on rows that have a primary key and have alrea

[GENERAL] update table from a csv file

2012-12-27 Thread Kirk Wythers
I have been using COPY FROM to do a mass import of records from CSV files into a new database. I have discovered, however, a small number of records (a few thousand) in one of the files that contain new data that needs to be added to the database, but on rows that have a primary key and have alrea

Re: [GENERAL] New Zealand Postgis DBA job vacancy

2012-12-27 Thread Martin Gainty
> From: bexley...@yahoo.com > To: pgsql-general@postgresql.org > Subject: Re: [GENERAL] New Zealand Postgis DBA job vacancy > > > Thinking (entirely) *in* metric doesn't. The problem is working > with *both*, simultaneously, requires some mental agility. > > Nearby, we have one of the few (only

Re: [GENERAL] Cursor fetch Problem.

2012-12-27 Thread Amit Kapila
On Thursday, December 27, 2012 11:51 AM Harry wrote: > Hi Amit, > Thanks for the reply. > Kindly see my output below. > > Also, tried to kill it first by using Cancel Backend and then > Terminate > Backend; output showing "True" but still remaining as a process (i.e. in > pg_stat_activity). Can you
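The cancel/terminate calls discussed above are the built-in signal functions. Both return true when the signal was *sent*, not when the backend has actually gone away, which is one reason a process can still show up briefly afterwards. A sketch, using pid 12163 from the ps output earlier in the thread:

```sql
-- Ask the backend to cancel its current query:
SELECT pg_cancel_backend(12163);

-- If that is not enough, terminate the whole backend:
SELECT pg_terminate_backend(12163);

-- Check whether it is still listed (the column is named
-- procpid rather than pid on releases before 9.2):
SELECT pid, state FROM pg_stat_activity WHERE pid = 12163;
```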

Re: [GENERAL] Cursor fetch Problem.

2012-12-27 Thread Amit Kapila
On Thursday, December 27, 2012 11:51 AM Harry wrote: > Hi Amit, > Thanks for Reply. > Kindly see my below output. > 16650;"sampledb";11965;10;"enterprisedb";"";"192.168.0.231";"";53897;"* > 2012-12-19 > 11:39:48.234799+05:30";"2012-12-19 11:39:53.288441+05:30";"2012-12-19 > 11:39:53.288441+05:30*";