Re: [GENERAL] Weird library problem

2014-10-16 Thread Holger.Friedrich-Fa-Trivadis
Adrian Klaver wrote:
> Library mismatch. Looks like rtpostgis-2.0.so is looking for something that 
> /lib64/libldap_r-2.4.so.2 is not providing. So, if I am following correctly 
> that would be the OpenSCG version. From the looks of it, that library is not 
> recent enough.

For the record:  Apparently there's a third library in the mix, which was not 
explicitly mentioned by the error messages.  Looks like the newer 
libldap_r-2.4.so.2 sort of knows about the "something" but does not quite 
provide it itself.  Turns out I also needed to copy over a newer 
liblber-2.4.so.2 to go with the newer libldap_r-2.4.so.2, and now the PostGIS 
package builds again.

What seemed weird was that the newer libldap_r-2.4.so.2 allowed PostGIS to run 
but broke the build.  The liblber-2.4.so.2 thing could possibly explain that.





[GENERAL] Re: How to start several PostgreSQL clusters at boot time on a Debian OS

2014-10-16 Thread Léa Massiot
Hello Adrian,
Yes, I am aware of it. I even used to use Debian pg_createcluster and other
tools before.
Even so, thank you for your comment.
Best regards.







[GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Steve Wampler


Hi,

This is with Postgresql 9.3.5.

I'm looking at using a COPY command (via jdbc) to do bulk inserts into a table that
includes a BIGSERIAL column.   Is there a way to mark the data in that
column so it gets assigned a new value on entry - akin to the use of 'default'
in an INSERT?   Some of the rows have values for the serial column, others
don't.

Or is the only way to use COPY for this task:

  COPY table_name (columnnameA, columnnameB, columnnameD) FROM source;

where the serial column name is omitted?  This wouldn't preserve the values
for the serial column on rows that have one already.

Thanks!
Steve
--
Steve Wampler -- swamp...@noao.edu
The gods that smiled on your birth are now laughing out loud.






Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Rob Sargent

On 10/16/2014 10:33 AM, Steve Wampler wrote:


Hi,

This is with Postgresql 9.3.5.

I'm looking at using a COPY command (via jdbc) to do bulk inserts into
a table that includes a BIGSERIAL column.  Is there a way to mark the
data in that column so it gets assigned a new value on entry - akin to
the use of 'default' in an INSERT?  Some of the rows have values for
the serial column, others don't.

Or is the only way to use COPY for this task:

  COPY table_name (columnnameA, columnnameB, columnnameD) FROM source;

where the serial column name is omitted?  This wouldn't preserve the
values for the serial column on rows that have one already.

Thanks!
Steve

Doesn't this guarantee collision at some point?

I might add a column to the target table which would contain the 
"foreign" serial id and give all records the "local" serial. Update 
local to foreign iff safe and desired.
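
A minimal sketch of that two-column idea (the table and column names here
are hypothetical):

   -- extra column to hold the incoming ("foreign") serial value, if any
   ALTER TABLE target ADD COLUMN foreign_id bigint;
   -- COPY into (foreign_id, ...); every row still gets a fresh "local"
   -- id from the bigserial default.  Then, iff safe and desired:
   UPDATE target SET id = foreign_id WHERE foreign_id IS NOT NULL;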





[GENERAL] Misunderstanding deadlocks

2014-10-16 Thread snacktime
I'm confused about how deadlock detection and breaking deadlocks works.
Googling around it seems that the server should be detecting deadlocks and
aborting one of the queries.

But I'm getting occasional deadlocks that literally hang forever.  I'm
assuming they are deadlocks because they show up when running the queries I
got from this url:

https://wiki.postgresql.org/wiki/Lock_Monitoring


I'm running postgres 9.3 on ubuntu, configuration is the default.

Chris


Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Steve Wampler

On 10/16/2014 09:42 AM, Rob Sargent wrote:

On 10/16/2014 10:33 AM, Steve Wampler wrote:

This is with Postgresql 9.3.5.

I'm looking at using a COPY command (via jdbc) to do bulk inserts into a table that
includes a BIGSERIAL column.   Is there a way to mark the data in that
column so it gets assigned a new value on entry - akin to the use of 'default'
in an INSERT?   Some of the rows have values for the serial column, others
don't.

Or is the only way to use COPY for this task:

  COPY table_name (columnnameA, columnnameB, columnnameD) FROM source;

where the serial column name is omitted?  This wouldn't preserve the values
for the serial column on rows that have one already.


Doesn't this guarantee collision at some point?


Depends - without the UNIQUE tag on that column it shouldn't matter.
Or, with a bigserial there's a lot of room to play with.  The rows with existing
serial values might all have negative values for that column, for example.


I might add a column to the target table which would contain the "foreign" serial id and 
give all records the "local"
serial. Update local to foreign iff safe and desired.


I don't think this addresses the problem of having entry rows with no serial 
column in them.

Let me generalize the problem a bit:  How can I specify that the default value 
of a column
is to be used with a COPY command when some rows have values for that column and
some don't?

--
Steve Wampler -- swamp...@noao.edu
The gods that smiled on your birth are now laughing out loud.






Re: [GENERAL] Any postgres API available to get errorcode for PQerrorMessage

2014-10-16 Thread Roopeshakumar Narayansa Shalgar (rshalgar)
Appreciate any replies on this.

/rK

-Original Message-
From: Roopeshakumar Narayansa Shalgar (rshalgar) 
Sent: Wednesday, October 15, 2014 10:19 PM
To: 'Tom Lane'; pgsql-...@postgresql.org; pgsql-general@postgresql.org
Subject: RE: [GENERAL] Any postgres API available to get errorcode for 
PQerrorMessage

Thanks Tom,

But that's not what I am looking for. 

I need the 'integer' errorcode equivalent of " errormsg   string" returned by 
PQerrorMessage.

/rK

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Wednesday, October 15, 2014 10:07 PM
To: Roopeshakumar Narayansa Shalgar (rshalgar)
Cc: pgsql-...@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Any postgres API available to get errorcode for 
PQerrorMessage

"Roopeshakumar Narayansa Shalgar (rshalgar)"  writes:
> Hi,
> PQerrorMessage prints the error message (like "no space available", etc.).
> Does postgres provide any API which gives the error code listed in the
> appendix below:

> http://www.postgresql.org/docs/9.1/static/errcodes-appendix.html

Use PQresultErrorField(..., PG_DIAG_SQLSTATE)

http://www.postgresql.org/docs/9.3/static/libpq-exec.html#LIBPQ-EXEC-MAIN

regards, tom lane




[GENERAL] How to Install Extensions

2014-10-16 Thread Naveen Krishna
Respected Sir,

I want to use the PostgreSQL database, but when I try to execute a
script it shows me an error about extensions. I have tried many ways but I am
not able to execute the script. Please tell me how to install an extension in
PostgreSQL. I am waiting for your kind reply.



With Regards
M N Muralikrishna


[GENERAL] Newb question - PostgreSQL ODBC Driver for Teiid

2014-10-16 Thread Conover, Paul W
Hello,

I have a Virtual database connection to Teiid using PostgreSQL ODBC driver 
8.04.02.00. When running my SQL queries everything completes successfully, but 
the Column Names are all coming back lower case. I did quite a bit of research 
on this and am finding that if the alias is in Double Quotes it should be 
case-insensitive, but this is not what is happening. Is there a setting in the 
configuration for this? I have gone through all the settings and did not see 
anything specific to case sensitivity.

Thank you,

Paul Conover
Senior Application Analyst | IT Warehouse Development
Argus Health Systems, Inc. | 1300 Washington Street, Kansas City, MO 64105
Phone: 816.435.2492 Email: paul.cono...@argushealth.com





Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread David G Johnston
Steve Wampler wrote
> Let me generalize the problem a bit:  How can I specify that the default
> value of a column
> is to be used with a COPY command when some rows have values for that
> column and
> some don't?

If you provide a value for a column, including NULL, the default expression
is not evaluated.

COPY is dumb but fast.  If you need logic you need to add it yourself. 
Either before the copy or copy into a temporary UNLOGGED table and write
smart SQL to migrate from that to the live table.

You can also put smarts into a trigger.
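
One rough sketch of such a trigger, assuming a hypothetical table "target"
whose bigserial column "id" draws from the sequence "target_id_seq" (note
that a per-row trigger will cost some of COPY's speed):

   -- replace a NULL id with the next sequence value before the row lands;
   -- BEFORE ROW triggers run ahead of the NOT NULL check, so this works
   -- even if the column is declared NOT NULL
   CREATE FUNCTION fill_missing_id() RETURNS trigger AS $$
   BEGIN
       IF NEW.id IS NULL THEN
           NEW.id := nextval('target_id_seq');
       END IF;
       RETURN NEW;
   END;
   $$ LANGUAGE plpgsql;

   CREATE TRIGGER fill_missing_id BEFORE INSERT ON target
       FOR EACH ROW EXECUTE PROCEDURE fill_missing_id();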

Personally I would generally stage all the data then write two INSERT INTO
... SELECT statements; one for the known values and one where you omit the
column and let the system use the default.
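
For instance, a minimal sketch of the staging approach, with hypothetical
table and column names:

   CREATE UNLOGGED TABLE staging (id bigint, a text, b text);
   COPY staging (id, a, b) FROM STDIN;  -- bulk-load everything; id may be NULL
   -- rows that arrived with a serial value keep it:
   INSERT INTO target (id, a, b)
       SELECT id, a, b FROM staging WHERE id IS NOT NULL;
   -- rows without one fall back to the bigserial default:
   INSERT INTO target (a, b)
       SELECT a, b FROM staging WHERE id IS NULL;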

David J.

 







Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Rob Sargent

On 10/16/2014 11:04 AM, Steve Wampler wrote:

On 10/16/2014 09:42 AM, Rob Sargent wrote:

On 10/16/2014 10:33 AM, Steve Wampler wrote:

This is with Postgresql 9.3.5.

I'm looking at using a COPY command (via jdbc) to do bulk inserts into
a table that includes a BIGSERIAL column.  Is there a way to mark the
data in that column so it gets assigned a new value on entry - akin to
the use of 'default' in an INSERT?  Some of the rows have values for
the serial column, others don't.

Or is the only way to use COPY for this task:

  COPY table_name (columnnameA, columnnameB, columnnameD) FROM source;

where the serial column name is omitted?  This wouldn't preserve the
values for the serial column on rows that have one already.


Doesn't this guarantee collision at some point?


Depends - without the UNIQUE tag on that column it shouldn't matter.
Or, with a bigserial there's a lot of room to play with.  The rows with
existing serial values might all have negative values for that column,
for example.


I might add a column to the target table which would contain the
"foreign" serial id and give all records the "local" serial. Update
local to foreign iff safe and desired.


I don't think this addresses the problem of having entry rows with no 
serial column in them.

No data in the column (null) or no column at all?

I appreciate the vastness of bigserial but I think it starts at 1. Are
negative numbers even allowed?  To clarify my suggestion: all incoming
records would get a new "local" bigserial, and those incoming records
WITH a value would set the "foreign" bigserial, though that column would
have to be typed as nullable bigint (this would allow negative values).
That said, according to my test, the supplied bigserial value would get
inserted as supplied if not null (without the extra column I suggested).

My test:

   postgres=# create table t (id bigserial, name text);
   CREATE TABLE
   postgres=# insert into t values('rjs');
   ERROR:  invalid input syntax for integer: "rjs"
   LINE 1: insert into t values('rjs');
                                ^
   postgres=# insert into t (name) values('rjs');
   INSERT 0 1
   postgres=# select * from t;
    id | name
   ----+------
     1 | rjs
   (1 row)

   postgres=# insert into t (id, name) values(777, 'rjs');
   INSERT 0 1
   postgres=# select * from t;
    id  | name
   -----+------
      1 | rjs
    777 | rjs
   (2 rows)






Let me generalize the problem a bit:  How can I specify that the
default value of a column is to be used with a COPY command when some
rows have values for that column and some don't?





Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Rob Sargent

On 10/16/2014 11:38 AM, David G Johnston wrote:

Steve Wampler wrote

Let me generalize the problem a bit:  How can I specify that the default
value of a column is to be used with a COPY command when some rows have
values for that column and some don't?

If you provide a value for a column, including NULL, the default expression
is not evaluated.

COPY is dumb but fast.  If you need logic you need to add it yourself.
Either before the copy or copy into a temporary UNLOGGED table and write
smart SQL to migrate from that to the live table.

You can also put smarts into a trigger.

Personally I would generally stage all the data then write two INSERT INTO
... SELECT statements; one for the known values and one where you omit the
column and let the system use the default.

David J.

  







Yeah, part three of my test proves his point:

postgres=# insert into t (id, name) values(null, 'rjs');
ERROR:  null value in column "id" violates not-null constraint
DETAIL:  Failing row contains (null, rjs).




Re: [GENERAL] How to Install Extensions

2014-10-16 Thread Andreas Kretschmer
Naveen Krishna  wrote:

> Respected Sir,
> 
>     I want to use the PostgreSQL database, but when I try to execute a
> script it shows me an error about extensions. I have tried many ways but I
> am not able to execute the script. Please tell me how to install an
> extension in PostgreSQL. I am waiting for your kind reply.

create extension ...;


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°




Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread David G Johnston
On Thu, Oct 16, 2014 at 11:44 AM, lup [via PostgreSQL] <
ml-node+s1045698n5823292...@n5.nabble.com> wrote:

>
>
> I appreciate the vastness of bigserial but I think it starts at 1.  Are
> negative numbers even allowed?


http://www.postgresql.org/docs/9.3/static/sql-createsequence.html

A DEFAULT sequence starts at one but it is able to generate any bigint
value.  Regardless, the value generated by the sequence and the allowed
values for the target column are distinct - which is why a sequence
attached to a normal integer will start throwing "value out of bounds"
errors before it runs out of values.

Therefore, if one can live with disallowing half of the bigint range for
auto-generation, using the negative half of the range for manual
assignment is a quick and simple solution to the problem.

David J.





Re: [GENERAL] How to Install Extensions

2014-10-16 Thread Pavel Stehule
Hi

2014-10-16 15:27 GMT+02:00 Naveen Krishna :

> Respected Sir,
>
> I want to use the PostgreSQL database, but when I try to execute a
> script it shows me an error about extensions. I have tried many ways but I
> am not able to execute the script. Please tell me how to install an
> extension in PostgreSQL. I am waiting for your kind reply.
>

Please specify the error message. What extension do you want to install?

First, the extension has to be installed on the server.

Second, you have to register the extension in your database by executing the
statement CREATE EXTENSION extension_name;
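
For example, once the matching contrib package is present on the server,
enabling an extension such as hstore (used here purely as an example)
looks like this:

   CREATE EXTENSION IF NOT EXISTS hstore;
   -- in psql, \dx then lists the extensions installed in this database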

Regards

Pavel Stehule


>
>
>
> With Regards
> M N Muralikrishna
>
>
>


Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Rob Sargent

On 10/16/2014 11:52 AM, David G Johnston wrote:
On Thu, Oct 16, 2014 at 11:44 AM, lup [via PostgreSQL] <[hidden email]> wrote:




I appreciate the vastness of bigserial but I think it starts at
1.  Are negative numbers even allowed?


http://www.postgresql.org/docs/9.3/static/sql-createsequence.html

A DEFAULT sequence starts at one but it is able to generate any bigint
value.  Regardless, the value generated by the sequence and the allowed
values for the target column are distinct - which is why a sequence
attached to a normal integer will start throwing "value out of bounds"
errors before it runs out of values.

Therefore, if one can live with disallowing half of the bigint range for
auto-generation, using the negative half of the range for manual
assignment is a quick and simple solution to the problem.


David J.




As proven by part 4 :)

   postgres=# insert into t (id, name) values(-777, 'rjs');
   INSERT 0 1
   postgres=# select * from t;
     id  | name
   ------+------
       1 | rjs
     777 | rjs
    -777 | rjs
   (3 rows)



Re: [GENERAL] Misunderstanding deadlocks

2014-10-16 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of snacktime
Sent: Thursday, October 16, 2014 1:02 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Misunderstanding deadlocks

I'm confused about how deadlock detection and breaking deadlocks works.  
Googling around it seems that the server should be detecting deadlocks and 
aborting one of the queries.

But I'm getting occasional deadlocks that literally hang forever.  I'm assuming 
they are deadlocks because they show up when running the queries I got from 
this url:

https://wiki.postgresql.org/wiki/Lock_Monitoring


I'm running postgres 9.3 on ubuntu, configuration is the default.

Chris


Deadlocks don’t “hang forever”.
Postgres is pretty good at discovering deadlocks.
Do you see a circular dependency in your locks?
The fact that some query hangs forever only means that some resource that this 
query is looking for was not released by some other connection (a user locked 
some object and went for a coffee break ☺).
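
As a first check, something like this simplified sketch shows ungranted
locks (the wiki page linked above has a fuller query that pairs each
waiter with its blocker):

   SELECT pid, locktype, relation::regclass AS relation, mode, granted
   FROM pg_locks
   WHERE NOT granted;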

Regards,
Igor Neyman


Re: [GENERAL] Misunderstanding deadlocks

2014-10-16 Thread Bill Moran
On Thu, 16 Oct 2014 10:02:08 -0700
snacktime  wrote:

> I'm confused about how deadlock detection and breaking deadlocks works.
> Googling around it seems that the server should be detecting deadlocks and
> aborting one of the queries.
> 
> But I'm getting occasional deadlocks that literally hang forever.  I'm
> assuming they are deadlocks because they show up when running the queries I
> got from this url:
> 
> https://wiki.postgresql.org/wiki/Lock_Monitoring
> 
> I'm running postgres 9.3 on ubuntu, configuration is the default.

Yes, PostgreSQL will detect deadlocks and randomly kill one of the locked
queries to break the deadlock.

Without seeing the actual queries you're having trouble with, I can only
speculate, but my speculation is that you're creating a situation that
creates a deadlock in the application that is not detectable from Postgres.
This isn't that hard to do, really. For example, if you're using Java, the
multi-threaded synchronization doesn't have deadlock detection, so if you're
taking out some DB locks in PG, then grabbing some Object locks in Java that
create a Java deadlock, Java will hold the PG locks until you kill the
process.

The other thing I've seen people get confused about is the fact that deadlocks
have nothing to do with time. A deadlock is a very specific condition where it
becomes impossible for two processes to ever move forward, and this is what
Postgres detects and breaks. It's entirely possible that your application could
be taking out non-deadlocking locks on the DB and holding them for a long time
for whatever reason, causing other processes to wait. This is a performance
problem to be solved, but it is _not_ a deadlock because eventually the blocking
process will finish and other processes will be allowed to continue. A common
mistake I've seen is processes that start transactions, then lock various 
tables,
then don't commit the transaction until some other time-consuming operation
finishes. E.g., not committing a transaction until a web browser responds is
putting the ability to lock your database solid into the hands of anyone who
can access your web site.
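
For contrast, here is a textbook deadlock that Postgres does detect,
sketched with a hypothetical accounts table and two interleaved sessions:

   -- session A:
   BEGIN;
   UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- locks row 1

   -- session B:
   BEGIN;
   UPDATE accounts SET balance = balance - 10 WHERE id = 2;  -- locks row 2
   UPDATE accounts SET balance = balance + 10 WHERE id = 1;  -- blocks on A

   -- session A:
   UPDATE accounts SET balance = balance + 10 WHERE id = 2;  -- blocks on B
   -- after deadlock_timeout (1s by default) one session is aborted with:
   -- ERROR:  deadlock detected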

HTH

-- 
Bill Moran
I need your help to succeed:
http://gamesbybill.com




Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Steve Wampler

On 10/16/2014 10:44 AM, Rob Sargent wrote:

On 10/16/2014 11:38 AM, David G Johnston wrote:

COPY is dumb but fast.  If you need logic you need to add it yourself.
Either before the copy or copy into a temporary UNLOGGED table and write
smart SQL to migrate from that to the live table.

You can also put smarts into a trigger.


Never thought about a trigger on a COPY before.  I'll look into that and
see what the hit is.

I was kinda hoping there was the equivalent of \N for indicating the use
of a default value instead of a null value, but I accept that such a thing
might be too expensive for COPY's goal in life.

Maybe the best approach is to switch to a batched insert, which might be
fast enough for my needs.

Thanks for the suggestions!

--
Steve Wampler -- swamp...@noao.edu
The gods that smiled on your birth are now laughing out loud.






Re: [GENERAL] Newb question - PostgreSQL ODBC Driver for Teiid

2014-10-16 Thread Adrian Klaver

On 10/16/2014 06:56 AM, Conover, Paul W wrote:

Hello,

I have a Virtual database connection to Teiid using PostgreSQL ODBC
driver 8.04.02.00. When running my SQL queries everything completes
successfully, but the Column Names are all coming back lower case. I did
quite a bit of research on this and am finding that if the alias is in
Double Quotes it should be case-insensitive, but this is not what is
happening. Is there a setting in the configuration for this? I have gone
through all the settings and did not see anything specific to
case sensitivity.


If I am following you, then you have this backwards.

See:

http://www.postgresql.org/docs/9.3/static/sql-syntax-lexical.html

Quoting an identifier also makes it case-sensitive, whereas unquoted 
names are always folded to lower case. For example, the identifiers FOO, 
foo, and "foo" are considered the same by PostgreSQL, but "Foo" and 
"FOO" are different from these three and each other. (The folding of 
unquoted names to lower case in PostgreSQL is incompatible with the SQL 
standard, which says that unquoted names should be folded to upper case. 
Thus, foo should be equivalent to "FOO" not "foo" according to the 
standard. If you want to write portable applications you are advised to 
always quote a particular name or never quote it.)
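
A quick illustration of the folding rules quoted above:

   SELECT 1 AS ColumnName;    -- unquoted: the result column is "columnname"
   SELECT 1 AS "ColumnName";  -- quoted: the result column keeps its case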






Thank you,

Paul Conover




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Tables used in query not listed in EXPLAIN

2014-10-16 Thread Alvaro Melo

Hi,

I have a peculiar situation here. I'm optimizing some queries and I 
noticed that EXPLAIN doesn't always show the tables involved in the 
query. Below is a little example, but I have other examples with longer 
queries. Note that the tables tabela_documento_fiscal and pessoa aren't 
considered in its output. Am I missing something, or might it be a bug?


Details:
PostgreSQL 9.2.6 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 
4.7.2-5) 4.7.2, 64-bit

Debian GNU/Linux 7.3 (wheezy)


vidal=> EXPLAIN
vidal-> SELECT COUNT(*)
vidal-> FROM documento_eletronico de
vidal-> ;
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Aggregate  (cost=4856.12..4856.14 rows=1 width=0)
   ->  Seq Scan on documento_eletronico de  (cost=0.00..4606.50 rows=99850 width=0)


vidal=> EXPLAIN
vidal-> SELECT COUNT(*)
vidal-> FROM documento_eletronico de
vidal->  LEFT OUTER JOIN tabela_documento_fiscal tdf ON tdf.cd_documento = de.cd_documento
vidal-> ;
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Aggregate  (cost=4856.12..4856.14 rows=1 width=4)
   ->  Seq Scan on documento_eletronico de  (cost=0.00..4606.50 rows=99850 width=4)
(2 rows)

vidal=> EXPLAIN SELECT COUNT(*)
FROM documento_eletronico de
 LEFT OUTER JOIN tabela_documento_fiscal tdf ON tdf.cd_documento = de.cd_documento
 LEFT OUTER JOIN pessoa pc ON pc.cd_pessoa = de.cd_pessoa_usuario
;
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Aggregate  (cost=4856.12..4856.14 rows=1 width=8)
   ->  Seq Scan on documento_eletronico de  (cost=0.00..4606.50 rows=99850 width=8)
(2 rows)

vidal=> EXPLAIN
vidal-> SELECT * FROM pessoa;
                           QUERY PLAN
-----------------------------------------------------------------
 Seq Scan on pessoa  (cost=0.00..3102.07 rows=133007 width=117)



--
Álvaro Nunes Melo        Atua Sistemas de Informação
alv...@atua.com.br       http://www.atua.com.br
(54) 9976-0106           (54) 3045-4144





Re: [GENERAL] Tables used in query not listed in EXPLAIN

2014-10-16 Thread Tom Lane
Alvaro Melo  writes:
> I have a peculiar situation here. I'm optimizing some queries and I 
> noticed that EXPLAIN doesn't always show the tables involved in the 
> query. Below is a little example, but I have other examples with longer 
> queries. Note that the tables tabela_documento_fiscal and pessoa aren't 
> considered in its output. Am I missing something, or might it be a bug?

It looks to me like it's optimizing away the left joins, presumably
on the grounds that they join to a unique key so there can't be more
than one matching row --- and this query has no need to know exactly
which row, if any, matches.
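
A minimal reproduction of that join removal, using hypothetical tables:

   CREATE TABLE parent (id int PRIMARY KEY);
   CREATE TABLE child  (id int PRIMARY KEY, parent_id int);

   EXPLAIN SELECT COUNT(*)
   FROM child c LEFT JOIN parent p ON p.id = c.parent_id;
   -- parent is joined on its primary key and none of its columns are
   -- referenced, so the plan shows only a scan of child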

regards, tom lane




[GENERAL] CopyManager(In/out) vs. delete/insert directly

2014-10-16 Thread Emi Lu

  
  
Hello,

Two options for data (>1M); may I know which one is better, please?

(1) copyOut (JDBC CopyManager)

    copyOut t1 into a.csv
    delete t2 where pk.cols in t1
    copyIn t2 from a.csv

(2) setAutoCommit(false);

    delete t2 where pk.cols in t1;
    insert t2 select * from t1;

Thank you
Emi
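
(Option (2), written out as plain SQL under the assumption of a
single-column primary key named pk, would look roughly like:

   BEGIN;                                        -- setAutoCommit(false)
   DELETE FROM t2 USING t1 WHERE t2.pk = t1.pk;  -- drop rows present in t1
   INSERT INTO t2 SELECT * FROM t1;              -- re-insert from t1
   COMMIT;
)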

  




Re: [GENERAL] Any postgres API available to get errorcode for PQerrorMessage

2014-10-16 Thread Adrian Klaver

On 10/16/2014 12:09 AM, Roopeshakumar Narayansa Shalgar (rshalgar) wrote:

Appreciate any replies on this.


It has been answered, see Tom Lane's link below. In particular you are 
looking for PG_DIAG_SQLSTATE.




/rK

-Original Message-
From: Roopeshakumar Narayansa Shalgar (rshalgar)
Sent: Wednesday, October 15, 2014 10:19 PM
To: 'Tom Lane'; pgsql-...@postgresql.org; pgsql-general@postgresql.org
Subject: RE: [GENERAL] Any postgres API available to get errorcode for 
PQerrorMessage

Thanks Tom,

But that's not what I am looking for.

I need the 'integer' errorcode equivalent of " errormsg   string" returned by 
PQerrorMessage.

/rK

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Wednesday, October 15, 2014 10:07 PM
To: Roopeshakumar Narayansa Shalgar (rshalgar)
Cc: pgsql-...@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Any postgres API available to get errorcode for 
PQerrorMessage

"Roopeshakumar Narayansa Shalgar (rshalgar)"  writes:

Hi,
PQerrorMessage prints the error message (like "no space available", etc.).
Does postgres provide any API which gives the error code listed in the
appendix below:



http://www.postgresql.org/docs/9.1/static/errcodes-appendix.html


Use PQresultErrorField(..., PG_DIAG_SQLSTATE)

http://www.postgresql.org/docs/9.3/static/libpq-exec.html#LIBPQ-EXEC-MAIN

regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Adrian Klaver

On 10/16/2014 11:17 AM, Steve Wampler wrote:

On 10/16/2014 10:44 AM, Rob Sargent wrote:

On 10/16/2014 11:38 AM, David G Johnston wrote:

COPY is dumb but fast.  If you need logic you need to add it yourself.
Either before the copy or copy into a temporary UNLOGGED table and write
smart SQL to migrate from that to the live table.

You can also put smarts into a trigger.


Never thought about a trigger on a COPY before.  I'll look into that and
see what the hit is.

I was kinda hoping there was the equivalent of \N for indicating the use
of a default value instead of a null value, but I accept that such a thing
might be too expensive for COPY's goal in life.

Maybe the best approach is to switch to a batched insert, which might be
fast enough for my needs.


Might want to take a look at pg_bulkload:

http://pgbulkload.projects.pgfoundry.org/pg_bulkload.html

in particular its FILTER function:

http://pgbulkload.projects.pgfoundry.org/pg_bulkload.html#filter



Thanks for the suggestions!




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Will pg_repack improve this query performance?

2014-10-16 Thread Josh Kupershmidt
On Wed, Oct 15, 2014 at 5:03 AM, Alban Hertroys  wrote:
> A CLUSTER would help putting rows with the same to_id together. Disk access 
> would be less random that way, so it would help some.
>
> According to your query plan, accessing disks (assuming that’s what made the 
> difference) was 154 (7700 ms / 50 ms) times slower than accessing memory. I 
> don’t have the numbers for your disks or memory, but that doesn’t look like 
> an incredibly unrealistic difference. That begs the question, how random was 
> that disk access and how much can be gained from clustering that data?

Other than grouping tuples in a more favorable order to minimize I/O,
the big benefit of running a CLUSTER or pg_repack is that you
eliminate any accumulated bloat. (And if bloat is your real problem,
ideally you can adjust your autovacuum settings to avoid the problem
in the future.) You may want to check on the bloat of that table and
its indexes with something like this:

https://wiki.postgresql.org/wiki/Show_database_bloat
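
(As a quicker, much rougher alternative to that full query, the
statistics collector's dead-tuple counters give a first hint of where
bloat may be accumulating:

   SELECT relname, n_live_tup, n_dead_tup
   FROM pg_stat_user_tables
   ORDER BY n_dead_tup DESC
   LIMIT 10;
)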




Re: [GENERAL] Will pg_repack improve this query performance?

2014-10-16 Thread Abelard Hoffman
On Wed, Oct 15, 2014 at 2:03 AM, Alban Hertroys  wrote:

>
> On 15 Oct 2014, at 4:33, Abelard Hoffman  wrote:
>
> > I believe this query is well optimized, but it's slow if the all the
> blocks aren't already in memory.
> >
> > Here's example explain output. You can see it takes over 7 seconds to
> run when it needs to hit the disk, and almost all of it is related to
> checking if the user has "messages."
> >
> >   http://explain.depesz.com/s/BLT
>
> From that plan it is obvious that the index scan takes the most time. It
> looks like you have 3315 rows matching to_id = users.id, of which only 10
> match your query conditions after applying the filter.
>
> With your current setup, the database first needs to find candidate rows
> in the index and then has to check the other conditions against the table,
> which is likely to involve some disk access.
>
> > On a second run, it's extremely fast (< 50ms). So I'm thinking it's a
> lack of clustering on the "Index Cond: (to_id = users.user_id)" that's the
> culprit.
>
> That probably means that the relevant parts of the table were still in
> memory, which means the scan did not need to visit the disk to load the
> matched rows to filter the NULL conditions in your query.
>
> > I'm afraid of using CLUSTER due to the exclusive lock, but I found
> pg_repack while researching:
> > http://reorg.github.io/pg_repack/
>
> A CLUSTER would help putting rows with the same to_id together. Disk
> access would be less random that way, so it would help some.
>
> According to your query plan, accessing disks (assuming that’s what made
> the difference) was 154 (7700 ms / 50 ms) times slower than accessing
> memory. I don’t have the numbers for your disks or memory, but that doesn’t
> look like an incredibly unrealistic difference. That begs the question, how
> random was that disk access and how much can be gained from clustering that
> data?
>
> Did you try a partial index on to_id with those NULL conditions? That
> should result in a much smaller index size, which in turn makes it faster
> to scan - much so if the index is difficult to keep in memory because of
> its size. More importantly though, the scan wouldn’t need to visit the
> table to verify those NULL fields.
>

No, I haven't tried a more constrained index. Good point, makes sense.
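
Presumably something along these lines, where the table name and the exact
NULL conditions are hypothetical stand-ins for the real query's filter:

   CREATE INDEX CONCURRENTLY messages_to_id_unread_idx
       ON messages (to_id)
       WHERE read_at IS NULL AND deleted_at IS NULL;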


> > Does it seem likely that doing an --order-by on the to_id column would
> have a significant impact in this case? pg_repack seems pretty stable and
> safe at this point?
>
> Not being familiar with pg_repack I can’t advise on that.
>
> > I am going to try and test this in a dev environment first but wanted
> feedback if this seemed like a good direction?
>
> You can try that CLUSTER or the approach with pg_repack regardless of my
> suggestion for the partial index. It should speed disk access to those
> records up regardless of how they are indexed.
>

I tried pg_repack in dev and it did make a dramatic improvement (pg_repack
took ~ 65 minutes to run). After the repack, I couldn't get the query to
take longer than 750ms. Should be much, much faster in production too.

It seems like maybe the partial index is a better long-term fix though.

And thank you, Josh, about the tip on table bloat. I'll take a look at that
too.

-- 
Best,
AH


Re: [GENERAL] Weird library problem

2014-10-16 Thread Adrian Klaver

On 10/16/2014 07:58 AM, holger.friedrich-fa-triva...@it.nrw.de wrote:

Adrian Klaver wrote:

Library mismatch. Looks like rtpostgis-2.0.so is looking for something that 
/lib64/libldap_r-2.4.so.2 is not providing. So, if I am following correctly 
that would be the OpenSCG version. From the looks of it, that library is not 
recent enough.


For the record:  Apparently there's a third library in the mix, which was not explicitly 
mentioned by the error messages.  Looks like the newer libldap_r-2.4.so.2 sort of knows 
about the "something" but does not quite provide it itself.  Turns out I also 
needed to copy over a newer liblber-2.4.so.2 to go with the newer libldap_r-2.4.so.2, and 
now the PostGIS package builds again.

What seemed weird was that the newer libldap_r-2.4.so.2 allowed PostGIS to run 
but broke the build.  The liblber-2.4.so.2 thing could possibly explain that.


This is where ldd is your friend.

Do:

ldd libldap_r-2.4.so.2

and if you want more information:

ldd -v libldap_r-2.4.so.2








--
Adrian Klaver
adrian.kla...@aklaver.com



