Re: [GENERAL] valgrind

2015-06-19 Thread Albe Laurenz
Peter Kroon wrote:
> ==3814== Memcheck, a memory error detector
> ==3814== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
> ==3814== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
> ==3814== Command: ./pgsql_check
> ==3814==
> ==3814==
> ==3814== HEAP SUMMARY:
> ==3814==     in use at exit: 47,288 bytes in 2,864 blocks
> ==3814==   total heap usage: 5,671 allocs, 2,807 frees, 331,460 bytes allocated
> ==3814==
> ==3814== LEAK SUMMARY:
> ==3814==    definitely lost: 0 bytes in 0 blocks
> ==3814==    indirectly lost: 0 bytes in 0 blocks
> ==3814==      possibly lost: 0 bytes in 0 blocks
> ==3814==    still reachable: 47,288 bytes in 2,864 blocks
> ==3814==         suppressed: 0 bytes in 0 blocks
> ==3814== Rerun with --leak-check=full to see details of leaked memory
> ==3814==
> ==3814== For counts of detected and suppressed errors, rerun with: -v
> ==3814== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
> 
> There is still reachable data. Is this a bug or have I forgotten to free 
> something?

I'd recommend that you do as valgrind suggests and use --leak-check=full to see
where it thinks the problems are.
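For example, one reasonable invocation:

valgrind --leak-check=full --show-reachable=yes ./pgsql_check

Note that "still reachable" blocks at exit are often allocations that libraries
such as libpq or OpenSSL keep around for the life of the process, so they are
not necessarily a bug in your code; but if, say, a PGconn was never passed to
PQfinish(), the detailed report should point at the allocating calls inside
libpq.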

Yours,
Laurenz Albe



[GENERAL] How to craft a query that uses memory?

2015-06-19 Thread Holger.Friedrich-Fa-Trivadis
Hello list,

What would be the best way of setting up a query that uses lots of memory, in 
order to study the effects of resource usage parameters on the performance of 
the query?

I thought that a query on a large table involving JOINs would be a way, but
while the backend initially used 1.7 GB of memory, it only uses a couple of
dozen megabytes now that I have restarted PostgreSQL (I restarted because the
checkpointer process had 2.4 GB of memory in use).  (The table consumes nearly
1.3 GB of disk space, and the query returns one billion, as the join used
results in one billion rows.)

The query I was using is this one:

SELECT count(*) FROM pgbench_accounts a JOIN pgbench_accounts b ON (a.abalance 
= b.abalance);

which I ran after initializing the table using pgbench -i -s 100, and
populating the abalance field with

UPDATE pgbench_accounts SET abalance = aid % 10 + 1;

The query plan reads,

postgres=# EXPLAIN ANALYZE SELECT count(*) FROM pgbench_accounts a JOIN pgbench_accounts b ON (a.abalance = b.abalance);
                                                                 QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=22479359.09..22479359.10 rows=1 width=0) (actual time=787769.456..787769.456 rows=1 loops=1)
   ->  Merge Join  (cost=3126660.67..19725402.17 rows=1101582767 width=0) (actual time=62254.349..621119.691 rows=10 loops=1)
         Merge Cond: (a.abalance = b.abalance)
         ->  Sort  (cost=1563330.33..1588330.33 rows=1000 width=4) (actual time=27706.051..42305.488 rows=1000 loops=1)
               Sort Key: a.abalance
               Sort Method: external merge  Disk: 136624kB
               ->  Seq Scan on pgbench_accounts a  (cost=0.00..263935.00 rows=1000 width=4) (actual time=0.260..11592.979 rows=1000 loops=1)
         ->  Materialize  (cost=1563330.33..1613330.33 rows=1000 width=4) (actual time=34548.224..209123.713 rows=99901 loops=1)
               ->  Sort  (cost=1563330.33..1588330.33 rows=1000 width=4) (actual time=34548.202..43988.283 rows=1000 loops=1)
                     Sort Key: b.abalance
                     Sort Method: external merge  Disk: 136624kB
                     ->  Seq Scan on pgbench_accounts b  (cost=0.00..263935.00 rows=1000 width=4) (actual time=0.017..16807.894 rows=1000 loops=1)
 Total runtime: 787836.841 ms
(13 rows)

The point of the SELECT count(*) (as opposed to something like SELECT a.*) is, 
of course, to make sure we are not measuring the impact of the memory 
consumption of pgbench.

There are sorts involved, which I heard might require working memory but could 
also "spill to disk" as someone put it.  So how do I craft a query that 
actually does use lots of memory?

Cheers,
Holger Friedrich




Re: [GENERAL] How to craft a query that uses memory?

2015-06-19 Thread Albe Laurenz
Holger Friedrich wrote:
> So how do I craft a query that actually does use lots of memory?

You increase the parameter "work_mem".

You can do that globally in postgresql.conf or with SET for one session
or with SET LOCAL for one transaction.
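
For example (the values here are arbitrary -- pick something your machine can
afford), for the current session:

SET work_mem = '512MB';

or scoped to a single transaction:

BEGIN;
SET LOCAL work_mem = '1GB';
SELECT count(*) FROM pgbench_accounts a JOIN pgbench_accounts b ON (a.abalance = b.abalance);
COMMIT;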

Yours,
Laurenz Albe



[GENERAL] Transaction abort difference between psql, isql, ODBC and JDBC pgsql 8.4

2015-06-19 Thread Gary Cowell
Hello

I'm aware of the automatic transaction abort that occurs in PostgreSQL if
you have DML throw an error during a transaction, this prevents future
selects within transaction, until rollback or commit (and hence, new
transaction). I'm okay with this.

Doing all this on Red Hat 6.5 with Postgresql 8.4 (shipped repository
version in Red Hat 6.5).
Example in psql:

$ psql
psql (8.4.20)
Type "help" for help.

e5=# begin transaction;
BEGIN
e5=# select 1;
 ?column?
--
1
(1 row)

e5=# insert into conc values(1,'mouse');
ERROR:  duplicate key value violates unique constraint "conc_key"
e5=# select 1;
ERROR:  current transaction is aborted, commands ignored until end of
transaction block
e5=# \q


So I start a transaction, then get a DML error, and I can't select any more.

Same thing happens with JDBC :

$ java -cp .:/usr/share/java/postgresql-jdbc3.jar t
PostgreSQL 8.4.20 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)
4.4.7 20120313 (Red Hat 4.4.7-11), 64-bit
Jun 19, 2015 11:39:55 AM t main
SEVERE: ERROR: duplicate key value violates unique constraint "conc_key"
org.postgresql.util.PSQLException: ERROR: duplicate key value violates
unique constraint "conc_key"
at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2094)
at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1827)
at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:508)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:370)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:362)
at t.main(t.java:48)

Jun 19, 2015 11:39:55 AM t main
SEVERE: ERROR: current transaction is aborted, commands ignored until end
of transaction block
org.postgresql.util.PSQLException: ERROR: current transaction is aborted,
commands ignored until end of transaction block
at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2094)
at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1827)
at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:508)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:370)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:250)
at t.main(t.java:56)

I'm just selecting version() before and after a duplicate insert. Again the
transaction is aborted.

But with ODBC in isql, and with other ODBC apps, we get this:

+---+
| Connected!|
|   |
| sql-statement |
| help [tablename]  |
| quit  |
|   |
+---+
SQL> begin transaction
SQLRowCount returns -1
SQL> select 1
++
| ?column?   |
++
| 1  |
++
SQLRowCount returns 1
1 rows fetched
SQL> insert into conc values(1,'mouse');
[23505][unixODBC]ERROR: duplicate key value violates unique constraint
"conc_key";
Error while executing the query
[ISQL]ERROR: Could not SQLExecute
SQL> select 1
++
| ?column?   |
++
| 1  |
++
SQLRowCount returns 1
1 rows fetched

The transaction is not aborted with ODBC, but is with JDBC

My odbcinst.ini says:

# Driver from the postgresql-odbc package
# Setup from the unixODBC package
[PostgreSQL]
Description = ODBC for PostgreSQL
Driver  = /usr/lib/psqlodbc.so
Setup   = /usr/lib/libodbcpsqlS.so
Driver64= /usr/lib64/psqlodbc.so
Setup64 = /usr/lib64/libodbcpsqlS.so
FileUsage   = 1

and the driver odbc.ini:
[e5]
Description = Test to Postgres
Driver  = PostgreSQL
Trace   = Yes
TraceFile   = sql.log
Database= e5
Servername  = localhost
UserName=
Password=
Port= 5432
Protocol= 6.4
ReadOnly= No
RowVersioning   = No
ShowSystemTables= No
ShowOidColumn   = No
FakeOidIndex= No
ConnSettings=


I don't mind which way it works, either aborting transactions after failed
DML, or not. But I would like to know why the behavior is inconsistent
between connection methods, even if the answer is 'upgrade' or "you've
messed a setting up".

Thanks


Re: [GENERAL] Transaction abort difference between psql, isql, ODBC and JDBC pgsql 8.4

2015-06-19 Thread Glyn Astill

> From: Gary Cowell 
>To: pgsql-general@postgresql.org 
>Sent: Friday, 19 June 2015, 12:15
>Subject: [GENERAL] Transaction abort difference between psql, isql, ODBC and 
>JDBC pgsql 8.4
> 
>
>
>Hello
>
>I'm aware of the automatic transaction abort that occurs in PostgreSQL if you 
>have DML throw an error during a transaction, this prevents future selects 
>within transaction, until rollback or commit (and hence, new transaction). I'm 
>okay with this.
>
>
>Doing all this on Red Hat 6.5 with Postgresql 8.4 (shipped repository version 
>in Red Hat 6.5).
>Example in psql:
>
>$ psql
>psql (8.4.20)
>Type "help" for help.
>
>e5=# begin transaction;
>BEGIN
>e5=# select 1;
> ?column?
>--
>1
>(1 row)
>
>e5=# insert into conc values(1,'mouse');
>ERROR:  duplicate key value violates unique constraint "conc_key"
>e5=# select 1;
>ERROR:  current transaction is aborted, commands ignored until end of 
>transaction block
>e5=# \q
>
>
>So I start a transaction, then get a DML error, and I can't select any more.
>
>Same thing happens with JDBC :
>
>$ java -cp .:/usr/share/java/postgresql-jdbc3.jar t
>PostgreSQL 8.4.20 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.4.7 
>20120313 (Red Hat 4.4.7-11), 64-bit
>Jun 19, 2015 11:39:55 AM t main
>SEVERE: ERROR: duplicate key value violates unique constraint "conc_key"
>org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique 
>constraint "conc_key"
>at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2094)
>at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1827)
>at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
>at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:508)
>at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:370)
>at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:362)
>at t.main(t.java:48)
>
>Jun 19, 2015 11:39:55 AM t main
>SEVERE: ERROR: current transaction is aborted, commands ignored until end of 
>transaction block
>org.postgresql.util.PSQLException: ERROR: current transaction is aborted, 
>commands ignored until end of transaction block
>at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2094)
>at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1827)
>at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
>at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:508)
>at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:370)
>at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:250)
>at t.main(t.java:56)
>
>I'm just selecting version() before and after a duplicate insert. Again the 
>transaction is aborted.
>
>But with ODBC in isql, and with other ODBC apps, we get this:
>
>+---+
>| Connected!|
>|   |
>| sql-statement |
>| help [tablename]  |
>| quit  |
>|   |
>+---+
>SQL> begin transaction
>SQLRowCount returns -1
>SQL> select 1
>++
>| ?column?   |
>++
>| 1  |
>++
>SQLRowCount returns 1
>1 rows fetched
>SQL> insert into conc values(1,'mouse');
>[23505][unixODBC]ERROR: duplicate key value violates unique constraint 
>"conc_key";
>Error while executing the query
>[ISQL]ERROR: Could not SQLExecute
>SQL> select 1
>++
>| ?column?   |
>++
>| 1  |
>++
>SQLRowCount returns 1
>1 rows fetched
>
>The transaction is not aborted with ODBC, but is with JDBC
>
>My odbcinst.ini says:
>
>
># Driver from the postgresql-odbc package
># Setup from the unixODBC package
>[PostgreSQL]
>Description = ODBC for PostgreSQL
>Driver  = /usr/lib/psqlodbc.so
>Setup   = /usr/lib/libodbcpsqlS.so
>Driver64= /usr/lib64/psqlodbc.so
>Setup64 = /usr/lib64/libodbcpsqlS.so
>FileUsage   = 1
>
>
>and the driver odbc.ini:
>[e5]
>Description = Test to Postgres
>Driver  = PostgreSQL
>Trace   = Yes
>TraceFile   = sql.log
>Database= e5
>Servername  = localhost
>UserName=
>Password=
>Port= 5432
>Protocol= 6.4
>ReadOnly= No
>RowVersioning   = No
>ShowSystemTables= No
>ShowOidColumn   = No
>FakeOidIndex= No
>ConnSettings=
>
>
>
>I don't mind which way it works, either aborting transactions after failed 
>dml, or no

Re: [GENERAL] How to craft a query that uses memory?

2015-06-19 Thread Holger.Friedrich-Fa-Trivadis
Albe Laurenz wrote on Friday, June 19, 2015 12:56 PM:
> Holger Friedrich wrote:
> > So how do I craft a query that actually does use lots of memory?

> You increase the parameter "work_mem".

> You can do that globally in postgresql.conf or with SET for one session or 
> with SET LOCAL for one transaction.

Thank you to both Albe Laurenz and Andreas Kretschmer, who both gave the same 
advice.  It was spot-on: my query actually seems to be good for a performance 
study; setting work_mem to various values causes the query planner to either 
sort in memory or on disk, and of course this has some impact on performance.

(Sorry for erroneously asking for a "better" query rather than for help tuning 
my configuration, which turned out to be the sensible thing to do...)

Best regards,
Holger Friedrich



Re: [GENERAL] Transaction abort difference between psql, isql, ODBC and JDBC pgsql 8.4

2015-06-19 Thread Adrian Klaver

On 06/19/2015 04:15 AM, Gary Cowell wrote:

Hello

I'm aware of the automatic transaction abort that occurs in PostgreSQL
if you have DML throw an error during a transaction, this prevents
future selects within transaction, until rollback or commit (and hence,
new transaction). I'm okay with this.

Doing all this on Red Hat 6.5 with Postgresql 8.4 (shipped repository
version in Red Hat 6.5).
Example in psql:

$ psql
psql (8.4.20)
Type "help" for help.

e5=# begin transaction;
BEGIN
e5=# select 1;
  ?column?
--
 1
(1 row)

e5=# insert into conc values(1,'mouse');
ERROR:  duplicate key value violates unique constraint "conc_key"
e5=# select 1;
ERROR:  current transaction is aborted, commands ignored until end of
transaction block
e5=# \q


So I start a transaction, then get a DML error, and I can't select any more.

Same thing happens with JDBC :

$ java -cp .:/usr/share/java/postgresql-jdbc3.jar t
PostgreSQL 8.4.20 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)
4.4.7 20120313 (Red Hat 4.4.7-11), 64-bit
Jun 19, 2015 11:39:55 AM t main
SEVERE: ERROR: duplicate key value violates unique constraint "conc_key"
org.postgresql.util.PSQLException: ERROR: duplicate key value violates
unique constraint "conc_key"
 at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2094)
 at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1827)
 at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
 at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:508)
 at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:370)
 at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:362)
 at t.main(t.java:48)

Jun 19, 2015 11:39:55 AM t main
SEVERE: ERROR: current transaction is aborted, commands ignored until
end of transaction block
org.postgresql.util.PSQLException: ERROR: current transaction is
aborted, commands ignored until end of transaction block
 at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2094)
 at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1827)
 at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
 at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:508)
 at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:370)
 at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:250)
 at t.main(t.java:56)

I'm just selecting version() before and after a duplicate insert. Again
the transaction is aborted.

But with ODBC in isql, and with other ODBC apps, we get this:

+---+
| Connected!|
|   |
| sql-statement |
| help [tablename]  |
| quit  |
|   |
+---+
SQL> begin transaction
SQLRowCount returns -1
SQL> select 1
++
| ?column?   |
++
| 1  |
++
SQLRowCount returns 1
1 rows fetched
SQL> insert into conc values(1,'mouse');
[23505][unixODBC]ERROR: duplicate key value violates unique constraint
"conc_key";
Error while executing the query
[ISQL]ERROR: Could not SQLExecute
SQL> select 1
++
| ?column?   |
++
| 1  |
++
SQLRowCount returns 1
1 rows fetched

The transaction is not aborted with ODBC, but is with JDBC

My odbcinst.ini says:

# Driver from the postgresql-odbc package
# Setup from the unixODBC package
[PostgreSQL]
Description = ODBC for PostgreSQL
Driver  = /usr/lib/psqlodbc.so
Setup   = /usr/lib/libodbcpsqlS.so
Driver64= /usr/lib64/psqlodbc.so
Setup64 = /usr/lib64/libodbcpsqlS.so
FileUsage   = 1

and the driver odbc.ini:
[e5]
Description = Test to Postgres
Driver  = PostgreSQL
Trace   = Yes
TraceFile   = sql.log
Database= e5
Servername  = localhost
UserName=
Password=
Port= 5432
Protocol= 6.4
ReadOnly= No
RowVersioning   = No
ShowSystemTables= No
ShowOidColumn   = No
FakeOidIndex= No
ConnSettings=


I don't mind which way it works, either aborting transactions after
failed dml, or not. But I would like to know why the behavior is
inconsistent between connection methods. Even if the answer is 'upgrade'
or "you've messed a setting up"



Different implementations of autocommit.

For psql see here:
http://www.postgresql.org/docs/9.4/interactive/app-psql.html
AUTOCOMMIT

For ODBC see here:
https://msdn.mic
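
For comparison, psql can be made to behave like what you saw through ODBC by
setting ON_ERROR_ROLLBACK, which makes psql wrap each statement in a savepoint
so a failed statement no longer poisons the whole transaction. A sketch,
reusing the example table from the post:

e5=# \set ON_ERROR_ROLLBACK interactive
e5=# begin;
BEGIN
e5=# insert into conc values(1,'mouse');
ERROR:  duplicate key value violates unique constraint "conc_key"
e5=# select 1;
 ?column?
----------
        1
(1 row)
e5=# commit;
COMMIT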

Re: [GENERAL] Planner cost adjustments

2015-06-19 Thread Daniel Begin
Here is a last follow-up on adjusting the planner's cost calculation.

1 - Francisco, you provided me with the clue I was looking for at this point!

I learned enough PL/pgSQL over the last week to reprogram all my processes (not 
always elegant, but it works). By processing individual records in a loop, as 
sketched below, I will be able to move forward with my work and save individual 
results as they are processed.
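
In outline, the per-record loop looks something like this (the table and column
names here are only placeholders, not the real schema):

DO $$
DECLARE
    uid bigint;
BEGIN
    FOR uid IN SELECT user_id FROM selected_users LOOP
        INSERT INTO results_per_user (user_id, changeset_count)
        SELECT uid, count(*)
        FROM changesets
        WHERE changesets.user_id = uid;
    END LOOP;
END;
$$;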

Furthermore, it is now easier to predict job completion times, since the DB 
uses similar plans for each record. I did a test on a stratified sample of 
records and I expect the jobs will run for about two weeks (as opposed to 
months in previous estimates!-)

2 - PL/pgSQL helped me to bypass the problem. As PT suggested, I will try to 
set up a test case that demonstrates the problem and post it to the developers' 
list so they might figure out what to do in such a situation.

Thanks to all,
Daniel

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Francisco Olarte
Sent: June-11-15 10:05
To: Daniel Begin
Cc: Bill Moran; pgsql-general@postgresql.org; Tomas Vondra; Melvin Davidson
Subject: Re: [GENERAL] Planner cost adjustments

Hi Daniel:

On Thu, Jun 11, 2015 at 2:38 PM, Daniel Begin  wrote:
.
> The remaining problem seems related to the statistics of some large tables.
> On one hand, I might increase the statistic target for these tables to 
> 500, or even to 1000 and look at the results (but I have doubts it 
> will help). On the other hand, I could simply force enable_seqscan to 
> OFF for queries dealing with them.
>
> Still not sure about the best solution but the problem is definitely 
> narrower :-)

One somewhat unrelated point. IIRC your problems were related to queries doing 
joins with selected sets of indexed IDs on a smaller table which then have to 
be looked up in some very large tables (I'm not able to follow too well which 
is which, users on changesets, I'm a bit lost). Given your runtimes are always 
high (in the seconds range, so it seems wire speed / latencies are not too much 
of an issue) and that selectivity estimates on huge tables are always 
problematic and may be thwarting your plans, you may be able to get faster 
results by splitting your query.

If I read your plans correctly, that would be selecting your 600 users in one 
query and then preparing the changeset query for a single user_id, which should 
be indexed, and looping it for every user. Given that current machines can 
easily send and receive 600 queries in a second, it may lead to a simpler 
solution. This means you're using the DB as a somewhat intelligent plain old 
indexed file, but sometimes this is the simpler approach (heck, some of my code 
uses algorithms from the tape era because they were the fastest way I could do 
it).
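
On the SQL side that is just something like this (table and column names are 
made up for illustration):

PREPARE changesets_for_user(bigint) AS
    SELECT * FROM changesets WHERE user_id = $1;

EXECUTE changesets_for_user(12345);
-- ... repeat EXECUTE for each user id returned by the first query ...
DEALLOCATE changesets_for_user;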

I needed to do this in one of my programs: the optimizer kept selecting bad 
plans, so I did the first query, held the results in memory, and then prepared 
and executed the second in a loop from the app. My query was selecting a couple 
thousand values from a sub-million-row table and joining them with a 
multimillion-row one, getting a couple hundred matches per original value. 
Splitting it made the queries on the big table always run indexed and fast (and 
as a side bonus avoided duplicating parts of the first record on the wire a 
hundred times, which was nice since the short table was wide and I only needed 
3 short fields from the second one; that made the first query run at wire speed 
and the second at disk speed).

Francisco Olarte.







Re: [GENERAL] How to craft a query that uses memory?

2015-06-19 Thread Scott Marlowe
Note that after cranking up work_mem you then need a query that can
use it. The quickest way to use LOTS of memory is to do a lot of
unconstrained joins, for example (t1 .. t4 stand in for any real tables):

select * from t1 a join t2 b on (true) join t3 c on (true)
join t4 d on (true);

The result set size is a*b*c*d (the product of the four row counts).

On Fri, Jun 19, 2015 at 7:21 AM,
 wrote:
> Albe Laurenz wrote on Friday, June 19, 2015 12:56 PM:
>> Holger Friedrich wrote:
>> > So how do I craft a query that actually does use lots of memory?
>
>> You increase the parameter "work_mem".
>
>> You can do that globally in postgresql.conf or with SET for one session or 
>> with SET LOCAL for one transaction.
>
> Thank you to both Albe Laurenz and Andreas Kretschmer, who both gave the same 
> advice.  It was spot-on:  my query actually seems to be good for a 
> performance study, setting work_mem to various values causes the query 
> planner to either sort in memory or on disk, and of course this has some 
> impact on performance.
>
> (Sorry for erroneously asking for suggesting a "better" query, rather than 
> for tuning my configuration, which turned out to be the sensible thing to 
> do...)
>
> Best regards,
> Holger Friedrich
>



-- 
To understand recursion, one must first understand recursion.




[GENERAL] Re: pg_dump 8.4.9 failing after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux

2015-06-19 Thread Piotr Gackiewicz
Tom Lane  wrote:
> Douglas Stetner  writes:
>> Looking for confirmation there is an issue with pg_dump failing after
>> upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux.
>
> Quick thought --- did you restart the Postgres service after upgrading
> openssl?  If not, your server is still using the old library version,
> while pg_dump would be running the new version on the client side.
> I don't know exactly what was done to openssl in the last round of
> revisions, but maybe there is some sort of version compatibility issue.
>
> Also, you really ought to be running something newer than PG 8.4.9.

Hi,

I have the same problem with fresh postgresql 9.2.13.
Started after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64

Since then pg_dump aborts after dumping circa 2GB:

pg_dump: [archiver (db)] query failed: SSL error: unexpected message
pg_dump: [archiver (db)] query was: FETCH 100 FROM _pg_dump_cursor

openssl-1.0.1e-30.el6_6.11.x86_64 on both ends (connecting via localhost)

With pg_dump over a Unix socket (i.e. without "-h localhost") there is no
problem.

Fetching a 2.5 GB text dump via https (Apache + mod_ssl +
openssl-1.0.1e-30.el6_6.11.x86_64) with wget (also against
openssl-1.0.1e-30.el6_6.11.x86_64) is no problem either.

Looks like postgresql+ssl issue.

postgres=#  select name,setting,unit from pg_settings where name ~ 'ssl' ;
  name   |  setting  | unit 
-+---+--
 ssl | on| 
 ssl_ca_file |   | 
 ssl_cert_file   | server.crt| 
 ssl_ciphers | ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH | 
 ssl_crl_file|   | 
 ssl_key_file| server.key| 
 ssl_renegotiation_limit | 524288| kB


Any thoughts?

Regards,

-- 
Piotr Gackiewicz





Re: [GENERAL] pg_dump 8.4.9 failing after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux

2015-06-19 Thread Douglas Stetner
> 
> On 18 Jun 2015, at 02:06 , Tom Lane  wrote:
> 
> Douglas Stetner  writes:
>> Looking for confirmation there is an issue with pg_dump failing after 
>> upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux.
> 
> Quick thought --- did you restart the Postgres service after upgrading
> openssl?  If not, your server is still using the old library version,
> while pg_dump would be running the new version on the client side.
> I don't know exactly what was done to openssl in the last round of
> revisions, but maybe there is some sort of version compatibility issue.
> 
> Also, you really ought to be running something newer than PG 8.4.9.
> 
>   regards, tom lane


Thanks for the reply, Tom.  Unfortunately a restart did not help.  I will try 
an upgrade to 8.4.20 (other software depends on 8.4.x).  A remote client with 
8.4.20 works, so fingers crossed.

Douglas Stetner
Mobile 0474 082 019
UNIX - Live Free Or Die





[GENERAL] Postgresql 9.2 has standby server lost data?

2015-06-19 Thread Paula Price
I have Postgresql 9.2.10 streaming replication set up with log shipping in
case the replication falls behind.  I discovered that the log-shipping had
been disabled at some point in time.  I enabled the log shipping again.

If at some point in time the streaming replication fell behind and the
standby server was not able to retrieve the necessary WAL file(s) from the
primary, would the standby server continue to function normally?  Do I need
to rebuild the standby server?  I have restarted the standby server and it
is up and running with no issues.  I need to know if the
data integrity has been compromised.

I have run this query to determine the lag time for the standby (in case
this tells me anything):

SELECT now(), now() - pg_last_xact_replay_timestamp() AS time_lag;

Result:
"2015-06-19 00:40:48.83701+00";"00:00:01.078616"


Thank you,
Paula P


[GENERAL] Postgres SIGALRM timer

2015-06-19 Thread Scottix
I am running a Postgres server with a Zabbix server and Zabbix agent, and I
am getting a bunch of errors about SIGALRM. It makes Postgres drop connections
randomly.

Ubuntu 14.04.2 LTS
Linux Host 3.19.0-21-generic #21~14.04.1-Ubuntu SMP Sun Jun 14 18:45:42 UTC
2015 x86_64 x86_64 x86_64 GNU/Linux
Postgres 9.4.4

CPU:
model name : Intel(R) Core(TM) i3 CPU 540  @ 3.07GHz

Memory:
4GB

No significant load is on the machine.

2015-06-19 08:45:22 PDT FATAL:  could not enable SIGALRM timer: Invalid
argument
2015-06-19 08:45:53 PDT FATAL:  could not enable SIGALRM timer: Invalid
argument
2015-06-19 08:45:53 PDT FATAL:  could not enable SIGALRM timer: Invalid
argument
2015-06-19 08:45:53 PDT FATAL:  could not enable SIGALRM timer: Invalid
argument
2015-06-19 08:45:53 PDT FATAL:  could not enable SIGALRM timer: Invalid
argument
2015-06-19 08:45:53 PDT FATAL:  could not enable SIGALRM timer: Invalid
argument
2015-06-19 08:45:53 PDT FATAL:  could not enable SIGALRM timer: Invalid
argument

Any help appreciated.


Re: [GENERAL] Postgresql 9.2 has standby server lost data?

2015-06-19 Thread Adrian Klaver

On 06/18/2015 05:45 PM, Paula Price wrote:

I have Postgresql 9.2.10 streaming replication set up with log shipping in
case the replication falls behind.  I discovered that the log-shipping had
been disabled at some point in time.  I enabled the log shipping again.

If at some point in time the streaming replication fell behind and the
standby server was not able to retrieve the necessary WAL file(s) from the
primary, would the standby server continue to function normally?  Do I need
to rebuild the standby server?  I have restarted the standby server and it
is up and running with no issues.


Well that seems at odds with it being unable to retrieve the WAL files. 
This leads to these questions:


1) What makes you think it did not retrieve the WAL files via streaming?

2) What does the postgres log show at the time you restarted the standby?

  I need to know if the

data integrity has been compromised.

I have run this query to determine the lag time for the standby(in case
this tells me anything):
"SELECT now(), now() - pg_last_xact_replay_timestamp()  AS time_lag;
RESULT:
"2015-06-19 00:40:48.83701+00";"00:00:01.078616"


Thank you,
Paula P



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_dump 8.4.9 failing after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux

2015-06-19 Thread Tom Lane
Douglas Stetner  writes:
> On 18 Jun 2015, at 02:06 , Tom Lane  wrote:
>> Douglas Stetner  writes:
>>> Looking for confirmation there is an issue with pg_dump failing after 
>>> upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux.

>> Quick thought --- did you restart the Postgres service after upgrading
>> openssl?  If not, your server is still using the old library version,
>> while pg_dump would be running the new version on the client side.
>> I don't know exactly what was done to openssl in the last round of
>> revisions, but maybe there is some sort of version compatibility issue.
>> 
>> Also, you really ought to be running something newer than PG 8.4.9.

> Thanks for the reply Tom.  Unfortunately restart did not help.  Will try
> an upgrade to 8.4.20 (other software depends on 8.4.x) A remote client
> with 8.4.20 works, so fingers crossed.

Hm.  The only possibly SSL-relevant patch I see in the 8.4 git history is
this:

Author: Tom Lane 
Branch: master Release: REL9_4_BR [74242c23c] 2013-12-05 12:48:28 -0500
Branch: REL9_3_STABLE Release: REL9_3_3 [2a6e1a554] 2013-12-05 12:48:31 -0500
Branch: REL9_2_STABLE Release: REL9_2_7 [41042970b] 2013-12-05 12:48:35 -0500
Branch: REL9_1_STABLE Release: REL9_1_12 [ad910ccdc] 2013-12-05 12:48:37 -0500
Branch: REL9_0_STABLE Release: REL9_0_16 [36352ceb4] 2013-12-05 12:48:41 -0500
Branch: REL8_4_STABLE Release: REL8_4_20 [7635dae55] 2013-12-05 12:48:44 -0500

Clear retry flags properly in replacement OpenSSL sock_write function.

Current OpenSSL code includes a BIO_clear_retry_flags() step in the
sock_write() function.  Either we failed to copy the code correctly, or
they added this since we copied it.  In any case, lack of the clear step
appears to be the cause of the server lockup after connection loss reported
in bug #8647 from Valentine Gogichashvili.  Assume that this is correct
coding for all OpenSSL versions, and hence back-patch to all supported
branches.

Diagnosis and patch by Alexander Kukushkin.

Although the problem that was reported at the time isn't much like yours,
it's possible that this missing step has additional effects with the
latest openssl version; so it's certainly worth trying.

Whether this fixes your immediate issue or not, you really ought to be
using the last available 8.4.x version, which is 8.4.22.

regards, tom lane




Re: [GENERAL] Postgres SIGALRM timer

2015-06-19 Thread Tom Lane
Scottix  writes:
> I am running a posgres server with a zabbix server and zabbix agent and I
> am getting a bunch errors about SIGALRM, It makes postgres drop connections
> randomly.
> 2015-06-19 08:45:22 PDT FATAL:  could not enable SIGALRM timer: Invalid
> argument

That's really weird.  [ looks at code... ]  You aren't using a
--disable-integer-datetimes build by any chance, are you?
Even if you were, I don't see how we could be passing a bad
value to setitimer().
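
For what it's worth, you can check how the server was built from any session;
packaged builds normally report "on":

postgres=# SHOW integer_datetimes;
 integer_datetimes
-------------------
 on
(1 row)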

I'm a bit inclined to say there's something wrong with your kernel,
since we've not heard similar reports elsewhere.

regards, tom lane




Re: [GENERAL] Is there any way to measure disk activity for each query?

2015-06-19 Thread Julien Rouhaud
On 19/06/2015 01:07, Jeff Janes wrote:
> On Thu, Jun 18, 2015 at 3:05 PM, Oleg Serov wrote:
> 
> Hello!
> 
> I'm wondering, if there any way to measure how much disk-io were
> generated by a query?
> 
> 
> For an individual query execution, you can explain it with 
> 
> explain (analyze, buffers) select .
> 
> It will report on the pages hit in the buffer cache versus the pages
> read.  However, for pages which were hit in the OS filesystem cache,
> those will be reported as if they were read from disk.  There is no way
> (that I know of) to distinguish at the statement level true disk io from
> OS caching.  The best way may be to turn track_io_timing on, then you
> can see how much time it spent waiting on pages.  If not much time was
> spent, then it must be coming from the OS cache.
> 
> If you enable pg_stat_statements extension, you can get the same data
> summed over all natural calls of the same query string.  'Natural'
> meaning executions from applications, not just queries manually
> decorated with 'explain (analyze,buffers)'.  This too is best used in
> conjunction with track_io_timing.
> 
> I've been thinking about making individual statements which
> exceed log_min_duration_statement log their track_io_timing numbers and
> their rusage numbers into the server log, rather than just their
> wall-time durations as it does now.  I'm not sure how that idea is going
> to work out yet, though.  Anyway, it wouldn't be until version 9.6 at
> minimum.
> 
> Also, for temp file, see log_temp_files config parameter.
> 

Also, if you need current disk activity for a query, you can use tools
like pg_activity or pg_view to monitor it.

And if you are using Postgres 9.4 or later, you can also use the
pg_stat_statements and pg_stat_kcache extensions to get actual disk reads
and writes for all normalized queries.
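
For example, with track_io_timing on you can see buffer reads and read time
per statement (the table name here is just a placeholder):

SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM big_table;

And with the pg_stat_statements extension installed, something like this shows
the most I/O-hungry normalized queries:

SELECT query, calls, shared_blks_read, blk_read_time
FROM pg_stat_statements
ORDER BY blk_read_time DESC
LIMIT 10;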

Regards.

> Cheers,
> 
> Jeff


-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Postgresql 9.2 has standby server lost data?

2015-06-19 Thread Adrian Klaver

On 06/19/2015 01:05 PM, Paula Price wrote:



On Fri, Jun 19, 2015 at 12:01 PM, Adrian Klaver
mailto:adrian.kla...@aklaver.com>> wrote:

On 06/18/2015 05:45 PM, Paula Price wrote:

I have Postgresql 9.2.10 streaming replication set up with log
shipping in
case the replication falls behind.  I discovered that the
log-shipping had
been disabled at some point in time.  I enabled the log shipping
again.

If at some point in time the streaming replication fell behind
and the
standby server was not able to retrieve the necessary WAL
file(s) from the
primary, would the standby server continue to function
normally?  Do I need
to rebuild the standby server?  I have restarted the standby
server and it
is up and running with no issues.


Well that seems at odds with it being unable to retrieve the WAL
files. This leads to these questions:

1) What makes you think it did not retrieve the WAL files via streaming?

​
It _may not_ have fallen behind via replication.  We do have standby
servers that fall behind, but since we have log-shipping it is not a
concern.  On this server, I have no idea how long we were running
without log-shipping.  I have no idea how many log files I would have to
go through to find out when log-shipping stopped.
My basic question is:
If a standby server falls behind with streaming replication AND the
standby server cannot obtain the WAL file needed from the primary, will
you get an error from the standby server?  Or does it just hiccup and
try to carry on?


No it will fall over:

http://www.postgresql.org/docs/9.2/interactive/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-SENDER

wal_keep_segments (integer)

Specifies the minimum number of past log file segments kept in the 
pg_xlog directory, in case a standby server needs to fetch them for 
streaming replication. Each segment is normally 16 megabytes. If a 
standby server connected to the sending server falls behind by more than 
wal_keep_segments segments, the sending server might remove a WAL 
segment still needed by the standby, in which case the replication 
connection will be terminated. Downstream connections will also 
eventually fail as a result. (However, the standby server can recover by 
fetching the segment from archive, if WAL archiving is in use.)


This sets only the minimum number of segments retained in pg_xlog; 
the system might need to retain more segments for WAL archival or to 
recover from a checkpoint. If wal_keep_segments is zero (the default), 
the system doesn't keep any extra segments for standby purposes, so the 
number of old WAL segments available to standby servers is a function of 
the location of the previous checkpoint and status of WAL archiving. 
This parameter can only be set in the postgresql.conf file or on the 
server command line.


When you started up, if the necessary WAL files were not on the server 
you would have seen Postgres throwing errors in the log.


I would check out the below to verify:

http://www.postgresql.org/docs/9.2/interactive/warm-standby.html#STREAMING-REPLICATION

25.2.5.2. Monitoring
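
For example, on the primary something like this shows how far the standby's
replay position is behind, in bytes (a sketch using the 9.2 monitoring
functions):

SELECT client_addr,
       pg_xlog_location_diff(pg_current_xlog_location(), replay_location)
           AS replay_lag_bytes
FROM pg_stat_replication;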




2) What does the postgres log show at the time you restarted the
standby?

​2015-06-18 01:12:41.871 UTC::@:[28168]:2015-06-18 01:12:41 UTC:
LOG:  0: database system was shut down in recovery at
2015-06-18 01:12:14 UTC

2015-06-18 01:12:41.871 UTC::@:[28168]:2015-06-18 01:12:41 UTC:
LOCATION:  StartupXLOG, xlog.c:6298

2015-06-18 01:12:41.904 UTC::@:[28168]:2015-06-18 01:12:41 UTC:
LOG:  0: entering standby mode

2015-06-18 01:12:41.904 UTC::@:[28168]:2015-06-18 01:12:41 UTC:
LOCATION:  StartupXLOG, xlog.c:6384

2015-06-18 01:12:41.987 UTC::@:[28168]:2015-06-18 01:12:41 UTC:
LOG:  0: redo starts at 867/FDF32E18

2015-06-18 01:12:41.987 UTC::@:[28168]:2015-06-18 01:12:41 UTC:
LOCATION:  StartupXLOG, xlog.c:6855

2015-06-18 01:12:42.486
UTC::[unknown]@[unknown]:[28213]:2015-06-18 01:12:42 UTC: LOG:
  0: connection received: host=[local]

2015-06-18 01:12:42.486
UTC::[unknown]@[unknown]:[28213]:2015-06-18 01:12:42 UTC:
LOCATION:  BackendInitialize, postmaster.c:3501

2015-06-18 01:12:42.486
UTC:[local]:postgres@postgres:[28213]:2015-06-18 01:12:42 UTC:
FATAL:  57P03: the database system is starting up

2015-06-18 01:12:42.486
UTC:[local]:postgres@postgres:[28213]:2015-06-18 01:12:42 UTC:
LOCATION:  ProcessStartupPacket, postmaster.c:1792

2015-06-18 01:12:43.488
UTC::[unknown]@[unknown]:[28270]:2015-06-18 01:12:43 UTC: LOG:
  0: connection received: host=[local]

2015-06-18 01:12:43.488
UTC::[unknown]@[unknown]:[28270]:2015-06-18 01:12:43 UTC:
LOCATION:  BackendInitializ