Re: [GENERAL] Extracting data from the view to retrieve the foreign key is not declared

2014-03-11 Thread Albe Laurenz
nill wrote:
> Given a view, I need to extract the tables and the join columns (ON). I need
> to do this analysis because, from the view (its join and WHERE conditions),
> I can infer that there is a foreign key

Do I understand right that you want to find out the tables involved in
a view definition?

Maybe querying pg_depend can help, since there are dependency relationships
between the view and its query rewrite rule, but also between the rule and
the columns of the underlying tables.
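
As an untested sketch against the catalogs (the view name 'myview' is a
placeholder), something along these lines lists the tables that a view's
rewrite rule depends on:

SELECT DISTINCT d.refobjid::regclass AS referenced_table
FROM pg_depend d
JOIN pg_rewrite r ON r.oid = d.objid
WHERE d.classid = 'pg_rewrite'::regclass   -- dependencies of the rewrite rule
  AND r.ev_class = 'myview'::regclass      -- the rule belonging to the view
  AND d.refclassid = 'pg_class'::regclass  -- only referenced relations
  AND d.refobjid <> r.ev_class;            -- exclude the view itself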

Yours,
Laurenz Albe



[GENERAL] How to make PG use work_mem?

2014-03-11 Thread Torsten Förtsch
Hi,

I have a query that involves an external sort:

   ->  Sort  (cost=13662680.01..13850498.48 rows=75127389 width=16)
 (actual time=980098.397..1021411.862 rows=74181544 loops=1)
 Sort Key: (ROW(account_id, (purchase_time)::date))
 Sort Method: external merge  Disk: 3118088kB
 Buffers: shared hit=1568637 read=1327223,
  temp read=389763 written=389763

What puzzles me is that this happens even when I set work_mem to 50GB in
the session.
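
For reference, the session-level change is of this form (the exact statement
is not shown in this message, so treat it as a sketch):

SET work_mem = '50GB';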

Why does it still use the external merge?

The query runs on a streaming replica if that matters.

Torsten




Re: [GENERAL] replication timeout in pg_basebackup

2014-03-11 Thread Aggarwal, Ajay
That's exactly what I was thinking after all the other experiments. A couple of
questions:
1) Why did you say that 300 seconds is the upper limit? Is this enforced by
Postgres? What if I want to set it to 10 minutes?
2) What's the downside of a bigger replication timeout?

Thanks.

Ajay

From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org] 
on behalf of John R Pierce [pie...@hogranch.com]
Sent: Monday, March 10, 2014 9:58 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] replication timeout in pg_basebackup

On 3/9/2014 6:52 PM, Aggarwal, Ajay wrote:
Our replication timeout is the default 60 seconds. If we increase the
replication timeout to, say, 180 seconds, we see better results, but backups
still fail occasionally.

so increase it to 300 seconds, or whatever.   thats an upper limit, it needs to 
be big enough that you DONT get into problems when doing stuff like basebackups.
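
for reference, on 9.3 the relevant knob on the sending server is
wal_sender_timeout (it was called replication_timeout before 9.3); a minimal
postgresql.conf sketch, value illustrative:

# postgresql.conf on the sending server; reload to apply
wal_sender_timeout = 300s   # replication_timeout on 9.1/9.2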






--
john r pierce  37N 122W
somewhere on the middle of the left coast


Re: [GENERAL] Increase in max_connections

2014-03-11 Thread Igor Neyman

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Anand Kumar, Karthik
Sent: Monday, March 10, 2014 9:04 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Increase in max_connections

Hi all,

We're running postgres 9.3.2, server configuration below. 

Seemingly randomly, we will see the number of active queries in postgres go up 
until we hit max_connections. The DB will recover after a few minutes.

We had the issue a couple of times in Feb 2014. We then upgraded the postgres 
server from 9.1 to 9.3.2, and the occurrence has gone up significantly - to 
several times a day. 

User CPU goes up to 100% as well; there is no increase in I/O or system CPU.
We have slow query logging, and there is no dramatic change in the slow queries 
either. 
There is a corresponding spike in shared locks, but that seems to be an effect, 
not a cause - it corresponds to the increase in the number of running processes 
at the time.

We had a similar issue in the past - that was solved by disabling 
transparent_huge_pages - but the difference there was that we'd see queries 
slow down dramatically. Currently, we don't. Also, transparent_huge_pages is 
still disabled.

I do realize the issue could be caused by a spurt in incoming connections - we 
do not yet have conclusive evidence on whether that's happening (active queries 
climb, but there is no conclusive proof on whether that's because of a slowdown 
or because of an increase in traffic). We are working on getting that 
information and will update as soon as we have it.

I thought I'd send a post out to the group before then, to see if anyone has 
run into anything similar.

Thanks,
Karthik

site=# SELECT version();
PostgreSQL 9.3.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 
20120305 (Red Hat 4.4.6-4), 64-bit

site=# SELECT name, current_setting(name), source
site-#   FROM pg_settings
site-#   WHERE source NOT IN ('default', 'override');
application_name|psql|client
archive_command|/usr/bin/archiver.sh %f %p|configuration file
archive_mode|on|configuration file
autovacuum_freeze_max_age|25000|configuration file
autovacuum_max_workers|6|configuration file
bgwriter_lru_maxpages|1000|configuration file
bgwriter_lru_multiplier|4|configuration file
checkpoint_completion_target|0.8|configuration file
checkpoint_segments|250|configuration file
checkpoint_timeout|15min|configuration file
checkpoint_warning|6min|configuration file
client_encoding|UTF8|client
commit_siblings|25|configuration file
cpu_tuple_cost|0.03|configuration file
DateStyle|ISO, MDY|configuration file
default_statistics_target|300|configuration file
default_text_search_config|pg_catalog.english|configuration file
effective_cache_size|568GB|configuration file
fsync|on|configuration file
lc_messages|en_US.UTF-8|configuration file
lc_monetary|en_US.UTF-8|configuration file
lc_numeric|en_US.UTF-8|configuration file
lc_time|en_US.UTF-8|configuration file
listen_addresses|*|configuration file
log_autovacuum_min_duration|0|configuration file
log_checkpoints|on|configuration file
log_connections|on|configuration file
log_destination|syslog|configuration file
log_directory|pg_log|configuration file
log_filename|postgresql-%a.log|configuration file
log_line_prefix|user=%u,db=%d,ip=%h |configuration file
log_min_duration_statement|100ms|configuration file
log_min_messages|debug1|configuration file
log_rotation_age|1d|configuration file
log_rotation_size|0|configuration file
log_timezone|US/Pacific|configuration file
log_truncate_on_rotation|on|configuration file
logging_collector|off|configuration file
maintenance_work_mem|1GB|configuration file
max_connections|1500|configuration file
max_locks_per_transaction|1000|configuration file
max_stack_depth|2MB|environment variable
max_wal_senders|5|configuration file
port|5432|command line
random_page_cost|2|configuration file
shared_buffers|8GB|configuration file
synchronous_commit|off|configuration file
syslog_facility|local0|configuration file
syslog_ident|postgres|configuration file
TimeZone|US/Pacific|configuration file
vacuum_freeze_table_age|0|configuration file
wal_buffers|32MB|configuration file
wal_keep_segments|250|configuration file
wal_level|hot_standby|configuration file
wal_sync_method|fsync|configuration file
work_mem|130MB|configuration file


You don't specify how many CPU cores you have, but I'm pretty sure there are 
not enough to support this:

max_connections|1500|configuration file

Try a connection pooler; it should help.
The simplest to install and configure is PgBouncer, and it does the job 
very well.
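
A minimal pgbouncer.ini sketch (database name, paths, and pool sizes are
illustrative assumptions, not tuned recommendations):

[databases]
site = host=127.0.0.1 port=5432 dbname=site

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction    ; or 'session', depending on the application
max_client_conn = 1500     ; clients the pooler will accept
default_pool_size = 50     ; real server connections per database/user pair

The application then connects to port 6432 instead of 5432.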

Regards,
Igor Neyman








Re: [GENERAL] How to make PG use work_mem?

2014-03-11 Thread Tom Lane
Torsten Förtsch writes:
> I have a query that involves an external sort:

>->  Sort  (cost=13662680.01..13850498.48 rows=75127389 width=16)
>  (actual time=980098.397..1021411.862 rows=74181544 loops=1)
>  Sort Key: (ROW(account_id, (purchase_time)::date))
>  Sort Method: external merge  Disk: 3118088kB
>  Buffers: shared hit=1568637 read=1327223,
>   temp read=389763 written=389763

> What puzzles me is that this happens even when I set work_mem to 50GB in
> the session.

> Why does it still use the external merge?

Seems odd.  I wouldn't have been surprised if you'd said it didn't do it
at work_mem = 5GB.  The internal memory requirement can be noticeably
larger than the space required on-disk, mainly because we go to some
lengths to minimize the size of sort tuples when writing them out, but
not if they stay in memory.  But a difference of more than maybe 2X to
3X from that effect would be surprising.

Perhaps you fat-fingered the SET somehow?

regards, tom lane




Re: [GENERAL] How to make PG use work_mem?

2014-03-11 Thread Torsten Förtsch
On 11/03/14 14:36, Tom Lane wrote:
> Perhaps you fat-fingered the SET somehow?

I just repeated it:

# select * from pg_settings where name='work_mem';
-[ RECORD 1 ]
name   | work_mem
setting| 52428800
unit   | kB
...

# explain (analyze,buffers)
  select distinct(account_id, purchase_time::date) from fmb;

                                  QUERY PLAN
--------------------------------------------------------------------------
 Unique  (cost=13666972.01..14042722.46 rows=12894641 width=16)
 (actual time=1000989.364..1058273.210 rows=2200442 loops=1)
   Buffers: shared hit=1570088 read=1326647,
temp read=389842 written=389842
   ->  Sort (cost=13666972.01..13854847.24 rows=75150089 width=16)
(actual time=1000989.362..1035694.670 rows=74196802 loops=1)
 Sort Key: (ROW(account_id, (purchase_time)::date))
 Sort Method: external merge  Disk: 3118720kB
 Buffers: shared hit=1570088 read=1326647,
  temp read=389842 written=389842
 ->  Seq Scan on fmb
 (cost=0.00..3836111.11 rows=75150089 width=16)
 (actual time=0.021..35520.901 rows=74196802 loops=1)
   Buffers: shared hit=1570088 read=1326647
 Total runtime: 1059324.646 ms

# show work_mem;
 work_mem
--
 50GB

This is 9.3.3 from the pgdg debian repository.

Torsten




[GENERAL] Re: Extracting data from the view to retrieve the foreign key is not declared

2014-03-11 Thread nill
Thanks for your reply, Laurenz; it is going to help me.

I use this query to get the dependencies of the view on the objects in the
database:
SELECT * 
FROM INFORMATION_SCHEMA.view_column_usage;

I now have to work out the attributes of the join tree, that is, the JOIN
expressions and JOIN clauses.
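
Filtered to a single view, it looks something like this (the view name is
hypothetical):

SELECT table_schema, table_name, column_name
FROM information_schema.view_column_usage
WHERE view_name = 'myview'
ORDER BY table_schema, table_name, column_name;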

Thanks 








Re: [GENERAL] libpq - lack of support to set the fetch size

2014-03-11 Thread matshyeq
Marko, Tom, Adrian, Jeff, Daniel - thank you all for valuable feedback!


 Two general questions:

- when using the PQsetSingleRowMode() function - does it give an option to
define how many rows to cache on the client side (like JDBC's setFetchSize()
does), or does it leave that at libpq's discretion?

- is it/would it be possible to add a corresponding option to pgAdmin, to
limit the initially (and each subsequently) returned rows in the Query Tool to
a custom-defined maximum?


 @Tom

>Perhaps you should actually experiment with that solution instead of
rejecting it out of hand.  Or at least RTFM about it.

As I'm not using libpq directly, I'm unable to leverage the
PQsetSingleRowMode() call (or at least I'm not aware of how to do this from
DBD::Pg).

I simply passed along the feedback they gave.


 @Adrian

The example in the documentation you refer to actually demonstrates that this
has been properly implemented in JDBC.

By properly I mean call to:

setFetchSize()

works; whatever it actually does behind the scenes (cursors?), it doesn't
require a developer to declare and use cursors explicitly, like:

st.execute("DECLARE csr CURSOR FOR SELECT * FROM myBigTable;");

conn.prepareStatement("fetch 1000 from csr");

⋮
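
At the SQL level, the manual-cursor approach being contrasted here amounts to
something like the following (table name hypothetical):

BEGIN;
DECLARE csr CURSOR FOR SELECT * FROM myBigTable;
FETCH 1000 FROM csr;  -- repeat until fewer than 1000 rows come back
CLOSE csr;
COMMIT;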


 @Jeff

I'll make a suggestion to the DBD::Pg developers.


 @Daniel

Very interesting alternative. You're welcome to contribute to this
stackoverflow question!

Does it mean $data is a reference to a single returned row of data
(`COPY`-formatted)?


 Kind Regards

Msciwoj


Re: [GENERAL] How to make PG use work_mem?

2014-03-11 Thread Tom Lane
Torsten Förtsch writes:
> On 11/03/14 14:36, Tom Lane wrote:
>> Perhaps you fat-fingered the SET somehow?

> I just repeated it:

[ thinks for awhile... ]  Oh, I know what's happening: your sort is so
large that it's being constrained by the MaxAllocSize limit on the tuple
pointer array.  This has been fixed in HEAD, but it's not yet in any
shipping release.  According to the log entry for commit
263865a48973767ce8ed7b7788059a38a24a9f37, the previous limit on the number
of tuples that could be sorted in memory was INT_MAX/48, or about 44
million (2,147,483,647 / 48 ≈ 44.7 million); that fits with your having
trouble at 75 million.

regards, tom lane




Re: [GENERAL] How to make PG use work_mem?

2014-03-11 Thread Torsten Förtsch
On 11/03/14 16:03, Tom Lane wrote:
> [ thinks for awhile... ]  Oh, I know what's happening: your sort is so
> large that it's being constrained by the MaxAllocSize limit on the tuple
> pointer array.  This has been fixed in HEAD, but it's not yet in any
> shipping release.  According to the log entry for commit
> 263865a48973767ce8ed7b7788059a38a24a9f37, the previous limit on the number
> of tuples that could be sorted in memory was INT_MAX/48, or about 44
> million (2,147,483,647 / 48 ≈ 44.7 million); that fits with your having
> trouble at 75 million.

Thanks, that makes sense. BTW, I solved my problem w/o that sort. I was
just curious what happened here.

Torsten




Re: [GENERAL] Increase in max_connections

2014-03-11 Thread Jeff Janes
On Mon, Mar 10, 2014 at 6:04 PM, Anand Kumar, Karthik <
karthik.anandku...@classmates.com> wrote:

>   Hi all,
>
>  We're running postgres 9.3.2, server configuration below.
>
>  Seemingly randomly, we will see the number of active queries in postgres
> go up until we hit max_connections. The DB will recover after a few minutes.
>
>  We had the issue a couple of times in Feb 2014. We then upgraded the
> postgres server from 9.1 to 9.3.2, and the occurrence has gone up
> significantly - to several times a day.
>
> User CPU goes up to 100% as well; there is no increase in I/O or system CPU.
> We have slow query logging, and there is no dramatic change in the slow
> queries either.
>


I think you might need to change your cutoff for what makes a slow query.
 It must be the case that either you are seeing an abnormal spike in query
requests, or the queries are taking an abnormally long time (or both).
 If not many queries are hitting log_min_duration_statement, that just
means you can lower it further without causing too much log bloat.
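
Lowering it is a one-line postgresql.conf change followed by a reload; the
threshold here is illustrative:

log_min_duration_statement = 20ms

and then, from psql:

SELECT pg_reload_conf();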

Usually when people hit max_connections under load, it means they need to
move the limit into a connection pooler (where it can be more graceful, by
putting them in a queue for a free slot, rather than returning an error)
and *lower* max_connections.


max_connections|1500|configuration file
>

 That is very high.

Cheers,

Jeff


Re: [GENERAL] Increase in max_connections

2014-03-11 Thread Anand Kumar, Karthik
No errors in the logs, except when we hit max_connections.
No shared memory problems – no associated spike in I/O or system CPU indicating 
shared memory is either unused or overused. Sufficient memory in 
cache/buffers, zero swapping, nothing indicative of a memory problem.

The box is pretty beefy – 24 cores, 768G RAM :) – so yes, an effective cache of 
568GB is normal; we arrived at it over months of tuning.

cpu_tuple_cost of 0.03 – yes, a lot of our settings are tweaked from the 
defaults based on performance. I don't have the output now, but the 0.03 was 
based on recommendations from postgres user groups and on testing: setting it 
and running EXPLAIN ANALYZE on queries. None of the settings changed when this 
problem began.

Thanks,
Karthik

From: Venkata Balaji Nagothi <vbn...@gmail.com>
Date: Monday, March 10, 2014 7:35 PM
To: "Anand Kumar, Karthik" <karthik.anandku...@classmates.com>
Cc: "pgsql-general@postgresql.org" <pgsql-general@postgresql.org>
Subject: Re: [GENERAL] Increase in max_connections



Re: [GENERAL] Increase in max_connections

2014-03-11 Thread Anand Kumar, Karthik
Thanks Jeff. We have scripts in place now to capture the incoming rate of 
requests. Waiting on the crash to happen to see if it spikes up :)

Re: log_min_duration_statement – we *do* see a good number of requests in the 
log that hit our cap (of 100ms). It's just that nothing stands out when we have 
the issue. Whatever queries we do see slowing down seem to come after the CPU 
spike starts, and so are an effect, not a cause.

We typically see about 500-700 active queries at a time – and that seems to 
match how high the connection count goes. We tried PgBouncer; however, at 
session-level pooling it slowed down our applications (they maintain persistent 
connections once established, so any connection overhead slows them down), and 
with transaction-level pooling it simply did not work.

Thanks,
Karthik

From: Jeff Janes <jeff.ja...@gmail.com>
Date: Tuesday, March 11, 2014 9:23 AM
To: "Anand Kumar, Karthik" <karthik.anandku...@classmates.com>
Cc: "pgsql-general@postgresql.org" <pgsql-general@postgresql.org>
Subject: Re: [GENERAL] Increase in max_connections



Re: [GENERAL] automatically refresh all materialized views?

2014-03-11 Thread Kevin Grittner
Marti Raudsepp  wrote:
> Reece Hart  wrote:
>
>> I should be able to chase pg_depend entries to create this
>> ordering, right?
>
> Not always, there may be circular dependencies between them.

I haven't been able to think of a way to create circular references
among a set of materialized views, short of committing violence
against the system catalog tables directly. What have I missed?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] replication timeout in pg_basebackup

2014-03-11 Thread John R Pierce

On 3/11/2014 5:50 AM, Aggarwal, Ajay wrote:
That's exactly what I was thinking after all the other experiments. A couple 
of questions:
1) Why did you say that 300 seconds is the upper limit? Is this enforced 
by Postgres? What if I want to set it to 10 minutes?

2) What's the downside of a bigger replication timeout?



I said, set it to 300 or whatever.   An unfortunate extra comma confused 
my meaning.  What I meant was: whatever you set it to, that's an upper limit.


As I understand it, that timeout is how long the replication can lag the 
server before the server decides to stop replication.  With it at 300, 
under heavy load the replication could run as much as 5 minutes (300 
seconds) behind before it errors out.




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] automatically refresh all materialized views?

2014-03-11 Thread Tom Lane
Kevin Grittner  writes:
> Marti Raudsepp  wrote:
>> Not always, there may be circular dependencies between them.

> I haven't been able to think of a way to create circular references
> among a set of materialized views, short of committing violence
> against the system catalog tables directly. What have I missed?

It'd be pretty trivial if we had CREATE OR REPLACE MATERIALIZED VIEW.

Which I assume is not there only for lack of round tuits, and not
because you deliberately intend that it will never be implemented.

regards, tom lane




Re: [GENERAL] automatically refresh all materialized views?

2014-03-11 Thread Marti Raudsepp
On Tue, Mar 11, 2014 at 9:17 PM, Kevin Grittner  wrote:
> I haven't been able to think of a way to create circular references
> among a set of materialized views, short of committing violence
> against the system catalog tables directly. What have I missed?

Not directly, but you can create circles with matviews selecting from
ordinary views:

create view v_a as select 1 i;
create materialized view m_a as select * from v_a;
create view v_b as select * from m_a;
create materialized view m_b as select * from v_b;
create or replace view v_a as select * from m_b;  -- Cha-ching!

You probably won't hit them if you don't recurse into views in
pg_depend, but then you might miss some necessary dependencies.

I didn't really give this much thought, though. Such a setup might not
be refreshable after a restore, because you run into a recursive
"materialized view "x" has not been populated". (Maybe it is possible
if the recursive reference occurs in a subquery that isn't scanned?
Dunno.)

Now that I tried it, pg_dump has this to say:

pg_dump: [sorter] WARNING: could not resolve dependency loop among these items:
pg_dump: [sorter]   REFRESH MATERIALIZED VIEW m_a  (ID 1971 OID 18834835)
pg_dump: [sorter] WARNING: could not resolve dependency loop among these items:
pg_dump: [sorter]   REFRESH MATERIALIZED VIEW m_b  (ID 1972 OID 18834843)
pg_dump: [sorter] WARNING: could not resolve dependency loop among these items:
pg_dump: [sorter]   REFRESH MATERIALIZED VIEW m_a  (ID 1971 OID 18834835)
pg_dump: [sorter]   REFRESH MATERIALIZED VIEW m_b  (ID 1972 OID 18834843)

Regards,
Marti




Re: [GENERAL] Increase in max_connections

2014-03-11 Thread Jeff Janes
On Tue, Mar 11, 2014 at 10:20 AM, Anand Kumar, Karthik <
karthik.anandku...@classmates.com> wrote:

>   Thanks Jeff. We have scripts in place now to capture the incoming rate
> of requests. Waiting on the crash to happen to see if it spikes up :)
>
>  Re: log_min_duration_statement – we *do* see a good number of requests in
> the log that hit our cap (of 100ms). It's just that nothing stands out when
> we have the issue. Whatever queries we do see slowing down seem to come
> after the CPU spike starts, and so are an effect, not a cause.
>

I think what you have is a vicious cycle: too many active connections lead
to contention, which leads to slow response, which leads to piling up
connections, which leads to more contention.  So the cause and the effect
are the same thing as each other; you can't cleanly divide them.


>
>  We typically see about 500-700 active queries at a time - and that seems
> to match how high the connection count goes.
>

This is during normal times, or during the trouble?


> We tried PgBouncer; however, at session-level pooling it slowed down our
> applications (they maintain persistent connections once established, so any
> connection overhead slows them down),
>

I don't understand that.  If the connections are persistent, why would they
increase during the slowdown?

Cheers,

Jeff


Re: [GENERAL] Increase in max_connections

2014-03-11 Thread John R Pierce

On 3/11/2014 10:20 AM, Anand Kumar, Karthik wrote:
We typically see about 500-700 active queries at a time 


if these are primarily small/fast queries, like OLTP operations, and you 
DON'T have 200-400 CPU cores on this server, you will likely find that if 
you use a queueing mechanism to only execute about 2X your CPU core 
count concurrently, you will get MORE total transactions/second than 
trying to do 500-700 at once.


if your apps are using persistent connections, then the session pooling 
model won't do any good; you should use transaction pooling.  you want 
the actual active query count to be tunable, probably down around 2X the 
CPU core count, depending on various things.  some folks say CPU 
cores/threads plus disk spindles is the optimal number.




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Recovering from failed transaction

2014-03-11 Thread Francisco Figueiredo Jr.
Hi Brian!

I've got a patch to fix this. Unfortunately, I'm having some problems with
GitHub at the moment and couldn't push it to create a pull request.

Would you mind patching a local copy of the Npgsql code and giving it a try?

Here is the patch:


diff --git a/Npgsql/Npgsql/NpgsqlConnector.cs
b/Npgsql/Npgsql/NpgsqlConnector.cs
index eb7da15..5e090be 100644
--- a/Npgsql/Npgsql/NpgsqlConnector.cs
+++ b/Npgsql/Npgsql/NpgsqlConnector.cs
@@ -427,7 +427,7 @@ internal void ReleaseResources()

 internal void ReleaseWithDiscard()
 {
-NpgsqlCommand.ExecuteBlind(this, NpgsqlQuery.DiscardAll, 60);
+NpgsqlCommand.ExecuteBlind(this, NpgsqlQuery.DiscardAll);

 // The initial connection parameters will be restored via IsValid()
 }
@@ -828,7 +828,7 @@ internal void Open()

 initQueries = sbInitQueries.ToString();

-NpgsqlCommand.ExecuteBlind(this, initQueries, 60);
+NpgsqlCommand.ExecuteBlind(this, initQueries);

 // Make a shallow copy of the type mapping that the connector will
 // It is possible that the connector may add types to its private



Thanks in advance and sorry for this problem.



On Mon, Mar 10, 2014 at 6:49 PM, Brian Crowell  wrote:

> On Mon, Mar 10, 2014 at 4:16 PM, Jeff Janes  wrote:
> > ...  Or maybe it is mangling the "ROLLBACK;" into some form
> > the database doesn't recognize.  Look in the postgres log files to see
> what
> > the events look like from PostgreSQL's perspective.
>
> Well that's the clue I needed. I was misinterpreting Postgres's log
> file; it was complaining about the "SET statement_timeout" statement
> Npgsql was slipping ahead of my ROLLBACK. Apparently I need to do
> transactions with Npgsql's transaction class.
>
> --Brian
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Regards,

Francisco Figueiredo Jr.
Npgsql Lead Developer
http://www.npgsql.org
http://gplus.to/franciscojunior
http://fxjr.blogspot.com
http://twitter.com/franciscojunior


[GENERAL] When does pg_archivecleanup work?

2014-03-11 Thread AI Rumman
As I have a very low wal_keep_segments compared to my WAL generation rate, I
am collecting archived WAL files at the slave.
To clean up the WAL archive directory at the slave, I used
"archive_cleanup_command".
I watched the archived WAL files pile up at the slave, and after a certain
point they were cleared up.
But I still don't understand when the command is executed to clean up my
directory.

Example: recovery.conf that I used:
standby_mode = 'on' # enables stand-by (readonly) mode
primary_conninfo = 'host= 10.0.0.1 port= 5432 user= replicator
password=replicator'
trigger_file = '/pgdata/pgsql.trigger.5432'
restore_command = 'cp /archivewal/%f %p'
archive_cleanup_command = '$HOME/bin/pg_archivecleanup  /archivewal %r'

Any ideas, please?

Thanks.


Re: [GENERAL] How do I track down a possible locking problem?

2014-03-11 Thread Kevin Grittner
Herouth Maoz  wrote:

> I have a production system using Postgresql 9.1.2.

That's asking for trouble.  There have been many bugs fixed in 9.1
since 2011-12-05, including security vulnerabilities and (more to
the point) bugs which caused vacuum processes to interact poorly
with tables used as queues.  You should really drop in the latest
9.1 minor release.  It does not require any conversion of the data.

http://www.postgresql.org/support/versioning/

> The database in production is very busy with millions of writes
> per hour. Could there be a situation in which a particular
> connection gets "starved" while other connections are able to run
> queries without noticeable delay?

If the issue has not been solved, you might want to read this page
and post to the pgsql-performance list, providing the suggested
information:

http://wiki.postgresql.org/wiki/SlowQueryQuestions

In particular, the number of cores and the setting for
max_connections might suggest a possible cause.  But rather than
guessing in advance of the facts, I suggest monitoring data in the
queue to spot a lingering entry, and capturing the contents of
pg_stat_activity and pg_locks while that condition esists, along
with a couple minutes of output from `vmstat 1`.  That along with
the general information suggested on the above page may allow a
proper diagnosis.
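
A minimal capture, e.g. from a psql session or a cron'd script (note that on
9.1 the pg_stat_activity columns are procpid/current_query rather than the
9.2+ pid/query):

SELECT now(), * FROM pg_stat_activity;
SELECT now(), * FROM pg_locks WHERE NOT granted;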

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] When does pg_archivecleanup work?

2014-03-11 Thread Venkata Balaji Nagothi
On Wed, Mar 12, 2014 at 10:03 AM, AI Rumman  wrote:

> As I have a very low wal_keep_segments compared to my WAL generation rate, I
> am collecting archived WAL files at the slave.
> To clean up the WAL archive directory at the slave, I used
> "archive_cleanup_command".
> I watched the archived WAL files pile up at the slave, and after a certain
> point they were cleared up.
> But I still don't understand when the command is executed to clean up my
> directory.
>
> Example: recovery.conf that I used:
> standby_mode = 'on' # enables stand-by (readonly) mode
> primary_conninfo = 'host= 10.0.0.1 port= 5432 user= replicator
> password=replicator'
> trigger_file = '/pgdata/pgsql.trigger.5432'
> restore_command = 'cp /archivewal/%f %p'
> archive_cleanup_command = '$HOME/bin/pg_archivecleanup  /archivewal %r'
>

The best way to track how often the clean-up happens (in your environment) is
to log its output and examine the log file after a certain period of time.

Something like this:

archive_cleanup_command = '$HOME/bin/pg_archivecleanup /archivewal %r 2>>
/tmp/archivecleanup.log'

Generally, "%r" means that all the WAL archives before the last restart point
(in short, the WAL archives not needed for recovery) will be deleted by
pg_archivecleanup. Example: if I am recovering from an online backup, then
%r picks up the oldest WAL archive file needed/kept from the "backup label"
and deletes all the archive files prior to that.
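
You can also see what a given restart point would remove by running the tool
by hand with -n (dry run); the WAL segment name below is hypothetical:

pg_archivecleanup -n /archivewal 000000010000003700000010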

To better understand at what frequency the files are getting deleted,
please log the output as mentioned above.

Please let us know the PG version you are using.

Regards,

Venkata Balaji N

Sr. Database Administrator
Fujitsu Australia