Re: Master-Slave error: the database system is starting up

2018-02-02 Thread Daniel Westermann
> http://www.postgresql-archive.org/Master-Slave-error-the-database-system-is-starting-up-td6004044.html
>  
> 
>please execute "select * from pg_stat_replication;" and "select * from 
>pg_replication_slots ;" on the master and show us the output. 
>Please check also the logs on the standby. 

Did you set hot_standby=on on the slave as well? 
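
A quick way to verify on the standby, once it accepts connections (the
parameter itself is set in postgresql.conf):

SHOW hot_standby;
-- or, equivalently:
SELECT name, setting FROM pg_settings WHERE name = 'hot_standby';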

Regards 
Daniel 


Re: PG Sharding

2018-02-02 Thread Matej
I thought that this kind of solution had high latency and bad OLTP
capabilities (low trans/second)? Analytics is not a high priority.

BR

2018-02-01 19:01 GMT+01:00 Dan Wierenga :

>
>
> On Wed, Jan 31, 2018 at 7:48 PM, Steven Lembark 
> wrote:
>
>> On Mon, 29 Jan 2018 15:34:18 +0100
>> Matej  wrote:
>>
>> > Hi Everyone.
>> >
>> > We are looking at a rather large fin-tech installation. But as
>> > scalability requirements are high we look at sharding of-course.
>> >
>> > I have looked at many sources for PostgreSQL sharding, but we are a
>> > little confused as to whether to shard with schemas or databases or both.
>>
>> Suggest looking at the XtremeData product. It is a parallel,
>> shared-nothing implementation of PG that should solve your
>> needs rather nicely.
>>
>> You can see a description of their product at
>> https://xtremedata.com/
>>
>> Happy scaling :-)
>>
>>
> Having been a production DBA for both the DBX (XtremeData) and the
> Greenplum MPP database platforms, IMO Greenplum is far superior to DBX.
> Issues with the GP master node being a single point of failure are solved
> by a secondary master node and automatic failover technology e.g.
> keepalived.
>
> But, it sounds like the OP is not really looking for the kind of scale
> that an MPP solution provides, but rather the kind of scale that is
> typically solved by a service-orchestration suite.  I don't think that "a
> rather large fin-tech installation" with "high scalability requirements" is
> really enough detail to give a recommendation on orchestration software.
>
> -dan
>


FATAL: failed to create a backend connection

2018-02-02 Thread Vikas Sharma
Hi There,

We are using Postgresql 9.3 with pgpool-II-93-3.5.10.
One Master replicating to 3 Standbys using streaming replication. Pgpool is
used for load balancing only.

Lately we are seeing the error below on our application servers.

Caused by: org.postgresql.util.PSQLException: FATAL: failed to create a
backend connection
Detail: executing failover on backend

Are we hitting max_connections on the PostgreSQL instance, or has pgpool
reached its own connection limit and can't spawn more?
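
For reference, a minimal check I plan to run on the PostgreSQL side (the
pgpool side limit would be num_init_children in pgpool.conf):

SELECT count(*) AS current_connections,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity;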

Please advise.

Regards
Vikas


Re: FATAL: failed to create a backend connection

2018-02-02 Thread Pavan Teja
On Feb 2, 2018 5:57 PM, "Vikas Sharma"  wrote:

Hi There,

We are using Postgresql 9.3 with pgpool-II-93-3.5.10.
One Master replicating to 3 Standbys using streaming replication. Pgpool is
used for load balancing only.

Lately we are seeing the error below on our application servers.

Caused by: org.postgresql.util.PSQLException: FATAL: failed to create a
backend connection
Detail: executing failover on backend

Are we hitting max_connections on the PostgreSQL instance, or has pgpool
reached its own connection limit and can't spawn more?

Please advise.

Regards
Vikas


Hi,
   It seems like there may be a disagreement between postgres and
pgpool as to where the Unix domain sockets for talking to postgres ought to
be found.
It sounds like a situation where the application can connect to postgres but
not through pgpool, as if all traffic were being directed to the local
postgres node.
1) Check whether any of the postgres backends are cancelling queries with a
'statement timeout' error.

2) If statement_timeout is not set in postgresql.conf, the application might
have altered it at the session level.

3) Check whether any long-running queries are being cancelled by the server
(see the sketch below).
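
A minimal sketch for the checks above, assuming direct access to a backend
(pg_stat_activity column names as of PostgreSQL 9.3):

SHOW statement_timeout;

SELECT pid, state, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;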

Regards,
Pavan


Increasing size of array items without locking

2018-02-02 Thread Rowan Seymour
Hi. I'm trying to increase the size of items in an array column on a large
production database, but it's locking the table. Is it correct that when you
make a field larger, it shouldn't lock because it doesn't have to check the
constraint on every row? For example, expanding a VARCHAR column from 255
chars to 2048 chars.

Is the problem here that it is an array field?

ALTER TABLE "msgs_msg" ALTER COLUMN "attachments" TYPE varchar(2048)[]
USING "attachments"::varchar(2048)[];

Thanks

-Rowan
-- 
*Rowan Seymour* | +593 979099233 | @rowanseymour


Ensure extension exists

2018-02-02 Thread Geoffrey Hoffman
I want to use the pgcrypto extension with a particular database. Does it exist 
permanently and survive a server reboot? Or if not, how do you ensure it runs 
when the server restarts?

I ask because in my Docker development environment I have to run it every time 
I restart the container, and just want to be prepared for production in AWS 
Aurora.
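
For context, the statement in question is just the standard extension install,
shown here in its idempotent form (safe to re-run from an init script):

CREATE EXTENSION IF NOT EXISTS pgcrypto;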

Thanks,
Geoff


Re: Master-Slave error: the database system is starting up

2018-02-02 Thread Sargez
Hello

Thank you very much for your interest! I am very grateful!

Here are my results:

postgres=# select * from pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid              | 4372
usesysid         | 10
usename          | postgres
application_name | walreceiver
client_addr      | ip-slave
client_hostname  |
client_port      | 57344
backend_start    | 2018-02-01 23:08:49.729755+03
backend_xmin     |
state            | streaming
sent_location    | A/E7CE29F0
write_location   | A/E7CE2878
flush_location   | A/E7CE2878
replay_location  | A/E7CE2878
sync_priority    | 0
sync_state       | async
(1 row)

postgres=# select * from pg_replication_slots;
-[ RECORD 1 ]-------+-----------
slot_name           | slot_1
plugin              |
slot_type           | physical
datoid              |
database            |
active              | t
active_pid          | 4372
xmin                |
catalog_xmin        |
restart_lsn         | A/E7CE3CD0
confirmed_flush_lsn |
(1 row)

I'll wait for your reply. The log file still shows the same error when trying
to connect to postgres:
"the database system is starting up"






Postgres Planner "Inconsistency"?

2018-02-02 Thread Renzo Bertuzzi
Hi
Can you help me with this situation, please?

I have a strange problem with a query where the planner only uses an index
if I use a constant value, but if I use a subquery it prefers a seq
scan.

I have table "sample_table" with columns id serial primary key, and
int_flag, with an index on int_flag.
I inserted 240387 values with int_flag=1 and 1 value with int_flag=2

so the table has 240388 total rows, the last row of the table has int_flag=2
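
For reproduction, a minimal sketch of that setup (assumed DDL; the exact
schema is in the SQL Fiddle linked below):

CREATE TABLE sample_table (
    id       serial PRIMARY KEY,
    int_flag int
);
CREATE INDEX index_sample_table_int_flag_ix ON sample_table (int_flag);

INSERT INTO sample_table (int_flag)
SELECT 1 FROM generate_series(1, 240387);
INSERT INTO sample_table (int_flag) VALUES (2);

VACUUM ANALYZE sample_table;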

If I execute this query, the planner chooses the index:

explain (analyze ,verbose,buffers)
SELECT id
FROM sample_table
WHERE
  int_flag = any((array[2])::int[])

QUERY PLAN
------------------------------------------------------------------------------
Index Scan using index_sample_table_int_flag_ix on public.sample_table  (cost=0.42..39.86 rows=1 width=8) (actual time=0.023..0.023 rows=1 loops=1)
  Output: id
  Index Cond: (sample_table.int_flag = ANY ('{2}'::integer[]))
  Buffers: shared hit=28
Planning time: 0.087 ms
Execution time: 0.046 ms


but if I slightly change the query to:

explain (analyze ,verbose,buffers)
SELECT id
FROM sample_table
WHERE
  int_flag = any((select array[2])::int[])

now postgres will do a seq scan.
I have run vacuum and analyze but the result is the same.

QUERY PLAN
------------------------------------------------------------------------------
Seq Scan on public.sample_table  (cost=0.01..8843.74 rows=240388 width=8) (actual time=44.993..44.995 rows=1 loops=1)
  Output: id
  Filter: (sample_table.int_flag = ANY ($0))
  Rows Removed by Filter: 240387
  Buffers: shared hit=3435
  InitPlan 1 (returns $0)
    ->  Result  (cost=0.00..0.01 rows=1 width=32) (actual time=0.002..0.003 rows=1 loops=1)
          Output: '{2}'::integer[]
Planning time: 0.092 ms
Execution time: 45.017 ms


I have created a SQL Fiddle to demonstrate the issue:

http://sqlfiddle.com/#!17/5be43/5



I suppose postgres prefers a seq scan because it treats the subquery as a
non-deterministic value, while in the first case the planner has all the
values beforehand?

I'm using PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Debian
4.9.2-10) 4.9.2, 64-bit

PS: This is a simplified query; the actual query will use another small
table to build the array with fewer than 100 values, and sample_table can
have up to 5 million entries. I have tried using a CTE with the array, but
it still does a seq scan.

cheers


Re: Unexpected ErrorMessage reply to SSLRequest

2018-02-02 Thread Peter Eisentraut
On 1/31/18 17:01, Michał Muskała wrote:
> I'm one of the maintainers of postgrex [1] - an Elixir client for
> Postgresql. We implement the wire protocol and connect directly through
> tcp. We got a bug report indicating we got an unexpected ErrorMessage
> packet in reply to SSLRequest.

What did the client error message say, and what did you see in the
server log?

> I wonder, if this is a bug or rather an omission in the docs that the
> ErrorMessage can happen under some failure scenarios other than lack of
> SSL support.

I'm not sure, but the documentation also says that you should handle it
nonetheless. ;-)

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Ensure extension exists

2018-02-02 Thread Peter Eisentraut
On 2/1/18 19:47, Geoffrey Hoffman wrote:
> I want to use the pgcrypto extension with a particular database. Does it 
> exist permanently and survive a server reboot? Or if not, how do you ensure 
> it runs when the server restarts?

Once an extension is installed, it stays installed, just like any other
permanent database object such as tables and functions.
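
A quick way to confirm it is still there after a restart (a minimal check
against the catalog):

SELECT extname, extversion FROM pg_extension WHERE extname = 'pgcrypto';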

> I ask because in my Docker development environment I have to run it every 
> time I restart the container, and just want to be prepared for production in 
> AWS Aurora.

Maybe you have your development environment set up in a way that it
blows away your database on each run.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Ensure extension exists

2018-02-02 Thread geoff hoffman
On Feb 2, 2018, at 12:38 PM, Peter Eisentraut 
 wrote:
> 
> On 2/1/18 19:47, Geoffrey Hoffman wrote:
>> I want to use the pgcrypto extension with a particular database. Does it 
>> exist permanently and survive a server reboot? Or if not, how do you ensure 
>> it runs when the server restarts?
> 
> Once an extension is installed, it stays installed, just like any other
> permanent database object such as tables and functions.
> 
>> I ask because in my Docker development environment I have to run it every 
>> time I restart the container, and just want to be prepared for production in 
>> AWS Aurora.
> 
> Maybe you have your development environment set up in a way that it
> blows away your database on each run.
> 

Yep, ok thanks for confirming!




Re: Postgres Planner "Inconsistency"?

2018-02-02 Thread Laurenz Albe
Renzo Bertuzzi wrote:
> I have a strange problem with a query where the planner only uses an index
> if I use a constant value, but if I use a subquery it prefers a seq scan.
> 
> I have table "sample_table" with columns id serial primary key, and int_flag, 
> with an index on int_flag.
> I inserted 240387 values with int_flag=1 and 1 value with int_flag=2
> 
> so the table has 240388 total rows, the last row of the table has int_flag=2
> 
> If I execute this query, the planner chooses the index:
> 
> explain (analyze ,verbose,buffers)
> SELECT id
> FROM sample_table
> WHERE  
>   int_flag = any((array[2])::int[])
> 
> QUERY PLAN
> ------------------------------------------------------------------------------
> Index Scan using index_sample_table_int_flag_ix on public.sample_table  (cost=0.42..39.86 rows=1 width=8) (actual time=0.023..0.023 rows=1 loops=1)
>   Output: id
>   Index Cond: (sample_table.int_flag = ANY ('{2}'::integer[]))
>   Buffers: shared hit=28
> Planning time: 0.087 ms
> Execution time: 0.046 ms
> 
> 
> but if I slightly change the query to:
> 
> explain (analyze ,verbose,buffers)
> SELECT id
> FROM sample_table
> WHERE  
>   int_flag = any((select array[2])::int[])
> 
> now postgres will do a seq scan.
> I have run vacuum and analyze but the result is the same.
> 
> QUERY PLAN
> ------------------------------------------------------------------------------
> Seq Scan on public.sample_table  (cost=0.01..8843.74 rows=240388 width=8) (actual time=44.993..44.995 rows=1 loops=1)
>   Output: id
>   Filter: (sample_table.int_flag = ANY ($0))
>   Rows Removed by Filter: 240387
>   Buffers: shared hit=3435
>   InitPlan 1 (returns $0)
>     ->  Result  (cost=0.00..0.01 rows=1 width=32) (actual time=0.002..0.003 rows=1 loops=1)
>           Output: '{2}'::integer[]
> Planning time: 0.092 ms
> Execution time: 45.017 ms
> 
> I suppose postgres prefers a seq scan because it treats the subquery as a
> non-deterministic value, while in the first case the planner has all the
> values beforehand?
> 
> I'm using PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Debian 
> 4.9.2-10) 4.9.2, 64-bit
> 
> PS: This is a simplified query; the actual query will use another small table
> to build the array with fewer than 100 values, and sample_table can have up
> to 5 million entries. I have tried using a CTE with the array, but it still
> does a seq scan.

In the second case, the optimizer does not think hard enough to figure out
that it actually could know that the InitPlan has a result of 2, and with
your real query it probably couldn't know for sure even if it tried hard.

So it has to come up with a plan without knowing what the search values will
be, and it chooses a sequential scan as the lesser evil, since it guesses
that it will have to retrieve most of the tuples anyway.

Maybe you can write your query as a join instead.
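
A hedged sketch of that rewrite, assuming the small table from your PS is
called flag_values with a column flag (hypothetical names):

SELECT s.id
FROM sample_table s
JOIN flag_values f ON f.flag = s.int_flag;

That gives the planner the option of a nested loop with an index scan on
int_flag once it sees how few rows the small table produces.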

Yours,
Laurenz Albe