Re: Pg Upgrade failing as it is not able to start and stop server properly

2018-01-04 Thread kiran gadamsetty
Hi All,

Any help on this issue?


Regards,
Kiran G

On Thu, Dec 21, 2017 at 3:03 PM, kiran gadamsetty <
kirankumar.gadamse...@gmail.com> wrote:

> Hi,
>
> While upgrading a PostgreSQL database from version 9.1.4 to 9.6.4 on a
> Windows 2012 server, pg_upgrade is failing because the postgres service
> start and stop commands fail with timeouts. I found information suggesting
> that using -m immediate mode will solve the problem, but we are interested
> in the root cause of this issue.
> Without -m immediate, the server start and stop commands simply wait until
> the command times out, and then they fail.
> There is no load on the server that could cause a performance issue.
>
> Below is the failure snippet from the pg_upgrade_internal.log file.
>
>
> Checking for presence of required libraries ok
> Checking database user is the install user  executing:
> SELECT rolsuper, oid FROM pg_catalog.pg_roles WHERE rolname = current_user
> AND rolname !~ '^pg_'
> executing: SELECT COUNT(*) FROM pg_catalog.pg_roles WHERE rolname !~ '^pg_'
> ok
> Checking for prepared transactions  executing:
> SELECT * FROM pg_catalog.pg_prepared_xacts
> ok
>
> If pg_upgrade fails after this point, you must re-initdb the
> new cluster before continuing.
>
> Performing Upgrade
> --
> Analyzing all rows in the new cluster
>  "c:\EMC\AppSync\jboss\datastore_96\engine\bin/vacuumdb" --port 50432
> --username ^"apollosuperuser^" --all --analyze --verbose >>
> "pg_upgrade_utility.log" 2>&1
> ok
> Freezing all rows on the new cluster
> "c:\EMC\AppSync\jboss\datastore_96\engine\bin/vacuumdb" --port 50432
> --username ^"apollosuperuser^" --all --freeze --verbose >>
> "pg_upgrade_utility.log" 2>&1
>
> *failure*
> "c:\EMC\AppSync\jboss\datastore_96\engine\bin/pg_ctl" -w -D
> "c:\EMC\AppSync\jboss\datastore_96\data" -o "" -m fast stop >>
> "pg_upgrade_utility.log" 2>&1
>
>
>
> Below are the pg_upgrade logs pertaining to the failure. Please suggest
> any solutions to fix this issue.
>
> Regards,
> Kiran G
>
>


Re: Pg Upgrade failing as it is not able to start and stop server properly

2018-01-04 Thread Bruce Momjian
On Thu, Jan  4, 2018 at 01:49:17PM +0530, kiran gadamsetty wrote:
> While upgrading a PostgreSQL database from version 9.1.4 to 9.6.4 on a
> Windows 2012 server, pg_upgrade is failing because the postgres service
> start and stop commands fail with timeouts. I found information suggesting
> that using -m immediate mode will solve the problem, but we are interested
> in the root cause of this issue.
> Without -m immediate, the server start and stop commands simply wait until
> the command times out, and then they fail.
> There is no load on the server that could cause a performance issue.
...
> Freezing all rows on the new cluster
> "c:\EMC\AppSync\jboss\datastore_96\engine\bin/vacuumdb" --port 50432
> --username ^"apollosuperuser^" --all --freeze --verbose >>
> "pg_upgrade_utility.log" 2>&1
> 
> *failure*
> "c:\EMC\AppSync\jboss\datastore_96\engine\bin/pg_ctl" -w -D
> "c:\EMC\AppSync\jboss\datastore_96\data" -o "" -m fast stop >>
> "pg_upgrade_utility.log" 2>&1

It appears the freeze operation is failing, and I have no idea what
would cause that.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Production Database requirement

2018-01-04 Thread Azimuddin Mohammed
Hello All,
Can anyone please let me know:

1. What are the hardware and software requirements for PostgreSQL to be
installed in production? We are planning to use Postgres as a Hadoop
backend, to store user access info, metadata, etc. So it will not be a
high-transaction DB, but we might see lots of reads to this DB.

2. Can we use VMs to run the DB? Is it recommended?


Thanks in Advance!


-- 

Regards,
Azim





Re: Production Database requirement

2018-01-04 Thread Francis Santiago
2018-01-04 14:47 GMT-03:00 Azimuddin Mohammed :

> Hello All,
> Can anyone please let me know:
>
> 1. What are the hardware and software requirements for PostgreSQL to be
> installed in production? We are planning to use Postgres as a Hadoop
> backend, to store user access info, metadata, etc. So it will not be a
> high-transaction DB, but we might see lots of reads to this DB.
>

The minimum requirements depend on how you use your databases; you could
start with 50 GB of disk and 2 GB of memory, for example.

>
> 2. Can we use VMs to run the DB? Is it recommended?
>

Yes, you can use VMs.

>
>
> Thanks in Advance!
>
>
> --
>
> Regards,
> Azim
>
>
>
> 
>


Re: Production Database requirement

2018-01-04 Thread Vincenzo Romano
2018-01-04 18:47 GMT+01:00 Azimuddin Mohammed :
> Hello All,
> Can anyone please let me know:
>
> 1. What are the hardware and software requirements for PostgreSQL to be
> installed in production? We are planning to use Postgres as a Hadoop
> backend, to store user access info, metadata, etc. So it will not be a
> high-transaction DB, but we might see lots of reads to this DB.

Your question makes little sense without any details about the data and
the application architecture. Please elaborate on these points in order
to get some meaningful advice.

> 2. Can we use VMs to run the DB? Is it recommended?

There's nothing against virtualization apart from the computing and I/O
limits it can impose.
In general, real hardware provides better performance.

> Thanks in Advance!
>
>
> --
>
> Regards,
> Azim
>
>



-- 
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS



Re: Replication questions - read-only and temporary read/write slaves

2018-01-04 Thread Tiffany Thang
Michael,
Thanks for your input. What I meant to say was rolling back all the
changes. I was hoping for a way to temporarily open the read-only standby
in r/w mode for testing purposes and then roll back all the changes made
during the test without having to re-create the standby from scratch.

Thanks.

Tiff

On Wed, Jan 3, 2018 at 11:52 PM, Michael Paquier 
wrote:

> On Fri, Dec 15, 2017 at 12:03:08PM -0500, Tiffany Thang wrote:
> > 1. set up a read-only slave database? The closest solution I could find
> is
> > Hot Standby but the slave would not be accessible until after a
> > failover.
>
> That's what the parameter hot_standby is for in recovery.conf. When a
> server is in recovery mode and once it has reached a consistent point,
> then it can be accessed for read-only queries if this parameter is
> enabled. You need to be careful about how you want to handle replication
> conflicts, though, particularly if you have long-running read queries on
> standbys; these can be tuned with hot_standby_feedback. Be careful,
> though, not to bloat the primary too much: retaining an older XID horizon
> causes tables to retain more past versions of tuples, which costs space
> as well as future VACUUM cleanup work.
>
> > 2. temporarily convert a read-only slave to read-write mode for testing
> > read/write workloads? Currently in Oracle, we can temporarily open our
> > read-only standby database in read-write mode to occasionally test our
> > read-write workloads. We would stop the log apply on the standby
> > database, convert the read-only database to read-write, perform our
> > read/write test, discard all the changes after testing, and reopen and
> > resync the standby database in read-only mode. Is there a similar
> > feature in PostgreSQL, or are there ways to achieve something close to
> > our needs?
>
> Unfortunately not. You could get the same kind of behavior by promoting
> a standby and then doing your testing. Then you would need to re-create
> the standby from scratch. What does "discard all the changes" mean?
> --
> Michael
>
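The hot-standby setup discussed in this thread might look roughly like the
following. This is only a sketch for the 9.x-era releases being discussed
(recovery.conf was replaced in PostgreSQL 12); the host name and user name
are placeholders:

```
# postgresql.conf on the standby
hot_standby = on            # allow read-only queries during recovery
hot_standby_feedback = on   # fewer replication conflicts, at the cost of
                            # possible bloat on the primary

# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
```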


Re: Production Database requirement

2018-01-04 Thread Michelle Konzack
Salam Mohammed,

On DATE, Azimuddin Mohammed wrote:
> Hello All,
> Can anyone please let me know:
>
> 1. What are the hardware and software requirements for PostgreSQL to be
> installed in production? We are planning to use Postgres as a Hadoop
> backend, to store user access info, metadata, etc. So it will not be a
> high-transaction DB, but we might see lots of reads to this DB.

It sounds like only a little bit more than what I have as a replacement
for /etc/passwd, /etc/shadow, /etc/groups and /etc/gshadow...

I have exactly this too, and it runs quite well on a Mini-ITX board with a
1000 MHz CPU and 1 GByte of memory, 60% of which is unused.  The Debian
GNU/Linux 7.11 installation together with the PostgreSQL database uses
less than 5 GByte...  Let's say the whole thing would run on a 20 GByte HDD.

I currently manage 23,000 users with it...

And no, I do not use LDAP.

My web services authenticate directly against the database server, which
is hosted at a friendly ISP in Estonia.

> 2. Can we use VMs to run the DB? Is it recommended?

Yes of course.

> Thanks in Advance!



-- 
Michelle Konzack
00372-54541400




Bulk Insert/Update Scenario

2018-01-04 Thread Mana M
I am setting up a data processing pipeline whose end result gets stored
in Postgres. I have not been a heavy DB user in general, and I had a
question about how best to handle a bulk insert/update scenario with
Postgres.

Here is my use case:
* I get a file with thousands of entries (lines) periodically.
* I process each entry (line) from the file, and the data is split and
stored in different Postgres tables. Some tables have foreign keys on
other tables. There is "no" straight mapping from an entry in the file to
the Postgres tables.
* Some data could be updates to existing rows in Postgres tables, while
other data could be inserts.
* I would like to ensure atomicity (either all rows get stored in all
tables, or nothing gets stored on a failure from Postgres).
* I would also like to make sure there are no concurrency issues in case
two different processes try to perform the above at the same time.
* Ideally, I would want to avoid individual upserts after processing every
single entry (line) from the file.


I thought this would be a fairly common use case. What is the best way to
handle the above? What performance issues should I keep in mind, and what
are the pitfalls? I tried looking for articles on such a use case - any
pointers would be greatly appreciated.


By the way, the application is in Python, running in Apache Spark, and can
use any Python libraries that help simplify the above.

Thanks in advance.


Re: Bulk Insert/Update Scenario

2018-01-04 Thread Jordan Deitch
Hi Mana,

A starting point would be reading about the upsert functionality of
INSERT:
https://www.postgresql.org/docs/current/static/sql-insert.html

You would do something like:
INSERT INTO table ... ON CONFLICT (key_column) DO UPDATE SET ...

This operation would be atomic. You can also look into deferrable
constraints, so that you can perform all your insert/update operations in
a transaction block and still satisfy the constraints.

I hope this helps to get you on the right track!

Thanks,
Jordan Deitch
http://id.rsa.pub
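To make the transaction-plus-upsert idea concrete, here is a minimal
sketch in Python. It uses the standard-library sqlite3 module (whose
modern versions share the INSERT ... ON CONFLICT ... DO UPDATE syntax)
purely as a stand-in; against Postgres you would send the same statement
through a driver such as psycopg2, and the table and column names here
are invented for illustration:

```python
import sqlite3

# Stand-in for a Postgres connection; table/column names are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT INTO items VALUES (1, 10)")  # pre-existing row

rows = [(1, 99), (2, 20), (3, 30)]  # a mix of updates and inserts

# One transaction: either every row lands or none does.
with conn:
    conn.executemany(
        "INSERT INTO items (id, qty) VALUES (?, ?) "
        "ON CONFLICT (id) DO UPDATE SET qty = excluded.qty",
        rows,
    )

print(sorted(conn.execute("SELECT id, qty FROM items")))
# [(1, 99), (2, 20), (3, 30)]
```

Because the whole batch runs inside one transaction, a failure on any row
rolls back all of them, which matches the atomicity requirement in the
original question.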


Re: Bulk Insert/Update Scenario

2018-01-04 Thread legrand legrand
Hi,

check the documentation section on populating a database;
it explains how to create a dummy table,
load it using the COPY command,
and then INSERT / UPDATE the target tables (using ON CONFLICT if needed)

You can also investigate:
- the file_fdw extension (which permits using text files as tables)
- the pg_bulkload extension (which permits loading data the way Oracle's
loader does)
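The staging-table pattern described above can be sketched in Python as
follows. Again the standard-library sqlite3 module is used only as a
stand-in for illustration; with Postgres you would create the staging
table as TEMP and bulk-load it with COPY (e.g. psycopg2's copy_from), and
the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("INSERT INTO target VALUES (1, 'old')")  # pre-existing row

# 1. Bulk-load the raw rows into a staging table (COPY in Postgres).
conn.execute("CREATE TEMP TABLE staging (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO staging VALUES (?, ?)",
                 [(1, "new"), (2, "two")])

# 2. Upsert from staging into the target in a single atomic statement.
#    (The "WHERE true" is needed by SQLite to disambiguate the parse;
#    Postgres accepts it too.)
with conn:
    conn.execute(
        "INSERT INTO target (id, val) SELECT id, val FROM staging WHERE true "
        "ON CONFLICT (id) DO UPDATE SET val = excluded.val"
    )

print(sorted(conn.execute("SELECT id, val FROM target")))
# [(1, 'new'), (2, 'two')]
```

This avoids issuing one upsert per input line: the per-row work is a cheap
bulk load into the staging table, and the single INSERT ... SELECT does the
set-based merge.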

Regards
PAscal






Re: Bulk Insert/Update Scenario

2018-01-04 Thread Mana M
Thanks Jordan.

One more question I had: is there any way to avoid doing individual
INSERT ... ON CONFLICT statements? I was thinking about dumping everything
into a TEMP table and using that as the source for INSERT ... ON CONFLICT.
However, I was not sure how to get thousands of rows from my Python
application into the TEMP table in one shot. Or are there better
alternatives?

Thanks.

On Thu, Jan 4, 2018 at 12:43 PM, Jordan Deitch  wrote:

> Hi Mana,
>
> A starting point would be reading about the batch upsert functionality:
> https://www.postgresql.org/docs/current/static/sql-insert.html
>
> You would do something like:
> INSERT INTO table ... ON CONFLICT (key_column) DO UPDATE SET ...
>
> This operation would be atomic. You can also look into deferrable
> constraints such that you would perform all your insert / update operations
> in a transaction block and accommodate for the constraints.
>
> I hope this helps to get you on the right track!
>
> Thanks,
> Jordan Deitch
> http://id.rsa.pub
>
>
>


Re: Bulk Insert/Update Scenario

2018-01-04 Thread Jordan Deitch
INSERT ... ON CONFLICT can be run as a bulk operation:

create table test(id int primary key);
insert into test(id) values (1), (2), (3), (4)
    on conflict (id) do nothing;

Unless you mean you don't want to run this for every table?


Thanks,
Jordan Deitch
http://id.rsa.pub


Re: Replication questions - read-only and temporary read/write slaves

2018-01-04 Thread Michael Paquier
On Fri, Jan 5, 2018 at 3:58 AM, Tiffany Thang  wrote:
> Thanks for your input. What I meant to say was rolling back all the
> changes. I was hoping for a way to temporarily open the read-only standby
> in r/w mode for testing purposes and then roll back all the changes made
> during the test without having to re-create the standby from scratch.

There is no backend-side feature that allows undo actions; Postgres
only supports redo. Recycling an older standby is the specialty of
pg_rewind, which supports backward timeline lookups from 9.6 onward. So
you could emulate the same behavior as Oracle by:
1) Promoting the standby where you want the tests to happen.
2) Running your r/w load on it.
3) Stopping the standby.
4) Rewinding the standby using pg_rewind, so that it is able to rejoin
the cluster. This needs a new recovery.conf, of course. pg_rewind also
needs to find in the standby's pg_xlog all the WAL segments from the
point where the WAL forked (when the standby was promoted) up to the
point where you ran your r/w tests. This can be arranged with
wal_keep_segments, with a replication slot, with larger values of
checkpoint_timeout and max_wal_size, or even by copying segments from an
archive before running the rewind. In all cases, be careful of bloat in
the partition holding pg_xlog.
-- 
Michael
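As a rough, untested command sketch of the four steps above (the data
directory path, port, and connection string are placeholders, and a
suitable recovery.conf must be put back in step 4):

```
# 1. Promote the test standby
pg_ctl -D /path/to/standby promote

# 2. Run the read/write tests against it, then stop it
pg_ctl -D /path/to/standby -m fast stop

# 3. Rewind it against the still-running primary (9.6 syntax)
pg_rewind --target-pgdata=/path/to/standby \
          --source-server='host=primary.example.com port=5432 user=postgres'

# 4. Restore recovery.conf and restart it as a standby
pg_ctl -D /path/to/standby start
```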



BDR PG9.4 Replica Identity Full

2018-01-04 Thread ning chan
Hi BDR PostgreSQL experts,

I have recently been trying out logical decoding with REPLICA IDENTITY
FULL using postgresql-bdr94-server.
However, I seem to have run into the bug reported in this thread:
https://github.com/2ndQuadrant/bdr/issues/104.

I tested the stock version of PG 9.4 and it works just fine.

Does anyone know of a workaround or solution to get REPLICA IDENTITY FULL
working in postgresql-bdr?

To reproduce:
test1=# create table test (i int primary key, t text);
CREATE TABLE
test1=# alter table test replica identity full;
ALTER TABLE
test1=# insert into test values (1, 'hello');
INSERT 0 1
test1=# SELECT * FROM pg_create_logical_replication_slot('regression_slot',
'test_decoding');
    slot_name    | xlog_position
-----------------+---------------
 regression_slot | 0/70BBAB0
(1 row)

cucm=# SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL,
NULL);
 location | xid | data
----------+-----+------
(0 rows)

test1=# update test set t='world';
ERROR:  Cannot run UPDATE or DELETE on table test because it does not have
a PRIMARY KEY.
HINT:  Add a PRIMARY KEY to the table

FYI, the exact same commands work just fine on stock PG 9.4.5.

And here is the postgresql bdr package that i used
[root@bdr301 /]# rpm -qa |grep post
postgresql-bdr94-2ndquadrant-redhat-1.0-3.noarch
postgresql-bdr94-libs-9.4.15_bdr1-1.el6.x86_64
postgresql-bdr94-contrib-9.4.15_bdr1-1.el6.x86_64
postgresql-bdr94-bdr-1.0.3-3.el6.x86_64
postgresql-bdr94-9.4.15_bdr1-1.el6.x86_64
postgresql-bdr94-server-9.4.15_bdr1-1.el6.x86_64

Thanks!
Ning