Re: could not accept SSL connection: sslv3 alert bad certificate

2019-09-26 Thread Marco Ippolito
Hi Adrian,
putting in /fabric/fabric-ca/fabric-ca-server-config.yaml :

db:
  type: postgres
  datasource: host=localhost port=5433 user=fabmnet_admin
password=fabmnet1971 dbname=fabmnetdb sslmode=verify-ca

this is the result:

(base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server start -b admin:adminpw
2019/09/26 09:44:39 [INFO] Configuration file location:
/home/marco/fabric/fabric-ca/fabric-ca-server-config.yaml
2019/09/26 09:44:39 [INFO] Starting server in home directory:
/home/marco/fabric/fabric-ca
2019/09/26 09:44:39 [INFO] Server Version: 1.4.4
2019/09/26 09:44:39 [INFO] Server Levels: &{Identity:2 Affiliation:1
Certificate:1 Credential:1 RAInfo:1 Nonce:1}
2019/09/26 09:44:39 [INFO] The CA key and certificate already exist
2019/09/26 09:44:39 [INFO] The key is stored by BCCSP provider 'SW'
2019/09/26 09:44:39 [INFO] The certificate is at:
/home/marco/fabric/fabric-ca/ca-cert.pem
2019/09/26 09:44:39 [ERROR] Error occurred initializing database: Failed to
create Postgres database: Failed to execute create database query: pq:
permission denied to create database
2019/09/26 09:44:39 [INFO] Home directory for default CA:
/home/marco/fabric/fabric-ca
2019/09/26 09:44:39 [INFO] Operation Server Listening on 127.0.0.1:9443
2019/09/26 09:44:39 [INFO] Listening on http://0.0.0.0:7054

and the corresponding log in /var/log/postgresql/postgresql-11-fabmnet.log :

2019-09-26 09:21:11.605 CEST [1132] LOG:  received fast shutdown request
2019-09-26 09:21:11.613 CEST [1132] LOG:  aborting any active transactions
2019-09-26 09:21:11.615 CEST [1132] LOG:  background worker "logical
replication launcher" (PID 1169) exited with exit code 1
2019-09-26 09:21:11.616 CEST [1161] LOG:  shutting down
2019-09-26 09:21:11.643 CEST [1132] LOG:  database system is shut down
2019-09-26 09:21:57.370 CEST [1077] LOG:  listening on IPv6 address "::1",
port 5433
2019-09-26 09:21:57.370 CEST [1077] LOG:  listening on IPv4 address
"127.0.0.1", port 5433
2019-09-26 09:21:57.372 CEST [1077] LOG:  listening on Unix socket
"/var/run/postgresql/.s.PGSQL.5433"
2019-09-26 09:21:57.426 CEST [1124] LOG:  database system was shut down at
2019-09-26 09:21:11 CEST
2019-09-26 09:21:57.446 CEST [1077] LOG:  database system is ready to
accept connections
2019-09-26 09:21:58.040 CEST [1147] [unknown]@[unknown] LOG:  incomplete
startup packet
2019-09-26 09:44:39.374 CEST [2902] fabmnet_admin@fabmnetdb ERROR:
 permission denied to create database
2019-09-26 09:44:39.374 CEST [2902] fabmnet_admin@fabmnetdb STATEMENT:
 CREATE DATABASE fabmnetdb
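
The Postgres log makes the immediate cause clear: fabric-ca ends up issuing CREATE DATABASE itself, and fabmnet_admin lacks that privilege. A sketch of the usual fixes, run as a superuser (role and database names taken from the log above; pick one):

```sql
-- Option 1: allow the role to create databases, so fabric-ca can do it itself
ALTER ROLE fabmnet_admin CREATEDB;

-- Option 2: create the database up front, owned by the connecting role
CREATE DATABASE fabmnetdb OWNER fabmnet_admin;
```

Note that \l earlier in the thread shows fabmnetdb already exists, so if fabric-ca still tries to create it, its initial connection to that database is presumably failing first.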

Putting this in /home/marco/fabric/fabric-ca/fabric-ca-server-config.yaml :

db:
  type: postgres
  datasource: host=localhost port=5433 user=fabmnet_admin
password=fabmnet1971 dbname=fabmnetdb sslmode=require

which, according to
https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/users-guide.html#configuring-the-database
means "Always SSL (skip verification)", yields again the same error message:

(base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server start -b admin:adminpw
2019/09/26 10:08:27 [INFO] Configuration file location:
/home/marco/fabric/fabric-ca/fabric-ca-server-config.yaml
2019/09/26 10:08:27 [INFO] Starting server in home directory:
/home/marco/fabric/fabric-ca
2019/09/26 10:08:27 [INFO] Server Version: 1.4.4
2019/09/26 10:08:27 [INFO] Server Levels: &{Identity:2 Affiliation:1
Certificate:1 Credential:1 RAInfo:1 Nonce:1}
2019/09/26 10:08:27 [INFO] The CA key and certificate already exist
2019/09/26 10:08:27 [INFO] The key is stored by BCCSP provider 'SW'
2019/09/26 10:08:27 [INFO] The certificate is at:
/home/marco/fabric/fabric-ca/ca-cert.pem
2019/09/26 10:08:27 [ERROR] Error occurred initializing database: Failed to
create Postgres database: Failed to execute create database query: pq:
permission denied to create database
2019/09/26 10:08:27 [INFO] Home directory for default CA:
/home/marco/fabric/fabric-ca
2019/09/26 10:08:27 [INFO] Operation Server Listening on 127.0.0.1:9443
2019/09/26 10:08:27 [INFO] Listening on http://0.0.0.0:7054

and the corresponding portion of the log file:

2019-09-26 10:08:27.947 CEST [3728] fabmnet_admin@fabmnetdb ERROR:
 permission denied to create database
2019-09-26 10:08:27.947 CEST [3728] fabmnet_admin@fabmnetdb STATEMENT:
 CREATE DATABASE fabmnetdb

I do not understand...
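
One thing I still need to rule out (an assumption, since mail wrapping hides the real file layout): in YAML the whole datasource value must be a single scalar. If the connection string is split across lines in the file itself, only the first fragment reaches the driver. A known-good shape would be:

```yaml
db:
  type: postgres
  datasource: host=localhost port=5433 user=fabmnet_admin password=fabmnet1971 dbname=fabmnetdb sslmode=require
```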

Marco

Il giorno gio 26 set 2019 alle ore 02:07 Adrian Klaver <
adrian.kla...@aklaver.com> ha scritto:

> On 9/25/19 12:34 PM, Marco Ippolito wrote:
> > Following the indications here:
> >
> https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/users-guide.html#configuring-the-database
> > I'm trying to understand how to correctly set Fabric-CA with a
> > PostgreSQL-11 database in Ubuntu 18.04.02 Server Edition.
> >
>
> > This is the corresponding part in
> > /var/log/postgresql/postgresql-11-fabmnet.log :
> >
> >  2019-09-25 20:51:52.655 CEST [1096] LOG:  listening on IPv6 address
> > "::1",
> >  port 5433
> >  2019-09-25 20:51:52.673 CEST [1096] LOG:  listening on IPv4 address
> > 

Re: could not accept SSL connection: sslv3 alert bad certificate

2019-09-26 Thread Marco Ippolito
Thanks Martin. I need to check these important aspects as well.
What do you mean by "disable hardcoded BCCSP Provider"?

Marco

Il giorno gio 26 set 2019 alle ore 00:43 Martin Gainty 
ha scritto:

> Hi Marco
>
> Not necessarily with PG, but with all the other servers I secure, when I see
> that error it means the certificate and key your provider is referencing are
> already stored in storage (in my case a "truststore").
> I would clean all storage locations of the certificate and key,
> then I would allow the BCCSP provider to push your cert and key into the
> stores (identified by the BCCSP config).
>
> If that doesn't work, I would disable the hardcoded BCCSP provider and then
> manually import your certs and keys into your truststore.
>
> YMMV
> martin
> --
> *From:* Marco Ippolito 
> *Sent:* Wednesday, September 25, 2019 3:34 PM
> *To:* pgsql-general@lists.postgresql.org <
> pgsql-general@lists.postgresql.org>
> *Subject:* could not accept SSL connection: sslv3 alert bad certificate
>
> Following the indications here:
> https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/users-guide.html#configuring-the-database
> I'm trying to understand how to correctly set Fabric-CA with a
> PostgreSQL-11 database in Ubuntu 18.04.02 Server Edition.
>
> I created a postgresql-11 db to which I can connect with SSL:
>
> (base) marco@pc:~$ psql --cluster 11/fabmnet -h 127.0.0.1 -d
> fabmnetdb -U fabmnet_admin
> Password for user fabmnet_admin:
> psql (11.5 (Ubuntu 11.5-1.pgdg18.04+1))
> SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384,
> bits: 256, compression: off)
> Type "help" for help.
>
> fabmnetdb=> \l
>                                   List of databases
>    Name    |     Owner     | Encoding | Collate |  Ctype  |   Access privileges
> -----------+---------------+----------+---------+---------+-----------------------
>  fabmnetdb | fabmnet_admin | UTF8     | C.UTF-8 | C.UTF-8 |
>  postgres  | postgres      | UTF8     | C.UTF-8 | C.UTF-8 |
>  template0 | postgres      | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
>            |               |          |         |         | postgres=CTc/postgres
>  template1 | postgres      | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
>            |               |          |         |         | postgres=CTc/postgres
> (4 rows)
>
> fabmnetdb=>
>
>
> but when trying to start a fabric-ca-server :
>
> (base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server start -b
> admin:adminpw
> 2019/09/25 20:56:57 [INFO] Configuration file location:
> /home/marco/fabric
> /fabric-ca/fabric-ca-server-config.yaml
> 2019/09/25 20:56:57 [INFO] Starting server in home directory:
> /home/marco
> /fabric/fabric-ca
> 2019/09/25 20:56:57 [INFO] Server Version: 1.4.4
> 2019/09/25 20:56:57 [INFO] Server Levels: &{Identity:2 Affiliation:1
> Certificate:1 Credential:1 RAInfo:1 Nonce:1}
> 2019/09/25 20:56:57 [INFO] The CA key and certificate already exist
> 2019/09/25 20:56:57 [INFO] The key is stored by BCCSP provider 'SW'
> 2019/09/25 20:56:57 [INFO] The certificate is at: /home/marco/fabric
> /fabric-ca/ca-cert.pem
> 2019/09/25 20:56:57 [WARNING] Failed to connect to database 'fabmnetdb'
> 2019/09/25 20:56:57 [WARNING] Failed to connect to database 'postgres'
> 2019/09/25 20:56:57 [WARNING] Failed to connect to database 'template1'
> 2019/09/25 20:56:57 [ERROR] Error occurred initializing database:
> Failed
> to connect to Postgres database. Postgres requires connecting to a
> specific database, the following databases were tried: [fabmnetdb
> postgres
>  template1]. Please create one of these database before continuing
> 2019/09/25 20:56:57 [INFO] Home directory for default CA: /home/marco
> /fabric/fabric-ca
> 2019/09/25 20:56:57 [INFO] Operation Server Listening on
> 127.0.0.1:9443
> 2019/09/25 20:56:57 [INFO] Listening on http://0.0.0.0:7054
>
> This is the corresponding part in
> /var/log/postgresql/postgresql-11-fabmnet.log :
>
> 2019-09-25 20:51:52.655 CEST [1096] LOG:  listening on IPv6 address
> "::1",
> port 5433
> 2019-09-25 20:51:52.673 CEST [1096] LOG:  listening on IPv4 address
> "127.0.0.1", port 5433
> 2019-09-25 20:51:52.701 CEST [1096] LOG:  listening on Unix socket
> "/var/run/postgresql/.s.PGSQL.5433"
> 2019-09-25 20:51:52.912 CEST [1171] LOG:  database system was
> interrupted;
>  last known up at 2019-09-25 09:50:30 CEST
> 2019-09-25 20:51:53.001 CEST [1171] LOG:  database system was not
> properly
>  shut down; automatic recovery in progress
> 2019-09-25 20:51:53.011 CEST [1171] LOG:  redo starts at 0/1668238
> 2019-09-25 20:51:53.011 CEST [1171] LOG:  invalid record length at
> 0/1668318: wanted 24, got 0
> 2019-09-25 20:51:53.011 CEST [1171] LOG:  redo done at 0/16682E0
> 2019-09-25 20:51:53.043 CEST [1096] LOG:  databa

Logical replication from standby

2019-09-26 Thread Andreas Joseph Krogh
Hi. Will the feature described here (minimal logical decoding on standbys):
https://www.postgresql.org/message-id/flat/20181212204154.nsxf3gzqv3gesl32%40alap3.anarazel.de
make it possible to do logical replication from standby like I'm looking for in
this thread:
https://www.postgresql.org/message-id/VisenaEmail.15.9f118cec79ac2589.1621cfd8405%40tc7-visena
?

Thanks.

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: could not accept SSL connection: sslv3 alert bad certificate

2019-09-26 Thread Marco Ippolito
After removing the previous cert and key files, I started the fabric-ca
server again, discovering that new cert and key files were created:

(base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server start -b admin:adminpw
2019/09/26 11:56:18 [INFO] Configuration file location:
/home/marco/fabric/fabric-ca/fabric-ca-server-config.yaml
2019/09/26 11:56:18 [INFO] Starting server in home directory:
/home/marco/fabric/fabric-ca
2019/09/26 11:56:18 [INFO] Server Version: 1.4.4
2019/09/26 11:56:18 [INFO] Server Levels: &{Identity:2 Affiliation:1
Certificate:1 Credential:1 RAInfo:1 Nonce:1}
2019/09/26 11:56:18 [WARNING] &{69 The specified CA certificate file
/home/marco/fabric/fabric-ca/ca-cert.pem does not exist}
2019/09/26 11:56:18 [INFO] generating key: &{A:ecdsa S:256}
2019/09/26 11:56:18 [INFO] encoded CSR
2019/09/26 11:56:18 [INFO] signed certificate with serial number
542755587310273579559145444277178107021548224556
2019/09/26 11:56:18 [INFO] The CA key and certificate were generated for CA
2019/09/26 11:56:18 [INFO] The key was stored by BCCSP provider 'SW'
2019/09/26 11:56:18 [INFO] The certificate is at:
/home/marco/fabric/fabric-ca/ca-cert.pem
2019/09/26 11:56:18 [WARNING] Failed to connect to database 'fabmnetdb'
2019/09/26 11:56:18 [WARNING] Failed to connect to database 'postgres'
2019/09/26 11:56:18 [WARNING] Failed to connect to database 'template1'
2019/09/26 11:56:18 [ERROR] Error occurred initializing database: Failed to
connect to Postgres database. Postgres requires connecting to a specific
database, the following databases were tried: [fabmnetdb postgres
template1]. Please create one of these database before continuing
2019/09/26 11:56:18 [INFO] Home directory for default CA:
/home/marco/fabric/fabric-ca
2019/09/26 11:56:18 [INFO] Operation Server Listening on 127.0.0.1:9443
2019/09/26 11:56:18 [INFO] Listening on http://0.0.0.0:7054

but, again, the corresponding log says "bad certificate" :

2019-09-26 11:55:04.514 CEST [4837] [unknown]@[unknown] LOG:  could not
accept SSL connection: sslv3 alert bad certificate
2019-09-26 11:55:04.517 CEST [4839] [unknown]@[unknown] LOG:  could not
accept SSL connection: sslv3 alert bad certificate
2019-09-26 11:55:04.518 CEST [4840] [unknown]@[unknown] LOG:  could not
accept SSL connection: sslv3 alert bad certificate
2019-09-26 11:56:18.967 CEST [4862] [unknown]@[unknown] LOG:  could not
accept SSL connection: sslv3 alert bad certificate
2019-09-26 11:56:18.969 CEST [4865] [unknown]@[unknown] LOG:  could not
accept SSL connection: sslv3 alert bad certificate
2019-09-26 11:56:18.971 CEST [4866] [unknown]@[unknown] LOG:  could not
accept SSL connection: sslv3 alert bad certificate

So... how could it be a "bad certificate" if it has just been created brand
new by running fabric-ca-server start?
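
One detail that helps here: the "bad certificate" alert is, as far as I can tell, sent by the peer, i.e. it is the client (fabric-ca) rejecting the certificate PostgreSQL serves, so regenerating the Fabric CA's own ca-cert.pem does not change anything. A way to test the server's certificate chain independently of fabric-ca (a sketch; the sslrootcert path is a placeholder for whatever root signed the Postgres server certificate):

```
psql "host=localhost port=5433 dbname=fabmnetdb user=fabmnet_admin sslmode=verify-ca sslrootcert=/path/to/root-ca.pem"
```

If this psql connection fails the same way, the problem is in the certificate PostgreSQL presents, not in fabric-ca.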

Marco

Il giorno gio 26 set 2019 alle ore 00:43 Martin Gainty 
ha scritto:

> Hi Marco
>
> Not necessarily with PG, but with all the other servers I secure, when I see
> that error it means the certificate and key your provider is referencing are
> already stored in storage (in my case a "truststore").
> I would clean all storage locations of the certificate and key,
> then I would allow the BCCSP provider to push your cert and key into the
> stores (identified by the BCCSP config).
>
> If that doesn't work, I would disable the hardcoded BCCSP provider and then
> manually import your certs and keys into your truststore.
>
> YMMV
> martin
> --
> *From:* Marco Ippolito 
> *Sent:* Wednesday, September 25, 2019 3:34 PM
> *To:* pgsql-general@lists.postgresql.org <
> pgsql-general@lists.postgresql.org>
> *Subject:* could not accept SSL connection: sslv3 alert bad certificate
>
> Following the indications here:
> https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/users-guide.html#configuring-the-database
> I'm trying to understand how to correctly set Fabric-CA with a
> PostgreSQL-11 database in Ubuntu 18.04.02 Server Edition.
>
> I created a postgresql-11 db to which I can connect with SSL:
>
> (base) marco@pc:~$ psql --cluster 11/fabmnet -h 127.0.0.1 -d
> fabmnetdb -U fabmnet_admin
> Password for user fabmnet_admin:
> psql (11.5 (Ubuntu 11.5-1.pgdg18.04+1))
> SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384,
> bits: 256, compression: off)
> Type "help" for help.
>
> fabmnetdb=> \l
>                                   List of databases
>    Name    |     Owner     | Encoding | Collate |  Ctype  |   Access privileges
> -----------+---------------+----------+---------+---------+-----------------------
>  fabmnetdb | fabmnet_admin | UTF8     | C.UTF-8 | C.UTF-8 |
>  postgres  | postgres      | UTF8     | C.UTF-8 | C.UTF-8 |
>  template0 | postgres      | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
>            |               |          |         |         | postgres=CTc/postgres
>  template1 | postgres      | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
>            |

Re: managing primary key conflicts while restoring data to table with existing data

2019-09-26 Thread Krishnakant Mane

On 26/09/19 12:03 AM, Adrian Klaver wrote:
> On 9/25/19 8:04 AM, Rob Sargent wrote:
>>
>>
>> On Sep 25, 2019, at 8:24 AM, Krishnakant Mane > > wrote:
>>
>>>
>>> On 25/09/19 7:50 PM, Adrian Klaver wrote:
 On 9/25/19 12:15 AM, Krishnakant Mane wrote:
> Hello all,
>
> I have been using postgresql for an enterprise quality account's
> automation and inventory management software called GNUKhata
> 
>
> Our team is planning to add backup and restore function in the
> software.
>
> But we don't want to dump the entire database and then restore the
> same.
>
> What we are trying to do is to copy data specific to an organization.
>
> The challenge here is that I might copy all data (account heads,
> bills, vouchers etc ) for one organization from an instance on one
> machine.
>
> I take the archive in what ever format to another machine and now
> attempt to restore.
>
> The risk here is for example if the primary key value for orgcode
> in the organization table is 5, it might conflict with the data
> where I am attempting it to be restored.
>
> Same holds true for bills, invoices etc.
>
> A certain account head with accountcode 1 might be already present
> on the second machine.
>
> I am not expecting the users to empty all data from the
> destination machine before restoring a backup.
>
> The reason is that an auditor may have many client's data and one
> can't predict what primary key values are going to come from a
> backup.
>
> Basically I can even say this is a copy paste instead of a pure
> backup and restore.
>
> Can any one suggest how to handle such conflicts?

 Hard to say. If the data is held in common tables(bills, vouchers,
 etc)then the only thing I see happening is changing the PK values
 to an unused value. That could turn into a nightmare though. Not
 only that you lose the connection to the original data source. If
 the data can be broken out into separate tables then I could see
 placing them in their own schema.

>
>
> -- 
> Regards,
> Krishnakant Mane,
> Project Founder and Leader,
> GNUKhata 
> //(Opensource Accounting, Billing and Inventory Management
> Software)//

>>>
>>> Hi Adrian,
>>>
>>> Even I am thinking to do some kind of upsert in this situation.
>
> So to be clear the tables you are working can have records from
> multiple organizations in a single table?
>
>>>
>>> And I would have to set the pkey to an unassigned value when there
>>> is conflict.
>
> I am seeing nextval() in your future:)
>
>>>
>>> I may also choose to revamp the serial by timestamps but don't know
>>> if the target customers would like it.
>
> I would avoid that. In my opinion timestamps are too volatile to
> serve as a PK. If you are going to change I would go with the previous
> suggestion of UUID:
> https://www.postgresql.org/docs/11/datatype-uuid.html
>
> Not sure your customers would like that either.
>
Hi Adrian,

I think I would make them like the uuid idea.

So now what I am thinking is to revamp the database by first removing
all the primary key constraints and then deleting all the values.

Then loop through the existing data and get a uuid in that column for every
row.

I might also need to update all the references to this value as a
foreign key in related tables.

But I guess some kind of on update cascade might do well.

I know this would slow down the system, but given that this will be a
one-time process for an individual user (and only then if he has existing
data), I would take that trade-off.

What do you say?

>>>
>>> -- 
>>> Regards,
>>> Krishnakant Mane,
>>> Project Founder and Leader,
>>> GNUKhata 
>>> //(Opensource Accounting, Billing and Inventory Management Software)//
>> It would likely be easier to rethink your backup and restore plan.
>> Putting each restore into its own space would be one tack.
>
>
-- 
Regards,
Krishnakant Mane,
Project Founder and Leader,
GNUKhata 
//(Opensource Accounting, Billing and Inventory Management Software)//


Re: managing primary key conflicts while restoring data to table with existing data

2019-09-26 Thread Rob Sargent
 

> On Sep 26, 2019, at 12:27 AM, Krishnakant Mane  wrote:
> 
> 
>> On 26/09/19 12:03 AM, Adrian Klaver wrote:
>>> On 9/25/19 8:04 AM, Rob Sargent wrote: 
>>> 
>>> 
>>> On Sep 25, 2019, at 8:24 AM, Krishnakant Mane >> > wrote: 
>>> 
 
> On 25/09/19 7:50 PM, Adrian Klaver wrote: 
>> On 9/25/19 12:15 AM, Krishnakant Mane wrote: 
>> Hello all, 
>> 
>> I have been using postgresql for an enterprise quality account's 
>> automation and inventory management software called GNUKhata 
>>    
>> 
>> Our team is planning to add backup and restore function in the software. 
>> 
>> But we don't want to dump the entire database and then restore the same. 
>> 
>> What we are trying to do is to copy data specific to an organization. 
>> 
>> The challenge here is that I might copy all data (account heads, bills, 
>> vouchers etc ) for one organization from an instance on one machine. 
>> 
>> I take the archive in what ever format to another machine and now 
>> attempt to restore. 
>> 
>> The risk here is for example if the primary key value for orgcode in the 
>> organization table is 5, it might conflict with the data where I am 
>> attempting it to be restored. 
>> 
>> Same holds true for bills, invoices etc. 
>> 
>> A certain account head with accountcode 1 might be already present on 
>> the second machine. 
>> 
>> I am not expecting the users to empty all data from the destination 
>> machine before restoring a backup. 
>> 
>> The reason is that an auditor may have many client's data and one can't 
>> predict what primary key values are going to come from a backup. 
>> 
>> Basically I can even say this is a copy paste instead of a pure backup 
>> and restore. 
>> 
>> Can any one suggest how to handle such conflicts? 
> 
> Hard to say. If the data is held in common tables(bills, vouchers, 
> etc)then the only thing I see happening is changing the PK values to an 
> unused value. That could turn into a nightmare though. Not only that you 
> lose the connection to the original data source. If the data can be 
> broken out into separate tables then I could see placing them in their 
> own schema. 
> 
>> 
>> 
>> -- 
>> Regards, 
>> Krishnakant Mane, 
>> Project Founder and Leader, 
>> GNUKhata  
>> //(Opensource Accounting, Billing and Inventory Management Software)// 
> 
 
 Hi Adrian, 
 
Even I am thinking to do some kind of upsert in this situation. 
>> 
>> So to be clear the tables you are working can have records from multiple 
>> organizations in a single table? 
>> 
 
 And I would have to set the pkey to an unassigned value when there is 
 conflict. 
>> 
>> I am seeing nextval() in your future:)   
>> 
 
 I may also choose to revamp the serial by timestamps but don't know if the 
 target customers would like it. 
>> 
>> I would avoid that. In my opinion timestamps are too volatile to serve as 
>> a PK. If you are going to change I would go with the previous suggestion of 
>> UUID: 
>> https://www.postgresql.org/docs/11/datatype-uuid.html 
>> 
>> Not sure your customers would like that either. 
>> 
> Hi Adrian,
> 
> I think I would make them like the uuid idea.
> 
> So now what I am thinking is to first revamp the database by first removing 
> all the primary key constraints and then deleting all the values.
> 
> Then loop through the existing data and get a uuid in that column for every row.
> 
> I might also require to update all the references to this value as foreign 
> key in related tables.
> 
> But I guess some kind of on update cascade might do well.
> 
> I know this would slow down the system, but given that this will be a one 
> time process for an individual user (that too if he has existing data ), I 
> would take that trade-off.
> 
> What do you say?
> 
 
 -- 
 Regards, 
 Krishnakant Mane, 
 Project Founder and Leader, 
 GNUKhata  
 //(Opensource Accounting, Billing and Inventory Management Software)// 
>>> It would likely be easier to rethink your backup and restore plan. Putting 
>>> each restore into its own space would be one tack. 
>> 
>> 
> -- 
> Regards, 
> Krishnakant Mane,
> Project Founder and Leader,
> GNUKhata
> (Opensource Accounting, Billing and Inventory Management Software)
You might think about adding the new UUID column and using the existing primary
key to inform the updates in dependent tables. Then remove the old PK column
and constraint, followed by promoting the UUID to primary key. This could be
safely scripted and applied to all instances of your data.
That said, this is only truly necessary if you have production databases to
worry about.
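
A rough sketch of that scripted migration for one parent/child pair (all table, column, and constraint names here are invented for illustration, and pgcrypto's gen_random_uuid() is assumed to be installed):

```sql
-- Hypothetical schema: organisation(orgcode int PK) <- voucher(orgcode int FK)
ALTER TABLE organisation
  ADD COLUMN org_uuid uuid NOT NULL DEFAULT gen_random_uuid();
ALTER TABLE voucher ADD COLUMN org_uuid uuid;

-- Use the old integer key to inform the update in the dependent table
UPDATE voucher v
   SET org_uuid = o.org_uuid
  FROM organisation o
 WHERE v.orgcode = o.orgcode;

-- Swap the constraints once every row is backfilled
ALTER TABLE voucher DROP CONSTRAINT voucher_orgcode_fkey;
ALTER TABLE organisation DROP CONSTRAINT organisation_pkey;
ALTER TABLE organisation ADD PRIMARY KEY (org_uuid);
ALTER TABLE voucher
  ADD FOREIGN KEY (org_uuid) REFERENCES organisation (org_uuid);

-- Finally drop the old integer columns
ALTER TABLE voucher DROP COLUMN orgcode;
ALTER TABLE organisation DROP COLUMN orgcode;
```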

Re: Use of 'get diagnostics'?

2019-09-26 Thread Adrian Klaver

On 9/25/19 10:44 PM, Thiemo Kellner wrote:

Hello Adrian

Quoting Adrian Klaver :

To get above I believe you will need to use GET CURRENT DIAGNOSTICS 
PG_CONTEXT:


I actually use "get stacked diagnostics" to retrieve the exception
location. And it works. I am not sure why I did not see it.


GET [ CURRENT ] DIAGNOSTICS:
https://www.postgresql.org/docs/11/plpgsql-statements.html#PLPGSQL-STATEMENTS-DIAGNOSTICS

and

GET STACKED DIAGNOSTICS

https://www.postgresql.org/docs/11/plpgsql-control-structures.html#PLPGSQL-EXCEPTION-DIAGNOSTICS

are two different things.

See the example here:

https://www.postgresql.org/docs/11/plpgsql-control-structures.html#PLPGSQL-CALL-STACK



However, I noticed that the stack does not include the error location in
dynamic SQL executed by the EXECUTE command. Maybe I am missing
something again.
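
A minimal example of GET STACKED DIAGNOSTICS capturing the context of an error raised inside dynamic SQL (a sketch; the function name is illustrative):

```sql
CREATE OR REPLACE FUNCTION demo_ctx() RETURNS text AS $$
DECLARE
  ctx text;
BEGIN
  EXECUTE 'SELECT 1/0';   -- division by zero inside dynamic SQL
  RETURN 'unreachable';
EXCEPTION WHEN OTHERS THEN
  -- Legal only inside an exception handler
  GET STACKED DIAGNOSTICS ctx = PG_EXCEPTION_CONTEXT;
  RETURN ctx;
END;
$$ LANGUAGE plpgsql;

SELECT demo_ctx();
```

On PG 11 the returned context does contain a SQL statement "SELECT 1/0" frame for the EXECUTE, but the line number it reports is that of the EXECUTE statement in the outer function, not a position inside the dynamic string, which may be what is being observed here.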


Kind regards

Thiemo




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: managing primary key conflicts while restoring data to table with existing data

2019-09-26 Thread Rob Sargent


> On Sep 26, 2019, at 7:44 AM, Krishnakant Mane  wrote:
> 
> 
>> On 26/09/19 6:53 PM, Rob Sargent wrote:
>>  
>> 
>> On Sep 26, 2019, at 12:27 AM, Krishnakant Mane  wrote:
>> 
>>> 
 On 26/09/19 12:03 AM, Adrian Klaver wrote:
> On 9/25/19 8:04 AM, Rob Sargent wrote: 
> 
> 
> On Sep 25, 2019, at 8:24 AM, Krishnakant Mane  > wrote: 
> 
>> 
>>> On 25/09/19 7:50 PM, Adrian Klaver wrote: 
 On 9/25/19 12:15 AM, Krishnakant Mane wrote: 
 Hello all, 
 
 I have been using postgresql for an enterprise quality account's 
 automation and inventory management software called GNUKhata 
  
 
 Our team is planning to add backup and restore function in the 
 software. 
 
 But we don't want to dump the entire database and then restore the 
 same. 
 
 What we are trying to do is to copy data specific to an organization. 
 
 The challenge here is that I might copy all data (account heads, 
 bills, vouchers etc ) for one organization from an instance on one 
 machine. 
 
 I take the archive in what ever format to another machine and now 
 attempt to restore. 
 
 The risk here is for example if the primary key value for orgcode in 
 the organization table is 5, it might conflict with the data where I 
 am attempting it to be restored. 
 
 Same holds true for bills, invoices etc. 
 
 A certain account head with accountcode 1 might be already present on 
 the second machine. 
 
 I am not expecting the users to empty all data from the destination 
 machine before restoring a backup. 
 
 The reason is that an auditor may have many client's data and one 
 can't predict what primary key values are going to come from a backup. 
 
 Basically I can even say this is a copy paste instead of a pure backup 
 and restore. 
 
 Can any one suggest how to handle such conflicts? 
>>> 
>>> Hard to say. If the data is held in common tables(bills, vouchers, 
>>> etc)then the only thing I see happening is changing the PK values to an 
>>> unused value. That could turn into a nightmare though. Not only that 
>>> you lose the connection to the original data source. If the data can be 
>>> broken out into separate tables then I could see placing them in their 
>>> own schema. 
>>> 
 
 
 -- 
 Regards, 
 Krishnakant Mane, 
 Project Founder and Leader, 
 GNUKhata  
 //(Opensource Accounting, Billing and Inventory Management Software)// 
>>> 
>> 
>> Hi Adrian, 
>> 
>> Even I am thinking to do some kind of upsert in this situation. 
 
 So to be clear the tables you are working can have records from multiple 
 organizations in a single table? 
 
>> 
>> And I would have to set the pkey to an unassigned value when there is 
>> conflict. 
 
 I am seeing nextval() in your future:) 
 
>> 
>> I may also choose to revamp the serial by timestamps but don't know if 
>> the target customers would like it. 
 
 I would avoid that. In my opinion timestamps are too volatile to serve 
 as a PK. If you are going to change I would go with the previous 
 suggestion of UUID: 
 https://www.postgresql.org/docs/11/datatype-uuid.html 
 
 Not sure your customers would like that either. 
 
>>> Hi Adrian,
>>> 
>>> I think I would make them like the uuid idea.
>>> 
>>> So now what I am thinking is to first revamp the database by first removing 
>>> all the primary key constraints and then deleting all the values.
>>> 
>>> Then loop through the existing data and get a uuid in that column for every 
>>> row.
>>> 
>>> I might also require to update all the references to this value as foreign 
>>> key in related tables.
>>> 
>>> But I guess some kind of on update cascade might do well.
>>> 
>>> I know this would slow down the system, but given that this will be a one 
>>> time process for an individual user (that too if he has existing data ), I 
>>> would take that trade-off.
>>> 
>>> What do you say?
>>> 
>> 
>> -- 
>> Regards, 
>> Krishnakant Mane, 
>> Project Founder and Leader, 
>> GNUKhata  
>> //(Opensource Accounting, Billing and Inventory Management Software)// 
> It would likely be easier to rethink your backup and restore plan. 
> Putting each restore into its own space would be one tack. 
 
 
>>> -- 
>>> Regards, 
>>> Krishnakant Mane,
>>> Project Founder and Leader,
>>> GNUKhata
>>> (Opensource Accounting, Billing and Inventory Management Software)
>> You might think

"Failed to connect to Postgres database"

2019-09-26 Thread Marco Ippolito
In order to restart from a clean situation and configuration, I removed the
previous fabric-ca folder, created a new one, and then initialized the
fabric-ca server. With the default SQLite everything seems to work fine. But
once I try to use the PostgreSQL-11 db I created before, errors appear:

(base) marco@pc:~/fabric$ rm -rf fabric-ca
(base) marco@pc:~/fabric$ mkdir fabric-ca
(base) marco@pc:~/fabric$ cd fabric-ca/
(base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server init -b admin:adminpw
(base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server start -b admin:adminpw
2019/09/26 15:48:54 [INFO] Created default configuration file at /home/marco/fabric/fabric-ca/fabric-ca-server-config.yaml
2019/09/26 15:48:54 [INFO] Starting server in home directory: /home/marco/fabric/fabric-ca
2019/09/26 15:48:54 [INFO] Server Version: 1.4.4
2019/09/26 15:48:54 [INFO] Server Levels: &{Identity:2 Affiliation:1 Certificate:1 Credential:1 RAInfo:1 Nonce:1}
2019/09/26 15:48:54 [WARNING] &{69 The specified CA certificate file /home/marco/fabric/fabric-ca/ca-cert.pem does not exist}
2019/09/26 15:48:54 [INFO] generating key: &{A:ecdsa S:256}
2019/09/26 15:48:54 [INFO] encoded CSR
2019/09/26 15:48:54 [INFO] signed certificate with serial number 162595303982096068338873480987512684820342253664
2019/09/26 15:48:54 [INFO] The CA key and certificate were generated for CA
2019/09/26 15:48:54 [INFO] The key was stored by BCCSP provider 'SW'
2019/09/26 15:48:54 [INFO] The certificate is at: /home/marco/fabric/fabric-ca/ca-cert.pem
2019/09/26 15:48:54 [INFO] Initialized sqlite3 database at /home/marco/fabric/fabric-ca/fabric-ca-server.db
2019/09/26 15:48:54 [INFO] The issuer key was successfully stored. The public key is at: /home/marco/fabric/fabric-ca/IssuerPublicKey, secret key is at: /home/marco/fabric/fabric-ca/msp/keystore/IssuerSecretKey
2019/09/26 15:48:54 [INFO] Idemix issuer revocation public and secret keys were generated for CA ''
2019/09/26 15:48:54 [INFO] The revocation key was successfully stored. The public key is at: /home/marco/fabric/fabric-ca/IssuerRevocationPublicKey, private key is at: /home/marco/fabric/fabric-ca/msp/keystore/IssuerRevocationPrivateKey
2019/09/26 15:48:54 [INFO] Home directory for default CA: /home/marco/fabric/fabric-ca
2019/09/26 15:48:54 [INFO] Operation Server Listening on 127.0.0.1:9443
2019/09/26 15:48:54 [INFO] Listening on http://0.0.0.0:7054

I set the brand-new fabric-ca-server-config.yaml in this way:

#db:
#  type: sqlite3
#  datasource: fabric-ca-server.db
#  tls:
#    enabled: false
#    certfiles:
#    client:
#      certfile:
#      keyfile:

db:
  type: postgres
  datasource: host=localhost port=5433 user=fabmnet_admin
  password=password dbname=fabmnetdb sslmode=verify-full

and in /etc/postgresql/11/fabmnet/postgresql.conf :

ssl = on
ssl_cert_file = '/home/marco/fabric/fabric-ca/ca-cert.pem'
ssl_key_file = '/home/marco/fabric/fabric-ca/msp/keystore/IssuerSecretKey'

After systemctl restart postgresql, I tried to start the fabric-ca-server:

(base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server start -b admin:adminpw
2019/09/26 15:56:50 [INFO] Configuration file location: /home/marco/fabric/fabric-ca/fabric-ca-server-config.yaml
2019/09/26 15:56:50 [INFO] Starting server in home directory: /home/marco/fabric/fabric-ca
2019/09/26 15:56:50 [INFO] Server Version: 1.4.4
2019/09/26 15:56:50 [INFO] Server Levels: &{Identity:2 Affiliation:1 Certificate:1 Credential:1 RAInfo:1 Nonce:1}
2019/09/26 15:56:50 [INFO] The CA key and certificate already exist
2019/09/26 15:56:50 [INFO] The key is stored by BCCSP provider 'SW'
2019/09/26 15:56:50 [INFO] The certificate is at: /home/marco/fabric/fabric-ca/ca-cert.pem
2019/09/26 15:56:50 [WARNING] Failed to connect to database 'fabmnetdb'
2019/09/26 15:56:50 [WARNING] Failed to connect to database 'postgres'
2019/09/26 15:56:50 [WARNING] Failed to connect to database 'template1'
2019/09/26 15:56:50 [ERROR] Error occurred initializing database: Failed to connect to Postgres database. Postgres requires connecting to a specific database, the following databases were tried: [fabmnetdb postgres template1]. Please create one of these database before continuing
2019/09/26 15:56:50 [INFO] Home directory for default CA: /home/marco/fabric/fabric-ca
2019/09/26 15:56:50 [INFO] Operation Server Listening on 127.0.0.1:9443
2019/09/26 15:56:50 [INFO] Listening on http://0.0.0.0:7054

Beforehand I had also removed all the previous content of
/var/log/postgresql/postgresql-11-fabmnet.log to have a clean situation,
but strangely I now do not get any new logging information in
postgresql-11-fabmnet.log.

So I think there must be something to fix in the interface between
fabric-ca-server and the PostgreSQL 11 db: in fabric-ca-server-config.yaml,
in postgresql.conf, in both, or somewhere else.


Il giorno gio 26 set 2019 alle ore 12:05 Marco Ippolito <
ippolito.ma...@gmail.com> ha scritto:

> After removing the previous cert and key files, I started again the
> fabric-ca server discovering that new cert and key files we

Re: Operator is not unique

2019-09-26 Thread Adrian Klaver

On 9/25/19 1:14 PM, PegoraroF10 wrote:

format_type  format_type       castfunc  castcontext  castmethod
bigint       smallint          714       a            f
bigint       integer           480       a            f
bigint       real              652       i            f
bigint       double precision  482       i            f
bigint       numeric           1781      i            f
bigint       regclass          1287      i            f
bigint       regtype           1287      i            f
bigint       regconfig         1287      i            f
bigint       regdictionary     1287      i            f
bigint       regrole           1287      i            f
bigint       regnamespace      1287      i            f
bigint       bit               2075      e            f
bigint       money             3812      a            f
bigint       oid               1287      i            f
bigint       regproc           1287      i            f
bigint       regprocedure      1287      i            f
bigint       regoper           1287      i            f
bigint       regoperator       1287      i            f
numeric      bigint            1779      a            f
numeric      smallint          1783      a            f
numeric      integer           1744      a            f
numeric      real              1745      i            f
numeric      double precision  1746      i            f
numeric      money             3824      a            f
numeric      numeric           1703      i            f



Hmm, nothing strange here AFAICT.

What does:

select oprname, oprkind, oprleft, oprright, oprresult, oprcode  from 
pg_operator where oprname = '+';


show?


Have you made any changes/additions to CASTs and/or OPERATORs recently?




Sent from the PostgreSQL - general mailing list archive 
 at 
Nabble.com.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: could not accept SSL connection: sslv3 alert bad certificate

2019-09-26 Thread Adrian Klaver

On 9/26/19 1:10 AM, Marco Ippolito wrote:

Hi Adrian,
putting in /fabric/fabric-ca/fabric-ca-server-config.yaml :

db:
   type: postgres
   datasource: host=localhost port=5433 user=fabmnet_admin 
password=fabmnet1971 dbname=fabmnetdb sslmode=verify-ca





and the corresponding portion of the log file:

2019-09-26 10:08:27.947 CEST [3728] fabmnet_admin@fabmnetdb ERROR:  permission denied to create database
2019-09-26 10:08:27.947 CEST [3728] fabmnet_admin@fabmnetdb STATEMENT:  CREATE DATABASE fabmnetdb


User fabmnet_admin does not have CREATE DATABASE privileges. In a psql 
session do:


\du fabmnet_admin

If the results do not include Create DB or Superuser then you need to 
ALTER ROLE fabmnet_admin to have CREATEDB:


https://www.postgresql.org/docs/11/sql-alterrole.html

NOTE: You will need to do the above as a ROLE that has sufficient 
privileges. Easiest if you have a SUPERUSER role you can log in as.
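A minimal sketch of that check and fix, run from a psql session as a role with sufficient privileges (role name taken from this thread; the catalog query is roughly the SQL behind \du):

```sql
-- Check the role's attributes (roughly what \du fabmnet_admin reports).
SELECT rolname, rolsuper, rolcreatedb
FROM pg_roles
WHERE rolname = 'fabmnet_admin';

-- If rolcreatedb is false, grant the CREATEDB attribute.
ALTER ROLE fabmnet_admin CREATEDB;
```

After this, the fabric-ca-server's CREATE DATABASE should no longer be rejected with "permission denied to create database".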





I do not understand...

Marco




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: "Failed to connect to Postgres database"

2019-09-26 Thread Adrian Klaver

On 9/26/19 7:21 AM, Marco Ippolito wrote:
In order to restart from a clean situation and configuration, I removed 
the previous fabric-ca folder, created a new one, and then initialized the 
fabric-ca-server. With the default SQLite everything seems to work fine. 
But once I try to use the PostgreSQL-11 db I created before, errors appear:


(base) marco@pc:~/fabric$ rm -rf fabric-ca
(base) marco@pc:~/fabric$ mkdir fabric-ca
(base) marco@pc:~/fabric$ cd fabric-ca/
(base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server init -b admin:adminpw
(base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server start -b admin:adminpw
2019/09/26 15:48:54 [INFO] Created default configuration file at /home/marco/fabric/fabric-ca/fabric-ca-server-config.yaml
2019/09/26 15:48:54 [INFO] Starting server in home directory: /home/marco/fabric/fabric-ca
2019/09/26 15:48:54 [INFO] Server Version: 1.4.4
2019/09/26 15:48:54 [INFO] Server Levels: &{Identity:2 Affiliation:1 Certificate:1 Credential:1 RAInfo:1 Nonce:1}
2019/09/26 15:48:54 [WARNING] &{69 The specified CA certificate file /home/marco/fabric/fabric-ca/ca-cert.pem does not exist}
2019/09/26 15:48:54 [INFO] generating key: &{A:ecdsa S:256}
2019/09/26 15:48:54 [INFO] encoded CSR
2019/09/26 15:48:54 [INFO] signed certificate with serial number 162595303982096068338873480987512684820342253664
2019/09/26 15:48:54 [INFO] The CA key and certificate were generated for CA
2019/09/26 15:48:54 [INFO] The key was stored by BCCSP provider 'SW'
2019/09/26 15:48:54 [INFO] The certificate is at: /home/marco/fabric/fabric-ca/ca-cert.pem
2019/09/26 15:48:54 [INFO] Initialized sqlite3 database at /home/marco/fabric/fabric-ca/fabric-ca-server.db
2019/09/26 15:48:54 [INFO] The issuer key was successfully stored. The public key is at: /home/marco/fabric/fabric-ca/IssuerPublicKey, secret key is at: /home/marco/fabric/fabric-ca/msp/keystore/IssuerSecretKey
2019/09/26 15:48:54 [INFO] Idemix issuer revocation public and secret keys were generated for CA ''
2019/09/26 15:48:54 [INFO] The revocation key was successfully stored. The public key is at: /home/marco/fabric/fabric-ca/IssuerRevocationPublicKey, private key is at: /home/marco/fabric/fabric-ca/msp/keystore/IssuerRevocationPrivateKey
2019/09/26 15:48:54 [INFO] Home directory for default CA: /home/marco/fabric/fabric-ca
2019/09/26 15:48:54 [INFO] Operation Server Listening on 127.0.0.1:9443
2019/09/26 15:48:54 [INFO] Listening on http://0.0.0.0:7054


I set the brand-new fabric-ca-server-config.yaml in this way:

#db:
#  type: sqlite3
#  datasource: fabric-ca-server.db
#  tls:
#    enabled: false
#    certfiles:
#    client:
#      certfile:
#      keyfile:

db:
  type: postgres
  datasource: host=localhost port=5433 user=fabmnet_admin
  password=password dbname=fabmnetdb sslmode=verify-full


Shouldn't the TLS info also be there for the Postgres datasource:

https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/users-guide.html#postgresql

As to below, you will not get logs as you are never connecting to the 
database. Those errors get sent to the console.


You are fighting two issues, role permissions and SSL certs. I would 
deal with one at a time. Drop the SSL requirement until you can verify a 
connection and database creation. Then deal with the SSL issues.
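As a sketch of that suggestion, the datasource with the SSL requirement dropped while the permission problem is being debugged (sslmode=disable is standard libpq syntax; all other values as in the configuration quoted above):

```yaml
db:
  type: postgres
  datasource: host=localhost port=5433 user=fabmnet_admin password=password dbname=fabmnetdb sslmode=disable
```

Once the server can connect and create its tables, sslmode can be raised back to verify-ca or verify-full.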




and in /etc/postgresql/11/fabmnet/postgresql.conf :

ssl = on
ssl_cert_file = '/home/marco/fabric/fabric-ca/ca-cert.pem'
ssl_key_file = '/home/marco/fabric/fabric-ca/msp/keystore/IssuerSecretKey'


After systemctl restart postgresql, I tried to start the fabric-ca-server:

(base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server start -b admin:adminpw
2019/09/26 15:56:50 [INFO] Configuration file location: /home/marco/fabric/fabric-ca/fabric-ca-server-config.yaml
2019/09/26 15:56:50 [INFO] Starting server in home directory: /home/marco/fabric/fabric-ca
2019/09/26 15:56:50 [INFO] Server Version: 1.4.4
2019/09/26 15:56:50 [INFO] Server Levels: &{Identity:2 Affiliation:1 Certificate:1 Credential:1 RAInfo:1 Nonce:1}
2019/09/26 15:56:50 [INFO] The CA key and certificate already exist
2019/09/26 15:56:50 [INFO] The key is stored by BCCSP provider 'SW'
2019/09/26 15:56:50 [INFO] The certificate is at: /home/marco/fabric/fabric-ca/ca-cert.pem
2019/09/26 15:56:50 [WARNING] Failed to connect to database 'fabmnetdb'
2019/09/26 15:56:50 [WARNING] Failed to connect to database 'postgres'
2019/09/26 15:56:50 [WARNING] Failed to connect to database 'template1'
2019/09/26 15:56:50 [ERROR] Error occurred initializing database: Failed to connect to Postgres database. Postgres requires connecting to a specific database, the following databases were tried: [fabmnetdb postgres template1]. Please create one of these database before continuing
2019/09/26 15:56:50 [INFO] Home directory for default CA: /home/marco/fabric/fabric-ca
2019/09/26 15:56:50 [INFO] Operation Server Listening on 127.0.0.1:9443
2019/09/26 15:56:50 [INFO] Listening on http://0.0.0.0:7054


Beforehand I had also removed all the previous content of 
/var/log/postgresql/postgresql-11-fabmnet.log to have a clean situation, 
but strangely I now do not get any new logging information in 
postgresql-11-fabmnet.log


So I think there must be something to fix in the

Re: updating sequence value for column 'serial'

2019-09-26 Thread Adrian Klaver

On 9/25/19 10:12 PM, Matthias Apitz wrote:

On Wednesday, September 25, 2019 at 07:42:11 AM -0700, Adrian Klaver 
wrote:


sisis$# DECLARE
sisis$#maxikatkey integer := ( select max(katkey) from titel_daten );
sisis$#result integer := 1;
sisis$# BEGIN
sisis$#maxikatkey := maxikatkey +1;
sisis$#RAISE NOTICE '%', maxikatkey ;
sisis$#result := (SELECT SETVAL('titel_daten_katkey_seq', maxikatkey) );
sisis$#RAISE NOTICE '%', result ;
sisis$# END $$;
NOTICE:  330722
NOTICE:  330723
DO

Is there any better way? Thanks


I have not found a better way. I use ALTER SEQUENCE .. RESTART 330722
though:


Yes, but I found no way to use a variable like 'maxikatkey' in the ALTER 
SEQUENCE ...
it only accepts digits like 330722.


DO $$
DECLARE
  max_id int;
BEGIN
  SELECT INTO max_id max(id) + 1 FROM seq_test;
  RAISE NOTICE 'Max id is %', max_id;
  EXECUTE 'ALTER SEQUENCE seq_test_id_seq RESTART ' || max_id::text;
END;
$$ LANGUAGE plpgsql;


Hi Adrian,

I adapted your code to the name of my table 'ig_target_ipfilter' and its
SERIAL column 'id'; it does not work (and I don't know how it could
work, because the 'ALTER SEQUENCE ...' stmt seems to be missing
'... WITH value ...', or do I understand something wrong?):

cat -n /home/apitzm/postgreSQL/test.sql
  1  DO $$
  2  DECLARE
  3  max_id int;
  4  BEGIN
  5  SELECT INTO max_id max(id) + 1 FROM ig_target_ipfilter ;
  6  RAISE NOTICE 'Max id in % is %', 'ig_target_ipfilter', max_id;
  7  EXECUTE 'ALTER SEQUENCE ig_target_ipfilter_id_seq RESTART ' || 
max_id::text;
  8  END;
  9  $$ LANGUAGE plpgsql;

psql -Usisis -dsisis < /home/apitzm/postgreSQL/test.sql
NOTICE:  Max id in ig_target_ipfilter is 
ERROR:  query string argument of EXECUTE is null
KONTEXT:  PL/pgSQL function inline_code_block line 7 at EXECUTE

Please clarify. Thanks


I forgot about the possibility of NULL being returned by max_id in:

SELECT INTO max_id max(id) + 1 FROM ig_target_ipfilter ;

So:

SELECT INTO max_id COALESCE(max(id), 0) + 1 FROM ig_target_ipfilter ;

That will turn a NULL max(id) into 0.
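As a sketch of an alternative to the dynamic ALTER SEQUENCE, setval() accepts a computed value directly, so the whole adjustment can be one statement (sequence and table names taken from the post above; the third argument false means the next nextval() will return exactly this value):

```sql
SELECT setval('ig_target_ipfilter_id_seq',
              COALESCE(max(id), 0) + 1,  -- COALESCE guards the empty-table case
              false)                     -- is_called = false
FROM ig_target_ipfilter;
```

This avoids both the NULL pitfall and the need for EXECUTE with string concatenation.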



matthias





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: managing primary key conflicts while restoring data to table with existing data

2019-09-26 Thread Rob Sargent



--
Regards,
Krishnakant Mane,
Project Founder and Leader,
GNUKhata 
//(Opensource Accounting, Billing and Inventory Management Software)//
You might think about adding the new UUID column and use the existing 
primary key to inform the updates in dependent tables. Then remove 
the old PK  column and constraint followed by promoting the UUID to 
primary key. This could be safely scripted and applied to all 
instances of your data.
That said, this is only truly necessary if you have production 
databases to worry about.



Thanks a million, this is the most logical and safe way.

yes I have a lot of production databases to worry about.

I am only confused about what you mean by "use the existing primary 
key to inform the updates in dependent tables."


Are you referring to a cascading effect?

If yes then does it mean I first program my upgrade script to manually 
go through all new uuid keys and update the same in the depending 
tables with reference to the old primary key working as foreign key in 
those tables?



It occurs to me you will also need to "duplicate" the columns in which 
you have foreign keys.  How many tables are there in the affected 
schemata and what is the size? Pretty sure you will have to go off-line 
to perform this sort of transition.  Nearly everything will be touched.


--
Regards,
Krishnakant Mane,
Project Founder and Leader,
GNUKhata 
//(Opensource Accounting, Billing and Inventory Management Software)//


Re: "Failed to connect to Postgres database"

2019-09-26 Thread Marco Ippolito
Hi Adrian,

I removed the previous fabmnetdb and created a new one whose owner is
postgres:

(base) postgres@pc:/usr/local/pgsql$ psql --cluster 11/fabmnet
psql (11.5 (Ubuntu 11.5-1.pgdg18.04+1))
Type "help" for help.

postgres=# \l
                              List of databases
   Name    |  Owner   | Encoding | Collate |  Ctype  |   Access privileges
-----------+----------+----------+---------+---------+-----------------------
 fabmnetdb | postgres | UTF8     | C.UTF-8 | C.UTF-8 |
 postgres  | postgres | UTF8     | C.UTF-8 | C.UTF-8 |
 template0 | postgres | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
           |          |          |         |         | postgres=CTc/postgres
 template1 | postgres | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
           |          |          |         |         | postgres=CTc/postgres
(4 rows)

postgres=#

*First objective: enable a working SSL connection for fabmnetdb:*

Following the indications found here:
https://vibhork.blogspot.com/2011/07/how-to-enable-ssl-in-postgresqlppas.html
I created .pem and .req files in /var/lib/postgresql/11/fabmnet/  where,
according to nano /etc/postgresql/11/fabmnet/postgresql.conf, the data
folder is located:
data_directory = '/var/lib/postgresql/11/fabmnet'
ssl = on


(base) postgres@pc:/var/lib/postgresql/11/fabmnet$ chmod 600 privkey.pem
(base) postgres@pc:/var/lib/postgresql/11/fabmnet$ chmod 600 cert.req

But now, testing the ssl connection :

(base) marco@pc:/usr/local/pgsql$ psql -h 127.0.0.1 -d fabmnetdb -U postgres
Password for user postgres:
psql: FATAL:  database "fabmnetdb" does not exist

What am I missing?
Thanks again for your kind help.
Marco


Il giorno gio 26 set 2019 alle ore 16:50 Adrian Klaver <
adrian.kla...@aklaver.com> ha scritto:

> On 9/26/19 7:21 AM, Marco Ippolito wrote:
> > In order to restart from a clean situation and configuration, I removed
> > the previous fabric-ca folder, created a new one, and then initialized the
> > fabric-ca-server. With the default SQLite everything seems to work fine.
> > But once I try to use the PostgreSQL-11 db I created before, errors
> > appear:
> >
> > (base) marco@pc:~/fabric$ rm -rf fabric-ca
> > (base) marco@pc:~/fabric$ mkdir fabric-ca
> > (base) marco@pc:~/fabric$ cd fabric-ca/
> > (base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server init -b admin:adminpw
> > (base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server start -b admin:adminpw
> > 2019/09/26 15:48:54 [INFO] Created default configuration file at /home/marco/fabric/fabric-ca/fabric-ca-server-config.yaml
> > 2019/09/26 15:48:54 [INFO] Starting server in home directory: /home/marco/fabric/fabric-ca
> > 2019/09/26 15:48:54 [INFO] Server Version: 1.4.4
> > 2019/09/26 15:48:54 [INFO] Server Levels: &{Identity:2 Affiliation:1 Certificate:1 Credential:1 RAInfo:1 Nonce:1}
> > 2019/09/26 15:48:54 [WARNING] &{69 The specified CA certificate file /home/marco/fabric/fabric-ca/ca-cert.pem does not exist}
> > 2019/09/26 15:48:54 [INFO] generating key: &{A:ecdsa S:256}
> > 2019/09/26 15:48:54 [INFO] encoded CSR
> > 2019/09/26 15:48:54 [INFO] signed certificate with serial number 162595303982096068338873480987512684820342253664
> > 2019/09/26 15:48:54 [INFO] The CA key and certificate were generated for CA
> > 2019/09/26 15:48:54 [INFO] The key was stored by BCCSP provider 'SW'
> > 2019/09/26 15:48:54 [INFO] The certificate is at: /home/marco/fabric/fabric-ca/ca-cert.pem
> > 2019/09/26 15:48:54 [INFO] Initialized sqlite3 database at /home/marco/fabric/fabric-ca/fabric-ca-server.db
> > 2019/09/26 15:48:54 [INFO] The issuer key was successfully stored. The public key is at: /home/marco/fabric/fabric-ca/IssuerPublicKey, secret key is at: /home/marco/fabric/fabric-ca/msp/keystore/IssuerSecretKey
> > 2019/09/26 15:48:54 [INFO] Idemix issuer revocation public and secret keys were generated for CA ''
> > 2019/09/26 15:48:54 [INFO] The revocation key was successfully stored. The public key is at: /home/marco/fabric/fabric-ca/IssuerRevocationPublicKey, private key is at: /home/marco/fabric/fabric-ca/msp/keystore/IssuerRevocationPrivateKey
> > 2019/09/26 15:48:54 [INFO] Home directory for default CA: /home/marco/fabric/fabric-ca
> > 2019/09/26 15:48:54 [INFO] Operation Server Listening on 127.0.0.1:9443
> > 2019/09/26 15:48:54 [INFO] Listening on http://0.0.0.0:7054
> >
> > I set the brand-new fabric-ca-server-config.yaml in this way:
> >
> > #db:
> > #  type: sqlite3
> > #  datasource: fabric-ca-server.db
> > #  tls:
> > #    enabled: false
> > #    certfiles:
> > #    client:
> > #      certfile:
> > #      keyfile:
> >
> > db:
> >   type: postgres
> >   datasource: host=localhost port=5433 user=fabmnet_admin
> >   password=password dbname=fabmnetdb sslmode=verify-full
>
> Shouldn't the TLS info also be there for the Postgres datasource:
>
>
> https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/users-guide.html#postgresql
>
> As to below, you will not get logs as you are never connecting to the
> database. Those errors get sent to the console.
>
> You are fighting two issues, role permissions and SSL certs. I would
> deal with one at a time. Drop the SSL requirement until 

Re: managing primary key conflicts while restoring data to table with existing data

2019-09-26 Thread Krishnakant Mane

On 26/09/19 6:53 PM, Rob Sargent wrote:
>  
>
> On Sep 26, 2019, at 12:27 AM, Krishnakant Mane wrote:
>
>>
>> On 26/09/19 12:03 AM, Adrian Klaver wrote:
>>> On 9/25/19 8:04 AM, Rob Sargent wrote:


 On Sep 25, 2019, at 8:24 AM, Krishnakant Mane wrote:

>
> On 25/09/19 7:50 PM, Adrian Klaver wrote:
>> On 9/25/19 12:15 AM, Krishnakant Mane wrote:
>>> Hello all,
>>>
>>> I have been using postgresql for an enterprise quality account's
>>> automation and inventory management software called GNUKhata
>>> 
>>>
>>> Our team is planning to add backup and restore function in the
>>> software.
>>>
>>> But we don't want to dump the entire database and then restore
>>> the same.
>>>
>>> What we are trying to do is to copy data specific to an
>>> organization.
>>>
>>> The challenge here is that I might copy all data (account heads,
>>> bills, vouchers etc ) for one organization from an instance on
>>> one machine.
>>>
>>> I take the archive in what ever format to another machine and
>>> now attempt to restore.
>>>
>>> The risk here is for example if the primary key value for
>>> orgcode in the organization table is 5, it might conflict with
>>> the data where I am attempting it to be restored.
>>>
>>> Same holds true for bills, invoices etc.
>>>
>>> A certain account head with accountcode 1 might be already
>>> present on the second machine.
>>>
>>> I am not expecting the users to empty all data from the
>>> destination machine before restoring a backup.
>>>
>>> The reason is that an auditor may have many client's data and
>>> one can't predict what primary key values are going to come from
>>> a backup.
>>>
>>> Basically I can even say this is a copy paste instead of a pure
>>> backup and restore.
>>>
>>> Can any one suggest how to handle such conflicts?
>>
>> Hard to say. If the data is held in common tables(bills,
>> vouchers, etc)then the only thing I see happening is changing the
>> PK values to an unused value. That could turn into a nightmare
>> though. Not only that you lose the connection to the original
>> data source. If the data can be broken out into separate tables
>> then I could see placing them in their own schema.
>>
>>>
>>>
>>> -- 
>>> Regards,
>>> Krishnakant Mane,
>>> Project Founder and Leader,
>>> GNUKhata 
>>> //(Opensource Accounting, Billing and Inventory Management
>>> Software)//
>>
>
> Hi Adrian,
>
> Even I am thinnking to do some kind of upsert with this situation.
>>>
>>> So to be clear the tables you are working can have records from
>>> multiple organizations in a single table?
>>>
>
> And I would have to set the pkey to an unassigned value when there
> is conflict.
>>>
>>> I am seeing nextval() in your future:)
>>>
>
> I may also choose to revamp the serial by timestamps but don't
> know if the target customers would like it.
>>>
>>> I would avoid that. In my opinion timestamps are to too volatile to
>>> serve as a PK. If you are going to change I would go with the
>>> previous suggestion of UUID:
>>> https://www.postgresql.org/docs/11/datatype-uuid.html
>>>
>>> Not sure your customers would like that either.
>>>
>> Hi Adrian,
>>
>> I think I would make them like the uuid idea.
>>
>> So now what I am thinking is to first revamp the database by first
>> removing all the primary key constraints and then deleting all the
>> values.
>>
>> Then loop through the existing data and get a uuid in that column for
>> every row.
>>
>> I might also require to update all the references to this value as
>> foreign key in related tables.
>>
>> But I guess some kind of on update cascade might do well.
>>
>> I know this would slow down the system, but given that this will be a
>> one time process for an individual user (that too if he has existing
>> data ), I would take that trade-off.
>>
>> What do you say?
>>
>
> -- 
> Regards,
> Krishnakant Mane,
> Project Founder and Leader,
> GNUKhata 
> //(Opensource Accounting, Billing and Inventory Management
> Software)//
 It would likely be easier to rethink your backup and restore plan.
 Putting each restore into its own space would be one tack.
>>>
>>>
>> -- 
>> Regards,
>> Krishnakant Mane,
>> Project Founder and Leader,
>> GNUKhata 
>> //(Opensource Accounting, Billing and Inventory Management Software)//
> You might think about adding the new UUID column and use the existing
> primary key to inform the updates in dependent tables. Then remove the
> old PK  column and constraint followed by promoting the UUID to
> primary key. This could be safely scripted and applied to 

row_to_json white space

2019-09-26 Thread Mark Lybarger
Hi,

The row_to_json seems to strip white space:

select ROW_to_json(q) from (select 'foo        a   bar' as TEST) q;

any help on how to preserve?

Thanks!
-mark-


Re: row_to_json white space

2019-09-26 Thread Rob Sargent



On 9/26/19 11:54 AM, Mark Lybarger wrote:

select ROW_to_json(q) from (select 'foo        a   bar' as TEST) q;


Not here

postgres=# select ROW_to_json(q) from (select 'foo    a   bar' as TEST) q;
        row_to_json
---------------------------
 {"test":"foo    a   bar"}
(1 row)

postgres=# select version();
version
-
 PostgreSQL 10.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 
20150623 (Red Hat 4.8.5-36), 64-bit

(1 row)





Re: row_to_json white space

2019-09-26 Thread Adrian Klaver

On 9/26/19 10:54 AM, Mark Lybarger wrote:

Hi,

The row_to_json seems to strip white space:

select ROW_to_json(q) from (select 'foo        a   bar' as TEST) q;

any help on how to preserve?


Like Rob I do not see it:

test=# select version();
  version 



 PostgreSQL 11.5 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) 
4.8.5, 64-bit

(1 row)

test=# select ROW_to_json(q) from (select 'foo        a   bar' as TEST) q;
          row_to_json
-------------------------------
 {"test":"foo        a   bar"}
(1 row)


Do you possibly have an overloaded function of the same name(ROW_to_json())?




Thanks!
-mark-





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: "Failed to connect to Postgres database"

2019-09-26 Thread Adrian Klaver

On 9/26/19 10:10 AM, Marco Ippolito wrote:

Hi Adrian,

I removed the previous fabmnetdb and created a new one whose owner is 
postgres:


(base) postgres@pc:/usr/local/pgsql$ psql --cluster 11/fabmnet
psql (11.5 (Ubuntu 11.5-1.pgdg18.04+1))
Type "help" for help.

postgres=# \l
                               List of databases
    Name    |  Owner   | Encoding | Collate |  Ctype  |   Access privileges
---+--+--+-+-+---
  fabmnetdb | postgres | UTF8     | C.UTF-8 | C.UTF-8 |
  postgres  | postgres | UTF8     | C.UTF-8 | C.UTF-8 |
  template0 | postgres | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres 
      +
            |          |          |         |         | 
postgres=CTc/postgres
  template1 | postgres | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres 
      +
            |          |          |         |         | 
postgres=CTc/postgres

(4 rows)

postgres=#

_First objective: enable a working SSL connection for fabmnetdb:_

Following the indications found here: 
https://vibhork.blogspot.com/2011/07/how-to-enable-ssl-in-postgresqlppas.html
I created .pem and .req files in /var/lib/postgresql/11/fabmnet/  where, 
according to nano /etc/postgresql/11/fabmnet/postgresql.conf, the data 
folder is located:

     data_directory = '/var/lib/postgresql/11/fabmnet'
     ssl = on


(base) postgres@pc:/var/lib/postgresql/11/fabmnet$ chmod 600 privkey.pem
(base) postgres@pc:/var/lib/postgresql/11/fabmnet$ chmod 600 cert.req

But now, testing the ssl connection :

(base) marco@pc:/usr/local/pgsql$ psql -h 127.0.0.1 -d fabmnetdb -U postgres
Password for user postgres:
psql: FATAL:  database "fabmnetdb" does not exist

What am I missing?


My suspicion is that you have more than one instance of Postgres 
running. Partly because of this:


psql --cluster 11/fabmnet

and then later:

psql -h 127.0.0.1 -d fabmnetdb -U postgres

Not sure they are pointing at the same thing.



At command line what does:

ps ax | grep post

show.



Thanks again for your kind help.
Marco





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: row_to_json white space

2019-09-26 Thread Adrian Klaver

On 9/26/19 11:54 AM, Mark Lybarger wrote:

Ccing the list so folks know the problem is solved.

Problem is on my end. My tool was showing the space removed and in fact 
it wasn't. Copy-paste from dbeaver to notepad++ shows the spaces are 
preserved.


On Thu, Sep 26, 2019 at 2:49 PM Adrian Klaver wrote:


On 9/26/19 10:54 AM, Mark Lybarger wrote:
 > Hi,
 >
 > The row_to_json seems to strip white space:
 >
 > select ROW_to_json(q) from (select 'foo        a   bar' as TEST) q;
 >
 > any help on how to preserve?

Like Rob I do not see it:

test=# select version();
                                        version



   PostgreSQL 11.5 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux)
4.8.5, 64-bit
(1 row)

test=# select ROW_to_json(q) from (select 'foo        a   bar' as
TEST) q;
            row_to_json
---
   {"test":"foo        a   bar"}
(1 row)


Do you possibly have an overloaded function of the same
name(ROW_to_json())?


 >
 > Thanks!
 > -mark-
 >
 >


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com




pg_get_triggerdef does not use the schema qualified procedure name

2019-09-26 Thread AJ Welch
Hi,

I noticed that pg_get_triggerdef doesn't seem to use the schema qualified
name of procedures/functions. Is it supposed to? Is there a way to get it
to?

z1h=# \x auto
Expanded display is used automatically.
z1h=# select tg.tgname "name", nsp.nspname "schema", cls.relname table_name,
   pg_get_triggerdef(tg.oid) full_definition, proc.proname proc_name,
   nspp.nspname proc_schema, tg.tgenabled enabled
from pg_trigger tg
join pg_class cls on cls.oid = tg.tgrelid
join pg_namespace nsp on nsp.oid = cls.relnamespace
join pg_proc proc on proc.oid = tg.tgfoid
join pg_namespace nspp on nspp.oid = proc.pronamespace
where not tg.tgisinternal
order by schema, table_name, name;
-[ RECORD 1
]---+-
name| users_updated_at
schema  | z1h
table_name  | users
full_definition | CREATE TRIGGER users_updated_at BEFORE UPDATE ON
z1h.users FOR EACH ROW EXECUTE PROCEDURE update_updated_at_column()
proc_name   | update_updated_at_column
proc_schema | public
enabled | O

This causes problems for tools like migra
 that
try to use the output of pg_get_triggerdef to recreate triggers that were
defined in schemas other than public.

Thanks,
AJ


Re: pg_get_triggerdef does not use the schema qualified procedure name

2019-09-26 Thread Tom Lane
AJ Welch  writes:
> I noticed that pg_get_triggerdef doesn't seem to use the schema qualified
> name of procedures/functions. Is it supposed to? Is there a way to get it
> to?

Generally the policy of such functions is to schema-qualify names that
wouldn't be found in the current search_path.  So you could get what
you want by setting a restrictive search_path beforehand, maybe just ''.
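For example, a sketch of that approach against the trigger from the question (names assumed from the earlier post; pg_catalog is always searched implicitly, so the catalog query itself still works with an empty search_path):

```sql
SET search_path = '';
SELECT pg_get_triggerdef(tg.oid)
FROM pg_trigger tg
WHERE tg.tgname = 'users_updated_at';
-- The EXECUTE PROCEDURE clause should now come out schema-qualified,
-- e.g. public.update_updated_at_column() instead of the bare name.
```

Resetting search_path afterwards (or running this in its own session) restores normal name resolution.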

regards, tom lane




pg12 rc1 on CentOS8 depend python2

2019-09-26 Thread keisuke kuroda
Hi.

I tried to install PostgreSQL12 RC1 on CentOS8.

# dnf install postgresql12-server postgresql12-contrib

===
 Package   ArchVersion
 Repository   Size
===
Installing:
 postgresql12-contrib  x86_64  12rc1-1PGDG.rhel8
 pgdg12-updates-testing  660 k
 postgresql12-server   x86_64  12rc1-1PGDG.rhel8
 pgdg12-updates-testing  5.3 M
Installing dependencies:
 python2   x86_64
 2.7.15-22.module_el8.0.0+32+017b2cba AppStream
  107 k
 python2-libs  x86_64
 2.7.15-22.module_el8.0.0+32+017b2cba AppStream
  6.0 M
 python2-pip   noarch
 9.0.3-13.module_el8.0.0+32+017b2cba  AppStream
  2.0 M
 python2-setuptoolsnoarch
 39.0.1-11.module_el8.0.0+32+017b2cba AppStream
  643 k
 postgresql12  x86_64  12rc1-1PGDG.rhel8
 pgdg12-updates-testing  1.8 M
 postgresql12-libs x86_64  12rc1-1PGDG.rhel8
 pgdg12-updates-testing  405 k
Enabling module streams:
 python27  2.7

CentOS8 does not have python2 installed by default, but PostgreSQL is
dependent on python2.

Do we need to install python2 when we use PostgreSQL on CentOS8?

Best Regards.
Keisuke Kuroda


Re: "Failed to connect to Postgres database"

2019-09-26 Thread rob stone
Hi,

On Thu, 2019-09-26 at 16:21 +0200, Marco Ippolito wrote:
> 
> db:
>   type: postgres
>   datasource: host=localhost port=5433 user=fabmnet_admin   
>   password=password dbname=fabmnetdb sslmode=verify-full
> 

> 
> (base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server start -b 
> admin:adminpw
> 2019/09/26 15:56:50 [INFO] Configuration file location: /home/marco
> /fabric/fabric-ca/fabric-ca-server-config.yaml
> 2019/09/26 15:56:50 [INFO] Starting server in home directory: 
> /home/marco/fabric/fabric-ca
> 2019/09/26 15:56:50 [INFO] Server Version: 1.4.4
> 2019/09/26 15:56:50 [INFO] Server Levels: &{Identity:2 Affiliation:1 
> Certificate:1 Credential:1 RAInfo:1 Nonce:1}
> 2019/09/26 15:56:50 [INFO] The CA key and certificate already exist
> 2019/09/26 15:56:50 [INFO] The key is stored by BCCSP provider 'SW'
> 2019/09/26 15:56:50 [INFO] The certificate is at: /home/marco/fabric
> /fabric-ca/ca-cert.pem
> 2019/09/26 15:56:50 [WARNING] Failed to connect to database
> 'fabmnetdb'
> 2019/09/26 15:56:50 [WARNING] Failed to connect to database
> 'postgres'
> 2019/09/26 15:56:50 [WARNING] Failed to connect to database
> 'template1'
> 2019/09/26 15:56:50 [ERROR] Error occurred initializing database:
> Failed
> to connect to Postgres database. Postgres requires connecting to a 
> specific database, the following databases were tried: [fabmnetdb 
> postgres template1]. Please create one of these database before 
> continuing


Why is it trying to connect to *any* database?

The fabric-ca docs show the connection string on a single line, but your
configuration file has it split over two lines.
My uneducated guess is that the 'password=password dbname=fabmnetdb
sslmode=verify-full' line is being ignored, and the server is thus
unable to connect to fabmnetdb.
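If that is the cause, keeping the whole datasource value on one line should
fix it. A minimal sketch, reusing the values quoted above (the password is a
placeholder):

```yaml
db:
  type: postgres
  # The entire datasource string must stay on a single line so every
  # key=value pair is part of the same YAML scalar.
  datasource: host=localhost port=5433 user=fabmnet_admin password=password dbname=fabmnetdb sslmode=verify-full
```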

Cheers,
Robert






Re: managing primary key conflicts while restoring data to table with existing data

2019-09-26 Thread Pankaj Jangid
Krishnakant Mane  writes:

>> You might think about adding the new UUID column and using the existing
>> primary key to inform the updates in dependent tables. Then remove the
>> old PK column and constraint, followed by promoting the UUID to
>> primary key. This could be safely scripted and applied to all
>> instances of your data.
>> That said, this is only truly necessary if you have production
>> databases to worry about.
>
>
> Thanks a million, this is the most logical and safe way.
>
> yes I have a lot of production databases to worry about.
>
> I am only confused about what you mean by "use the existing primary key
> to inform the updates in dependent tables."
>
> Are you referring to a cascading effect?
>
> If yes then does it mean I first program my upgrade script to manually
> go through all new uuid keys and update the same in the depending tables
> with reference to the old primary key working as foreign key in those
> tables?

I guess that is the safest option given that the databases are in
production.

1. Add a UNIQUE UUID column.
2. Add references to it in the dependent tables, identifying rows via
   the existing pkey.
3. Remove the old pkey references and columns from the dependent tables.
4. Make the UUID column the pkey.
5. Remove the old pkey column.
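A minimal SQL sketch of those steps, using hypothetical tables `orders` (old
integer pkey `id`) and `order_items` referencing it — the table, column, and
constraint names are illustrative, not from the thread, and `gen_random_uuid()`
needs the pgcrypto extension on the PostgreSQL versions current at the time:

```sql
-- Provides gen_random_uuid() on PostgreSQL 12 and earlier.
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- 1. Add a UNIQUE UUID column to the parent table.
ALTER TABLE orders ADD COLUMN uuid_id uuid NOT NULL DEFAULT gen_random_uuid();
ALTER TABLE orders ADD CONSTRAINT orders_uuid_id_key UNIQUE (uuid_id);

-- 2. Add a matching column to the dependent table and fill it
--    by joining on the existing integer key.
ALTER TABLE order_items ADD COLUMN order_uuid uuid;
UPDATE order_items oi
   SET order_uuid = o.uuid_id
  FROM orders o
 WHERE o.id = oi.order_id;

-- 3. Drop the old FK reference and column from the dependent table
--    (the constraint name assumes PostgreSQL's default naming).
ALTER TABLE order_items DROP CONSTRAINT order_items_order_id_fkey;
ALTER TABLE order_items DROP COLUMN order_id;

-- 4. Promote the UUID column to primary key and re-point the FK at it.
ALTER TABLE orders DROP CONSTRAINT orders_pkey;
ALTER TABLE orders ADD PRIMARY KEY (uuid_id);
ALTER TABLE order_items
  ADD CONSTRAINT order_items_order_uuid_fkey
  FOREIGN KEY (order_uuid) REFERENCES orders (uuid_id);

-- 5. Drop the old integer key column.
ALTER TABLE orders DROP COLUMN id;
```

Running the whole script inside a single transaction makes it safe to retry on
a production database: any failure rolls everything back.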

-- 
Pankaj Jangid




Re: pg12 rc1 on CentOS8 depend python2

2019-09-26 Thread Adrian Klaver

On 9/26/19 6:50 PM, keisuke kuroda wrote:

Hi.

I tried to install PostgreSQL12 RC1 on CentOS8.

# dnf install postgresql12-server postgresql12-contrib

================================================================================
 Package               Arch    Version                               Repository              Size
================================================================================
Installing:
 postgresql12-contrib  x86_64  12rc1-1PGDG.rhel8                     pgdg12-updates-testing  660 k
 postgresql12-server   x86_64  12rc1-1PGDG.rhel8                     pgdg12-updates-testing  5.3 M
Installing dependencies:
 python2               x86_64  2.7.15-22.module_el8.0.0+32+017b2cba  AppStream               107 k
 python2-libs          x86_64  2.7.15-22.module_el8.0.0+32+017b2cba  AppStream               6.0 M
 python2-pip           noarch  9.0.3-13.module_el8.0.0+32+017b2cba   AppStream               2.0 M
 python2-setuptools    noarch  39.0.1-11.module_el8.0.0+32+017b2cba  AppStream               643 k
 postgresql12          x86_64  12rc1-1PGDG.rhel8                     pgdg12-updates-testing  1.8 M
 postgresql12-libs     x86_64  12rc1-1PGDG.rhel8                     pgdg12-updates-testing  405 k
Enabling module streams:
 python27                      2.7

CentOS 8 does not have python2 installed by default, but the PostgreSQL
packages depend on python2.


Do we need to install python2 when we use PostgreSQL on CentOS8?


If it is installing plpythonu, then yes. From the docs:

https://www.postgresql.org/docs/12/plpython-python23.html

"The language named plpythonu implements PL/Python based on the default 
Python language variant, which is currently Python 2. (This default is 
independent of what any local Python installations might consider to be 
their “default”, for example, what /usr/bin/python might be.) The 
default will probably be changed to Python 3 in a distant future release 
of PostgreSQL, depending on the progress of the migration to Python 3 in 
the Python community."





Best Regards.
Keisuke Kuroda



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg12 rc1 on CentOS8 depend python2

2019-09-26 Thread keisuke kuroda
Thank you for your reply!

Even if I don't need to install plpythonu,
the RPM package is built with "CONFIGURE = --with-python".
Therefore I think I still need python2 installed when installing the RPMs.
Is my understanding correct?
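One way to see where the python2 requirement actually comes from is to inspect
the package's declared dependencies on the CentOS 8 host. A sketch, with the
package name taken from the dnf output above:

```shell
# List the declared dependencies of the server package and keep only
# the ones that mention python; rpm exits nonzero if the package is
# not installed, in which case query the .rpm file directly instead.
rpm -q --requires postgresql12-server | grep -i python
```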

Best Regards.
Keisuke Kuroda

2019年9月27日(金) 13:03 Adrian Klaver :

> On 9/26/19 6:50 PM, keisuke kuroda wrote:
> > Hi.
> >
> > I tried to install PostgreSQL12 RC1 on CentOS8.
> >
> > # dnf install postgresql12-server postgresql12-contrib
> >
> >
> > ================================================================================
> >  Package               Arch    Version                               Repository              Size
> > ================================================================================
> > Installing:
> >  postgresql12-contrib  x86_64  12rc1-1PGDG.rhel8                     pgdg12-updates-testing  660 k
> >  postgresql12-server   x86_64  12rc1-1PGDG.rhel8                     pgdg12-updates-testing  5.3 M
> > Installing dependencies:
> >  python2               x86_64  2.7.15-22.module_el8.0.0+32+017b2cba  AppStream               107 k
> >  python2-libs          x86_64  2.7.15-22.module_el8.0.0+32+017b2cba  AppStream               6.0 M
> >  python2-pip           noarch  9.0.3-13.module_el8.0.0+32+017b2cba   AppStream               2.0 M
> >  python2-setuptools    noarch  39.0.1-11.module_el8.0.0+32+017b2cba  AppStream               643 k
> >  postgresql12          x86_64  12rc1-1PGDG.rhel8                     pgdg12-updates-testing  1.8 M
> >  postgresql12-libs     x86_64  12rc1-1PGDG.rhel8                     pgdg12-updates-testing  405 k
> > Enabling module streams:
> >  python27                      2.7
> >
> > CentOS 8 does not have python2 installed by default, but the PostgreSQL
> > packages depend on python2.
> >
> > Do we need to install python2 when we use PostgreSQL on CentOS8?
>
> If it is installing plpythonu, then yes. From the docs:
>
> https://www.postgresql.org/docs/12/plpython-python23.html
>
> "The language named plpythonu implements PL/Python based on the default
> Python language variant, which is currently Python 2. (This default is
> independent of what any local Python installations might consider to be
> their “default”, for example, what /usr/bin/python might be.) The
> default will probably be changed to Python 3 in a distant future release
> of PostgreSQL, depending on the progress of the migration to Python 3 in
> the Python community."
>
>
> >
> > Best Regards.
> > Keisuke Kuroda
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


RE: Monitor Postgres database status on Docker

2019-09-26 Thread Daulat Ram
Hi Fan Liu,

I am able to connect to the Postgres database created in a docker container
via psql from the postgres10 client, but I am not able to connect through
pg_isready.

psql -c 'select count (*) from pg_stat_activity' -h localhost -p 5432 -U 
postgres -W
Password for user postgres:
count
---
 7

Please give me suggestions.
 Thanks,


From: Daulat Ram
Sent: Tuesday, September 24, 2019 3:35 PM
To: Fan Liu ; pgsql-general@lists.postgresql.org
Subject: RE: Monitor Postgres database status on Docker

Thanks, but how can we use it for a docker container?

Regards,
Daulat

From: Fan Liu <fan@ericsson.com>
Sent: Tuesday, September 24, 2019 3:02 PM
To: Daulat Ram <daulat@exponential.com>; pgsql-general@lists.postgresql.org
Subject: RE: Monitor Postgres database status on Docker

Hi,

I am not from the PostgreSQL team.
Just letting you know that when we run PostgreSQL in Kubernetes, we use the
command below for the liveness check.

pg_isready --host localhost -p $PG_PORT -U $PATRONI_SUPERUSER_USERNAME
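The same check can be pointed at a dockerized instance by executing it inside
the container with docker exec. A sketch — the container name `my_postgres`,
port, and user are assumptions:

```shell
# pg_isready exits 0 when the server is accepting connections, 1 when it
# is rejecting them (e.g. during startup), and 2 on no response -- so the
# exit status itself serves as the up/down check.
docker exec my_postgres pg_isready -h localhost -p 5432 -U postgres
echo "pg_isready exit status: $?"
```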


BRs,
Fan Liu


From: Daulat Ram <daulat@exponential.com>
Sent: Tuesday, September 24, 2019 5:18 PM
To: pgsql-general@lists.postgresql.org
Subject: Monitor Postgres database status on Docker

Hi team,

We want to check the Postgres database status in a docker container, just as
we monitor Postgres (up / down) on a host via /etc/init.d/postgresql status.

But I am not sure how we can do that with docker.

Thanks,
Daulat