Re: How to enable TDE on Postgresql 12.3 deployed using Kubernetes

2020-07-23 Thread Samarendra Sahoo
Hello all,
Could anyone please share a reference to a Docker image with TDE enabled for
PostgreSQL 12.3?

On Wed, Jul 15, 2020 at 3:01 PM Samarendra Sahoo wrote:

> If there are any references, or if you have done this yourself, could you please share the steps?
>
> Thanks
>
>


pglogical, replication_slots and going back in time

2020-07-23 Thread MirrorX
I was wondering whether there is a way to manually manipulate the WAL location
from which a pglogical slave resumes the replication process.

Let's imagine the following scenario. There is a pglogical slave running on
an operating system that allows hourly snapshots. If at 08:00 we decide we
need to go back in time to 04:00, we can simply shut down the database and
restore the 04:00 snapshot. When this happens, the database will be back at
04:00, meaning all data will be as it was at that time. However, when
querying pg_replication_slots on the master, the WAL position of the
replication slot used for this pglogical slave will still be where it was at
08:00, just before the database was shut down (for the restore to happen).
So, when this database comes back online with the 04:00 data, it will still
try to connect to the master to resume replication. Because the replication
slot holds the 08:00 location, the slave will successfully reconnect to the
master but will start applying data from 08:00 onwards, basically losing 4h
of transactions.

Is there any way the WAL position can be manipulated so that this does not
happen, i.e. so the replication slot can be 'put back' in time to match the
04:00 snapshot?
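
For reference, this is roughly how the slot position can be inspected on the
master (a minimal sketch; the slot name below is hypothetical):

-- On the master/provider: where does the slot think the subscriber is?
SELECT slot_name, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_name = 'pgl_example_slot';

-- Note: pg_replication_slot_advance() (available since PG 11) can only move a
-- slot forward, so as far as I know it cannot put the slot back to 04:00.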


Thank you in advance for any clarification







Re: Problem with pg_service.conf

2020-07-23 Thread Michał Lis

Hello,

The server is located on the LAN and runs on Windows 7 x64 Ultimate.
On this server I created a pg_service.conf file and set the PGSERVICEFILE
environment variable.


The client is another computer running Windows 7 x64 Pro. It has no access
to local folders on the server (in particular, not to the PostgreSQL folder).


Copying the pg_service.conf file from the server to the client and setting
the PGSERVICEFILE environment variable to point at it makes no sense to me,
because a login and password may be stored in this file.


I want to connect to the server by service name, because I don't want to 
store any login information on the client side.


Yesterday I created a topic:

BUG #16550: Problem with pg_service.conf

on the pgsql-bugs list.

At this moment, the information I have is that a copy of the pg_service.conf
file is necessary on the client side to establish a connection by service
name.


In my opinion this makes no sense, and it looks like a bug.

Regards
Michal



W dniu 2020-07-22 o 18:14, Adrian Klaver pisze:

On 7/21/20 2:35 PM, Michał Lis wrote:

Hello,

I can't connect from a remote machine to a database via a service defined in
the pg_service.conf file.


Connecting from the local machine using this service definition works fine.

I tested it on PostgreSQL 9.4 and 9.6. The other problem is with the
location of the pg_service.conf file.


Reading solutions on the internet, I found that this file may be placed in
the global configuration folder.


In my case it is D:\PostgreSQLx86\9.6\etc (the SYSCONFDIR variable 
displayed by pg_config.exe).


Placing the file in this folder does not work. I had to define the global
system variable PGSERVICEFILE =
D:\PostgreSQLx86\9.6\etc\pg_service.conf.


After that, connecting to the database via the service began to work, but
only on the local machine.


To reproduce problem:

1. Create pg_service.conf file like this:

[test4]
host=192.168.1.2
port=5433
dbname=test
user=postgres
password=abcd

2. Save this file in a directory, in my case it was
D:\PostgreSQLx86\9.6\etc\pg_service.conf
(Linux-style line endings, set with Notepad++)

3. Set the global system variable:
PGSERVICEFILE =D:\PostgreSQLx86\9.6\etc\pg_service.conf

4. Restart the computer so the PGSERVICEFILE setting takes effect

5. Open pgAdmin 3 or 4, it does not matter which.

6. Add and set up a database connection:
- Name: test
- Host: 192.168.1.2
- Port: 0
- Service: test4
- User: 
- Password: 

On the local machine the connection will be established, but on the remote
machine the error:

"Definition of service "test4" not found"

will be raised.

Connecting from the remote machine to the server by host, port, user, and
password works fine.


Where is the server located?

Are you on the remote machine when you try to use the service file?

If so, that is not going to work as the remote machine will not have 
access to the local environment.





PostgreSQL version: 9.6.11 and 9.4
Operating system: Windows 7 x64 Pro/Ultimate

Regards
Michal












Re: Problem with pg_service.conf

2020-07-23 Thread David G. Johnston
On Thu, Jul 23, 2020 at 6:12 AM Michał Lis  wrote:

> Hello,
>
> The server is located in the lan and runs on Windows 7x64 Ultimate.
> On this server I created pg_service.conf file and set the environment
> variable of PGSERVICEFILE.
>

The server software (postgres) doesn't use PGSERVICEFILE, only client
software does (psql, pg_dump, etc.).


> Coping the pg_service.conf file from server to the client and setting 
> environment
> variable PGSERVICEFILE to this file has no sense, because of possibility
> of storing login and password in this file.
>

Replace "copying" (Google incorrectly wants to replace copying with
coping...) with "moving"; or just "create a pg_service.conf file on the
local machine with identical contents".

> I want to connect to the server by service name, because I don't want to
> store any login information on the client side.
>

What you are describing is simply not possible.  You can avoid storing
credentials on the client if you wish but you need a person to enter them
manually each time.  If you want any form of self-authentication by the
client then the credentials the client uses must be made available to it
somehow.  Searching the internet will surely turn up many such options for
this.  But it is nonsense to say that the client need only supply an
unencrypted shared secret (i.e., the service name) to the server and be
granted access.  That is no better than storing a username and password on
the local machine.

> At this moment I have the information that the copy of pg_service.conf file
> is necessary on the client side to establish connection by service name.
>

PostgreSQL has a client-server architecture.  pg_service.conf is used only
by the client.  In your local machine example the client and the server
software are on the same machine and so the fact that the pg_service.conf
file was only being read by the client was not apparent.  Once you move the
client to a machine that lacks a pg_service.conf file the client rightfully
complains that it cannot find a service entry that exists on the server.
If you want both the local machine and remote machine client software to be
aware of the service name then both machines need their own pg_service.conf
file with that service definition.  The client is not continually
downloading the pg_service.conf file from the server (if you wanted a
shared pg_service.conf file you could do so but you would probably want a
separate configuration machine to provide it, not the PostgreSQL server).
And the server doesn't recognize service names supplied by the client; it
only recognizes credentials and a database name (and other configuration
settings too).
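
As a concrete sketch of that (the path below is just an example, and the
contents mirror the service definition from the original report): on the
remote client, create its own pg_service.conf, point PGSERVICEFILE at it,
and connect by service name.

C:\pgclient\pg_service.conf:

[test4]
host=192.168.1.2
port=5433
dbname=test
user=postgres
password=abcd

set PGSERVICEFILE=C:\pgclient\pg_service.conf
psql "service=test4"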

David J.


Re: HA setup with pg pool in docker

2020-07-23 Thread Bo Peng
Hello,

On Wed, 22 Jul 2020 13:29:31 +0800
Vasu Madhineni  wrote:

> We are planning to build an HA setup with Pgpool in Docker. Could you please
> let us know the prerequisites for it?
> 
> 1. How many servers are needed for this? Can we plan with 2 servers, with a
> config like the one below?
>    Primary instance and Pgpool in two different Docker containers on server
> 1, and a standby instance in a Docker container on server 2.

It depends on your requirements.
You can create 3 containers (pgpool, primary and standby)
on one server or on different servers.

> 2. User requirement: any user needs passwordless authentication between
> nodes.

It is possible by configuring pool_hba.conf (Pgpool-II) and pg_hba.conf 
(PostgreSQL).
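
For illustration only (a sketch, not a recommendation; the address range is
hypothetical), both files use the same line format:

# TYPE  DATABASE  USER  ADDRESS         METHOD
host    all       all   172.17.0.0/16   trust

"trust" is what gives passwordless access from that network; for anything
beyond a test setup, md5 or scram-sha-256 (where supported) together with a
.pgpass file is the safer choice.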

> 3. Port or firewall rules between servers.

Pgpool port and PostgreSQL port need to be accessible.
 
> 4. If we plan to create a network bridge between all Docker containers
> across the 2 servers, will there be any prerequisites for that?

You need to make sure all containers can connect to each other.
-- 
Bo Peng 
SRA OSS, Inc. Japan




Re: Pgpool in docker container

2020-07-23 Thread Bo Peng
Hello, 

> Hi All,
> 
> Planning to build a standalone Postgres, with Pgpool as a connection pooler,
> in Docker containers.
> Shall we try an option like installing Pgpool in one Docker container and
> Postgres in another Docker container; is that possible?

Yes. 
It is possible to install Pgpool-II and PostgreSQL in different containers.

> Thanks in advance.
> 
> Regards,
> Vasu Madhineni


-- 
Bo Peng 
SRA OSS, Inc. Japan




ERROR: out of overflow pages in hash index

2020-07-23 Thread Adam Scott
Anyone know if this can be remedied?

ERROR: out of overflow pages in hash index 

This occurred while waiting for a hash index to be created on a 1,580
GB table (Postgres 11.7). We have > 30 TB of free space.

Here's the line where I think it gets triggered.

https://github.com/postgres/postgres/blob/7559d8ebfa11d98728e816f6b655582ce41150f3/src/backend/access/hash/hashovfl.c#L283

And I found that
#define HASH_MAX_BITMAPS Min(BLCKSZ / 8, 1024)

So there appears to be a maximum size here, but I couldn't find anything
about it in the documentation.
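
If I'm reading hashovfl.c correctly, a back-of-the-envelope guess for the
default 8 kB block size (my own arithmetic, not from the documentation) is:

HASH_MAX_BITMAPS = Min(BLCKSZ / 8, 1024) = Min(8192 / 8, 1024) = 1024 bitmap pages
each bitmap page tracks on the order of 32,768 overflow pages (one bit each)
=> roughly 1024 * 32,768 = ~33.5 million overflow pages, i.e. ~256 GB of
   overflow space before this error appears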

What is the maximum size?  I am guessing that partitioning the table some
way might be a workaround, but it doesn't lend itself very well to
partitioning (i.e. there's no year column, or category, or some such).

Help is appreciated!




Re: Slow or Cannot Connect to PostgreSQL Instance Service on Windows 10

2020-07-23 Thread Adrian Klaver

On 7/22/20 4:49 PM, Surya Widyanto wrote:

On 23-07-2020 06:12, Rob Sargent wrote:



On Jul 22, 2020, at 4:58 PM, Surya Widyanto wrote:


On 23-07-2020 05:26, Adrian Klaver wrote:

On 7/22/20 10:24 AM, Surya Widyanto wrote:

Hi,

I have PostgreSQL v12.3 (x64) installed on my Laptop running 
Windows 10 Home Single Language (64-bit).
This has happened two or three times since I did a fresh install of
PostgreSQL and Windows on 17-05-2020, 06:04:56 (GMT).


Meant to ask earlier, where did you install Postgres from?

I've download the installer executable from 
https://www.enterprisedb.com/downloads/postgres-postgresql-downloads, 
the Windows x86-64 version.


Then I installed it on Windows with the binaries path set to
"C:\Program Files\PostgreSQL\12\" and the data path set to
"H:\PostgreSQL\Data\12.x\". Both the C: and H: drives are partitions on the
NVMe PCIe Gen. 3 x4 SSD drive mentioned earlier.


For additional information, I've also installed PostGIS v3.0 r15475 (x64)
for PostgreSQL v12.


By two or three times I mean: when this slow-connection problem happens, it
occurs for a few days in a row. Then it returns to normal, with a fast
login/connection process.





Regards,
Surya


Is there by any chance a correlation between your physical location 
and the slowness episodes?

Any AV software in play?



Since the COVID-19 outbreak I have been working from home. I've tried using
different internet connections to test: one a wired connection from the
internet provider at my house, the other a 4G cellular connection from my
smartphone. Both internet connections give the same effect of slow and
easily dropped/disconnected connections to PostgreSQL.


I thought you said earlier you were only connecting to the database on
your laptop at localhost?:


"The Database is on the same machine/laptop, I try to connect using 
localhost and 127.0.0.1, both have the same slow connection problem."


So the Internet connection should not be of concern here, or is there more
to this story?




For additional information, I've also had this problem when connecting from
a PHP application, so in total there are 4 apps I have tried to connect to
PostgreSQL with (psql, pgAdmin, Navicat, and the PHP application).
I have not installed any antivirus software; the only antivirus software
installed is Windows Defender, which came with Microsoft Windows.
(Currently I am disabling the real-time scan in Windows Defender.)


After I set log_min_messages and log_min_error_statement to DEBUG5, another
log line came up alongside the other log line I mentioned earlier:
2020-07-22 23:31:29.065 +00 [postgres@postgres] - 127.0.0.1(10191) - 
[1724] (psql) LOG:  could not receive data from client: An existing 
connection was forcibly closed by the remote host.
2020-07-22 23:31:29.065 +00 [postgres@postgres] - 127.0.0.1(10191) - 
[1724] (psql) DEBUG:  unexpected EOF on client connection
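
For completeness, a sketch of one way that logging level can be set (assuming
superuser access; editing postgresql.conf and reloading works as well):

ALTER SYSTEM SET log_min_messages = 'debug5';
ALTER SYSTEM SET log_min_error_statement = 'debug5';
SELECT pg_reload_conf();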






--
Adrian Klaver
adrian.kla...@aklaver.com




Row estimates for empty tables

2020-07-23 Thread Christophe Pettus
I realize I've never quite known this; where does the planner get the row 
estimates for an empty table?  Example:

psql (11.8)
Type "help" for help.

xof=# CREATE TABLE t (i integer, t text, j integer);
CREATE TABLE
xof=# VACUUM ANALYZE t;
VACUUM
xof=# EXPLAIN ANALYZE SELECT * FROM t;
                                            QUERY PLAN
-------------------------------------------------------------------------------------------------
 Seq Scan on t  (cost=0.00..22.00 rows=1200 width=40) (actual time=0.015..0.015 rows=0 loops=1)
 Planning Time: 5.014 ms
 Execution Time: 0.094 ms
(3 rows)

xof=# INSERT INTO t values(1, 'this', 2);
INSERT 0 1
xof=# EXPLAIN ANALYZE SELECT * FROM t;
                                            QUERY PLAN
-------------------------------------------------------------------------------------------------
 Seq Scan on t  (cost=0.00..22.00 rows=1200 width=40) (actual time=0.010..0.011 rows=1 loops=1)
 Planning Time: 0.039 ms
 Execution Time: 0.021 ms
(3 rows)

xof=# VACUUM ANALYZE t;
VACUUM
xof=# EXPLAIN ANALYZE SELECT * FROM t;
                                           QUERY PLAN
-----------------------------------------------------------------------------------------------
 Seq Scan on t  (cost=0.00..1.01 rows=1 width=13) (actual time=0.008..0.008 rows=1 loops=1)
 Planning Time: 0.069 ms
 Execution Time: 0.019 ms
(3 rows)

xof=# DELETE FROM t;
DELETE 0
xof=# VACUUM ANALYZE t;
VACUUM
xof=# EXPLAIN ANALYZE SELECT * FROM t;
                                            QUERY PLAN
-------------------------------------------------------------------------------------------------
 Seq Scan on t  (cost=0.00..29.90 rows=1990 width=13) (actual time=0.004..0.004 rows=0 loops=1)
 Planning Time: 0.034 ms
 Execution Time: 0.015 ms
(3 rows)


--
-- Christophe Pettus
   x...@thebuild.com





Re: Row estimates for empty tables

2020-07-23 Thread David Rowley
On Fri, 24 Jul 2020 at 16:01, Christophe Pettus  wrote:
> I realize I've never quite known this; where does the planner get the row 
> estimates for an empty table?  Example:

We just assume there are 10 pages if the relation has not yet been
vacuumed or analyzed. The row estimates you see are the number of
tuples likely to fit onto a single page multiplied by the assumed 10
pages.  If you had made your table wider then the planner would have
assumed fewer rows.

There's a comment that justifies the 10 pages, which, as of master is
in table_block_relation_estimate_size(). It'll be somewhere else in
pg12.

* HACK: if the relation has never yet been vacuumed, use a minimum size
* estimate of 10 pages.  The idea here is to avoid assuming a
* newly-created table is really small, even if it currently is, because
* that may not be true once some data gets loaded into it.  Once a vacuum
* or analyze cycle has been done on it, it's more reasonable to believe
* the size is somewhat stable.
*
* (Note that this is only an issue if the plan gets cached and used again
* after the table has been filled.  What we're trying to avoid is using a
* nestloop-type plan on a table that has grown substantially since the
* plan was made.  Normally, autovacuum/autoanalyze will occur once enough
* inserts have happened and cause cached-plan invalidation; but that
* doesn't happen instantaneously, and it won't happen at all for cases
* such as temporary tables.)
*
* We approximate "never vacuumed" by "has relpages = 0", which means this
* will also fire on genuinely empty relations.  Not great, but
* fortunately that's a seldom-seen case in the real world, and it
* shouldn't degrade the quality of the plan too much anyway to err in
* this direction.
*
* If the table has inheritance children, we don't apply this heuristic.
* Totally empty parent tables are quite common, so we should be willing
* to believe that they are empty.

The code which decides if the table has been vacuumed here assumes it
has not if pg_class.relpages == 0. So even if you were to manually
vacuum the table the code here would think it's not yet been vacuumed.
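
A quick way to see what the planner is working from there (a minimal sketch,
using the table from the example above):

SELECT relname, relpages, reltuples FROM pg_class WHERE relname = 't';

After the DELETE plus VACUUM ANALYZE above, relpages is back to 0, which is
why the final EXPLAIN falls back to the 10-page heuristic (rows=1990).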

David