How to get previous log file

2020-06-28 Thread Andrus

Hi!

select pg_read_file(pg_current_logfile()) 


retrieves today's log file.
Log files are in the log directory:

/var/lib/postgresql/12/main/log# ls
...
postgresql-2020-06-08_00.log  postgresql-2020-06-18_00.log  
postgresql-2020-06-28_00.log
postgresql-2020-06-09_00.log  postgresql-2020-06-19_00.log

How can I get yesterday's log file from a remote client application using a
PostgreSQL query?
Using Postgres 12 on Debian.

Andrus.





Re: How to get previous log file

2020-06-28 Thread Adrian Klaver

On 6/28/20 2:42 AM, Andrus wrote:

Hi!

select pg_read_file(pg_current_logfile())
retrieves today's log file.
Log files are in the log directory:

/var/lib/postgresql/12/main/log# ls
...
postgresql-2020-06-08_00.log  postgresql-2020-06-18_00.log  
postgresql-2020-06-28_00.log

postgresql-2020-06-09_00.log  postgresql-2020-06-19_00.log

How can I get yesterday's log file from a remote client application using a
PostgreSQL query?

Using Postgres 12 on Debian.


You were halfway there:

https://www.postgresql.org/docs/12/functions-admin.html

pg_ls_dir() -- To get file names.

pg_read_file() -- Using file name from above.
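
For example, a minimal sketch assuming the default relative log_directory
of 'log' and the file naming shown in your listing (and superuser, or
EXECUTE granted on these functions):

select pg_read_file('log/' || name)
from pg_ls_dir('log') as t(name)
-- build yesterday's file name from current_date
where name = 'postgresql-' || to_char(current_date - 1, 'YYYY-MM-DD') || '_00.log';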









--
Adrian Klaver
adrian.kla...@aklaver.com




Re: n_distinct off by a factor of 1000

2020-06-28 Thread Peter J. Holzer
On 2020-06-24 16:27:35 -0600, Michael Lewis wrote:
> On Wed, Jun 24, 2020, 2:35 PM Peter J. Holzer  wrote:
> 
> Yes, estimating the number of distinct values from a relatively small
> sample is hard when you don't know the underlying distribution. It might
> be possible to analyze the sample to find the distribution and get a
> better estimate. But I'm not sure how useful that would really be: if
> a few values are very common and most very rare, you are probably also
> much more likely to use the common values in a query. And for those you
> would massively underestimate their frequency if you had an accurate
> n_distinct value. That might be just as bad or even worse.
> 
> 
> 
> This would only be true for values that are "common" but not in the MCVs list,
> right?

Yes, but if you have 33 million values, there are likely to be a lot of
them "common but not in the MCVs list", even for a very biased
distribution.


> If we could increase the sampling ratio beyond the hard coded 300x to get a
> more representative sample

I thought of that but abandoned it since I don't think a better estimate
for n_distinct will help (see above for the reason). The problem is that
the distribution is biased and the planner has no idea whether the value
it is searching for is common or rare if it isn't in the MCV list.
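
(Both knobs do exist, for anyone who wants to experiment anyway — a hedged
sketch, assuming a table t with the id column in question:

   -- raise the per-column statistics target; the sample is ~300x this value
   ALTER TABLE t ALTER COLUMN id SET STATISTICS 10000;
   -- or pin n_distinct by hand; a negative value means a fraction of the rows
   ALTER TABLE t ALTER COLUMN id SET (n_distinct = -0.05);
   ANALYZE t;

but as argued above, neither attacks the real problem.)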

Unless ...

As I understood Klaudie, the values are ids, and since ids have no inherent
meaning, the common values are probably scattered randomly.

But it might be possible to change that: group the ids by frequency. Ids
< 1E12 occur at most 10 times, ids >= 1E12 and < 2E12 occur at most 100
times, and so on.

This may mean that ids aren't stable over the long term - they may change as
their frequency changes. But if an id always changes by a multiple of
1E12, the last 12 decimal digits are stable.
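
A hedged sketch of what that looks like, with my_table and id as placeholder
names:

   -- high digits carry the frequency bucket, low 12 digits stay stable
   SELECT id / 1000000000000 AS freq_bucket,
          id % 1000000000000 AS stable_part
   FROM my_table;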

The advantage is that then the planner can use the histogram to get a
pretty good estimate of how frequent a value is.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Fwd: BUG #16513: Postgresql HA Cluster

2020-06-28 Thread Brajendra Pratap Singh
Hi Team,

Please help us with the query below.

Thanks

-- Forwarded message -
From: Daniel Gustafsson 
Date: Sat, 27 Jun, 2020, 3:16 AM
Subject: Re: BUG #16513: Postgresql HA Cluster
To: , PostgreSQL mailing lists <
pgsql-b...@lists.postgresql.org>


> On 26 Jun 2020, at 16:10, PG Bug reporting form  wrote:

> Is there any functionality present in PostgreSQL for HA availability where
> we can set up multiple nodes/instances in READ-WRITE mode for a single
> database, so that in case of one node/instance failure we can use the 2nd
> one without failure or in less time? This is just like the Oracle RAC
> concept.

This mailing list is for bug reports against the PostgreSQL server, which the
above isn't.  Please see the -general mailing list for general questions. More
information on how to subscribe can be found at:

https://www.postgresql.org/list/

cheers ./daniel


Postgresql HA Cluster

2020-06-28 Thread Brajendra Pratap Singh
Hi All,

Is there any functionality present in PostgreSQL for a High Availability
Cluster where we can set up multiple nodes/instances in READ-WRITE mode for a
single database, so that in case of one node/instance failure it will
automatically fail over the traffic to the 2nd node/instance (without
failure or in less time)? This is just like the Oracle RAC concept.

Here High Availability Cluster means there will be zero downtime in case of
any one node/instance failure.

Please help us understand this.

Thanks & Regards,
Brajendra


bdr_init_copy ends fine but doesn't copy the remote db

2020-06-28 Thread Emrah Tema
I am trying to set up multi-master replication using BDR and, because of the
DDL issues and errors, I decided to use the bdr_init_copy command instead of
bdr.bdr_group_join(). I have two servers with PostgreSQL services; one of
them (Master2) has an empty DB, the other (Master1) has one schema and one
table in its DB. When I run the bdr_init_copy command, everything goes
well and the last lines of the output are these:

.
Initializing BDR on the local node:
  bdrdemo: adding the database to BDR cluster ...All done

But when I check, the schema and the table on Master1 are not copied to
Master2. Master2's database is still empty. What could have gone wrong? The
bdr_init_copy command that I used is:

./bdr_init_copy -D /var/lib/pgsql/9.4-bdr/data -n node2 \
  -d 'host=10.m1.m1.m1 port=5598 dbname=bdrdemo' -U postgres \
  --local-dbname='host=10.m2.m2.m2 port=5598 dbname=bdrdemo'
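
A hedged side note for anyone hitting the same symptom: assuming the BDR 1.x
catalogs, the join state of both nodes can be inspected directly —
node_status should be 'r' (ready) for each entry.

   -- run on either node; lists every node known to the BDR group
   SELECT node_name, node_status FROM bdr.bdr_nodes;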