Re: Regarding db dump with Fc taking very long time to completion

2019-10-16 Thread Durgamahesh Manne
On Fri, Aug 30, 2019 at 4:12 PM Luca Ferrari  wrote:

> On Fri, Aug 30, 2019 at 11:51 AM Durgamahesh Manne
>  wrote:
> >  The logical dump of that table takes more than 7 hours to complete.
> >
> >  I need to reduce the dump time of that table, which is 88 GB in size.
>
> Good luck!
> I see two possible solutions to the problem:
> 1) use physical backups and switch to incremental ones (e.g., pgbackrest);
> 2) partition the table and back up single pieces, if possible
> (constraints?), and be aware it will become harder to maintain (added
> partitions, and so on).
>
> Are all of the 88 GB written during a bulk process? I guess not, so
> maybe with partitioning you can avoid locking the whole dataset and reduce
> contention (and thus time).
>
> Luca
>


Hi respected Postgres team,

> Are all of the 88 GB written during a bulk process?

No.
Earlier the table size was 88 GB; now the table size is about 148 GB.
Is there any way to reduce the dump time when I dump the table, which is
148 GB in size, without partitioning it?


Regards
Durgamahesh Manne


connection timeout with psycopg2

2019-10-16 Thread Vicente Juan Tomas Monserrat
Hi there,

I have been testing out the following architecture for PostgreSQL HA.

              +---------+
       +------+   VIP   +-------+
       |      +---------+       |
       |                        |
+------v-------+         +------v-------+
|  pgBouncer   |         |  pgBouncer   |
|      +       |         |      +       |
|  keepalived  |         |  keepalived  |
+------+-------+         +------+-------+
       |                        |
+------v-------+         +------v-------+
|              |         |              |
|   HAProxy    |         |   HAProxy    |
|              |         |              |
+------+-------+         +------+-------+
       |                        |
       +-----------+------------+
                   |
       +-----------+------------+
       |                        |
  +----v----+              +----v----+
  |         |              |         |
  |  PG01   |              |  PG02   |
  |         |              |         |
  |(patroni)|              |(patroni)|
  |         |              |         |
  +---------+              +---------+

I'm using the following Python script to check failover behaviour across
pgBouncer, HAProxy and Patroni (the PostgreSQL HA solution).

#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8

import time
from datetime import datetime

import psycopg2

# Defined but unused below; autocommit is set directly on the connection.
ISOLEVEL = psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT

user = 'postgres'
password = 'secretpassword'
host = 'localhost'
port = '6432'
database = 'test'
LIMIT_RETRIES = 10


class DB():
    """Thin psycopg2 wrapper that reconnects and retries on failures."""

    def __init__(self, user, password, host, port, database, reconnect):
        self.user = user
        self.password = password
        self.host = host
        self.port = port
        self.database = database
        self._connection = None
        self._cursor = None
        self.reconnect = reconnect
        self.init()

    def connect(self, retry_counter=0):
        if not self._connection:
            try:
                self._connection = psycopg2.connect(
                    user=self.user, password=self.password,
                    host=self.host, port=self.port,
                    database=self.database, connect_timeout=3)
                retry_counter = 0
                self._connection.autocommit = True
                return self._connection
            except psycopg2.OperationalError as error:
                if not self.reconnect or retry_counter >= LIMIT_RETRIES:
                    raise error
                else:
                    retry_counter += 1
                    print("got error {}. reconnecting {}".format(
                        str(error).strip(), retry_counter))
                    time.sleep(5)
                    self.connect(retry_counter)
            except (Exception, psycopg2.Error) as error:
                raise error

    def cursor(self):
        if not self._cursor or self._cursor.closed:
            if not self._connection:
                self.connect()
            self._cursor = self._connection.cursor()
        return self._cursor

    def execute(self, query, retry_counter=0):
        try:
            self._cursor.execute(query)
            retry_counter = 0
        except (psycopg2.DatabaseError, psycopg2.OperationalError) as error:
            if retry_counter >= LIMIT_RETRIES:
                raise error
            else:
                retry_counter += 1
                print("got error {}. retrying {}".format(
                    str(error).strip(), retry_counter))
                time.sleep(1)
                self.reset()
                self.execute(query, retry_counter)
        except (Exception, psycopg2.Error) as error:
            raise error

    def reset(self):
        # Tear down and rebuild the connection and cursor.
        self.close()
        self.connect()
        self.cursor()

    def close(self):
        if self._connection:
            if self._cursor:
                self._cursor.close()
            self._connection.close()
            print("PostgreSQL connection is closed")
        self._connection = None
        self._cursor = None

    def init(self):
        self.connect()
        self.cursor()


db = DB(user=user, password=password, host=host, port=port,
        database=database, reconnect=True)
db.execute("create table if not exists t1 (id integer);")
i = 0
while True:
    db.execute("insert into t1(id) values(1);")
    if i % 100 == 0:
        print("%s: %d" % (datetime.now(), i))
    i = i + 1

When I run this Python script against the pgBouncer VIP, it keeps
inserting data into the database. When I then stop the HAProxy service on the
node where the VIP lives, the connection hangs and never moves on. The VIP
fails over to the other node, but the client/app doesn't notice; it keeps
waiting for 5 minutes and only then continues. I've been looking for some
default value of 5 minutes, with no luck.

$ python insert.py
2019-10-15 10:01:51.817585: 0
2019-10-15 10:01:51.901091: 100
2019-10-15 10:01:52.031583: 200
2019-10-15 10:01:52.126565: 300
2019-10-15 10:01:52.216502: 400
2019-10-15 10:01:52.307157: 500
2019-10-15 10:01:52.400867: 600
2019-10-15 10:01:52.497239: 700
2019-10-15 10:01:52.655689: 800
2019-10-15 10:01:52
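
(A note on the 5-minute stall: the script above sets only connect_timeout,
which governs connection establishment, not an established connection whose
peer has silently disappeared. A sketch of one mitigation, untested against
this setup and with placeholder values, is to request TCP keepalives through
the libpq connection parameters:

import psycopg2

# Hypothetical tweak, not from the thread: enable TCP keepalives so a
# silently dead peer is detected in seconds instead of the OS defaults.
conn = psycopg2.connect(
    user='postgres', password='secretpassword',
    host='localhost', port='6432', database='test',
    connect_timeout=3,
    keepalives=1,            # turn TCP keepalives on
    keepalives_idle=5,       # seconds of idle before the first probe
    keepalives_interval=2,   # seconds between probes
    keepalives_count=3)      # unanswered probes before the kernel gives up

Keepalives only cover idle connections; a connection blocked mid-query is
governed by the kernel's TCP retransmission settings, which may be where the
roughly 5 minutes comes from.)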

Re: Regarding db dump with Fc taking very long time to completion

2019-10-16 Thread Luca Ferrari
On Wed, Oct 16, 2019 at 11:27 AM Durgamahesh Manne
 wrote:
>  Is there any way to reduce the dump time when I dump the table, which is
> 148 GB in size, without partitioning it?

I would at least try a parallel dump: that should be the -j option to pg_dump.
I'm not sure it provides effective parallelism against a single table, though.

Luca
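
(For reference, a sketch of the parallel form, untested here. The -j option
requires the directory output format, and the path and database name below are
placeholders:

pg_dump -Fd -j 4 -f /backup/mydb_dir mydb

Note that pg_dump parallelizes at table granularity, so a single 148 GB table
would still be handled by one worker.)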




Re: Regarding db dump with Fc taking very long time to completion

2019-10-16 Thread Durgamahesh Manne
On Wed, Oct 16, 2019 at 3:09 PM Luca Ferrari  wrote:

> On Wed, Oct 16, 2019 at 11:27 AM Durgamahesh Manne
>  wrote:
> >  Is there any way to reduce the dump time when I dump the table,
> which is 148 GB in size, without partitioning it?
>
> I would at least try a parallel dump: that should be the -j option to pg_dump.
> I'm not sure it provides effective parallelism against a single table, though.
>
> Luca
>

Hi

Parallel jobs with pg_dump can be effective for a database which contains a
lot of tables and their dependent objects.

Parallel jobs with pg_dump cannot be effective against a single table.


Regards
Durgamahesh Manne


Re: Regarding db dump with Fc taking very long time to completion

2019-10-16 Thread Durgamahesh Manne
On Wed, Oct 16, 2019 at 3:22 PM Durgamahesh Manne 
wrote:

>
>
> On Wed, Oct 16, 2019 at 3:09 PM Luca Ferrari  wrote:
>
>> On Wed, Oct 16, 2019 at 11:27 AM Durgamahesh Manne
>>  wrote:
>> >  Is there any way to reduce the dump time when I dump the table,
>> which is 148 GB in size, without partitioning it?
>>
>> I would at least try a parallel dump: that should be the -j option to pg_dump.
>> I'm not sure it provides effective parallelism against a single table, though.
>>
>> Luca
>>
>
> Hi
>
> parallel jobs with pg_dump can* be effective for a database which
> contains a lot of tables and their dependent objects
>
> parallel jobs with pg_dump cannot be effective against a single table
>
>
> Regards
> Durgamahesh Manne
>
>
>


Is there any configuration in postgresql.conf or any other configuration of postgres which will make this possible to listen on particular interface

2019-10-16 Thread M Tarkeshwar Rao
Hi all,



postmaste  2917  postgres  3u  IPv4  52341  0t0  TCP *:5432 (LISTEN)

postmaste  2917  postgres  4u  IPv6  52342  0t0  TCP *:5432 (LISTEN)



The above processes are bound to all interfaces of the machine. Is there any
setting in postgresql.conf, or any other Postgres configuration, that will
make it possible to listen on a particular interface?



Regards

Tarkeshwar



Postgres 9.6 active-passive HA cluster

2019-10-16 Thread Jairam Gauns
Hi guys,



We are in the process of testing a Postgres 9.6 active-passive HA cluster
using streaming replication on RHEL 7.6.

We have set up Postgres 9.6 primary/warm-standby streaming synchronous
replication.

We have set synchronous_standby_names = '*' and synchronous_commit = on.

To test, we have 5 SQL clients firing inserts simultaneously on the 12
tables created in a database on the primary; each client ends up firing 2000
requests.

When I run this experiment, I kill Postgres on the primary after, say, 8000
records have been inserted. Then I start both nodes standalone to compare
the data.



Once in a while I observe that a few tables on the killed primary contain one
record more than the corresponding tables on the standby.

Our expectation is that both servers have the same number of records, since
we set up synchronous replication. If that is not the case, and I now make
the failed primary the new standby, those records would be lost, as the new
primary (the old standby) never had them in the first place.
How do we handle this situation?



Could anybody please help provide some pointers?

Thanks
Jairam


Re: Is there any configuration in postgresql.conf or any other configuration of postgres which will make this possible to listen on particular interface

2019-10-16 Thread Francisco Olarte
On Wed, Oct 16, 2019 at 12:07 PM M Tarkeshwar Rao
 wrote:
> Above processes are bind on all the interfaces of a machine. Is there any 
> configuration in postgresql.conf or any other configuration of postgres which 
> will make this possible to listen on particular interface?

Listening on an INTERFACE is a tricky thing. But if you can manage
with listening on a particular ADDRESS and letting the OS pick the
interface for you, you can try listen_addresses (which is trivially
found on
https://www.postgresql.org/docs/12/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS
, although, being lazy, I just searched for "listen" in the manual index).
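
(For completeness, a sketch of the setting in postgresql.conf; the address is
a placeholder, and changing this setting requires a server restart:

listen_addresses = 'localhost, 192.0.2.10'   # instead of '*', which binds all interfaces

)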

Francisco Olarte.




Re: Text search lexer's handling of hyphens and negatives

2019-10-16 Thread Daniel Verite
raylu wrote:

> to_tsvector('simple', 'UVW-789-XYZ') is
> 'uvw':1 '-789':2 'xyz':3
> because -789 is a negative integer. If we turn the query '789-XYZ'
> into the tsquery as before, we get to_tsquery('simple', '789 <-> xyz')
> which doesn't match it.
> 
> Are we missing something here? Is there either a way to
> 1. generate tsvectors without this special (negative) integer behavior or

As an ad-hoc solution, you could add a dictionary that turns a negative
integer into its positive counterpart. There's a dictionary in contrib that
can be used as a starting point:
https://www.postgresql.org/docs/current/dict-int.html

It's a matter of ~10 lines of C code to add an "abs" parameter to
that dictionary that would, when true, produce "789" as a lexeme
when fed "-789" as input.
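
(For illustration, how such a patched dictionary might be wired up; "ABS" is
the proposed, hypothetical option, not something dict_int accepts today, and
the configuration name is a placeholder:

CREATE EXTENSION dict_int;

CREATE TEXT SEARCH DICTIONARY intdict_abs (
    TEMPLATE = intdict,
    ABS = true    -- hypothetical option: lexize '-789' as '789'
);

ALTER TEXT SEARCH CONFIGURATION my_search
    ALTER MAPPING FOR int, uint WITH intdict_abs;

)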


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite




A little confusion about JSON Path

2019-10-16 Thread Thomas Kellerer
Hello,

I don't understand why the following two JSON Path expressions aren't doing the 
same thing in Postgres 12:

with sample (data) as (
  values ('{"k1": {"list":[1,2,3]}}'::jsonb)
)
select data,
       jsonb_path_exists(data, '$.k1.list.type() ? (@ == "array")'),  -- returns true as expected
       jsonb_path_exists(data, '$.k1.list ? (@.type() == "array")')   -- returns false - not expected
from sample;


Apparently "@.type()" returns something different then "$.k1.list.type()"

But maybe I simply don't understand how the @ is supposed to work.
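
(A guess at what is going on, in case it helps: in lax mode, the default,
applying a filter to an array unwraps it, so inside the filter @ is each
element, whose type() is "number", not the array itself. Strict mode does not
unwrap, so this variant returns true:

with sample (data) as (
  values ('{"k1": {"list":[1,2,3]}}'::jsonb)
)
select jsonb_path_exists(data, 'strict $.k1.list ? (@.type() == "array")')
from sample;   -- returns true

)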

Regards
Thomas




Re: Regarding db dump with Fc taking very long time to completion

2019-10-16 Thread Imre Samu
Hi,

Maybe you can reuse these backup tricks.

"Speeding up dump/restore process"
https://www.depesz.com/2009/09/19/speeding-up-dumprestore-process/

for example:
"""
*Idea was: All these tables had primary key based on serial. We could
easily get min and max value of the primary key column, and then split it
into half-a-million-ids “partitions", then dump them separately using:*
*psql -qAt -c "COPY ( SELECT * FROM TABLE WHERE id BETWEEN x AND y) TO
STDOUT" | gzip -c - > TABLE.x.y.dump*
"""

best,
Imre



Durgamahesh Manne wrote (on Fri, 2019-08-30 at 11:51):

> Hi
> To the respected international PostgreSQL team:
>
> I am using PostgreSQL version 11.4.
> I have scheduled a logical dump job which runs once daily at the db level.
> There is one table with write-intensive activity every 40 seconds in the db.
> The size of the table is about 88 GB.
>  The logical dump of that table takes more than 7 hours to complete.
>
>  I need to reduce the dump time of that table, which is 88 GB in size.
>
>
> Regards
> Durgamahesh Manne
>
>
>
>


Analyze and vacuum

2019-10-16 Thread Sonam Sharma
How are autovacuum and vacuum different?
How are analyze and autoanalyze different?

We were running a load on a table which yesterday took 3 minutes and today
took 30 minutes. Autovacuum and autoanalyze are happening daily, but the
manual vacuum and analyze ran on the 26th.
Can someone please help with how these are different, and whether it would
help to run vacuum and analyze on the table?


Re: Analyze and vacuum

2019-10-16 Thread Adrian Klaver

On 10/16/19 5:32 AM, Sonam Sharma wrote:

How are autovacuum and vacuum different?
How are analyze and autoanalyze different?

We were running a load on a table which yesterday took 3 minutes and today
took 30 minutes. Autovacuum and autoanalyze are happening daily, but the
manual vacuum and analyze ran on the 26th.
Can someone please help with how these are different, and whether it would
help to run vacuum and analyze on the table?


The docs will tell you:

https://www.postgresql.org/docs/11/routine-vacuuming.html

https://www.postgresql.org/docs/11/sql-analyze.html

https://www.postgresql.org/docs/11/sql-vacuum.html
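
(The short practical answer, with a placeholder table name: autovacuum and
autoanalyze are background workers that run VACUUM and ANALYZE for you based
on activity thresholds, while the manual commands do the same work on demand,
for example:

VACUUM (VERBOSE, ANALYZE) mytable;

)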


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Text search lexer's handling of hyphens and negatives

2019-10-16 Thread Alan Hodgson
On Tue, 2019-10-15 at 20:34 -0700, raylu wrote:
> On Tue, Oct 15, 2019 at 3:35 PM Alan Hodgson wrote:
> > My company has found the pg_trgm extension to be more useful for
> > partial text searches than the full text functions. I don't know
> > specifically how it might help with your hyphens but it would be
> > worth testing. The docs actually suggest using them in conjunction
> > in some cases.
> 
> We actually do use pg_trgm already for the names/titles of things.
> Indexing the content with a trigram index and then doing
> LOWER(content) LIKE '%789-xyz%' would certainly work, but
> 1. we'd have to do a little bit of finagling if we wanted to match on
> word boundaries (don't match '6789-xyza' in the above example)
> 2. trigram indexes are pretty huge for long documents, which is why we
> currently only use them for names/titles
> We may give up and just use pg_trgm for contents if nothing else works
> out, but it feels like the text search lexer is _so_ close to what
> we want.

Maybe you could have a trigger pull out those specific hyphenated
references into a separate column when the document is added or updated,
and store/index those separately?


PostgreSQL memory usage

2019-10-16 Thread Alexander Pyhalov
Hello.

After looking at my DBMS server for some time, I've understood that I don't
understand what is going on...

The server has 48 GB RAM. shared_buffers is set to 12 GB, work_mem to 32 MB,
and pgbouncer in transaction mode is used as a connection pool (pool size 80)
in front of a PostgreSQL 10.5 server.

I see that at some point several PostgreSQL backends start consuming about 16
GB RAM. If we account for shared_buffers, that means 4 GB RAM of private
backend memory. How can we arrive at such numbers? I don't see any
long-running (or complex) queries (however, there could be long-running
transactions and queries against large partitioned tables). But how could
they consume 512 * work_mem of memory?
 

Best regards,
Alexander Pyhalov,
programmer, Telecommunications Infrastructure Department,
Directorate of Information and Communication Infrastructure, Southern Federal University





Re: Postgres 9.6 active-passive HA cluster

2019-10-16 Thread Michael Lewis
"pg_rewind is a tool for synchronizing a PostgreSQL cluster with another
copy of the same cluster, after the clusters' timelines have diverged. A
typical scenario is to bring an old master server back online after
failover as a standby that follows the new master."
-https://www.postgresql.org/docs/9.6/app-pgrewind.html
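
(An illustrative invocation, run on the stopped old primary; the data
directory and connection string below are placeholders:

pg_rewind --target-pgdata=/var/lib/pgsql/9.6/data \
          --source-server='host=pg02 port=5432 user=postgres dbname=postgres'

)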



RE: Postgres 10.7 Systemd Startup Issue

2019-10-16 Thread Lu, Dan
Greetings,

I have a follow-up question regarding PostgreSQL configure options.

Example:
./configure --prefix=/hostname/pg/PostgreSQL-11.5 --with-systemd --with-openssl

In the above example I configured PG with 2 options:
1. with-systemd
2. with-openssl

Is there a catalog view that I can query to see which options were used to
configure the PG instance?  We will be taking over administration of an
existing PG instance, and we need to know which configuration options were
used during the build.

Your feedback and help is much appreciated.

Dan

-Original Message-
From: Lu, Dan
Sent: Friday, June 07, 2019 3:22 PM
To: 'Tom Lane' 
Cc: 'Christopher Browne' ; 'Francisco Olarte' 
; Kelly, Kevin ; 
'pgsql-gene...@postgresql.org' 
Subject: RE: Postgres 10.7 Systemd Startup Issue

Hello All,

I ended up removing all configuration.  I re-extracted the binary tar files we
downloaded and re-configured with the option "--with-systemd", and now it is
working.

We really appreciate all your help!

Dan

-Original Message-
From: Lu, Dan
Sent: Friday, June 07, 2019 10:11 AM
To: 'Tom Lane' 
Cc: Christopher Browne ; Francisco Olarte 
; Kelly, Kevin ; 
pgsql-gene...@postgresql.org
Subject: RE: Postgres 10.7 Systemd Startup Issue

I apologize, it was a copy/paste error.  I did include the option
"--with-systemd".

This is our first install of PostgreSQL.  I am in the process of putting
standards in place for future deployments.  We don't have any large footprint
of PostgreSQL here; we run mostly Oracle, SQL Server and MySQL in production
so far.

Dan

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Friday, June 07, 2019 10:04 AM
To: Lu, Dan 
Cc: Christopher Browne ; Francisco Olarte 
; Kelly, Kevin ; 
pgsql-gene...@postgresql.org
Subject: Re: Postgres 10.7 Systemd Startup Issue

"Lu, Dan"  writes:
> Even with the added option "--with-systemd", it is still not working.

Hmmm 

> To outline what I did.

> 2)  Configure PostgreSQL to the new location:
>     mkdir /hostname/pg/PostgreSQL-10.7_2
>     ./configure --prefix=/hostname/pg/PostgreSQL-10.7_2

Is it just a copy-and-paste mistake that you don't actually have a 
--with-systemd option here?

Also, it seems fairly unusual for anyone to build a production installation 
with absolutely no configuration options --- it'd work, but you'd get a really 
plain-vanilla server.  You might consider reviewing

https://www.postgresql.org/docs/10/install-procedure.html

to see what other things might make sense to turn on.

regards, tom lane





RE: Postgres 10.7 Systemd Startup Issue

2019-10-16 Thread Lu, Dan
Thank you so much for your help!



-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Monday, October 14, 2019 2:19 PM
To: Lu, Dan 
Cc: Christopher Browne ; Francisco Olarte 
; pgsql-gene...@postgresql.org
Subject: Re: Postgres 10.7 Systemd Startup Issue

"Lu, Dan"  writes:
> Is there a catalog view that I can query to see what options were used
> to configure the PG instance?

No, but pg_config should tell you.

regards, tom lane
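
(A quick illustration of that; the output shown is not captured from a real
system, it just mirrors the configure line quoted upthread:

$ pg_config --configure
'--prefix=/hostname/pg/PostgreSQL-11.5' '--with-systemd' '--with-openssl'

)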







Text search lexer's handling of hyphens and negatives

2019-10-16 Thread Raymond Lu
We've been happily using pgsql to store user-generated documents for a
while now. We also wanted to be able to search the documents so we
tossed the document contents into a tsvector and did a pretty
straightforward contents @@ phraseto_tsquery('simple', 'the query').

Our users have a lot of things named like ABC-DEF-GHI so that sort of
hyphenated name appears in their documents fairly often.
to_tsvector('simple', 'ABC-DEF-GHI') @@ phraseto_tsquery('simple',
'ABC-DEF-GHI') works without issue.

Sometimes, these hyphenated names have numbers in them like
UVW-789-XYZ. Still no problem with to_tsvector/phraseto_tsquery.

Sometimes, users can only remember the last bit of the name. So they'd
like to find the document with ABC-DEF-GHI in it by searching for
'DEF-GHI'. Since to_tsvector('simple', 'ABC-DEF-GHI') is
'abc-def-ghi':1 'abc':2 'def':3 'ghi':4
we search for to_tsquery('simple', 'def <-> ghi') instead of using
phraseto_tsquery. This works, but you can probably see where this is
going.

to_tsvector('simple', 'UVW-789-XYZ') is
'uvw':1 '-789':2 'xyz':3
because -789 is a negative integer. If we turn the query '789-XYZ'
into the tsquery as before, we get to_tsquery('simple', '789 <-> xyz')
which doesn't match it.

Are we missing something here? Is there either a way to
1. generate tsvectors without this special (negative) integer behavior or
2. generate tsqueries in a more intelligent way?




CVE-2018-1058

2019-10-16 Thread Lizeth Solis Aramayo
Good afternoon,

I am working with PostgreSQL 9.6.15 and I need to restore into a 9.6.5
version. I got an error, and I found this page about installing a patch:

https://www.postgresql.org/about/news/1834/

CVE-2018-1058

But I don't know how to download the patch; please, can you help me?

Thank you a lot.

Lizeth Solis
DBA - ELFEC S.A.
Cochabamba - Bolivia.


Re: CVE-2018-1058

2019-10-16 Thread Adrian Klaver

On 10/14/19 3:27 PM, Lizeth Solis Aramayo wrote:

Good afternoon,

I am working with PostgreSQL 9.6.15 and I need to restore into a 9.6.5
version. I got an error, and I found this page about installing a patch


What commands did you use to dump the 9.6.15 version and restore to the
9.6.5 version?

Which versions of the software did you use to do the above?

What was the error?

What is the reason you can't upgrade the 9.6.5 to 9.6.15?




https://www.postgresql.org/about/news/1834/

CVE-2018-1058

But I don't know how to download the patch; please, can you help me?

Thank you a lot.

Lizeth Solis

DBA – ELFEC S.A.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: CVE-2018-1058

2019-10-16 Thread Rob Sargent



On 10/16/19 1:40 PM, Adrian Klaver wrote:

On 10/14/19 3:27 PM, Lizeth Solis Aramayo wrote:

Good afternoon,

I am working with PostgreSQL 9.6.15 and I need to restore into a 9.6.5
version. I got an error, and I found this page about installing a patch


What commands did you use to dump the 9.6.15 version and restore to the
9.6.5 version?

Which versions of the software did you use to do the above?

What was the error?

What is the reason you can't upgrade the 9.6.5 to 9.6.15?



Isn't OP asking to downgrade?




Re: CVE-2018-1058

2019-10-16 Thread Ron

On 10/16/19 2:40 PM, Adrian Klaver wrote:

On 10/14/19 3:27 PM, Lizeth Solis Aramayo wrote:

Good afternoon,

I am working with PostgreSQL 9.6.15 and I need to restore into a 9.6.5
version. I got an error, and I found this page about installing a patch


What commands did you use to dump the 9.6.15 version and restore to the
9.6.5 version?

Which versions of the software did you use to do the above?

What was the error?

What is the reason you can't upgrade the 9.6.5 to 9.6.15?


There are a thousand and one -- nay, a million and ten -- crazy reasons why 
software can't be upgraded. (Mostly due to "Process" in large 
organizations.)  It's best just to swallow "why can't you upgrade" and 
answer the question.






https://www.postgresql.org/about/news/1834/

CVE-2018-1058

But I don't know how to download the patch; please, can you help me?

Thank you a lot.

Lizeth Solis

DBA – ELFEC S.A.





--
Angular momentum makes the world go 'round.




Changing PK on replicated database

2019-10-16 Thread PegoraroF10
I have replication using Publication/Subscription, configured with REPLICA
IDENTITY DEFAULT, so it uses PK values to do the replication. Then I've
imported data into a new schema, and several of its tables have a record with
PK = 0. Replication works, but my application doesn't, because it needs to
see pk values > 0.

So I have to change those records with 0 as their pk to some other value;
what is the best way to do that?
If I just change the pk value on the master, how will the data of that record
be replicated?
That record will be sent to the replica as an update, but that PK doesn't
exist on the replica server, so ...

Or do I need to update them manually on the master and replicated servers?

I didn't find any info about this in the docs, which is why I'm posting about
it.



--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




Re: CVE-2018-1058

2019-10-16 Thread Neil


> On Oct 16, 2019, at 2:55 PM, Ron  wrote:
> 
> On 10/16/19 2:40 PM, Adrian Klaver wrote:
>> On 10/14/19 3:27 PM, Lizeth Solis Aramayo wrote:
>>> Good afternoon,
>>> 
>>> I am working with PostgreSQL 9.6.15 and I need to restore into a 9.6.5
>>> version. I got an error, and I found this page about installing a patch
>> 
>> What commands did you use to dump the 9.6.15 version and restore to the
>> 9.6.5 version?
>> 
>> Which versions of the software did you use to do the above?
>> 
>> What was the error?
>> 
>> What is the reason you can't upgrade the 9.6.5 to 9.6.15?
> 
> There are a thousand and one -- nay, a million and ten -- crazy reasons why 
> software can't be upgraded. (Mostly due to "Process" in large organizations.) 
>  It’s best just to swallow “why can't you upgrade" and answer the question.

Well, I don't know of any organization where applying a one-time patch is
safer, less bug-prone, and cheaper than doing a well-tested point upgrade of
Postgres.  So the question seems very relevant to me.

In addition, if the company is not going to keep up with the latest point
releases (meaning they are not applying security and bug fixes), then why
would they expect free support?  If they want to play with fire by applying
individual patches, then, from my standpoint, they are on their own.  The
decision not to do regular maintenance has consequences, and individual
patches are not guaranteed to be bug-free for the system. While the developers
try not to miss dependencies, the OP should understand that the Postgres build
farm will never have run a configuration with only their individual patch
applied against an older system.  Sounds really risky to me.

So the reason to ask the question is to make sure the OP understands the high
level of risk they are undertaking.








RE: CVE-2018-1058

2019-10-16 Thread Lizeth Solis Aramayo
What commands did you use to dump the 9.6.15 version and restore to the
9.6.5 version?

pg_dump -p 5433 -U postgres -Fc -d dbkerp -n param > param.dump
and
pg_restore -p 5432 -U postgres -d dbkerp param.dump


The server with pg_dump is Linux Red Hat 7.6.

The server with pg_restore is Linux Red Hat 6.5.


On both servers I have PostgreSQL 9.6, but pg_dump's is 9.6.15 and
pg_restore's is 9.6.5.


The pg_dump is correct, everything goes OK, but when I do the pg_restore I
get an error:  pg_restore: [archiver] unsupported version (1.13) in file header


I searched for solutions, and I found that I can apply a patch,
CVE-2018-1058, but I don't know how.
How to download and install it; I can't find documents about it.

The reason why I can't upgrade the 9.6.5 to 9.6.15? I don't know how.




-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Wednesday, October 16, 2019 3:41 PM
To: Lizeth Solis Aramayo; pgsql-gene...@postgresql.org
Subject: Re: CVE-2018-1058

On 10/14/19 3:27 PM, Lizeth Solis Aramayo wrote:
> Good afternoon,
>
> I am working with PostgreSQL 9.6.15 and I need to restore into a 9.6.5
> version. I got an error, and I found this page about installing a patch

What commands did you use to dump the 9.6.15 version and restore to the
9.6.5 version?

Which versions of the software did you use to do the above?

What was the error?

What is the reason you can't upgrade the 9.6.5 to 9.6.15?


>
> https://www.postgresql.org/about/news/1834/
>
> CVE-2018-1058
>
> But I don't know how to download the patch; please, can you help me?
>
> Thank you a lot.
>
> Lizeth Solis
>
> DBA - ELFEC S.A.
>

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: CVE-2018-1058

2019-10-16 Thread Adrian Klaver

On 10/16/19 12:51 PM, Rob Sargent wrote:


On 10/16/19 1:40 PM, Adrian Klaver wrote:

On 10/14/19 3:27 PM, Lizeth Solis Aramayo wrote:

Good afternoon,

I am working with PostgreSQL 9.6.15 and I need to restore into a 9.6.5
version. I got an error, and I found this page about installing a patch


What commands did you use to dump the 9.6.15 version and restore to the
9.6.5 version?

Which versions of the software did you use to do the above?

What was the error?

What is the reason you can't upgrade the 9.6.5 to 9.6.15?



Isn't OP asking to downgrade?



Not sure; could be, or the OP is restoring to another instance.


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: CVE-2018-1058

2019-10-16 Thread Adrian Klaver

On 10/16/19 12:55 PM, Ron wrote:

On 10/16/19 2:40 PM, Adrian Klaver wrote:

On 10/14/19 3:27 PM, Lizeth Solis Aramayo wrote:

Good afternoon,

I am working with PostgreSQL 9.6.15 and I need to restore into a 9.6.5
version. I got an error, and I found this page about installing a patch


What commands did you use to dump the 9.6.15 version and restore to the
9.6.5 version?

Which versions of the software did you use to do the above?

What was the error?

What is the reason you can't upgrade the 9.6.5 to 9.6.15?


There are a thousand and one -- nay, a million and ten -- crazy reasons 
why software can't be upgraded. (Mostly due to "Process" in large 
organizations.)  It's best just to swallow "why can't you upgrade" and 
answer the question.




I generally find it best not to assume. See OP's response for why I 
stick to that strategy.






https://www.postgresql.org/about/news/1834/

CVE-2018-1058

But I don’t k now how to download the patch,  please may you help me?

Thank you a lot.

Lizeth Solis

DBA – ELFEC S.A.








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: CVE-2018-1058

2019-10-16 Thread Adrian Klaver

On 10/16/19 1:05 PM, Lizeth Solis Aramayo wrote:

What commands did you use to dump the 9.6.15 version and restore to the
9.6.5 version?

pg_dump -p 5433 -U postgres -Fc -d dbkerp -n param > param.dump
and
pg_restore -p 5432 -U postgres -d dbkerp param.dump


The server with pg_dump is Linux Red Hat 7.6.

The server with pg_restore is Linux Red Hat 6.5.


On both servers I have PostgreSQL 9.6, but pg_dump's is 9.6.15 and
pg_restore's is 9.6.5.


So the RH 7.6 box has 9.6.15 and the RH 6.5 box has 9.6.5?




The pg_dump is correct, everything goes OK, but when I do the pg_restore I
get an error:  pg_restore: [archiver] unsupported version (1.13) in file header


Yeah, that is because you are using an older version of pg_restore to 
restore a file that was created by a newer version of pg_dump. You have 
three choices as I see it:


1) Try to dump the 9.6.15  database with the 9.6.5 pg_dump using -Fc

2) Dump the 9.6.15 database using the 9.6.15 pg_dump, but output a plain
text file (-Fp). To restore the file you will need to use psql, not
pg_restore (see the sketch after these options). Examples are at the bottom
of:

https://www.postgresql.org/docs/11/app-pgdump.html

3) Upgrade your 9.6.5 instance to 9.6.15. This is going to need more 
information though:


a) How are you installing Postgres?

b) How big is the database and is it in production?
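
The sketch promised under option 2, using the commands from upthread
(untested; the dump runs with the 9.6.15 binaries, and the restore goes
through psql against the 9.6.5 server):

pg_dump -p 5433 -U postgres -Fp -d dbkerp -n param > param.sql
psql -p 5432 -U postgres -d dbkerp -f param.sql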




I searched for solutions, and I found that I can apply a patch,
CVE-2018-1058, but I don't know how.
How to download and install it; I can't find documents about it.

The reason why I can't upgrade the 9.6.5 to 9.6.15? I don't know how.


 


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Wednesday, October 16, 2019 3:41 PM
To: Lizeth Solis Aramayo; pgsql-gene...@postgresql.org
Subject: Re: CVE-2018-1058

On 10/14/19 3:27 PM, Lizeth Solis Aramayo wrote:

Good afternoon,

> I am working with PostgreSQL 9.6.15 and I need to restore into a 9.6.5
> version. I got an error, and I found this page about installing a patch


What commands did you use to dump the 9.6.15 version and restore to the
9.6.5 version?

Which versions of the software did you use to do the above?

What was the error?

What is the reason you can't upgrade the 9.6.5 to 9.6.15?




https://www.postgresql.org/about/news/1834/

CVE-2018-1058

But I don't know how to download the patch; please, can you help me?

Thank you a lot.

Lizeth Solis

DBA - ELFEC S.A.



--
Adrian Klaver
adrian.kla...@aklaver.com




--
Adrian Klaver
adrian.kla...@aklaver.com


Re: Changing PK on replicated database

2019-10-16 Thread Adrian Klaver

On 10/16/19 1:47 PM, PegoraroF10 wrote:

I have replication using Publication/Subscription, configured with REPLICA
IDENTITY DEFAULT, so it uses PK values to do the replication. Then I've
imported data into a new schema, and several of its tables have a record with
PK = 0. Replication works, but my application doesn't, because it needs to
see pk values > 0.

So I have to change those records with 0 as their pk to some other value;
what is the best way to do that?
If I just change the pk value on the master, how will the data of that record
be replicated?
That record will be sent to the replica as an update, but that PK doesn't
exist on the replica server, so ...


I'm not following. You said above that the replication worked with the
records where PK = 0; it was your application that could not find them.
If that is true, then the records should be on the replica server,
correct? In that case it would just be an update.


Or do I need to update them manually on the master and replicated servers?

I didn't find any info about this in the docs, which is why I'm posting about
it.



--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: CVE-2018-1058

2019-10-16 Thread Adrian Klaver

On 10/16/19 1:05 PM, Lizeth Solis Aramayo wrote:

What commands did you use to dump the 9.6.15 version and restore to the
9.6.5 version?

pg_dump -p 5433 -U postgres -Fc -d dbkerp -n param > param.dump
and
pg_restore -p 5432 -U postgres -d dbkerp param.dump


Server with pg_dump is Linux red hat 7.6

Server with pg_restore is linux red hat 6.5


On both servers I have PostgreSQL 9.6, but pg_dump's is 9.6.15 and
pg_restore's is 9.6.5.


The pg_dump is correct, everything goes OK, but when I do the pg_restore I
get an error:  pg_restore: [archiver] unsupported version (1.13) in file header


Forgot a fourth option:

4) Use the 9.6.15 pg_restore to restore the 9.6.15 pg_dump to the 9.6.5 
database.
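
(That is, run the newer client binary against the older server over the
network; a sketch with a placeholder host name:

pg_restore -h rh65host -p 5432 -U postgres -d dbkerp param.dump

)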





I searched for solutions, and I found that I can apply a patch,
CVE-2018-1058, but I don't know how.
How to download and install it; I can't find documents about it.

The reason why I can't upgrade the 9.6.5 to 9.6.15? I don't know how.



--
Adrian Klaver
adrian.kla...@aklaver.com




Can functions containing a CTE be PARALLEL SAFE?

2019-10-16 Thread Erwin Brandstetter
[The manual for Postgres 12 says][1]:

> The following operations are always parallel restricted.

>  - Scans of common table expressions (CTEs).
>  - Scans of temporary tables.
>  - ...

Further down on the same [manual page:][2]

> [...] Similarly, functions must be marked PARALLEL RESTRICTED if they
access
> temporary tables, client connection state, cursors, prepared
> statements, or miscellaneous backend-local state which the system
> cannot synchronize across workers. For example, setseed and random are
> parallel restricted for this last reason.

No mention of CTEs.

I searched the list archives and found a statement from Thomas Munro
[here][3]:

> That means that these CTEs can only be scanned in the leader process.

Now I am unsure whether I can use `PARALLEL SAFE` for functions containing
a CTE (while fulfilling all other criteria).

Would the new inlining of CTEs in Postgres 12 have any role in this?

I posted a [similar question on dba.SE][4].
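
For concreteness, the sort of function in question would look something like
this (my own minimal example, not taken from the docs or the thread):

CREATE FUNCTION cte_demo(i int) RETURNS int
LANGUAGE sql PARALLEL SAFE AS $$
    WITH vals AS (SELECT i + 1 AS v)
    SELECT v FROM vals;
$$;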

Regards
Erwin Brandstetter

  [1]: https://www.postgresql.org/docs/12/parallel-safety.html
  [2]:
https://www.postgresql.org/docs/12/parallel-safety.html#PARALLEL-LABELING
  [3]:
https://www.postgresql.org/message-id/CAEepm=03s4yih+c0pctfki0o8zgq-p4vfcvek5vvxcr6axh...@mail.gmail.com
  [4]: https://dba.stackexchange.com/q/251274/3684




Re: PostgreSQL memory usage

2019-10-16 Thread Luca Ferrari
On Wed, Oct 16, 2019 at 6:30 PM Alexander Pyhalov  wrote:
> I see that at some point several PostgreSQL backends start consuming about 16
> GB RAM. If we account for shared_buffers, that means 4 GB RAM of private
> backend memory. How can we arrive at such numbers? I don't see any long-running
> (or complex) queries (however, there could be long-running transactions and
> queries against large partitioned tables). But how could they consume 512 *
> work_mem of memory?

I'm not sure they are consuming 512 times work_mem; I mean, there
is a whole lot of stuff a process can allocate, and it requires digging
into the process memory map (something I'm not good at!) to understand it.
For sure, a single process (backend) can consume work_mem once per
"complex node" in a query plan; that is, it can consume multiple times
the work_mem value if that much is available.

Luca
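
(As a concrete illustration, my own example with made-up table names, not
from the original report: a single query can hold several work_mem-sized
allocations at once.

-- each Hash and Sort node below may use up to work_mem independently
EXPLAIN SELECT a.x
FROM big_a AS a
JOIN big_b AS b USING (id)   -- Hash Join: one work_mem for the hash table
ORDER BY a.x;                -- Sort: a second work_mem

With a pool of 80 backends, a few such allocations per backend add up
quickly.)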




Has there been any discussion of custom dictionaries being defined in the database?

2019-10-16 Thread Morris de Oryx
I've been experimenting with the FTS features in Postgres over the past few
days. Mind blown.

We're deployed on RDS, which does not give you any file system access. I'd
love to be able to create a custom thesaurus dictionary for our situation,
which seems to be impossible in a setup like ours.

Has there been any discussion of making dictionary configuration files
accessible via a dictionary table instead of a physical, structured disk
file? Or, alternatively, something that could be accessed
remotely/externally as a URL or FDW?

Thanks for any comments.