Re: Max number of WAL files in pg_xlog directory for Postgres 9.2 version

2018-10-15 Thread Raghavendra Rao J S V
Hi John,

As you said, I have gone through the document. Which formula is correct for
PostgreSQL 9.2: (2 + checkpoint_completion_target) * checkpoint_segments + 1,
or checkpoint_segments + wal_keep_segments + 1 files?

In my environment we have kept *wal_keep_segments* and *checkpoint_segments*
as below. Will it cause any negative impact?

*checkpoint_segments = 128* # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min # range 30s-1h
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s # 0 disables
*#wal_keep_segments = 0* # in logfile segments, 16MB each; 0 disables

++

There will always be at least one WAL segment file, and will normally not
be more than (2 + checkpoint_completion_target) * checkpoint_segments + 1
or checkpoint_segments + wal_keep_segments + 1 files. Each segment file is
normally 16 MB (though this size can be altered when building the server).
You can use this to estimate space requirements for WAL. Ordinarily, when
old log segment files are no longer needed, they are recycled (renamed to
become the next segments in the numbered sequence). If, due to a short-term
peak of log output rate, there are more than 3 * checkpoint_segments + 1
segment files, the unneeded segment files will be deleted instead of
recycled until the system gets back under this limit.

++
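For reference, plugging in the settings above (checkpoint_segments = 128 with
the default checkpoint_completion_target = 0.5 and wal_keep_segments = 0), the
first formula gives roughly (2 + 0.5) * 128 + 1 = 321 segments, about 5 GB of
pg_xlog, and deletion instead of recycling starts above 3 * 128 + 1 = 385
segments. A minimal sketch of that arithmetic in SQL:

SELECT (2 + 0.5) * 128 + 1        AS normal_max_segments,
       ((2 + 0.5) * 128 + 1) * 16 AS normal_max_mb,
       (3 * 128 + 1) * 16         AS delete_threshold_mb;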



Regards,
Raghavendra Rao


On Wed, 5 Sep 2018 at 23:23, Johnes Castro  wrote:

> One WAL segment file by default occupies 16 MB.
> The parameter in version 9.2 that controls this is: wal_keep_segments
>
>
> By setting the parameter to 10, the maximum size of the pg_xlog directory
> will be 160 MB.
>
> Best Regards,
> Johnes Castro
>
>
> --
> *From:* Johnes Castro 
> *Sent:* Wednesday, 5 September 2018 15:48
> *To:* Raghavendra Rao J S V; pgsql-general@lists.postgresql.org
> *Subject:* RE: Max number of WAL files in pg_xlog directory for Postgres
> 9.2 version
>
> Hi,
>
> This page in the documentation can help you.
> https://www.postgresql.org/docs/9.2/static/wal-configuration.html
>
> Best Regards,
> Johnes Castro
>
> --
> *From:* Raghavendra Rao J S V 
> *Sent:* Wednesday, 5 September 2018 15:39
> *To:* pgsql-general@lists.postgresql.org
> *Subject:* Max number of WAL files in pg_xlog directory for Postgres 9.2
> version
>
> Hi All,
>
> We are using a Postgres 9.2 version database.
>
> Please let me know the maximum number of WAL files that can be in the
> pg_xlog directory.
>
> What is the formula? I am seeing different formulas. Could you please tell
> me which one decides the maximum number of WAL files in the pg_xlog
> directory for a Postgres 9.2 database?
>
>
> --
> Regards,
> Raghavendra Rao J S V
>
>

-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Is there any impact if "#wal_keep_segments = 0 " and "checkpoint_segments = 128" postgresql.conf file.

2018-10-15 Thread Raghavendra Rao J S V
Hi All,

Is there any impact if "#wal_keep_segments = 0" and "checkpoint_segments = 128"
are set in the postgresql.conf file? If yes, what is the impact?

*checkpoint_segments = 128* # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min # range 30s-1h
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s # 0 disables
*#wal_keep_segments = 0* # in logfile segments, 16MB each; 0 disables
wal_level = archive # minimal, archive, or hot_standby
*archive_mode = off* # allows archiving to be done

-- 
Regards,
Raghavendra Rao J S V


Re: Is there any impact if "#wal_keep_segments = 0 " and "checkpoint_segments = 128" postgresql.conf file.

2018-10-15 Thread Laurenz Albe
Raghavendra Rao J S V wrote:
> Is there any impact if  "#wal_keep_segments = 0 " and "checkpoint_segments = 
> 128"
> postgresql.conf file. If yes,what is the imapct?

Yes.
- You will have fewer checkpoints requested by data modification activity.
- Crash recovery might take longer.
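For what it's worth, a quick sketch to see how checkpoints are being triggered
(pg_stat_bgwriter exists in 9.2; checkpoints_req counts those requested by WAL
volume rather than by checkpoint_timeout):

SELECT checkpoints_timed, checkpoints_req
  FROM pg_stat_bgwriter;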

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Enabling autovacuum per table

2018-10-15 Thread Rijo Roy
Hello Experts, 
Is there any possibility for autovacuum to work on a user table if we set
ALTER TABLE sometable SET (autovacuum_enabled = true); even if the parameter
autovacuum = off in postgresql.conf?
I am using PostgreSQL 10 on Linux 6.9.
In my opinion, it won't work without setting autovacuum = on, except for the
template0 database. What is your opinion?
Thanks, Rijo Roy 

Sent from Yahoo Mail on Android

Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver 
wrote:

> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > please let me know the max length of varchar  & text in postgres
>
> https://www.postgresql.org/docs/10/static/datatype-character.html
> >
> >
> >
> > Regards
> >
> > Durgamahesh Manne
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com


 Hi

Thank you for this information

as per the postgresql documentation

 If character varying is used without length specifier, the type accepts
strings of any size

but varchar does not accept more than this


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne 
wrote:

>
>
> On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver 
> wrote:
>
>> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
>> > Hi
>> >
>> > please let me know the max length of varchar  & text in postgres
>>
>> https://www.postgresql.org/docs/10/static/datatype-character.html
>> >
>> >
>> >
>> > Regards
>> >
>> > Durgamahesh Manne
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>
>
>  Hi
>
> Thank you for this information
>
> as per the postgresql documentation   please ignore incomplete message i
> mailed to respected community members
>
>  If character varying is used without length specifier, the type accepts
> strings of any size
>
> but varchar does not accept more than this 10485760 value
>

   create table test(id serial primary key, str varchar(10485761));

 ERROR: length for type varchar cannot exceed 10485760


Re: Regarding varchar max length in postgres

2018-10-15 Thread Thomas Kellerer
Durgamahesh Manne wrote on 15.10.2018 at 11:05:
> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > please let me know the max length of varchar  & text in postgres
> 
> https://www.postgresql.org/docs/10/static/datatype-character.html
> 
>  Hi 
> 
> Thank you for this information
> 
> as per the postgresql documentation   please ignore incomplete message i 
> mailed to respected community members 
> 
>  If character varying is used without length specifier, the type
> accepts strings of any size
> 
> but varchar does not accept more than this 10485760 value
> 
> 
>    create table test(id serial primary key, str varchar(10485761));
> 
>      ERROR: length for type varchar cannot exceed 10485760 

Further down on that page the overall limit is documented: 

In any case, the longest possible character string that can be stored is 
about 1 GB

So the part that you quoted implicitly means "accepts strings of any size  - up 
to the maximum of 1GB"
Maybe it makes sense to make that clearer at that point.

Regards
Thomas





Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 2:35 PM Durgamahesh Manne 
wrote:

>
>
> On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>>
>>
>> On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver 
>> wrote:
>>
>>> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
>>> > Hi
>>> >
>>> > please let me know the max length of varchar  & text in postgres
>>>
>>> https://www.postgresql.org/docs/10/static/datatype-character.html
>>> >
>>> >
>>> >
>>> > Regards
>>> >
>>> > Durgamahesh Manne
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>
>>
>>  Hi
>>
>> Thank you for this information
>>
>
  as per the postgresql documentation

 If character varying is used without length specifier, the type accepts
strings of any size

but varchar does not accept more than this 

>
>> as per the postgresql documentation   please ignore incomplete above
>> message i mailed to respected community members
>>
>>  If character varying is used without length specifier, the type accepts
>> strings of any size
>>
>> but varchar does not accept more than this 10485760 value
>>
>
>create table test(id serial primary key, str varchar(10485761));
>
>  ERROR: length for type varchar cannot exceed 10485760
>


Re: Is there any impact if "#wal_keep_segments = 0 " and "checkpoint_segments = 128" postgresql.conf file.

2018-10-15 Thread Jehan-Guillaume (ioguix) de Rorthais
On Mon, 15 Oct 2018 09:46:47 +0200
Laurenz Albe  wrote:

> Raghavendra Rao J S V wrote:
> > Is there any impact if  "#wal_keep_segments = 0 " and "checkpoint_segments
> > = 128" postgresql.conf file. If yes,what is the imapct?  
> 
> Yes.
> - You will have fewer checkpoints requested by data modification activity.
> - Crash recovery might take longer.

And concerning wal_keep_segments, there is no impact on performance/recovery.
It is mostly related to how much standby replication lag is allowed, and other
solutions exist (slots, archiving).



Re: Is there any impact if "#wal_keep_segments = 0 " and "checkpoint_segments = 128" postgresql.conf file.

2018-10-15 Thread Raghavendra Rao J S V
Thanks a lot.

On Mon, 15 Oct 2018 at 14:43, Jehan-Guillaume (ioguix) de Rorthais <
iog...@free.fr> wrote:

> On Mon, 15 Oct 2018 09:46:47 +0200
> Laurenz Albe  wrote:
>
> > Raghavendra Rao J S V wrote:
> > > Is there any impact if  "#wal_keep_segments = 0 " and
> "checkpoint_segments
> > > = 128" postgresql.conf file. If yes,what is the imapct?
> >
> > Yes.
> > - You will have fewer checkpoints requested by data modification
> activity.
> > - Crash recovery might take longer.
>
> And considering wal_keep_segments, there is no impact on perf/recovery.
> This
> mostly related to the standby replication lag allowed and some other
> solutions
> exists (slots, archiving).
>


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 2:42 PM Durgamahesh Manne 
wrote:

>
>
> On Mon, Oct 15, 2018 at 2:35 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>>
>>
>> On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne <
>> maheshpostgr...@gmail.com> wrote:
>>
>>>
>>>
>>> On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver 
>>> wrote:
>>>
 On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
 > Hi
 >
 > please let me know the max length of varchar  & text in postgres

 https://www.postgresql.org/docs/10/static/datatype-character.html
 >
 >
 >
 > Regards
 >
 > Durgamahesh Manne


 --
 Adrian Klaver
 adrian.kla...@aklaver.com
>>>
>>>
>>>  Hi
>>>
>>> Thank you for this information
>>>
>>
>   as per the postgresql documentation
>
>  If character varying is used without length specifier, the type accepts
> strings of any size
>
> but varchar does not accept more than this 
>
>>
>>> as per the postgresql documentation   please ignore incomplete above
>>> message i mailed to respected community members
>>>
>>>  If character varying is used without length specifier, the type
>>> accepts strings of any size
>>>
>>> but varchar does not accept more than this 10485760 value
>>>
>>
>>create table test(id serial primary key, str varchar(10485761));
>>
>>  ERROR: length for type varchar cannot exceed 10485760
>>
>

As Thomas said:

Further down on that page the overall limit is documented:

In any case, the longest possible character string that can be stored
is about 1 GB

So the part that you quoted implicitly means "accepts strings of any size
- up to the maximum of 1GB"
Maybe it makes sense to make that clearer at that point.


Was there any specific reason that the maximum length for varchar is limited
to the value 10485760?

Why is the maximum length for varchar not unlimited like the text datatype?

character varying(n), varchar(n)    variable-length with limit
character(n), char(n)               fixed-length, blank padded
text                                variable unlimited length



Regards


Re: Enabling autovacuum per table

2018-10-15 Thread Arthur Zakirov

On 10/15/18 11:01 AM, Rijo Roy wrote:

Hello Experts,

Is there any possibility for autovacuum to work on a user table if we set
Alter table sometable set (autovacuum_enabled = true) ; even if the 
parameter autovacuum = off in Postgresql.conf


I am using Postgresql 10 on Linux 6.9.

According to me, it won't work without setting autovacuum = on except 
for Template0 database. What is your opinion?


I think you are right, autovacuum won't vacuum and analyze a table 
without setting autovacuum = on. But PostgreSQL might want to force 
vacuum if a table is at risk of wraparound.
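A minimal sketch of the per-table setting and how to verify it (the table
name "sometable" is taken from your question):

ALTER TABLE sometable SET (autovacuum_enabled = true);
SELECT relname, reloptions
  FROM pg_class
 WHERE relname = 'sometable';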


--
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company



Re: Regarding varchar max length in postgres

2018-10-15 Thread Laurenz Albe
Durgamahesh Manne wrote:
> was there any specific reason that you have given max length for varchar is 
> limited to 10485760 value?
> 
> why you have not given max length for varchar is unlimited like text datatype 
> ?
> 
> character varying(n), varchar(n)variable-length with limit 
> character(n), char(n)fixed-length, blank padded
> textvariable unlimited length

The data type "text" has the same size limit of 1GB.
"character varying" (without type modifier) and "text" are pretty much
identical.

Since data of these types are loaded into memory when you read them
from or write them to the database, you usually start having problems
long before you reach that limit.

If you want to store huge text files, either store them outside the
database or use Large Objects, which can be read and written in chunks.
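For example, a rough sketch of the server-side large object functions (the
path and the OID here are only illustrative):

SELECT lo_import('/tmp/big_document.txt');  -- returns the OID of the new large object
SELECT lo_get(16403, 0, 8192);              -- read the first 8 kB of that object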

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: Regarding varchar max length in postgres

2018-10-15 Thread Thomas Kellerer
Durgamahesh Manne wrote on 15.10.2018 at 11:18:
> was there any specific reason that you have given max length for varchar is 
> limited to 10485760 value?
> 
> why you have not given max length for varchar is unlimited like text datatype 
> ?
> 
> character varying(n), varchar(n)    variable-length with limit
> character(n), char(n)               fixed-length, blank padded
> text                                variable unlimited length

It "text" type is a "character string" just like all the other character types 
and thus is also limited to 1GB

"text", "varchar", "character varying" and "character" are all identical in how 
they are stored and processed. 

Thomas







Problem creating a database

2018-10-15 Thread Joshua White
Hi all,

I'm hoping someone can point me in the right direction. I've got a
PostgreSQL 10 server instance on CentOS 6, which I set up and manage. I
have full admin rights on this machine, so I can access logs, etc.

Recently I attempted to create a new database in this cluster. The command
succeeds, but when I try to connect to the new database, I get a "could not
open file" error:

psql: FATAL:  could not open file "base/618720/2610": No such file or
directory

It has been some time since I set up the database, so I don't know how long
ago this became an issue. I can't seem to find any other instances of this
problem online either. The logs are not helpful - even on the highest debug
setting, I only see the "connection authorized" then the fatal "could not
open file" error.

The data directory is on a separate disk array from the OS. I recently checked
it and there are no disk errors.

Any thoughts or ideas would be much appreciated.

Kind Regards,
Joshua


Re: Problem creating a database

2018-10-15 Thread Laurenz Albe
Joshua White wrote:
> I'm hoping someone can point me in the right direction. I've got a PostgreSQL 
> 10 server
> instance on CentOS 6, which I set up and manage. I have full admin rights on 
> this machine,
> so I can access logs, etc.
> 
> Recently I attempted to create a new database in this cluster. The command 
> succeeds,
> but when I try to connect to the new database, I get a "could not open file" 
> error:
> 
> psql: FATAL:  could not open file "base/618720/2610": No such file or 
> directory
> 
> It has been some time since I set up the database, so I don't know how long 
> ago this
> became an issue. I can't seem to find any other instances of this problem 
> online either.
> The logs are not helpful - even on the highest debug setting, I only see the
> "connection authorized" then the fatal "could not open file" error.
> 
> The data directory is on a separate disk array to the OS. Recently checked it 
> and
> there are no disk errors.
> 
> Any thoughts or ideas would be much appreciated.

Looks like the file backing the "pg_index" table is gone.

Can you check if the file exists in the data directory or not?
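A quick cross-check, run from a database you can still connect to (a sketch;
2610 is normally the relfilenode of pg_index):

SELECT pg_relation_filepath('pg_index');
-- shows the expected base/<database oid>/<relfilenode> layout
SELECT oid, datname FROM pg_database WHERE oid = 618720;
-- confirms which database the missing base/618720/2610 file belongs to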

It's hard to determine what happened, but something has been
eating your data.  As it is, your best option would be to
drop the database and recreate it from a backup.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 3:11 PM Thomas Kellerer  wrote:

> Durgamahesh Manne wrote on 15.10.2018 at 11:18:
> > was there any specific reason that you have given max length for varchar
> is limited to 10485760 value?
> >
> > why you have not given max length for varchar is unlimited like text
> datatype ?
> >
> > |character varying(/|n|/)|, |varchar(/|n|/)|variable-length with limit
> > |character(/|n|/)|, |char(/|n|/)|fixed-length, blank padded
> > |text|variable unlimited length
>
> It "text" type is a "character string" just like all the other character
> types and thus is also limited to 1GB
>
> "text", "varchar", "character varying" and "character" are all identical
> in how they are stored and processed.
>
> Thomas
>
>
> Thank you for this information
>
>
>


Re: Enabling autovacuum per table

2018-10-15 Thread Rijo Roy
Yeah, when age(relfrozenxid) goes beyond the limit, PostgreSQL will invoke an
autovacuum session to avoid a wraparound issue. But here that's not the case.

Sent from Yahoo Mail on Android 
 
On Mon, 15 Oct 2018 at 2:58 pm, Arthur Zakirov wrote:

On 10/15/18 11:01 AM, Rijo Roy wrote:
> Hello Experts,
> 
> Is there any possibility for autovacuum to work on a user table if we set
> Alter table sometable set (autovacuum_enabled = true) ; even if the 
> parameter autovacuum = off in Postgresql.conf
> 
> I am using Postgresql 10 on Linux 6.9.
> 
> According to me, it won't work without setting autovacuum = on except 
> for Template0 database. What is your opinion?

I think you are right, autovacuum won't vacuum and analyze a table 
without setting autovacuum = on. But PostgreSQL might want to force 
vacuum if a table is at risk of wraparound.

-- 
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company
  


Re: Setting up continuous archiving

2018-10-15 Thread Yuri Kanivetsky
Hi,

Thanks for your replies. By the way, I'm now running PostgreSQL 10 :)
My idea was to start with continuous archiving, then start a
log-shipping standby, then make it use streaming replication. Since
I'm kind of overwhelmed with information, options to be considered.

Anyways, I'm now trying to decide which one to use: pgBackRest or
Barman :) Barman's documentation is easier to follow, at least for me.
But it doesn't allow partial PITR. That is, you can't have full weekly
backups of the last couple of months (discrete) in addition to base
backups + WAL logs of the last few weeks (continuous).

pgBackRest doesn't seem to allow the latter: recovery to any point in
time, only to some discrete moments. Correct me if I'm wrong.

Then, with pgBackRest you can run standby that uses streaming
replication. Barman delegates that to repmgr. Which looks like a more
mature (?) solution. Probably easier to switch the site to the slave,
and back.

So, ideally I'd like to have standby that uses streaming replication,
plus full weekly backups of the last couple of months, plus ability to
restore to any point in time within the last few weeks period.

Is that doable with both of them (pgBackRest, Barman)? Does it make
sense to use repmgr with pgBackRest?

Regards,
Yuri Kanivetsky
On Wed, Sep 26, 2018 at 8:19 PM Pierre Timmermans  wrote:
>
> Hello
>
> What you are doing is called "log shipping", which means that when a wal 
> (write-ahead log) is filled in on the database server you ship it to a backup 
> server via rsync. It is fine but as you said the disadvantage is that the 
> file is shipped only when it is full, so you could have data loss (the last 
> wal not shipped)
>
> A more modern and easy way is to use streaming replication: in this case the 
> logs are streamed continuously to the standby or to the backup server (one 
> can use streaming replication without implementing a standby database). Look 
> at the doc on the next page than the one you referred to 
> (https://www.postgresql.org/docs/9.3/static/high-availability.html)
>
> There is a nice tool that does what you plan to do (rsync of archived files)
> but also the more modern way (streaming replication): it is called barman: 
> https://www.pgbarman.org/. You should probably use their tool but you can 
> also read the doc to get the concepts and some ideas
>
> Rgds, Pierre
>
>
> On Wednesday, September 26, 2018, 9:21:29 AM GMT+2, Yuri Kanivetsky 
>  wrote:
>
>
> Hi,
>
> I'm trying to compile a basic set of instruction needed to set up
> continuous archiving and to recover from a backup. I'm running
> PostgreSQL 9.3 on Debian Stretch system.
>
> I've got a database and a backup servers. The idea is to set up WAL
> archiving, and occasionally do full (base) backups. A base backup plus
> WAL segment files from that point onward must provide enough
> information to restore database to the latest state. Except for a
> window for data loss that is caused by the fact that WAL segment files
> aren't transferred momentarily, and more importantly that WAL segment
> files are only transferred upon being filled.
>
> ---
>
> Setting up continuous archiving
>
> * Set up WAL archiving
>
> * on backup server under postgres user
>
> * create /var/lib/postgresql/wal_archive dir
>
> $ mkdir /var/lib/postgresql/wal_archive
>
> * on database server under postgres user
>
> * generate ssh key
>
> $ ssh-keygen -f /var/lib/postgresql/.ssh/id_rsa  # providing the path to the
> # key file makes it not ask questions
>
> * add corresponding record to known_hosts file
>
> $ ssh-keyscan -t rsa BACKUP_SRV >> ~/.ssh/known_hosts
>
> * locally
>
> * authorize login from database to backup server
>
> $ ssh DATABASE_SRV cat ~/.ssh/id_rsa.pub | ssh BACKUP_SRV
> 'mkdir --mode 0700 .ssh; cat >> ~/.ssh/authorized_keys; chmod 0600
> .ssh/authorized_keys'
>
> * on database server under root
>
> * change postgresql.conf
>
> wal_level = archive
> archive_mode = on
> archive_command = 'rsync -a %p BACKUP_SRV:/var/lib/postgresql/wal_archive/%f'
>
> * restart PostgreSQL
>
> # systemctl restart postgresql
>
> * Make a base backup
>
> * on database server under root
>
> * add a line to postgresql.conf
>
> max_wal_senders = 1
>
> * add a line to pg_hba.conf
>
> host  replication  replication  BACKUP_SRV_IP/BACKUP_SRV_NETMASK  trust
>
> * restart PostgreSQL
>
> # systemctl restart postgresql
>
> * on database server under postgres user
>
> * create replication user
>
> CREATE USER replication WITH REPLICATION;
>
> or
>
> $ createuser --replication replication
>
> * on backup server under postgres user
>
> * make base backup
>
> $ pg_basebackup -h DATABASE_SRV -U replication -D
> /var/lib/postgresql/base_backups/$(

FATAL: terminating connection because protocol synchronization was lost

2018-10-15 Thread Shrikant Bhende
Hi Team,

I am trying to restore a backup into PostgreSQL 9.6, and during the restore
the COPY command fails with the error below:

2018-10-12 06:21:40 UTC [40407]: [28-1]
application=psql.bin,user=postgres,db=cloudLOG:  could not send data to
client: Broken pipe
2018-10-12 06:21:40 UTC [40407]: [29-1]
application=psql.bin,user=postgres,db=cloudSTATEMENT:  COPY
d20160905_x_20da999ef9434e60942859c3690e642f (did, location_id, h00,
h00_min, h00_max, h00_wgt, h01, h01_min, h01_max, h01_wgt, h02, h02_min,
h02_max, h02_wgt, h03, h03_min, h03_max, h03_wgt, h04, h04_min, h04_max,
h04_wgt, h05, h05_min, h05_max, h05_wgt, h06, h06_min, h06_max, h06_wgt,
h07, h07_min, h07_max, h07_wgt, h08, h08_min, h08_max, h08_wgt, h09,
h09_min, h09_max, h09_wgt, h10, h10_min, h10_max, h10_wgt, h11, h11_min,
h11_max, h11_wgt, h12, h12_min, h12_max, h12_wgt, h13, h13_min, h13_max,
h13_wgt, h14, h14_min, h14_max, h14_wgt, h15, h15_min, h15_max, h15_wgt,
h16, h16_min, h16_max, h16_wgt, h17, h17_min, h17_max, h17_wgt, h18,
h18_min, h18_max, h18_wgt, h19, h19_min, h19_max, h19_wgt, h20, h20_min,
h20_max, h20_wgt, h21, h21_min, h21_max, h21_wgt, h22, h22_min, h22_max,
h22_wgt, h23, h23_min, h23_max, h23_wgt) FROM stdin;
2018-10-12 06:21:40 UTC [40407]: [30-1]
application=psql.bin,user=postgres,db=cloudFATAL:  terminating connection
because protocol synchronization was lost

I am trying to migrate the DB from EDB PPAS to community PostgreSQL. We
don't have any Oracle or PPAS dependent objects, so I took the pg_dumpall
using the PostgreSQL binaries and am trying to restore the same.

PPAS :: 9.3
PostgreSQL : 9.6
O/S :: Centos 6.6

Thanks and Regards.

-- 
Shrikant Bhende
+91-9975543712


New tablespace: just an advice

2018-10-15 Thread Moreno Andreo

Hi everyone!

My space on my Debian 8 DB server is running a bit low (10% left of a 
2TB disk), so, since it's not possible to have a primary MBR disk with 
size > 2 TB, I decided to create another disk and map it on the server, 
creating another tablespace on it and moving databases across disks to 
balance disk usage.


After creating a test server from last night's production server image, I 
created a folder and assigned ownership to the postgres user

mkdir /newdisk/tbsp_new
chown -R postgres /newdisk/tbsp_new

then created new tablespace
create tablespace tb2 location '/newdisk/tbsp_new';

and moved a database
alter database db set tablespace tb2;

As you can see a flat, basic tablespace with no customizations.
I just ran a VACUUM FULL on the cluster before creating tablespace.
After the ALTER DATABASE command ran successfully, I checked disk space 
(df -h) and saw some more free space on the primary disk, and the same 
amount of space occupied on the new disk. Just what I needed.
I psql'd into the cluster with the user's username, connecting to the 
database just migrated, and was able to SELECT and UPDATE (I did not try 
INSERTing).


Now, 2 questions.
1. Is that all, or do I need to adjust something else about permissions, 
indexes, vacuuming, etc.?
ALTERing the database tablespace means copying its physical files to the new 
directory, but does that apply to all objects (indexes, triggers, etc.)?


2. What will happen to anyone who tries to access the database while it's 
being moved from one tablespace to another?
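A quick sanity check after the move would be something like this (a sketch
using the names from the example above):

select d.datname, t.spcname
  from pg_database d
  join pg_tablespace t on t.oid = d.dattablespace
 where d.datname = 'db';
-- connected to the moved database: relations that simply follow the
-- database's default tablespace have reltablespace = 0
select count(*) from pg_class where reltablespace <> 0;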


Thanks in advance,
Moreno.




Re: Enabling autovacuum per table

2018-10-15 Thread Adrian Klaver

On 10/15/18 1:01 AM, Rijo Roy wrote:

Hello Experts,

Is there any possibility for autovacuum to work on a user table if we set
Alter table sometable set (autovacuum_enabled = true) ; even if the 
parameter autovacuum = off in Postgresql.conf


I am using Postgresql 10 on Linux 6.9.

According to me, it won't work without setting autovacuum = on except 
for Template0 database. What is your opinion?


The docs opinion:

https://www.postgresql.org/docs/10/static/sql-createtable.html
"autovacuum_enabled, toast.autovacuum_enabled (boolean)

Enables or disables the autovacuum daemon for a particular table. 
If true, the autovacuum daemon will perform automatic VACUUM and/or 
ANALYZE operations on this table following the rules discussed in 
Section 24.1.6. If false, this table will not be autovacuumed, except to 
prevent transaction ID wraparound. See Section 24.1.5 for more about 
wraparound prevention. Note that the autovacuum daemon does not run at 
all (except to prevent transaction ID wraparound) if the autovacuum 
parameter is false; setting individual tables' storage parameters does 
not override that. Therefore there is seldom much point in explicitly 
setting this storage parameter to true, only to false.

"



Thanks,
Rijo Roy

Sent from Yahoo Mail on Android 




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: FATAL: terminating connection because protocol synchronization was lost

2018-10-15 Thread Adrian Klaver

On 10/15/18 5:36 AM, Shrikant Bhende wrote:

Hi Team,

I am trying to restore the backup into postgresql 9.6 and during restore 
copy command fails with below error,


2018-10-12 06:21:40 UTC [40407]: [28-1] 
application=psql.bin,user=postgres,db=cloudLOG:  could not send data to 
client: Broken pipe


I would say the above is the issue.

Where is the client running relative to the server?

2018-10-12 06:21:40 UTC [40407]: [29-1] 
application=psql.bin,user=postgres,db=cloudSTATEMENT:  COPY 
d20160905_x_20da999ef9434e60942859c3690e642f (did, location_id, h00, 
h00_min, h00_max, h00_wgt, h01, h01_min, h01_max, h01_wgt, h02, h02_min, 
h02_max, h02_wgt, h03, h03_min, h03_max, h03_wgt, h04, h04_min, h04_max, 
h04_wgt, h05, h05_min, h05_max, h05_wgt, h06, h06_min, h06_max, h06_wgt, 
h07, h07_min, h07_max, h07_wgt, h08, h08_min, h08_max, h08_wgt, h09, 
h09_min, h09_max, h09_wgt, h10, h10_min, h10_max, h10_wgt, h11, h11_min, 
h11_max, h11_wgt, h12, h12_min, h12_max, h12_wgt, h13, h13_min, h13_max, 
h13_wgt, h14, h14_min, h14_max, h14_wgt, h15, h15_min, h15_max, h15_wgt, 
h16, h16_min, h16_max, h16_wgt, h17, h17_min, h17_max, h17_wgt, h18, 
h18_min, h18_max, h18_wgt, h19, h19_min, h19_max, h19_wgt, h20, h20_min, 
h20_max, h20_wgt, h21, h21_min, h21_max, h21_wgt, h22, h22_min, h22_max, 
h22_wgt, h23, h23_min, h23_max, h23_wgt) FROM stdin;
2018-10-12 06:21:40 UTC [40407]: [30-1] 
application=psql.bin,user=postgres,db=cloudFATAL: terminating connection 
because protocol synchronization was lost


I am trying to migrate the DB from EDB PPAS to community PostgreSQL, we 
don't have any oracle or PPAS dependent objects so I took the pg_dumpall 
using the PostgreSQL binaries and trying to restore the same.


PPAS :: 9.3
PostgreSQL : 9.6
O/S :: Centos 6.6

Thanks and Regards.

--
Shrikant Bhende
+91-9975543712



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Setting up continuous archiving

2018-10-15 Thread talk to ben
Hi,

I am not sure what you call discrete / continuous.
pgbackrest allows backups of different kinds:  full, incremental and
differential.
It keeps the WALs necessary to recover from the oldest backup up to the
current time.
The retention is expressed in number of full backups. You can also specify
a number of differential backups.
You have the choice to keep the wal necessary to restore to the end of the
backup in the backup directory itself in addition to the archive directory.
If you use this method (parameter archive-copy) the backup is "standalone"
and you can copy it out of the pgbackrest backup repository and keep it
forever.

On Mon, 15 Oct 2018 at 12:31, Yuri Kanivetsky wrote:

> pgBackRest doesn't seem to allow the latter: recovery to any point in
> time, only to some discrete moments. Correct me if I'm wrong.
>

Are you talking about PITR ?
https://pgbackrest.org/user-guide.html#pitr

Is that doable with both of them (pgBackRest, Barman)? Does it make
> sense to use repmgr with pgBackRest?
>

It's doable, but repmgr and Barman are supposed to work together more
seamlessly since they're both products of 2ndQuadrant.
But does it make sense to use repmgr?

My opinion: I have used pgBackRest for three years now and am very happy with
it. My choice was made based on personal preference, features (at the time
there were more discrepancies) and I preferred the overall design.
I use Corosync & Pacemaker with PAF for HA, so I never had to use repmgr.

Benoit.


Re: Regarding varchar max length in postgres

2018-10-15 Thread Tom Lane
Durgamahesh Manne  writes:
>>> If character varying is used without length specifier, the type
>>> accepts strings of any size
>>> but varchar does not accept more than this 10485760 value

You're confusing the size of string that can be stored with the
largest value accepted for "n" in "varchar(n)".  This is documented,
in the same place that people have been pointing you to:

In any case, the longest possible character string that can be stored
is about 1 GB. (The maximum value that will be allowed for n in the
data type declaration is less than that. It wouldn't be useful to
change this because with multibyte character encodings the number of
characters and bytes can be quite different. If you desire to store
long strings with no specific upper limit, use text or character
varying without a length specifier, rather than making up an arbitrary
length limit.)

As you found out, the limit for "n" is ~ 10 million.

In principle, we could have allowed it to be as much as 1Gb divided by
the maximum character length of the database's encoding, but it did
not seem like a great idea for the limit to be encoding-dependent.

As the last sentence in the doc paragraph points out, the preferred
thing to do if you just want to allow very long strings is to leave
off "(n)" altogether.

The subtext here, which maybe we ought to state in a more in-your-face
way, is that if you use char(N) or varchar(N) without a concrete
application-driven reason why N has to be that particular value,
no more or less, then You're Doing It Wrong.  Artificially-chosen
column width limits are a bad idea left over from the days of
punched cards.  The reason the limit on N is much smaller than it
could theoretically be is that column declarations with very large
N are, without exception, violations of this principle.
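Following that advice, a minimal sketch (a hypothetical table) simply leaves
the length off:

CREATE TABLE documents (
    id   serial PRIMARY KEY,
    body text  -- or "body varchar" with no (n); either stores strings up to about 1 GB
);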

regards, tom lane



Re: FATAL: terminating connection because protocol synchronization was lost

2018-10-15 Thread Tom Lane
Shrikant Bhende  writes:
> I am trying to restore the backup into postgresql 9.6 and during restore
> copy command fails with below error,

> 2018-10-12 06:21:40 UTC [40407]: [28-1]
> application=psql.bin,user=postgres,db=cloudLOG:  could not send data to
> client: Broken pipe

This says the client disconnected first.  What does the problem look
like from the client side?

regards, tom lane



Re: FATAL: terminating connection because protocol synchronization was lost

2018-10-15 Thread Shrikant Bhende
Hi Adrian,

There is no explicit client its just simple restore using psql with all
default settings required.

On Mon, Oct 15, 2018 at 6:50 PM Adrian Klaver 
wrote:

> On 10/15/18 5:36 AM, Shrikant Bhende wrote:
> > Hi Team,
> >
> > I am trying to restore the backup into postgresql 9.6 and during restore
> > copy command fails with below error,
> >
> > 2018-10-12 06:21:40 UTC [40407]: [28-1]
> > application=psql.bin,user=postgres,db=cloudLOG:  could not send data to
> > client: Broken pipe
>
> I would say the above is the issue.
>
> Where is the client running relative to the server?
>
> > 2018-10-12 06:21:40 UTC [40407]: [29-1]
> > application=psql.bin,user=postgres,db=cloudSTATEMENT:  COPY
> > d20160905_x_20da999ef9434e60942859c3690e642f (did, location_id, h00,
> > h00_min, h00_max, h00_wgt, h01, h01_min, h01_max, h01_wgt, h02, h02_min,
> > h02_max, h02_wgt, h03, h03_min, h03_max, h03_wgt, h04, h04_min, h04_max,
> > h04_wgt, h05, h05_min, h05_max, h05_wgt, h06, h06_min, h06_max, h06_wgt,
> > h07, h07_min, h07_max, h07_wgt, h08, h08_min, h08_max, h08_wgt, h09,
> > h09_min, h09_max, h09_wgt, h10, h10_min, h10_max, h10_wgt, h11, h11_min,
> > h11_max, h11_wgt, h12, h12_min, h12_max, h12_wgt, h13, h13_min, h13_max,
> > h13_wgt, h14, h14_min, h14_max, h14_wgt, h15, h15_min, h15_max, h15_wgt,
> > h16, h16_min, h16_max, h16_wgt, h17, h17_min, h17_max, h17_wgt, h18,
> > h18_min, h18_max, h18_wgt, h19, h19_min, h19_max, h19_wgt, h20, h20_min,
> > h20_max, h20_wgt, h21, h21_min, h21_max, h21_wgt, h22, h22_min, h22_max,
> > h22_wgt, h23, h23_min, h23_max, h23_wgt) FROM stdin;
> > 2018-10-12 06:21:40 UTC [40407]: [30-1]
> > application=psql.bin,user=postgres,db=cloudFATAL: terminating connection
> > because protocol synchronization was lost
> >
> > I am trying to migrate the DB from EDB PPAS to community PostgreSQL, we
> > don't have any oracle or PPAS dependent objects so I took the pg_dumpall
> > using the PostgreSQL binaries and trying to restore the same.
> >
> > PPAS :: 9.3
> > PostgreSQL : 9.6
> > O/S :: Centos 6.6
> >
> > Thanks and Regards.
> >
> > --
> > Shrikant Bhende
> > +91-9975543712
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


-- 
Shrikant Bhende
+91-9975543712


Re: FATAL: terminating connection because protocol synchronization was lost

2018-10-15 Thread Adrian Klaver

On 10/15/18 8:04 AM, Shrikant Bhende wrote:

Hi Adrian,

There is no explicit client; it's just a simple restore using psql with all 
default settings.


psql is the client. Where is it running relative to the server locally 
or remote?




On Mon, Oct 15, 2018 at 6:50 PM Adrian Klaver wrote:


On 10/15/18 5:36 AM, Shrikant Bhende wrote:
 > Hi Team,
 >
 > I am trying to restore the backup into postgresql 9.6 and during
restore
 > copy command fails with below error,
 >
 > 2018-10-12 06:21:40 UTC [40407]: [28-1]
 > application=psql.bin,user=postgres,db=cloudLOG:  could not send
data to
 > client: Broken pipe

I would say the above is the issue.

Where is the client running relative to the server?

 > 2018-10-12 06:21:40 UTC [40407]: [29-1]
 > application=psql.bin,user=postgres,db=cloudSTATEMENT:  COPY
 > d20160905_x_20da999ef9434e60942859c3690e642f (did, location_id, h00,
 > h00_min, h00_max, h00_wgt, h01, h01_min, h01_max, h01_wgt, h02,
h02_min,
 > h02_max, h02_wgt, h03, h03_min, h03_max, h03_wgt, h04, h04_min,
h04_max,
 > h04_wgt, h05, h05_min, h05_max, h05_wgt, h06, h06_min, h06_max,
h06_wgt,
 > h07, h07_min, h07_max, h07_wgt, h08, h08_min, h08_max, h08_wgt, h09,
 > h09_min, h09_max, h09_wgt, h10, h10_min, h10_max, h10_wgt, h11,
h11_min,
 > h11_max, h11_wgt, h12, h12_min, h12_max, h12_wgt, h13, h13_min,
h13_max,
 > h13_wgt, h14, h14_min, h14_max, h14_wgt, h15, h15_min, h15_max,
h15_wgt,
 > h16, h16_min, h16_max, h16_wgt, h17, h17_min, h17_max, h17_wgt, h18,
 > h18_min, h18_max, h18_wgt, h19, h19_min, h19_max, h19_wgt, h20,
h20_min,
 > h20_max, h20_wgt, h21, h21_min, h21_max, h21_wgt, h22, h22_min,
h22_max,
 > h22_wgt, h23, h23_min, h23_max, h23_wgt) FROM stdin;
 > 2018-10-12 06:21:40 UTC [40407]: [30-1]
 > application=psql.bin,user=postgres,db=cloudFATAL: terminating
connection
 > because protocol synchronization was lost
 >
 > I am trying to migrate the DB from EDB PPAS to community
PostgreSQL, we
 > don't have any oracle or PPAS dependent objects so I took the
pg_dumpall
 > using the PostgreSQL binaries and trying to restore the same.
 >
 > PPAS :: 9.3
 > PostgreSQL : 9.6
 > O/S :: Centos 6.6
 >
 > Thanks and Regards.
 >
 > --
 > Shrikant Bhende
 > +91-9975543712


-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Shrikant Bhende
+91-9975543712



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: FATAL: terminating connection because protocol synchronization was lost

2018-10-15 Thread Shrikant Bhende
Hi Adrian,
It's running on the local machine itself. I googled around, but most of the
blogs say that the older version is no longer supported. Also, I took the
pg_dumpall from the EDB PPAS cluster using PostgreSQL binaries, in case that
might have caused anything.

Thanks.

On Mon, Oct 15, 2018 at 8:36 PM Adrian Klaver 
wrote:

> On 10/15/18 8:04 AM, Shrikant Bhende wrote:
> > Hi Adrian,
> >
> > There is no explicit client its just simple restore using psql with all
> > default settings required.
>
> psql is the client. Where is it running relative to the server locally
> or remote?
>
> >
> > On Mon, Oct 15, 2018 at 6:50 PM Adrian Klaver  > > wrote:
> >
> > On 10/15/18 5:36 AM, Shrikant Bhende wrote:
> >  > Hi Team,
> >  >
> >  > I am trying to restore the backup into postgresql 9.6 and during
> > restore
> >  > copy command fails with below error,
> >  >
> >  > 2018-10-12 06:21:40 UTC [40407]: [28-1]
> >  > application=psql.bin,user=postgres,db=cloudLOG:  could not send
> > data to
> >  > client: Broken pipe
> >
> > I would say the above is the issue.
> >
> > Where is the client running relative to the server?
> >
> >  > 2018-10-12 06:21:40 UTC [40407]: [29-1]
> >  > application=psql.bin,user=postgres,db=cloudSTATEMENT:  COPY
> >  > d20160905_x_20da999ef9434e60942859c3690e642f (did, location_id,
> h00,
> >  > h00_min, h00_max, h00_wgt, h01, h01_min, h01_max, h01_wgt, h02,
> > h02_min,
> >  > h02_max, h02_wgt, h03, h03_min, h03_max, h03_wgt, h04, h04_min,
> > h04_max,
> >  > h04_wgt, h05, h05_min, h05_max, h05_wgt, h06, h06_min, h06_max,
> > h06_wgt,
> >  > h07, h07_min, h07_max, h07_wgt, h08, h08_min, h08_max, h08_wgt,
> h09,
> >  > h09_min, h09_max, h09_wgt, h10, h10_min, h10_max, h10_wgt, h11,
> > h11_min,
> >  > h11_max, h11_wgt, h12, h12_min, h12_max, h12_wgt, h13, h13_min,
> > h13_max,
> >  > h13_wgt, h14, h14_min, h14_max, h14_wgt, h15, h15_min, h15_max,
> > h15_wgt,
> >  > h16, h16_min, h16_max, h16_wgt, h17, h17_min, h17_max, h17_wgt,
> h18,
> >  > h18_min, h18_max, h18_wgt, h19, h19_min, h19_max, h19_wgt, h20,
> > h20_min,
> >  > h20_max, h20_wgt, h21, h21_min, h21_max, h21_wgt, h22, h22_min,
> > h22_max,
> >  > h22_wgt, h23, h23_min, h23_max, h23_wgt) FROM stdin;
> >  > 2018-10-12 06:21:40 UTC [40407]: [30-1]
> >  > application=psql.bin,user=postgres,db=cloudFATAL: terminating
> > connection
> >  > because protocol synchronization was lost
> >  >
> >  > I am trying to migrate the DB from EDB PPAS to community
> > PostgreSQL, we
> >  > don't have any oracle or PPAS dependent objects so I took the
> > pg_dumpall
> >  > using the PostgreSQL binaries and trying to restore the same.
> >  >
> >  > PPAS :: 9.3
> >  > PostgreSQL : 9.6
> >  > O/S :: Centos 6.6
> >  >
> >  > Thanks and Regards.
> >  >
> >  > --
> >  > Shrikant Bhende
> >  > +91-9975543712
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
> >
> >
> > --
> > Shrikant Bhende
> > +91-9975543712
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


-- 
Shrikant Bhende
+91-9975543712


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 7:54 PM Tom Lane  wrote:

> Durgamahesh Manne  writes:
> >>> If character varying is used without length specifier, the type
> >>> accepts strings of any size
> >>> but varchar does not accept more than this 10485760 value
>
> You're confusing the size of string that can be stored with the
> largest value accepted for "n" in "varchar(n)".  This is documented,
> in the same place that people have been pointing you to:
>
> In any case, the longest possible character string that can be stored
> is about 1 GB. (The maximum value that will be allowed for n in the
> data type declaration is less than that. It wouldn't be useful to
> change this because with multibyte character encodings the number of
> characters and bytes can be quite different. If you desire to store
> long strings with no specific upper limit, use text or character
> varying without a length specifier, rather than making up an arbitrary
> length limit.)
>
> As you found out, the limit for "n" is ~ 10 million.
>
> In principle, we could have allowed it to be as much as 1Gb divided by
> the maximum character length of the database's encoding, but it did
> not seem like a great idea for the limit to be encoding-dependent.
>
> As the last sentence in the doc paragraph points out, the preferred
> thing to do if you just want to allow very long strings is to leave
> off "(n)" altogether.
>
> The subtext here, which maybe we ought to state in a more in-your-face
> way, is that if you use char(N) or varchar(N) without a concrete
> application-driven reason why N has to be that particular value,
> no more or less, then You're Doing It Wrong.  Artificially-chosen
> column width limits are a bad idea left over from the days of
> punched cards.  The reason the limit on N is much smaller than it
> could theoretically be is that column declarations with very large
> N are, without exception, violations of this principle.
>
> regards, tom lane
>



Hi sir

>>> If character varying is used without length specifier, the datatype
>>> accepts strings of any size, up to a maximum of 1GB
as I found this info in the PGDG documentation.

I have not used this maximum length value of 10485760 for varchar in any
table of the db, and I am not confused about the maximum length of the
string for varchar being up to 1GB.

I have used the varchar column datatype without any limit.

I checked with more than the above value by creating a test table,
with create table test(id serial primary key, str varchar(10485761)) as an
example:

ERROR: length for type varchar cannot exceed 10485760


text                                variable unlimited length
character varying(n), varchar(n)    variable-length with limit

As documented, text is of unlimited length and varchar variable length is
with a 1GB limit.

So I need an unlimited-length data type for the required column of the table
for storing large values.

Is there any issue with using the unlimited-length datatype text for the
required column of the table instead of using varchar?



Regards

Durgamahesh Manne


Re: Setting up continuous archiving

2018-10-15 Thread Yuri Kanivetsky
> I am not sure what you call discrete / continuous.

>> pgBackRest doesn't seem to allow the latter: recovery to any point in
>> time, only to some discrete moments. Correct me if I'm wrong.
>
>
> Are you talking about PITR ?

Yes. I had the impression, that with pgBackRest you do backups
occasionally, and as a result have a fixed number of states you can
restore to. But it appears they both keep the WAL files. So you can
restore to any point in time.

By the way, do/can they both use streaming to receive WAL records? Or
streaming is only for standby servers. For backups you have only
file-based log shipping?

Then, I suppose they both don't support partial PITR
(http://docs.pgbarman.org/release/2.4/#scope), where there are
standalone backups that extends to points in time for which there are
no WAL files. I'm not sure if this matters, but I assume that it might
be effective in terms of disk space.

Like, base backups + WAL files covering the last month, and a couple
of standalone backups for a couple of months before that. Compared to
base backups + WAL files covering the same period of time.

> But does it make sense to use repmgr ?

By that you mean, why use repmgr, that targets specifically PostgreSQL
in place of Pacemaker + Corosync which are more general pieces of
software?

> I use corosync & pacemaker with PAF for HA so I never had to use repmgr.

I'd like to be able to handle db failure as fast as possible. Ideally,
automatically. Which probably means either repmgr, or corosync +
pacemaker + PAF. Is that what you mean by HA here? Or at least, have a
running instance I can switch to manually. Which means, for example,
pgBackRest's streaming replication.

Regards,
Yuri Kanivetsky



Re: Regarding varchar max length in postgres

2018-10-15 Thread David G. Johnston
On Mon, Oct 15, 2018 at 8:24 AM Durgamahesh Manne 
wrote:

> So i need unlimited length data type for required column of the table for
> storing the large values
> is there any issue to use unlimited length datatype  text  for the
> required column of the table instead of using varchar ?
>

Between the two you should use the "text" data type for those columns.  You
will need to describe your use case in more detail if you want input as to
whether you need to use the large object API instead.

The database cannot store an unlimited amount of data in a single
row+column (cell) - you will still encounter a physical limit to the number
of bytes able to be stored in a single cell when using text.

David J.


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 9:07 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Oct 15, 2018 at 8:24 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> So i need unlimited length data type for required column of the table for
>> storing the large values
>> is there any issue to use unlimited length datatype  text  for the
>> required column of the table instead of using varchar ?
>>
>
> Between the two you should use the "text" data type for those columns.
> You will need to describe your use case in more detail if you want input as
> to whether you need to use the large object API instead.
>
> The database cannot store an unlimited amount of data in a single
> row+column (cell) - you will still encounter a physical limit to the number
> of bytes able to be stored in a single cell when using text.
>
> David J.
>


Hi sir

I need to store larger values in the required column of the table, so I will
use text rather than varchar.

You said that a physical limit applies to the values stored in a column of
the table.

text    variable unlimited length


Thank you for this valuable information.

PostgreSQL is always the no. 1 world-leading open source RDBMS.



I request all community members to provide built-in BDR v3 replication for
the public, as multimaster replication is a high priority compared with
other DBMSs.


Filtering before join with date_trunc()

2018-10-15 Thread Phil Endecott
Dear Experts,

I have a few tables with "raw" timestamps like this:

+---+--+
| time  | pressure |
+---+--+
| 2018-09-14 00:00:07.148378+00 |  1007.52 |
| 2018-09-14 00:10:07.147506+00 |  1007.43 |
| 2018-09-14 00:20:07.147533+00 |  1007.28 |
+---+--+

For each of these tables I have a view which rounds the timestamp 
to the nearest minute, and ensures there is only one row per minute:

 SELECT date_trunc('minute'::text, tbl."time") AS "time",
max(tbl.pressure) AS pressure
   FROM tbl
  GROUP BY (date_trunc('minute'::text, tbl."time"))
  ORDER BY (date_trunc('minute'::text, tbl."time"));

I then join these tables on the rounded time:

 SELECT COALESCE(rain."time", pressures."time", temperatures."time") AS "time",
rain.rain,
pressures.pressure,
temperatures.temperature
   FROM rain
 FULL JOIN pressures USING ("time")
 FULL JOIN temperatures USING ("time");

++--+--+-+
|  time  | rain | pressure | temperature |
++--+--+-+
| 2018-09-14 00:00:00+00 |0 |  1007.52 |  11.349 |
| 2018-09-14 00:10:00+00 |0 |  1007.43 | 11.2317 |
| 2018-09-14 00:20:00+00 |0 |  1007.28 | 11.2317 |
++--+--+-+

The COALESCE for time and the full joins are needed because some 
columns may be missing for some minutes.

Now I'd like to find the values for a particular short time period:

SELECT * FROM readings
WHERE "time" BETWEEN '2018-10-01T00:00:00' AND '2018-10-01T24:00:00'

This works, but it is inefficient; it seems to create all the rounded 
data, do the join on all of it, and then filter on the time period.  
Ideally it would filter the raw data, and then need to round and join 
far fewer rows.

It would not be difficult for me to round the timestamps when inserting 
the data, and also ensure that there is only one row for each minute. 
But I've done some experiments and even if I remove all the rounding and 
replace the full joins with regular joins, it still does sequential 
scans on at least one of the tables:

Nested Loop  (cost=12.95..144.99 rows=135 width=20)
   Join Filter: (x_rain."time" = x_pressures."time")
   ->  Hash Join  (cost=12.67..97.83 rows=135 width=24)
 Hash Cond: (x_temperatures."time" = x_rain."time")
 ->  Seq Scan on x_temperatures  (cost=0.00..67.50 rows=4350 width=12)
 ->  Hash  (cost=10.98..10.98 rows=135 width=12)
   ->  Index Scan using x_rain_by_time on x_rain  (cost=0.28..10.98 
rows=135 width=12)
 Index Cond: (("time" >= '2018-10-01 
00:00:00+00'::timestamp with time zone) AND ("time" <= '2018-10-02 
00:00:00+00'::timestamp with time zone))
   ->  Index Scan using x_pressures_by_time on x_pressures  (cost=0.28..0.34 
rows=1 width=12)
 Index Cond: ("time" = x_temperatures."time")

Maybe that is because the tables are currently relatively small (a 
few thousands rows) and it believes that sequential scans are faster. 
(I have sometimes wished for an "explain" variant that tells me what 
query plan it would choose if all the tables were 100X larger.)

Is there anything I can do to make this more efficient when the tables 
are larger?
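For reference, the pre-rounding experiment mentioned above would look roughly
like this (the column name time_minute is made up; the views and the join
would then use it directly):

ALTER TABLE tbl ADD COLUMN time_minute timestamptz;
UPDATE tbl SET time_minute = date_trunc('minute', "time");
CREATE INDEX ON tbl (time_minute);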


Thanks for any suggestions.


Regards, Phil.





Re: Setting up continuous archiving

2018-10-15 Thread Benoit Lobréau
By the way, do/can they both use streaming to receive WAL records? Or
> streaming is only for standby servers. For backups you have only
> file-based log shipping?
>

Barman supports streaming, but it's not as magical as one might think.
See Barman's documentation for how to manage .partial files.

pgBackRest archives WAL files only when Postgres uses the archive_command.
You also have the option to do parallel async WAL push/get.
It can be useful if you write WALs quicker than you can archive them, or if
you want to restore more quickly.
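For reference, a rough sketch of the pieces involved (the stanza name "main",
paths and process count are assumptions; check the pgBackRest documentation):

# postgresql.conf
archive_mode = on
archive_command = 'pgbackrest --stanza=main archive-push %p'

# pgbackrest.conf
[global]
archive-async=y
spool-path=/var/spool/pgbackrest
process-max=4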

Then, I suppose they both don't support partial PITR
> (http://docs.pgbarman.org/release/2.4/#scope), where there are
> standalone backups that extends to points in time for which there are
> no WAL files. I'm not sure if this matters, but I assume that it might
> be effective in terms of disk space.
>

It's a hot backup, so you have to have the WAL files so that your backup is
consistent at the end of the backup.
You can build something like what you describe with pgBackRest, archive-copy
and a copy of the backup directory to another place.


> Like, base backups + WAL files covering the last month, and a couple
> of standalone backups for a couple of months before that. Compared to
> base backups + WAL files covering the same period of time.
>

I see. Yes keeping the wal for months can take some space...

By that you mean, why use repmgr, that targets specifically PostgreSQL
> in place of Pacemaker + Corosync which are more general pieces of
> software?
>
I'd like to be able to handle db failure as fast as possible. Ideally,
> automatically. Which probably means either repmgr, or corosync +
> pacemaker + PAF. Is that what you mean by HA here?


Yes. You can also look into Patroni as an alternative. It's a matter of
preference and requirements.

A setup like Corosync & Pacemaker can be a little complex at first. But HA
is not a trivial matter, and fencing is tricky to set up correctly.
If you require high availability, it means you value your data and want your
service up. So it makes no sense to go quick and cheap. You should try each
and see for yourself. (I contributed to PAF, so I am not unbiased.)


> Or at least, have a
> running instance I can switch to manually. Which means, for example,
> pgBackRest's streaming replication.
>

I am not following here. A switchover doesn't require a backup tool. For
your switchover you just have to stop the master (so that all WAL files are
sent to the online standbys) and promote the standby.

If your standby is lagging too far behind (and you don't use slots), you can
use pgBackRest's archive to fill the gap in WAL files and catch up with the
master.

regards,
Benoit


Re: Regarding varchar max length in postgres

2018-10-15 Thread Adrian Klaver

On 10/15/18 8:56 AM, Durgamahesh Manne wrote:



On Mon, Oct 15, 2018 at 9:07 PM David G. Johnston 
mailto:david.g.johns...@gmail.com>> wrote:


On Mon, Oct 15, 2018 at 8:24 AM Durgamahesh Manne
mailto:maheshpostgr...@gmail.com>> wrote:

So i need unlimited length data type for required column of the
table for storing the large values
is there any issue to use unlimited length datatype  text  for
the required column of the table instead of using varchar ?


Between the two you should use the "text" data type for those
columns.  You will need to describe your use case in more detail if
you want input as to whether you need to use the large object API
instead.

The database cannot store an unlimited amount of data in a single
row+column (cell) - you will still encounter a physical limit to the
number of bytes able to be stored in a single cell when using text.

David J.



Hi sir

  I need to store larger values in the required column of the table, so I
want to use text rather than varchar.

You said that there is a physical limit to the values that can be stored in
a column of the table:


|text|  variable unlimited length


There are two limits at work here. n in varchar(n) and char(n) is the
number of characters that can be stored. text does not have the ability to
limit the characters on declaration. In the special case of varchar (no n
declared) it is equal to text. All the character types (varchar, char, text)
have a storage limit of 1GB per field. This storage value is not necessarily
in a one-to-one relationship with n, as the encoding for the string may be a
multibyte one and a character may occupy multiple bytes.
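
A quick illustration of the difference (the table and column names here are made up for the example):

CREATE TABLE notes (
    title varchar(100),   -- declared limit: at most 100 characters
    body  text            -- no declared character limit; still capped at ~1GB of storage per value
);

INSERT INTO notes VALUES (repeat('a', 100), repeat('b', 100000));  -- succeeds
INSERT INTO notes VALUES (repeat('a', 101), 'x');                  -- fails: value too long for type character varying(100)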










Thank you for this valuable information

  PostgreSQL is always the world's no. 1 leading open source RDBMS



  I request all community members to provide built-in BDR v3 replication
for the public, as multimaster replication is a high priority compared with
other DBMSs



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Regarding varchar max length in postgres

2018-10-15 Thread Adrian Klaver

On 10/15/18 8:56 AM, Durgamahesh Manne wrote:


  I request all community members to provide built-in BDR v3 replication
for the public, as multimaster replication is a high priority compared with
other DBMSs


BDR v3 is a third-party extension from 2ndQuadrant; it is not community
code. Elements of it have found their way into the community code as logical
replication, but that is as far as it has gotten. You will need to take this
up with 2ndQuadrant.




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Filtering before join with date_trunc()

2018-10-15 Thread Francisco Olarte
Hi Phil:

On Mon, Oct 15, 2018 at 5:57 PM, Phil Endecott
 wrote:
...
> For each of these tables I have a view which rounds the timestamp
> to the nearest minute, and ensures there is only one row per minute:
...
> I then join these tables on the rounded time:

> Now I'd like to find the values for a particular short time period:

For what I propose, I assume the SHORT time period really is short.

...
> This works, but it is inefficient; it seems to create all the rounded
> data, do the join on all of it, and then filter on the time period.
> Ideally it would filter the raw data, and then need to round and join
> far fewer rows.

I think you are expecting too much. I mean, you are expecting the
server to know it can expand your time period into a (maybe) bigger
one which covers the original data and push that condition down. In my
experience that is normally too much to ask.

...

> Is there anything I can do to make this more efficient when the tables
> are larger?

If your periods are really short you could try to replace the time
condition on the views with a time condition on the tables and do the
rounding and grouping afterwards. I mean, use a half-open interval to
catch the relevant chunks of the tables and then join the short
results.

I think with a trio of CTEs selecting with a time interval in the WHERE
and doing the date_trunc()/MAX() group by, you should be able to do
three index scans producing short results which can then be
full-joined and coalesced. If you want the interval from $A to $B
(rounded to minutes), do something like:

WITH pressures AS (
    SELECT date_trunc('minute'::text, tbl."time") AS "time",
           max(tbl.pressure) AS pressure
      FROM tbl
     -- Chop the relevant time range. There may be easier ways to write this
     -- condition if you are generating the text, but always use a half-open interval.
     WHERE tbl."time" >= $A AND tbl."time" < $B + '1 minute'::interval
     GROUP BY 1   -- Easier to type, and no ORDER BY here (I normally
                  -- label ORDER BY in views as a code smell)
),
-- ... repeat the same shape for temperatures and rain ...
SELECT COALESCE(rain."time", pressures."time", temperatures."time") AS "time",
       rain.rain,
       pressures.pressure,
       temperatures.temperature
  FROM rain
  FULL JOIN pressures USING ("time")
  FULL JOIN temperatures USING ("time")
 ORDER BY 1;  -- The ORDER BY goes here; I suspect your example got ordered
              -- by chance, not by design.

( use other names, I just used the view names for C&P, lazy me ).
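
Written out in full, the same idea might look like the sketch below; the underlying table names (tbl_pressure, tbl_temperature, tbl_rain) and the concrete timestamps are placeholders, so substitute your own:

WITH pressures AS (
    SELECT date_trunc('minute', p."time") AS "time", max(p.pressure) AS pressure
      FROM tbl_pressure p
     WHERE p."time" >= '2018-10-01' AND p."time" < timestamptz '2018-10-02' + interval '1 minute'
     GROUP BY 1
), temperatures AS (
    SELECT date_trunc('minute', t."time") AS "time", max(t.temperature) AS temperature
      FROM tbl_temperature t
     WHERE t."time" >= '2018-10-01' AND t."time" < timestamptz '2018-10-02' + interval '1 minute'
     GROUP BY 1
), rain AS (
    SELECT date_trunc('minute', r."time") AS "time", max(r.rain) AS rain
      FROM tbl_rain r
     WHERE r."time" >= '2018-10-01' AND r."time" < timestamptz '2018-10-02' + interval '1 minute'
     GROUP BY 1
)
SELECT COALESCE(rain."time", pressures."time", temperatures."time") AS "time",
       rain.rain, pressures.pressure, temperatures.temperature
  FROM rain
  FULL JOIN pressures USING ("time")
  FULL JOIN temperatures USING ("time")
 ORDER BY 1;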


Francisco Olarte.



Re: Filtering before join with date_trunc()

2018-10-15 Thread Adrian Klaver

On 10/15/18 8:57 AM, Phil Endecott wrote:

Dear Experts,

I have a few tables with "raw" timestamps like this:

+-------------------------------+----------+
| time                          | pressure |
+-------------------------------+----------+
| 2018-09-14 00:00:07.148378+00 |  1007.52 |
| 2018-09-14 00:10:07.147506+00 |  1007.43 |
| 2018-09-14 00:20:07.147533+00 |  1007.28 |
+-------------------------------+----------+

For each of these tables I have a view which rounds the timestamp
to the nearest minute, and ensures there is only one row per minute:

  SELECT date_trunc('minute'::text, tbl."time") AS "time",
 max(tbl.pressure) AS pressure
FROM tbl
   GROUP BY (date_trunc('minute'::text, tbl."time"))
   ORDER BY (date_trunc('minute'::text, tbl."time"));

I then join these tables on the rounded time:

  SELECT COALESCE(rain."time", pressures."time", temperatures."time") AS "time",
 rain.rain,
 pressures.pressure,
 temperatures.temperature
FROM rain
  FULL JOIN pressures USING ("time")
  FULL JOIN temperatures USING ("time");

+------------------------+------+----------+-------------+
|          time          | rain | pressure | temperature |
+------------------------+------+----------+-------------+
| 2018-09-14 00:00:00+00 |    0 |  1007.52 |      11.349 |
| 2018-09-14 00:10:00+00 |    0 |  1007.43 |     11.2317 |
| 2018-09-14 00:20:00+00 |    0 |  1007.28 |     11.2317 |
+------------------------+------+----------+-------------+

The COALESCE for time and the full joins are needed because some
columns may be missing for some minutes.

Now I'd like to find the values for a particular short time period:

SELECT * FROM readings
WHERE "time" BETWEEN '2018-10-01T00:00:00' AND '2018-10-01T24:00:00'


Is readings a table or a view?

If it is a view, is the SELECT COALESCE ... query above the view's query?



This works, but it is inefficient; it seems to create all the rounded
data, do the join on all of it, and then filter on the time period.
Ideally it would filter the raw data, and then need to round and join
far fewer rows.

It would not be difficult for me to round the timestamps when inserting
the data, and also ensure that there is only one row for each minute.
But I've done some experiments and even if I remove all the rounding and
replace the full joins with regular joins, it still does sequential
scans on at least one of the tables:

Nested Loop  (cost=12.95..144.99 rows=135 width=20)
  Join Filter: (x_rain."time" = x_pressures."time")
  ->  Hash Join  (cost=12.67..97.83 rows=135 width=24)
        Hash Cond: (x_temperatures."time" = x_rain."time")
        ->  Seq Scan on x_temperatures  (cost=0.00..67.50 rows=4350 width=12)
        ->  Hash  (cost=10.98..10.98 rows=135 width=12)
              ->  Index Scan using x_rain_by_time on x_rain  (cost=0.28..10.98 rows=135 width=12)
                    Index Cond: (("time" >= '2018-10-01 00:00:00+00'::timestamp with time zone) AND ("time" <= '2018-10-02 00:00:00+00'::timestamp with time zone))
  ->  Index Scan using x_pressures_by_time on x_pressures  (cost=0.28..0.34 rows=1 width=12)
        Index Cond: ("time" = x_temperatures."time")

Maybe that is because the tables are currently relatively small (a
few thousands rows) and it believes that sequential scans are faster.
(I have sometimes wished for an "explain" variant that tells me what
query plan it would choose if all the tables were 100X larger.)

Is there anything I can do to make this more efficient when the tables
are larger?


Thanks for any suggestions.


Regards, Phil.






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Filtering before join with date_trunc()

2018-10-15 Thread Tom Lane
"Phil Endecott"  writes:
> ...
> For each of these tables I have a view which rounds the timestamp 
> to the nearest minute, and ensures there is only one row per minute:

>  SELECT date_trunc('minute'::text, tbl."time") AS "time",
> max(tbl.pressure) AS pressure
>FROM tbl
>   GROUP BY (date_trunc('minute'::text, tbl."time"))
>   ORDER BY (date_trunc('minute'::text, tbl."time"));

> I then join these tables on the rounded time:

>  SELECT COALESCE(rain."time", pressures."time", temperatures."time") AS 
> "time",
> rain.rain,
> pressures.pressure,
> temperatures.temperature
>FROM rain
>  FULL JOIN pressures USING ("time")
>  FULL JOIN temperatures USING ("time");

> The COALESCE for time and the full joins are needed because some 
> columns may be missing for some minutes.

> Now I'd like to find the values for a particular short time period:

> SELECT * FROM readings
> WHERE "time" BETWEEN '2018-10-01T00:00:00' AND '2018-10-01T24:00:00'

> This works, but it is inefficient; it seems to create all the rounded 
> data, do the join on all of it, and then filter on the time period.  
> Ideally it would filter the raw data, and then need to round and join 
> far fewer rows.

You're expecting too much.  In the first place, it's generally impossible
to push constraints down through full joins, because that could change the
results.  (For example, if we remove a row from one join input, that could
result in getting unwanted null-extended rows from the other join input.)
Maybe with a whole lot of analysis we could prove that applying the "same"
constraint to both join keys doesn't break anything, but there's no such
intelligence there now --- in general, we've expended little if any effort
on optimizing full joins.  It's also not very clear to me that we can do
anything at all with pushing down constraints that are expressed in terms
of a JOIN USING merged column; they don't really constrain either input
individually.

> But I've done some experiments and even if I remove all the rounding and 
> replace the full joins with regular joins, it still does sequential 
> scans on at least one of the tables:

I think you're also expecting the system to deduce that it can apply an
inequality on one join column to the other one.  It doesn't; only equality
constraints have any sort of transitivity logic.

So you'll need to write out the BETWEEN separately for each table,
and put it below the full join, which means you won't be able to
use those nice views :-(

regards, tom lane



Re: Setting up continuous archiving

2018-10-15 Thread David Steele

On 10/15/18 5:09 PM, Benoit Lobréau wrote:


By the way, do/can they both use streaming to receive WAL records? Or is
streaming only for standby servers? For backups, do you only have
file-based log shipping?

Barman supports streaming, but it's not as magical as one might think.
See pgbarman's documentation for how to manage .partial files.

pgBackRest archives WAL files only when Postgres uses the
archive_command. You also have the option to do parallel asynchronous WAL
push/get.
It can be useful if you write WAL quicker than you can archive it or
if you want to restore more quickly.


We have focused on archive_command because the performance is much
better, since it can be parallelized.



Then, I suppose they both don't support partial PITR
(http://docs.pgbarman.org/release/2.4/#scope), where there are
standalone backups that extends to points in time for which there are
no WAL files. I'm not sure if this matters, but I assume that it might
be effective in terms of disk space.

It's a hot backup, so you have to have the WAL files for your backup to be
consistent at the end of the backup.
You can build something like what you describe with pgBackRest,
archive-copy and a copy of the backup directory to another place.


The --repo1-retention-archive-type and --repo1-retention-archive options
allow you to keep WAL for a smaller number of backups in order to save
space.


https://pgbackrest.org/configuration.html#section-repository/option-repo-retention-archive

So, for example:

[global]
repo1-retention-full=4
# The following option is the default but included for clarity
repo1-retention-archive-type=full
repo1-retention-archive=2

This configuration will retain 4 full backups but only keep PITR WAL for 
2 of them, i.e. the WAL generated between backups.  The WAL required to 
make a backup consistent is always retained so the 2 older backups can 
be played to consistency but no further.



I'd like to be able to handle db failure as fast as possible. Ideally,
automatically. Which probably means either repmgr, or corosync +
pacemaker + PAF. Is that what you mean by HA here? 



Yes. You can also look into Patroni as an alternative. It's a matter of
preference and requirements.


pgBackRest certainly works with Pacemaker/Corosync and Patroni.

If your standby is lagging too far behind (and you don't use slots) you
can use pgBackRest's archive to fill the gap in WAL files and catch up
with the master.


Recovering WAL out of the archive is safer than using slots.  Since the 
WAL is transferred over the network compressed it can also save a lot of 
bandwidth.


Regards,
--
-David
da...@pgmasters.net



Saving view turns SELECT * into field list

2018-10-15 Thread Ben Uphoff
Hey team – I’m sure this has come up, but my search engine skills couldn’t find 
an explanation:

Why, when I save a simple view like:

SELECT * FROM a_table

…does PostgreSQL turn the * into a field list like:

SELECT field1, field2, field3, field4 FROM a_table

?

This is super-frustrating, as it means I’ll have to manually change an 
aggregating “parent” view’s select lists every time I change the “child” views.

Thanks for any info. -Ben
*** PLEASE NOTE *** This E-Mail/telefax message 
and any documents accompanying this transmission may contain information that 
is privileged, confidential, and/or exempt from disclosure under applicable law 
and is intended solely for the addressee(s) named above. If you are not the 
intended addressee/recipient, you are hereby notified that any use of, 
disclosure, copying, distribution, or reliance on the contents of this 
E-Mail/telefax information is strictly prohibited and may result in legal 
action against you. Please reply to the sender advising of the error in 
transmission and immediately delete/destroy the message and any accompanying 
documents. Thank you.


Re: Saving view turns SELECT * into field list

2018-10-15 Thread Tom Lane
Ben Uphoff  writes:
> Why, when I save a simple view like:
> SELECT * FROM a_table
> …does PostgreSQL turn the * into a field list like:
> SELECT field1, field2, field3, field4 FROM a_table
> ?

Because the SQL standard says we should.  There's explicit text in there
to the effect that adding columns to the underlying table does not add
columns to the view.  Which, I'm sure, is exactly what you wished would
happen ... but they say not to.
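
A quick way to see this behaviour (names made up for the example):

CREATE TABLE a_table (field1 int, field2 int);
CREATE VIEW v AS SELECT * FROM a_table;    -- the * is expanded to field1, field2 at creation time

ALTER TABLE a_table ADD COLUMN field3 int;

SELECT * FROM v;                           -- still returns only field1 and field2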

regards, tom lane



Re: Saving view turns SELECT * into field list

2018-10-15 Thread David G. Johnston
On Monday, October 15, 2018, Ben Uphoff  wrote:

>
> Why, when I save a simple view like:
>
>
>
> SELECT * FROM a_table
>
>
>
> …does PostgreSQL turn the * into a field list like:
>
>
>
> SELECT field1, field2, field3, field4 FROM a_table
>
>
>
Yes, deeply nesting views is a maintenance concern.  It works this way so
the view remains stable (there may be other reasons; the original reasoning
no longer really matters...).  If you really want dynamic SQL you will need
to use a function.
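
One possible shape for that, as a sketch (the function and table names are made up): a set-returning plpgsql function that builds the query at call time, so the * is re-expanded against the table's current column list on every call:

CREATE FUNCTION a_table_all()
RETURNS SETOF a_table
LANGUAGE plpgsql STABLE AS $$
BEGIN
    -- Dynamic SQL: the statement text is parsed at execution time,
    -- so newly added columns of a_table show up automatically.
    RETURN QUERY EXECUTE 'SELECT * FROM a_table';
END;
$$;

SELECT * FROM a_table_all();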

David J.


Re: FATAL: terminating connection because protocol synchronization was lost

2018-10-15 Thread Adrian Klaver

On 10/15/18 8:10 AM, Shrikant Bhende wrote:

Hi Adrian,
It's running on the local machine itself. I googled this around, but most
of the blogs say to stop supporting the older version. Also, I took the
pg_dumpall from the EDB PPAS cluster using PostgreSQL binaries, in case
that might have caused anything.


The data looks alright; this is a client (psql) issue. So:

1) Which psql are you using the EDB or community one?

2) What is the exact psql command you are using?




Thanks.

On Mon, Oct 15, 2018 at 8:36 PM Adrian Klaver wrote:


On 10/15/18 8:04 AM, Shrikant Bhende wrote:
 > Hi Adrian,
 >
 > There is no explicit client its just simple restore using psql
with all
 > default settings required.

psql is the client. Where is it running relative to the server, locally
or remotely?

 >
 > On Mon, Oct 15, 2018 at 6:50 PM Adrian Klaver
mailto:adrian.kla...@aklaver.com>
 > >> wrote:
 >
 >     On 10/15/18 5:36 AM, Shrikant Bhende wrote:
 >      > Hi Team,
 >      >
 >      > I am trying to restore the backup into postgresql 9.6 and
during
 >     restore
 >      > copy command fails with below error,
 >      >
 >      > 2018-10-12 06:21:40 UTC [40407]: [28-1]
 >      > application=psql.bin,user=postgres,db=cloudLOG:  could not
send
 >     data to
 >      > client: Broken pipe
 >
 >     I would say the above is the issue.
 >
 >     Where is the client running relative to the server?
 >
 >      > 2018-10-12 06:21:40 UTC [40407]: [29-1]
 >      > application=psql.bin,user=postgres,db=cloudSTATEMENT:  COPY
 >      > d20160905_x_20da999ef9434e60942859c3690e642f (did,
location_id, h00,
 >      > h00_min, h00_max, h00_wgt, h01, h01_min, h01_max, h01_wgt,
h02,
 >     h02_min,
 >      > h02_max, h02_wgt, h03, h03_min, h03_max, h03_wgt, h04,
h04_min,
 >     h04_max,
 >      > h04_wgt, h05, h05_min, h05_max, h05_wgt, h06, h06_min,
h06_max,
 >     h06_wgt,
 >      > h07, h07_min, h07_max, h07_wgt, h08, h08_min, h08_max,
h08_wgt, h09,
 >      > h09_min, h09_max, h09_wgt, h10, h10_min, h10_max, h10_wgt,
h11,
 >     h11_min,
 >      > h11_max, h11_wgt, h12, h12_min, h12_max, h12_wgt, h13,
h13_min,
 >     h13_max,
 >      > h13_wgt, h14, h14_min, h14_max, h14_wgt, h15, h15_min,
h15_max,
 >     h15_wgt,
 >      > h16, h16_min, h16_max, h16_wgt, h17, h17_min, h17_max,
h17_wgt, h18,
 >      > h18_min, h18_max, h18_wgt, h19, h19_min, h19_max, h19_wgt,
h20,
 >     h20_min,
 >      > h20_max, h20_wgt, h21, h21_min, h21_max, h21_wgt, h22,
h22_min,
 >     h22_max,
 >      > h22_wgt, h23, h23_min, h23_max, h23_wgt) FROM stdin;
 >      > 2018-10-12 06:21:40 UTC [40407]: [30-1]
 >      > application=psql.bin,user=postgres,db=cloudFATAL: terminating
 >     connection
 >      > because protocol synchronization was lost
 >      >
 >      > I am trying to migrate the DB from EDB PPAS to community
 >     PostgreSQL, we
 >      > don't have any oracle or PPAS dependent objects so I took the
 >     pg_dumpall
 >      > using the PostgreSQL binaries and trying to restore the same.
 >      >
 >      > PPAS :: 9.3
 >      > PostgreSQL : 9.6
 >      > O/S :: Centos 6.6
 >      >
 >      > Thanks and Regards.
 >      >
 >      > --
 >      > Shrikant Bhende
 >      > +91-9975543712
 >
 >
 >     --
 >     Adrian Klaver
 > adrian.kla...@aklaver.com 
>
 >
 >
 >
 > --
 > Shrikant Bhende
 > +91-9975543712


-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Shrikant Bhende
+91-9975543712



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Filtering before join with date_trunc()

2018-10-15 Thread Phil Endecott
Thanks all for the replies.

Tom Lane wrote:
> You're expecting too much.

That often seems to be the case.

> I think you're also expecting the system to deduce that it can apply an
> inequality on one join column to the other one.  It doesn't; only equality
> constraints have any sort of transitivity logic.
>
> So you'll need to write out the BETWEEN separately for each table,
> and put it below the full join, which means you won't be able to
> use those nice views :-(

Here's an example:

create table t1 ("time" timestamptz, value1 numeric);
create index t1_time on t1("time");
\copy t1 from ..

create table t2 ("time" timestamptz, value2 numeric);
create index t2_time on t2("time");
\copy t2 from ..

explain select * from t1 join t2 using("time")
  where "time" between '2018-10-01' and '2018-10-02';

                                  QUERY PLAN
------------------------------------------------------------------------------
 Hash Join  (cost=12.99..101.03 rows=138 width=21)
   Hash Cond: (t2."time" = t1."time")
   ->  Seq Scan on t2  (cost=0.00..70.11 rows=4411 width=15)
   ->  Hash  (cost=11.18..11.18 rows=145 width=14)
         ->  Index Scan using t1_time on t1  (cost=0.28..11.18 rows=145 width=14)
               Index Cond: (("time" >= '2018-10-01 00:00:00+00'::timestamp with time zone) AND ("time" <= '2018-10-02 00:00:00+00'::timestamp with time zone))

explain with q1 as (select * from t1 where "time" between '2018-10-01' and '2018-10-02'),
             q2 as (select * from t2 where "time" between '2018-10-01' and '2018-10-02')
  select * from q1 join q2 using("time");

                                  QUERY PLAN
------------------------------------------------------------------------------
 Hash Join  (cost=26.60..31.41 rows=136 width=72)
   Hash Cond: (q1."time" = q2."time")
   CTE q1
     ->  Index Scan using t1_time on t1  (cost=0.28..11.18 rows=145 width=14)
           Index Cond: (("time" >= '2018-10-01 00:00:00+00'::timestamp with time zone) AND ("time" <= '2018-10-02 00:00:00+00'::timestamp with time zone))
   CTE q2
     ->  Index Scan using t2_time on t2  (cost=0.28..11.00 rows=136 width=15)
           Index Cond: (("time" >= '2018-10-01 00:00:00+00'::timestamp with time zone) AND ("time" <= '2018-10-02 00:00:00+00'::timestamp with time zone))
   ->  CTE Scan on q1  (cost=0.00..2.90 rows=145 width=40)
   ->  Hash  (cost=2.72..2.72 rows=136 width=40)
         ->  CTE Scan on q2  (cost=0.00..2.72 rows=136 width=40)


So as you say, even if I strip out all of the complexity of approximate 
timestamps and missing values, it's never going to push the BETWEEN filter 
down below the join.  Even with just a few thousand rows I see a 5X speedup 
from the second query, with the explicit filtering below the join.

This is rather disappointing.  Am I the only person who's ever wanted to do 
this?


Regards, Phil.






Re: Saving view turns SELECT * into field list

2018-10-15 Thread Ben Madin
Actually, it's super useful, because if someone adds a salaries column to
your staff table, it doesn't automatically appear on the front page of your
corporate website... :)

It's a made-up example, but if you presume that data security is an
important part of data management, it is a lifesaver.

To get around it, all you have to do is script a drop-and-replace action.
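
A minimal sketch of that drop-and-replace step (the view, table and role names here are placeholders):

BEGIN;
DROP VIEW IF EXISTS my_view;
CREATE VIEW my_view AS SELECT * FROM a_table;  -- the * is re-expanded against the table's current columns
GRANT SELECT ON my_view TO some_role;          -- re-apply any privileges lost with the DROP
COMMIT;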

A last word - if you have nested views, remember that they are essentially
just query aliases that return an unindexed result set...

cheers

Ben



On 16 October 2018 at 03:50, Ben Uphoff  wrote:

> Hey team – I’m sure this has come up, but my search engine skills couldn’t
> find an explanation:
>
>
>
> Why, when I save a simple view like:
>
>
>
> SELECT * FROM a_table
>
>
>
> …does PostgreSQL turn the * into a field list like:
>
>
>
> SELECT field1, field2, field3, field4 FROM a_table
>
>
>
> ?
>
>
>
> This is super-frustrating, as it means I’ll have to manually change an
> aggregating “parent” view’s select lists every time I change the “child”
> views.
>
>
>
> Thanks for any info. -Ben
>



-- 


Dr Ben Madin

BVMS MVPHMgmt PhD MANZCVS GAICD
Managing Director
Mobile:
+61 448 887 220 <+61448887220>
E-mail:
b...@ausvet.com.au
Website:
www.ausvet.com.au
Skype: benmadin
Address:
5 Shuffrey Street
Fremantle, WA 6160
Australia


Re: Problem creating a database

2018-10-15 Thread Ben Madin
Do you have adequate disk space left on your array?

cheers

Ben


On 15 October 2018 at 17:46, Joshua White  wrote:

> Hi all,
>
> I'm hoping someone can point me in the right direction. I've got a
> PostgreSQL 10 server instance on CentOS 6, which I set up and manage. I
> have full admin rights on this machine, so I can access logs, etc.
>
> Recently I attempted to create a new database in this cluster. The command
> succeeds, but when I try to connect to the new database, I get a "could not
> open file" error:
>
> psql: FATAL:  could not open file "base/618720/2610": No such file or
> directory
>
> It has been some time since I set up the database, so I don't know how
> long ago this became an issue. I can't seem to find any other instances of
> this problem online either. The logs are not helpful - even on the highest
> debug setting, I only see the "connection authorized" then the fatal "could
> not open file" error.
>
> The data directory is on a separate disk array to the OS. Recently checked
> it and there are no disk errors.
>
> Any thoughts or ideas would be much appreciated.
>
> Kind Regards,
> Joshua
>
>


-- 


Dr Ben Madin

BVMS MVPHMgmt PhD MANZCVS GAICD
Managing Director
Mobile:
+61 448 887 220 <+61448887220>
E-mail:
b...@ausvet.com.au
Website:
www.ausvet.com.au
Skype: benmadin
Address:
5 Shuffrey Street
Fremantle, WA 6160
Australia


Re: Problem creating a database

2018-10-15 Thread Joshua White
Thanks for the suggestion - plenty of disk space left (several hundred
gigabytes free).

Kind Regards,
Joshua White

On Tue, 16 Oct 2018 at 15:03, Ben Madin  wrote:

> Do you have adequate disk space left on your array?
>
> cheers
>
> Ben
>
>
> On 15 October 2018 at 17:46, Joshua White  wrote:
>
>> Hi all,
>>
>> I'm hoping someone can point me in the right direction. I've got a
>> PostgreSQL 10 server instance on CentOS 6, which I set up and manage. I
>> have full admin rights on this machine, so I can access logs, etc.
>>
>> Recently I attempted to create a new database in this cluster. The
>> command succeeds, but when I try to connect to the new database, I get a
>> "could not open file" error:
>>
>> psql: FATAL:  could not open file "base/618720/2610": No such file or
>> directory
>>
>> It has been some time since I set up the database, so I don't know how
>> long ago this became an issue. I can't seem to find any other instances of
>> this problem online either. The logs are not helpful - even on the highest
>> debug setting, I only see the "connection authorized" then the fatal "could
>> not open file" error.
>>
>> The data directory is on a separate disk array to the OS. Recently
>> checked it and there are no disk errors.
>>
>> Any thoughts or ideas would be much appreciated.
>>
>> Kind Regards,
>> Joshua
>>
>>
>
>