Re: [GENERAL] Limiting user from changing its own attributes

2015-04-10 Thread Sameer Kumar
On Sat, Apr 11, 2015 at 12:57 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Fri, Apr 10, 2015 at 9:01 AM, Sameer Kumar 
> wrote:
>
>> Hi,
>>
>> In PostgreSQL a user can alter itself to change its user level
>> parameters. e.g. I can alter the user to change work_mem -
>>
>>
>> psql -U user1 -d postgres
>> postgres=# alter user user user1 set work_mem to '1024000';
>>
>
> ​Is this a typo? - the above has a syntax error...​
>

Yes that is a typo. Sorry about that.

>
> ALTER ROLE
>> postgres=#
>>
>> Is there a way I can restrict this behavior? Or at least put a restriction on
>> certain parameters, e.g. so that work_mem cannot be set too high?
>>
>>
> ​Not that I'm aware of - and the ability to change parameters is not
> limited to ALTER ROLE.
>
> Setting "work_mem" too low can be just as problematic as setting it too
> high.  This one could probably be solved readily enough but you sound like
> you are looking for some blanket capability to either add targeted security
> about GUCs or setup a way to alter generically the "upper_bound,
> lower_bound" ​properties of numeric variables.
>

Yes, either that or an upper bound up to which users can set their own values.


> Upper is somewhat easier but currently the system would only recognize a
> global constraint.
>

Does it? Even though my work_mem in postgresql.conf is 1MB, the user can
alter itself to set its own work_mem to 1GB. Or did I interpret your
statement wrongly?
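
To illustrate what I mean, a minimal sketch (assuming a role named user1; the
values are picked purely for illustration):

    -- the role-level default overrides postgresql.conf for new sessions
    ALTER ROLE user1 SET work_mem = '1GB';

    -- and a session can still override both at any time
    SET work_mem = '2GB';
    SHOW work_mem;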



>
> ​David J.​
>
>
>


Re: [GENERAL] Limiting user from changing its own attributes

2015-04-11 Thread Sameer Kumar
 On Sat, 11 Apr 2015 16:05 David G. Johnston 
wrote:

On Friday, April 10, 2015, Sameer Kumar  wrote:

On Sat, Apr 11, 2015 at 12:57 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

 On Fri, Apr 10, 2015 at 9:01 AM, Sameer Kumar 
wrote:



 Yes either an upper bound to which users can set their own values to.



 Upper is somewhat easier but currently the system would only recognize a
global constraint.

  Does it? Even though my work_mem in postgresql.conf is 1MB, the user can
alter itself to set its own work_mem to 1GB. Or did I interpret your
statement wrongly?

 Pg_settings currently has an upper bound column - though it is a view and
that value cannot be changed that I know of.


I guess that upper bound column is more the limit imposed by the system for a
parameter, i.e. the system-imposed limit or valid range of values. I don't
think one can update that.
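
For example, the system-imposed range shows up in pg_settings (a read-only
view, as far as I know):

    SELECT name, setting, unit, min_val, max_val
    FROM pg_settings
    WHERE name = 'work_mem';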

But if it could I suspect that whatever catalog you would change to affect
it would only cause a global change.  There is no alter database, role, or
postgresql way to change that value.

 Oh ok... any way of achieving that? Is there no EVENT trigger for "alter
user"?


 David J.


Re: [GENERAL] Limiting user from changing its own attributes

2015-04-12 Thread Sameer Kumar
 On Mon, 13 Apr 2015 11:35 Jim Nasby  wrote:

On 4/11/15 4:11 PM, Sameer Kumar wrote:
> Pg_settings currently has an upper bound column - though it is a
> view and that value cannot be changed that I know of.
>
>
> I guess that upper bound column is more of the limit that is imposed by
> system which you can have for a parameter i.e. the system imposed limit
> or valid range if values for a parameter. I don't think one can update
that.

Correct.

> But if it could I suspect that whatever catalog you would change to
> affect it would only cause a global change. There is no alter
> database, role, or postgresql way to change that value.
>
> Oh ok... anyway of achieving that? There no EVENT trigger for "alter
user"?

There is not, but as David mentioned there's way more ways to modify
settings than just ALTER ROLE. Attempting to lock that down won't help
you at all.

Unfortunately, there's no hook support for doing something special when
GUCs change, though it might be possible to do something here via
planner hooks. That would be pretty complicated and would need to be
done in C.

It doesn't look like SELinux would help either.

So basically, there is currently no way to restrict someone changing
GUCs, other than GUCs that are marked as superuser-only.

 Is there anything expected in any of the near-future releases?


Re: [GENERAL] Limiting user from changing its own attributes

2015-04-12 Thread Sameer Kumar
On Mon, Apr 13, 2015 at 1:03 PM Jim Nasby  wrote:

> On 4/12/15 11:55 PM, Sameer Kumar wrote:
> >
> > On Mon, 13 Apr 2015 11:35 Jim Nasby  > <mailto:jim.na...@bluetreble.com>> wrote:
> >
> > On 4/11/15 4:11 PM, Sameer Kumar wrote:
> >  > Pg_settings currently has an upper bound column - though it
> is a
> >  > view and that value cannot be changed that I know of.
> >  >
> >  >
> >  > I guess that upper bound column is more of the limit that is
> > imposed by
> >  > system which you can have for a parameter i.e. the system imposed
> > limit
> >  > or valid range if values for a parameter. I don't think one can
> > update that.
> >
> > Correct.
> >
> >  > But if it could I suspect that whatever catalog you would
> > change to
> >  > affect it would only cause a global change. There is no alter
> >  > database, role, or postgresql way to change that value.
> >  >
> >  > Oh ok... anyway of achieving that? There no EVENT trigger for
> > "alter user"?
> >
> > There is not, but as David mentioned there's way more ways to modify
> > settings than just ALTER ROLE. Attempting to lock that down won't
> help
> > you at all.
> >
> > Unfortunately, there's no hook support for doing something special
> when
> > GUCs change, though it might be possible to do something here via
> > planner hooks. That would be pretty complicated and would need to be
> > done in C.
> >
> > It doesn't look like SELinux would help either.
> >
> > So basically, there is currently no way to restrict someone changing
> > GUCs, other than GUCs that are marked as superuser-only.
> >
> > Is there anything ecpected in any of the near future release?
>
> No. I suspect the community would support at least a hook for GUC
> changes, if not a full-on permissions system. A hook would make it
> fairly easy to add event trigger support.
>
>
I hope someone out there is listening :)

I hope I have made my concern clear: I currently don't have a way to stop
users from changing the parameter values for their own settings, which allows
each user to set inappropriate values, e.g. for work_mem.


Regards
Sameer


Re: [GENERAL] Re: Hot standby problems: consistent state not reached, no connection to master server.

2015-04-14 Thread Sameer Kumar
 On Tue, 14 Apr 2015 11:59 Ilya Ashchepkov  wrote:

On Mon, 13 Apr 2015 12:24:11 -0700
Adrian Klaver  wrote:

> On 04/13/2015 11:25 AM, Ilya Ashchepkov wrote:
> > On Mon, 13 Apr 2015 10:06:05 -0700
> > Adrian Klaver  wrote:
> >
> >> On 04/13/2015 09:42 AM, Ilya Ashchepkov wrote:
> >>> On Sun, 12 Apr 2015 17:30:44 -0700
> >>> Adrian Klaver  wrote:
> >>>
> >>
> >>
> >
>
>  If a connection is not being made:
> 
>  1) Dose user replication have REPLICATION rights?
>  2) Is the pg_hba.conf on the master set up to allow a connection
>  from the standby for user replication and database replication?
> >>>
> >>> I commented 'restore_command' in recovery.conf and after start
> >>> slave connected to master.
> >>> Then I uncomment it back. Is it possible to have a both, streaming
> >>> connection and restoring from wal files from NFS share?
> >>
> >> Yes:
> >>
> >>
http://www.postgresql.org/docs/9.3/interactive/warm-standby.html#STREAMING-REPLICATION
> >>
> >> I wonder if your master is recycling WALs fast enough that the
> >> streaming can't find them and the standby has to go to the archive
> >> instead.
> >>
> >> What is your wal_keep_segments on the master set to?:
> > # select name,setting from pg_settings where name like
> > 'wal_keep_segments'; name| setting
> > ---+-
> >   wal_keep_segments | 128
> >
> >
> > I run tcpdump -ni eth0 port 5432 on slave and didn't see any packet
> > from slave to master after restart.
>
> Just to be clear:
>
> 1) When you comment out the restore_command the standby connects to
> the master, correct?

Yes.

>
> 2) When you uncomment restore_command you do not see a standby
> connection, correct?

Yes.

>
> So:
>
> 1) When you are changing the restore_command status do you restart
> the standby server?

Yes.

>
> 2) What does  select * from pg_stat_replication show, in either case?
>
>
www.postgresql.org/docs/9.3/static/monitoring-stats.html#PG-STAT-REPLICATION-VIEW

0 rows on master
0 rows on slave

>
> 3) I may have missed it, but what is your archive_command on the
> master?

# select name,setting from pg_settings where name like 'archive_command';
  name   |  setting
-+
 archive_command | test ! -f /media/psqlbak/wals/main/%f && cp %p
/media/psqlbak/wals/main/%f

>
> >
> >>
> >>
http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-SENDER
> >>
> >>>
> 
>  Where are the WAL files coming from?
> >>>
> >>> NFS share on master.
>
>



Can you share the cluster logs for your standby database and also your
primary database? Sorry if you have already shared them and I have missed it.
If there is an error in the connection, there is a good chance that some hint
about it has been logged. Generally the logs will be placed in pg_log inside
your data directory, provided logging_collector is set to on.
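
For example, on the standby something like this should tell you whether and
where the logs are being written (just a quick check; the actual paths will
differ on your system):

    SHOW logging_collector;
    SHOW log_directory;
    SHOW log_filename;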




Re: [GENERAL] Limiting user from changing its own attributes

2015-05-04 Thread Sameer Kumar
Sorry about the long silence on this.

On Mon, Apr 13, 2015 at 3:34 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Sun, Apr 12, 2015 at 10:23 PM, Sameer Kumar 
> wrote:
>
>> On Mon, Apr 13, 2015 at 1:03 PM Jim Nasby 
>> wrote:
>>
>
>>> No. I suspect the community would support at least a hook for GUC
>>> changes, if not a full-on permissions system. A hook would make it
>>> fairly easy to add event trigger support.
>>>
>>>
>> I hope someone out there is listening :)
>>
>> I hope I have made my concern clear, I currently don't have a way to
>> control users from changing the parameter values for their own settings,
>> which allows each user to set in-appropriate values e.g. for work_mem.
>>
>>
> If work_mem is the only example you can describe then I'm doubtful that
> any kind of urgency is going to be associated with this request.
>

I guess any parameter which can be altered at the user level should have
some limitations e.g. one may not want to allow a user to reset its own
level of client messages.
As an admin one may want to have control over what a user can alter and
what the user can not alter.

e.g.:

   ALTER USER my_app_user restrict set for client_min_messages;

or

   ALTER USER my_app_user restrict set for client_min_messages in ('DEBUG');
   and
   ALTER USER my_app_user restrict set for work_mem limit to 10MB;

  Your actual request does nothing because the same user can simply issue
> "SET work_mem" at session start and bypass the user defaults that you want
> to prevent
>

>

My point is the limitations imposed should not be just at user level but
should also be inherited by all sessions made by that user.


>
> You haven't provided enough meat for anyone to offer advice regarding the
> scenario you are encountering that you think has "restrict alter role" as a
> solution.
>

To put it very simply, as a DBA one would want to be in control of what the
users in that environment can change about themselves and what they cannot.


> If you want to take the time to actually paint us a picture then maybe
> suggestions or motivation for change will result.  But, in the end, the
> current architecture of PostgreSQL means that people with credentials to
> the database have the capability to DoS the server.  work_mem is simply one
> possible avenue and, in reality, one where an inappropriate value can be
> either too large or too small.
>

Yes! Plus, as a user I can change the logging parameters for my account
(which is also risky in many environments).


>
> The useful solution here is not restring work_mem but rather having a
> process in place that provides data regarding excessive memory utilization
> AND disk I/O and associating that data with the work_mem value and
> executing user.  The event triggers would also allow for monitoring,
> without setting an excessive log_statements level, changes and their values.
>

But that is more of a cure. Wouldn't it be a good idea to have some level of
preventive measure, so that DBAs can control which user can change which
"user-level" value (and even better if we can attach a limit to it)?


>
> If you really care about their "set role" aspect you can at least setup
> cron shell script to query the catalog and report any undesirable settings
> and maybe even remove the offending entry.  You are still open to "SET
> work_mem" during the session though...
>
> David J.
>
>
>
>
>
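
For what it's worth, the catalog check David describes could start from
something as simple as the sketch below (pg_db_role_setting holds the
per-role and per-database overrides; the cron/reporting part is left out):

    SELECT coalesce(r.rolname, '<all roles>')     AS role,
           coalesce(d.datname, '<all databases>') AS database,
           s.setconfig
    FROM pg_db_role_setting s
    LEFT JOIN pg_roles r    ON r.oid = s.setrole
    LEFT JOIN pg_database d ON d.oid = s.setdatabase;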


Re: [GENERAL] Limiting user from changing its own attributes

2015-05-04 Thread Sameer Kumar
On Tue, May 5, 2015 at 1:44 PM David G. Johnston 
wrote:

> On Mon, May 4, 2015 at 10:23 PM, Sameer Kumar 
> wrote:
>
>> Sorry about the long silence on this.
>>
>> On Mon, Apr 13, 2015 at 3:34 PM David G. Johnston <
>> david.g.johns...@gmail.com> wrote:
>>
>>> On Sun, Apr 12, 2015 at 10:23 PM, Sameer Kumar 
>>> wrote:
>>>
>>>> On Mon, Apr 13, 2015 at 1:03 PM Jim Nasby 
>>>> wrote:
>>>>
>>>
>>>>> No. I suspect the community would support at least a hook for GUC
>>>>> changes, if not a full-on permissions system. A hook would make it
>>>>> fairly easy to add event trigger support.
>>>>>
>>>>>
>>>> I hope someone out there is listening :)
>>>>
>>>> I hope I have made my concern clear, I currently don't have a way to
>>>> control users from changing the parameter values for their own settings,
>>>> which allows each user to set in-appropriate values e.g. for work_mem.
>>>>
>>>>
>>> If work_mem is the only example you can describe then I'm doubtful that
>>> any kind of urgency is going to be associated with this request.
>>>
>>
>> I guess any parameter which can be altered at the user level should have
>> some limitations e.g. one may not want to allow a user to reset its own
>> level of client messages.
>> As an admin one may want to have control over what a user can alter and
>> what the user can not alter.
>>
>> e.g.
>> "
>>
>> ALTER USER my_app_user  restrict set for client_min_messages;
>>
>
> ​I make a point below but the server should not care what level of logging
> messages the client wants to receive...is there some kind of security issue
> with a overly verbose specification here?  Are you concerned about resource
> utilization by said clients?​
>

Ahh, that was just an example. client_min_messages is more helpful for me in
debugging issues with application code.



>
>
>   Your actual request does nothing because the same user can simply issue
>>> "SET work_mem" at session start and bypass the user defaults that you want
>>> to prevent
>>>
>>
>>>
>>
>> My point is the limitations imposed should not be just at user level but
>> should also be inherited by all sessions made by that user.
>>
>
> ​While that can be inferred it is good to be explicit in order to
> communicate understanding of the current reality.​
>

Noted! I agree, it's better to be explicit. :)


>
>
>
>>
>>>
>>> You haven't provided enough meat for anyone to offer advice regarding
>>> the scenario you are encountering that you think has "restrict alter role"
>>> as a solution.
>>>
>>
>> To put it very simply as a DBA one would want to be in control of what
>> the users in that environment can change about themselves and what they can
>> not.
>>
>
> ​OK.  This is already the case for numerous things and a blanket statement
> like this doesn't help others understand where we are lacking.​
>
>
>
>>
>>> If you want to take the time to actually paint us a picture then maybe
>>> suggestions or motivation for change will result.  But, in the end, the
>>> current architecture of PostgreSQL means that people with credentials to
>>> the database have the capability to DoS the server.  work_mem is simply one
>>> possible avenue and, in reality, one where an inappropriate value can be
>>> either too large or too small.
>>>
>>
>> Yes! Plus as an user I can change the logging parameter for my account
>> (which as well is risky in many environment)
>>
>
> ​You seem to need a better understanding of what limitations are already
> in place.  While it is true you can alter "client_min_messages" you cannot
> alter "log_min_messages" (in addition to quite a few other log related
> settings).
>
> ​http://www.postgresql.org/docs/9.4/static/runtime-config-logging.html​
>
> You make a claim that altering client_min_messages is "risk in many
> environment [sic]" but provide zero substantiation for that claim.
>

I am sorry. My bad. I thought log_statements and log_connections (and
log_disconnections) are allowed to be altered at user level. I was more
referring to those.


>>> The useful solution here is not restring work_mem but rather having a
>>> process in place that provides data regarding excessive memory utilization
>>> AND disk I/O and associating that data with the work_mem value and
>>> executing user.  The event triggers would also allow for monitoring,
>>> without setting an excessive log_statements level, changes and their values.
>>>
>>
>> But that is more of a cure, won't it be a good idea to have some level of
>> preventive measures to ensure DBAs can control which user can change which
>> "user-level" value (and even better if we can attach a limit to it).
>>
>>
> ​Most likely such a patch would be accepted by the community.  If you are
> depending on someone else writing it the muted response to this thread
> should be discouraging.
>

I am not really a hacker/very good at coding. But I get your message here :)


>
> David J.
>
>


Re: [GENERAL] Streaming replication and an archivelog

2015-05-09 Thread Sameer Kumar
On Fri, May 8, 2015 at 2:29 PM James Sewell 
wrote:

> Hello All,
>
> I am running 9.4 on Centos.
>
> I have three servers, one master and two slaves. The slaves have the
> following recovery.conf
>
> standby_mode = 'on'
> primary_conninfo = 'user=postgres host=mastervip port=5432'
> restore_command = 'scp -o BatchMode=yes postgres@backuphost:/archived_wals/%f
> %p'
> recovery_target_timeline= 'latest'
>
> Is there any way to combine following
>
>- a master switch (ie: if node1 dies and node2 is promoted then node3
>follows node2)
>- using a WAL archive, such that if node2 goes down for two days it
>will get WALs from the archive if they are no longer on the master
>
> At the moment the master switch works fine, but if I was to have a WAL
> archive with multiple timelines in it then I would end up in the newest
> timeline.
>

The timeline switch would happen whenever you do a standby promotion (or PITR),
in which case you want the 2nd standby to follow the newly promoted master,
i.e. it should be following the new timeline.


> I suppose what I want is the following:
>
> If I am a streaming replica only follow streamed timeline switches, not
> archive timeline switches.
>

I am trying to guess when these two would be different (unless there are
two different masters writing to the same archive, which I suppose is wrong
anyway).

Can you write down an example for clarity so that others can understand
your scenario better?


> Obviously if I am not a streaming replica I need to follow archive
> timeline switches so I don't break PIT recovery.
>
> Possible?
>
>
> James Sewell,
> PostgreSQL Team Lead / Solutions Architect
> __
>
>
>  Level 2, 50 Queen St, Melbourne VIC 3000
>
> *P *(+61) 3 8370 8000  *W* www.lisasoft.com  *F *(+61) 3 8370 8099
>
>
> --
> The contents of this email are confidential and may be subject to legal or
> professional privilege and copyright. No representation is made that this
> email is free of viruses or other defects. If you have received this
> communication in error, you may not copy or distribute any part of it or
> otherwise disclose its contents to anyone. Please advise the sender of your
> incorrect receipt of this correspondence.
>
>


Re: [GENERAL] pg_xlog on a hot_stanby slave

2015-06-16 Thread Sameer Kumar
 On Tue, 16 Jun 2015 16:55 Xavier 12  wrote:

Hi everyone,

Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).

Psql01 (master) is backuped with Barman and pg_xlogs is correctly
purged (archive_command is used).

However, Psql02 (slave) has a huge pg_xlog (951 files, 15G for 7 days
only, and it keeps growing until disk space is full). I have found
documentation, tutorials and mailing list threads, but I don't know what is
suitable for a slave. Leads I've found:

- checkpoints
- archive_command
- archive_cleanup

Master postgresq.conf :

[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f
bar...@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64


What's this parameter's value on Slave?


autovacuum = on

Slave postgresql.conf :

[...]
wal_level = minimal
wal_keep_segments = 32
hot_standby = on

Slave recovery.conf :

standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f "%p"'
archive_cleanup_command =
'/usr/lib/postgresql/9.1/bin/pg_archivecleanup
/var/lib/postgresql/9.1/wal_archive/ %r'



How can I reduce the number of WAL files on the hot_stanby slave ?

Thanks

Regards.

Xavier C.



Re: [GENERAL] pg_xlog on a hot_stanby slave

2015-06-17 Thread Sameer Kumar
 On Wed, 17 Jun 2015 15:24 Xavier 12  wrote:

On 17/06/2015 03:17, Sameer Kumar wrote:


On Tue, 16 Jun 2015 16:55 Xavier 12  wrote:

 Hi everyone,

Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).

Psql01 (master) is backuped with Barman and pg_xlogs is correctly
purged (archive_command is used).

Hower, Psql02 (slave) has a huge pg_xlog (951 files, 15G for 7 days
only, it keeps growing up until disk space is full). I have found
documentation and tutorials, mailing list, but I don't know what is
suitable for a Slave. Leads I've found :

- checkpoints
- archive_command
- archive_cleanup

Master postgresq.conf :

[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f
bar...@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64

  What's this parameter's value on Slave?

 Hm... You have a point.
That autovacuum parameter seems to be useless on a slave.
I'll try to remove it and check pg_xlog.

 That was not my point. I was actually asking about wal_keep_segments.
Never mind, I found that I had missed the info (found it below; please see my
response).
Besides, I try to keep my master and standby configs as similar as possible
(so my advice is to not switch off autovacuum). The parameters which are
ineffective on the slave won't have an effect anyway; same goes for parameters
on the master.
This helps me when I swap roles or do a failover. I have fewer parameters to
be worried about.

 Can you check pg_log for the log files? They may have some info. I am sorry
if you have already provided that info (after I finish I will try to look
at your previous emails on this thread).

Also can you share the vacuum cost parameters in your environment?


autovacuum = on

Slave postgresql.conf :

[...]
wal_level = minimal
wal_keep_segments = 32

  Sorry I missed this somehow earlier. Any reason why you think you need to
retain 32 wal files on slave?


hot_standby = on

Slave recovery.conf :

standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f "%p"'
archive_cleanup_command =
'/usr/lib/postgresql/9.1/bin/pg_archivecleanup
/var/lib/postgresql/9.1/wal_archive/ %r'

 Also consider setting hot_standby_feedback to on.

How can I reduce the number of WAL files on the hot_stanby slave ?

Thanks

Regards.

Xavier C.



Re: [GENERAL] pg_xlog on a hot_stanby slave

2015-06-18 Thread Sameer Kumar
On Thu, 18 Jun 2015 15:17 Xavier 12  wrote:

On 18/06/2015 04:00, Sameer Kumar wrote:



On Wed, 17 Jun 2015 15:24 Xavier 12  wrote:


 On 17/06/2015 03:17, Sameer Kumar wrote:


On Tue, 16 Jun 2015 16:55 Xavier 12  wrote:

  Hi everyone,

Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).

Psql01 (master) is backuped with Barman and pg_xlogs is correctly
purged (archive_command is used).

Hower, Psql02 (slave) has a huge pg_xlog (951 files, 15G for 7 days
only, it keeps growing up until disk space is full). I have found
documentation and tutorials, mailing list, but I don't know what is
suitable for a Slave. Leads I've found :

- checkpoints
- archive_command
- archive_cleanup

Master postgresq.conf :

[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f
bar...@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64

   What's this parameter's value on Slave?

  Hm... You have a point.
That autovacuum parameter seems to be useless on a slave.
I'll try to remove it and check pg_xlog.



That was not my point. I was actually asking about wal_keep_segments.
Never mind, I found that I had missed the info (found it below; please see my
response).
Besides, I try to keep my master and standby configs as similar as possible
(so my advice is to not switch off autovacuum). The parameters which are
ineffective on the slave won't have an effect anyway; same goes for parameters
on the master.
This helps me when I swap roles or do a failover. I have fewer parameters to
be worried about.


 Okay


 Can you check pg_log for the log files? They may have some info. I am sorry
if you have already provided that info (after I finish I will try to look
at your previous emails on this thread).


 Nothing...
/var/log/postgresql/postgresql-2015-06-17_31.log is empty (except old
messages at the begining related to a configuration issue - which is now
solved - after rebuilding the cluster yesterday).
/var/log/syslog has nothing but these :

Jun 18 09:10:11 Bdd02 postgres[28400]: [2-1] 2015-06-18 09:10:11 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:10:41 Bdd02 postgres[28523]: [2-1] 2015-06-18 09:10:41 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:11:11 Bdd02 postgres[28557]: [2-1] 2015-06-18 09:11:11 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:11:41 Bdd02 postgres[28652]: [2-1] 2015-06-18 09:11:41 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:12:11 Bdd02 postgres[28752]: [2-1] 2015-06-18 09:12:11 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:12:41 Bdd02 postgres[28862]: [2-1] 2015-06-18 09:12:41 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:13:11 Bdd02 postgres[28891]: [2-1] 2015-06-18 09:13:11 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:13:40 Bdd02 postgres[28987]: [2-1] 2015-06-18 09:13:40 CEST LOG:
paquet de démarrage incomplet

These messages ("incomplete startup packet") are related to Zabbix (psql port check).


Are you sure these are the only messages you have in the log files?


 Also can you share the vacuum cost parameters in your environment?



  I don't understand that part... is this in postgresql.conf ?

 There are vacuum cost parameters in postgresql.conf

http://www.postgresql.org/docs/9.4/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-VACUUM-COST



autovacuum = on

Slave postgresql.conf :

[...]
wal_level = minimal
wal_keep_segments = 32



Sorry I missed this somehow earlier. Any reason why you think you need to
retain 32 wal files on slave?


 No but I get the feeling that the parameter is ignored by my slave...
should I try another value ?




AFAIK you don't need this parameter to be set to > 0 unless you have a cascaded
replica pulling WAL from the standby or backup jobs running against the
standby. Set it to 0 on the standby and check.
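
A quick way to verify the effect (just a sketch; pg_ls_dir needs superuser)
is to count the WAL segment files in the standby's pg_xlog before and after
the change:

    SELECT count(*)
    FROM pg_ls_dir('pg_xlog') AS f
    WHERE f ~ '^[0-9A-F]{24}$';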



hot_standby = on

Slave recovery.conf :

standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f "%p"'
archive_cleanup_command =
'/usr/lib/postgresql/9.1/bin/pg_archivecleanup
/var/lib/postgresql/9.1/wal_archive/ %r'



Also consider setting hot_standby_feedback to on.


 I will check that parameter in the documentation,

Thanks


 How can I reduce the number of WAL files on the hot_stanby slave ?


Re: [GENERAL] WAL log archival on standby

2015-06-20 Thread Sameer Kumar
On Fri, 19 Jun 2015 13:46 Michael Paquier  wrote:

On Fri, Jun 19, 2015 at 2:38 PM, James Sewell 
wrote:

Hey All,

Is it possible to create WAL archive files on standby servers when using
streaming replication?

 Yes and no, standbys do not archive WAL in ~9.4, but you could use
archive_mode = 'always' with the upcoming 9.5.

I know I can use archive_command or pg_receivexlog - but these will both
result in me sending information to the standby servers which has already
been sent via standard streaming replication. This would result in a
doubling of my network traffic.


 Ideally I'd prefer to conserve this bandwidth and write the files to a
local archive on any active standby.

 Both methods you mentioned are the way to go for now I am afraid, or you
wait for 9.5.

 I have not tried it yet, but it should be possible to use pg_receivexlog from
the standby. If I can use a standby server to create a streaming replica with
pg_basebackup, then it should be possible to stream WAL for archiving too.

Regards
Sameer


Re: [GENERAL] very slow queries and ineffective vacuum

2015-07-01 Thread Sameer Kumar
On Wed, Jul 1, 2015 at 3:37 AM Lukasz Wrobel <
lukasz.wro...@motorolasolutions.com> wrote:

> Hello.
>
> I have multiple problems with my database, the biggest of which is how to
> find out what is actually wrong.
>
> First of all I have a 9.3 postgres database that is running for about a
> month. Right now the queries on that database are running very slowly
> (select with a simple "where" on a non-indexed column on a table with about
> 5000 records takes 1,5s, a complicated hibernate select with 7 joins on
> tables of about 5000 records takes about 15s, insert or update on a table
> with 35000 records takes up to 20 mins).
>
> The tables and indexes on those tables are bloated to the point where this
> query: https://wiki.postgresql.org/wiki/Show_database_bloat shows wasted
> bytes in hundreds of MB.
>
> For whatever reason there is also no data in pg_stat* tables.
>

Make sure that your tracking parameters are on:
track_counts and track_activities
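
A quick check (just a sketch; on 9.3, changing these means editing
postgresql.conf and reloading, e.g. with pg_reload_conf()):

    SELECT name, setting, source
    FROM pg_settings
    WHERE name IN ('track_counts', 'track_activities');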


>
> So due to the long query times, there are multiple errors in my
> application logs like "No free connection available" or "Could not
> synchronize database state with session", or "Failed to rollback
> transaction" and the application fails to start in the required time.
>
>
One of the things you can do is to set statement_timeout in the PostgreSQL
configuration (but that may actually increase your problems by cancelling
long-running queries, of which there seem to be too many in your case).


> The only thing that helps fix the situation seems to be vacuum full of the
> entire database. Regular vacuum doesn't even lower the dead tuples count
> (which appear by the thousands during application launching).
>

Though I am not very sure, it seems to me this could be because
your track_counts and track_activities are not set to on. Since you are not
tracking them, the stats are not being updated at all.

Try this - vacuum analyze a table:

vacuum analyze schema_name.table_name;

then reindex that table:
reindex table schema_name.table_name;



> Reindex of all the indexes in the database didn't help as well. All
> autovacuum parameters are default.
>

Did you analyze the database tables? Since your track_counts is off (I have
assumed so based on your statements above), your database tables might never
have been analyzed, which could be leading to wrong/sub-optimal plans.


>
> There doesn't seem to be any issues with disk space, memory or CPU, as
> neither of those is even 50% used (as per df and top).
>
> Is there any good tool that will monitor the queries and generate a report
> with useful information on what might be the problem? I tried pg_badger,
> but all I got were specific queries and their times, but the long query
> times are just one of the symptoms of what's wrong with the database, not
> the cause.
>
> Perhaps I'm missing some indexes on the tables (creating them on the
> columns on which the where clause was used in the long queries seemed to
> halve their times).
>

Yes, if you create indexes then certainly those will be helpful depending
on the volume of data in that table.


> Also how can I monitor my transactions and if they are closed properly?
>

Check the pg_stat_activity view. There is a column for the state of the
connection; check whether there are too many connections in 'idle in
transaction' state. This means a connection has initiated a transaction but
has not committed it yet. You can combine the state with the status change
time (state_change) column:

select * from pg_stat_activity where
now()-state_change>'1 min'::interval and
state='idle in transaction';

This will list all those sessions which have not committed for the last one
minute.

You can look at using pgBouncer to effectively manage your sessions and
connections.


>
> I will be grateful for any help and if you need more details I can provide
> them if possible.
>
> Best regards.
> Lukasz
>


Re: [GENERAL] very slow queries and ineffective vacuum

2015-07-01 Thread Sameer Kumar
On Wed, Jul 1, 2015 at 4:51 AM Pavel Stehule 
wrote:

> Hi
>
> What is an output of VACUUM VERBOSE statement?
>
> VACUUM can be blocked by some forgotten transaction. Check your
> pg_stat_activity table for some old process in "idle in transaction" state.
> Then connection should not be reused, and you can see a error messages
> about missing connections. I found this issue more time in Java application
> - when it doesn't handle transactions correctly. Same effect can have
> forgotten 2PC transaction.
>
> When VACUUM long time was not executed - the most fast repair process is a
> export via pg_dump and load. Another way is dropping all indexes, VACUUM
> FULL and creating fresh indexes.
>
> Autovacuum is based on tracking statistics - you have to see your tables
> in table pg_stat_user_tables, and you can check there autovacuum timestamp.
> Sometimes autovacuum has too low priority and it is often cancelled.
>

As he has mentioned, he cannot see anything in the pg_stat* tables, which
means that probably track_counts and track_activities are set to off. In that
case, won't autovacuum be *unable* to do anything (since the count of row
changes etc. is not being captured)?


>
> Regards
>
> Pavel Stehule
>
> 2015-06-30 14:57 GMT+02:00 Lukasz Wrobel <
> lukasz.wro...@motorolasolutions.com>:
>
>> Hello.
>>
>> I have multiple problems with my database, the biggest of which is how to
>> find out what is actually wrong.
>>
>> First of all I have a 9.3 postgres database that is running for about a
>> month. Right now the queries on that database are running very slowly
>> (select with a simple "where" on a non-indexed column on a table with about
>> 5000 records takes 1,5s, a complicated hibernate select with 7 joins on
>> tables of about 5000 records takes about 15s, insert or update on a table
>> with 35000 records takes up to 20 mins).
>>
>> The tables and indexes on those tables are bloated to the point where
>> this query: https://wiki.postgresql.org/wiki/Show_database_bloat shows
>> wasted bytes in hundreds of MB.
>>
>> For whatever reason there is also no data in pg_stat* tables.
>>
>> So due to the long query times, there are multiple errors in my
>> application logs like "No free connection available" or "Could not
>> synchronize database state with session", or "Failed to rollback
>> transaction" and the application fails to start in the required time.
>>
>> The only thing that helps fix the situation seems to be vacuum full of
>> the entire database. Regular vacuum doesn't even lower the dead tuples
>> count (which appear by the thousands during application launching). Reindex
>> of all the indexes in the database didn't help as well. All autovacuum
>> parameters are default.
>>
>> There doesn't seem to be any issues with disk space, memory or CPU, as
>> neither of those is even 50% used (as per df and top).
>>
>> Is there any good tool that will monitor the queries and generate a
>> report with useful information on what might be the problem? I tried
>> pg_badger, but all I got were specific queries and their times, but the
>> long query times are just one of the symptoms of what's wrong with the
>> database, not the cause.
>>
>> Perhaps I'm missing some indexes on the tables (creating them on the
>> columns on which the where clause was used in the long queries seemed to
>> halve their times). Also how can I monitor my transactions and if they are
>> closed properly?
>>
>> I will be grateful for any help and if you need more details I can
>> provide them if possible.
>>
>> Best regards.
>> Lukasz
>>
>
>


Re: [GENERAL] very slow queries and ineffective vacuum

2015-07-02 Thread Sameer Kumar
On Thu, Jul 2, 2015 at 9:57 PM Lukasz Wrobel <
lukasz.wro...@motorolasolutions.com> wrote:

> Hello again.
>
> Thank you for all your responses. I will try to clarify more and attempt
> to answer the questions you raised.
>
> I'm attaching the postgresql.conf this time. I cannot supply you guys with
> a proper database schema, so I will try to supply you with some obfuscated
> logs and queries. Sorry for the complication.
>

Your postgresql.conf seems to have some issues. Can you explain the choice of
values for the parameters below?

maintenance_work_mem = 32MB
bgwriter_lru_maxpages = 0
synchronous_commit = off
effective_cache_size is left to default
random_page_cost is left to default

I don't know anything about your hardware - memory, CPU and disk layout (and
disk IOPS) - so I cannot really say what the right settings would be, but
this certainly does not seem right to me.


>
> First of all I seem to have misdirected you guys about the pg_stat*
> tables. I have a virtual machine with the database from our test team,
> which was running for a month. When I deploy it, our java application is
> not running, so no queries are being executed. The pg_stat* tables contain
> no data (which is surprising). When I launch the application and queries
> start going, the stats are collected normally and autovacuums are being
> performed.
>

It is still confusing to me. To help us understand, can you specifically
tell us whether you see anything in pg_stat_user_tables and pg_stat_user_indexes?
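
For example, something like this (just a sketch) would show whether the stats
are being populated and when the tables were last vacuumed/analyzed:

    SELECT relname, n_live_tup, n_dead_tup,
           last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;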


>
> I attached the output of vacuum verbose command.
>
> Seems like a lot of your tables have bloat.


> As for the pg_stat_activity, I have no "idle in transaction" records
> there, but I do have some in "idle" state, that don't disappear. Perhaps
> this means some sessions are not closed? I attached the query result as
> activity.txt.
>
> I also have a few "sending cancel to blocking autovacuum" and "canceling
> autovacuum task" messages in syslog.
>
>
Can you share some of these log files?




> Sample query explain analyze. This was ran after vacuum analyze of the
> entire database.
>
> explain analyze SELECT col1, col2, col3, col4, col5 FROM ( table84 table84
> LEFT JOIN table57 table57 ON table84.col7 = table57.col7 ) LEFT JOIN
> table19 table19 ON table84.col7 = table19.col7;
>  QUERY
> PLAN
>
> -
>  Hash Right Join  (cost=46435.43..108382.29 rows=189496 width=79) (actual
> time=4461.686..13457.233 rows=5749 loops=1)
>Hash Cond: (table57.col7 = table84.col7)
>->  Seq Scan on table57 table57  (cost=0.00..49196.63 rows=337963
> width=57) (actual time=0.040..8981.438 rows=6789 loops=1)
>->  Hash  (cost=42585.73..42585.73 rows=189496 width=38) (actual
> time=4447.731..4447.731 rows=5749 loops=1)
>  Buckets: 16384  Batches: 2  Memory Usage: 203kB
>  ->  Hash Right Join  (cost=18080.66..42585.73 rows=189496
> width=38) (actual time=1675.223..4442.046 rows=5749 loops=1)
>Hash Cond: (table19.col7 = table84.col7)
>->  Seq Scan on table19 table19  (cost=0.00..17788.17
> rows=187317 width=26) (actual time=0.007..2756.501 rows=5003 loops=1)
>->  Hash  (cost=14600.96..14600.96 rows=189496 width=20)
> (actual time=1674.940..1674.940 rows=5749 loops=1)
>  Buckets: 32768  Batches: 2  Memory Usage: 159kB
>  ->  Seq Scan on table84 table84  (cost=0.00..14600.96
> rows=189496 width=20) (actual time=0.059..1661.482 rows=5749 loops=1)
>  Total runtime: 13458.301 ms
> (12 rows)
>

You have a lot of issues with this plan:
- The statistics are not updated.
- There are a lot of hash joins and sequential scans, implying you don't have
proper indexes or the existing ones are not useful (meaning your indexes are
bloated too; consider reindexing them).




>
> Thank you again for your advice and I hope that with your help I'll be
> able to solve this issue.
>
> Best regards.
> Lukasz
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Replication

2016-06-02 Thread Sameer Kumar
On Thu, 2 Jun 2016, 10:34 p.m. Scott Mead,  wrote:

> On Thu, Jun 2, 2016 at 10:16 AM, Melvin Davidson 
> wrote:
>
>> It's been a few years since I worked with slony, and you did not state
>> which version of slony or PostgreSQL you are working with, nor did you
>> indicate the O/S.
>>
>
> I think OP had pointed to using streaming
>
>
>> That being said, you should be able to formulate a query with a join
>> between sl_path & sl_node that gives you the information you need.
>>
>> On Thu, Jun 2, 2016 at 9:32 AM, Bertrand Paquet <
>> bertrand.paq...@doctolib.fr> wrote:
>>
>>> Hi,
>>>
>>> On an hot standby streaming server, is there any way to know, in SQL, to
>>> know the ip of current master ?
>>> The solution I have is to read the recovery.conf file to find
>>> primary_conninfo, but, it can be false.
>>>
>>>
> I've run into this as well.  Only way is recovery.conf.
>

From 9.6 onward you will have a new view which lets you query the
replication details on the standby.

I have not tried it, but you can probably check the pid of the WAL receiver
process and find out which host it is connected to (should be possible from
network stats).
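
If I remember correctly, the 9.6 view is pg_stat_wal_receiver, so on the
standby something like this (untested sketch) should show the connection
string in use:

    SELECT pid, status, conninfo
    FROM pg_stat_wal_receiver;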


> --Scott
>
>
>
>> Regards,
>>>
>>> Bertrand
>>>
>>
>>
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>
>
> --
> --
> Scott Mead
> Sr. Architect
> *OpenSCG <http://openscg.com>*
> http://openscg.com
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] WAL's listing in pg_xlog by some sql query

2016-06-02 Thread Sameer Kumar
On Fri, 3 Jun 2016, 12:14 a.m. Alex Ignatov, 
wrote:

> Hello!
>
> Can I list all WAL files in pg_xlog  by using some sql query in Postgres?
>


Try

Select pg_ls_dir('pg_xlog');



> --
> Alex Ignatov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] WAL's listing in pg_xlog by some sql query

2016-06-02 Thread Sameer Kumar
On Fri, Jun 3, 2016 at 4:30 AM Stephen Frost  wrote:

> * Sameer Kumar (sameer.ku...@ashnik.com) wrote:
> > On Fri, 3 Jun 2016, 12:14 a.m. Alex Ignatov, 
> > wrote:
> > > Can I list all WAL files in pg_xlog  by using some sql query in
> Postgres?
> >
> > Try
> >
> > Select pg_ls_dir('pg_xlog');
>
> Note that this currently requires superuser privileges.
>
> Given the usefulness of this specific query and that it could be used
> without risk of the user being able to gain superuser access through it,
> I'd like to see a new function added which does not have the superuser
> check, but is not allowed to be called by public initially either.
>

Can I not wrap it in another user-defined function with SECURITY
DEFINER and grant the privilege to specific users who may use it?
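
Something along these lines, perhaps (only a sketch - the wrapper has to be
created by a superuser, and monitor_role is a hypothetical role name):

    CREATE OR REPLACE FUNCTION pg_xlog_file_list()
    RETURNS SETOF text
    LANGUAGE sql
    SECURITY DEFINER
    SET search_path = pg_catalog
    AS $$ SELECT pg_ls_dir('pg_xlog') $$;

    REVOKE ALL ON FUNCTION pg_xlog_file_list() FROM PUBLIC;
    GRANT EXECUTE ON FUNCTION pg_xlog_file_list() TO monitor_role;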


>
> Something along the lines of 'pg_xlog_file_list()', perhaps.  There is a
> check in check_postgres.pl which could take advantage of this also.
> Should be a very straight-forward function to write, perhaps good as a
> starter project for someone.
>
> Thanks!
>
> Stephen
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] connection pooling, many users, many datasources

2016-06-08 Thread Sameer Kumar
On Thu, 9 Jun 2016, 12:50 a.m. Michael McInness, 
wrote:

> I am working with a system that uses JDBC and JNDI-based connection
> pooling. There are currently many organizations that use the system. Each
> of the organizations has multiple, individual system users.
>
> Currently, each entity has its own database and a corresponding
> application-based datasource and connection pool.
>
> Because of the nature of the data, it is of utmost importance that each
> organization's data is secure and inaccessible to other entities.
>
> Maintaining individual datasources/connection pools for each entity is
> extremely cumbersome. It just doesn't seem like a good solution as the
> number of entities grows to many dozens.
>
> And while handling connections on-demand would provide a dynamic
> alternative, it also feels like a bad option as I would hate to lose the
> performance and robustness of pooling connections.
>

Consider using a database-side pooler which can pool connections for each
user and dbname pair. You might want to check pgpool and pgbouncer.
With one of them you can get the benefits of pooling and yet let the
application make connections on the fly as requests come in.


> How about using SET/RESET ROLE so I could connect to the db under a group
> role and then changing roles per session to restrict access to
> entity-specific schema (assuming permissions are set correctly in the
> database)?
>

I am not sure if this is the right use case for SET ROLE.

Using different users (either with the same app server or different ones) is
the best way of ensuring data is not accessible across organizations.


> Are there other options?
>
> Profuse appreciation for your thoughts and suggestions,
>
> Mick
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Re: regarding schema only migration from sqlserver to postgres with runmtk.sh

2016-06-16 Thread Sameer Kumar
@gmail.com>
> > <mailto:maheshpostgr...@gmail.com
> > <mailto:maheshpostgr...@gmail.com>>>wrote:
> >
> >
> > sir
> >
> > PostgreSQL always no1 in world
> > as per above discussion
> >
> > migration not working from sql to postgreSQL  with
> > runmtk.sh
> >
> >
> > Connecting with source SQL Server database server...
> > Connected to Microsoft SQL Server, version
> '10.50.1600'
> > Connecting with target EnterpriseDB database
> server...
> > MTK-10045: The URL specified for the "target"
> > database is
> > invalid.
> > Check the connectivity credentials.
> > Stack Trace:
> > com.edb.MTKException: MTK-10045: The URL specified
> > for the
> > "target" database is invalid.
> > Check the connectivity credentials.
> >
> >
> >
> > same error repeated even there mentioned correct
> > credentials
> > as here i checked multiple times
> >
> >
> > ​Maybe you should try using an IP address instead of a
> > host name.
> >
> > David J.​
> >
> >
> >
> >
> >
> > I've never used this tool so like everyone else I am guessing here
> > but having quickly looked at the documentation have you tried
> > specifying the '-targetdbtype postgres' in addition to the
> > '-sourcedbtype sqlserver' that you have already specified?
> >
> > Neil A
> >
> >
> >
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org
> > <mailto:pgsql-general@postgresql.org>)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
> >
> >
>
> That appears to be an issue finding the right Java library to connect to
> Postgres. The documentation here outlines where to download and install
> the JDBC drivers
>
> https://www.enterprisedb.com/docs/en/9.5/migrate/Postgres_Plus_Migration_Guide.1.13.html#
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Hot disable WAL archiving

2016-06-17 Thread Sameer Kumar
On Fri, 17 Jun 2016, 9:12 p.m. Igor Neyman,  wrote:

>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] On Behalf Of Job
> Sent: Friday, June 17, 2016 9:01 AM
> To: Albe Laurenz ; pgsql-general@postgresql.org
> Subject: [GENERAL] R: Hot disable WAL archiving
>
> Hi Albe and thank you, first of all.
>
> Is there a way to disable Wal logging only for specific table, permanently?
>

Yes.

You can set the table to UNLOGGED (note that an unlogged table is not
crash-safe and is not replicated to standbys):

ALTER TABLE table_name SET UNLOGGED;

https://www.postgresql.org/docs/9.5/static/sql-altertable.html

>
> Thank you again!
> Francesco
>
> 
> Da: Albe Laurenz [laurenz.a...@wien.gv.at]
> Inviato: venerdì 17 giugno 2016 13.48
> A: Job; pgsql-general@postgresql.org
> Oggetto: RE: Hot disable WAL archiving
>
> Job wrote:
> > is there a way in Postgresql-9.5 to disable temporarily WAL archiving
> > to speed up pg_bulkload with restarting database engine?
>
>
> You can set 'archive_command=/bin/true' and reload, then no WAL archives
> will be written.
>
> Make sure to perform a base backup as soon as your bulk load is finished.
>
> Yours,
> Laurenz Albe
>
> --
>
> Francesco,
>
> Check CREATE UNLOGGED TABLE... in the docs.
>
> Regards,
> Igor
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] pg_dump from a hot standby replication slave

2016-06-20 Thread Sameer Kumar
On Tue, 21 Jun 2016, 4:03 a.m. Johan Thomsen, 
wrote:

> Hi,
>
> In relation to this thread:
>
> https://www.postgresql.org/message-id/0B4197B9-5DB8-4475-A83C-38DA5965782A%40etasseo.com
>
> > I ran the pg_dump process again this morning, ensuring that the standby
> > parameters were set, and it completed successfully with the
> > hot_standby_feedback enabled.
>
> In case I were to set the hot_standby_feedback param to "on" at my
> slave node for the _only_ reason that I would like to make a dump of a
> database while replication is running, how would it affect the primary
> node performance-wise?
>

In my understanding it might create some bloat on the primary, but that should
get cleaned up eventually.
Though I think if you are using the standby for read purposes, this parameter
should be kept on.


> Worth mentioning: I do not require the dumped data to be highly
> up-to-date, I just need a static snapshot, knowing perfectly well that
> data is updated continuously while dumping.
>
> Are there perhaps other ways to perform a dump of a hot standby node
> without affecting the primary node performance and without stopping or
> pausing the replication?
>

Are you using archiving as well?
Consider using pg_basebackup to take a physical backup of the standby.

Version: postgres (PostgreSQL) 9.2.2
>
> Thank you in advance,
>
>
> Regards,
> Johan
>
>
> --
> Johan Thomsen
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Protect a table against concurrent data changes while allowing to vacuum it

2016-06-22 Thread Sameer Kumar
On Wed, Jun 22, 2016 at 5:10 PM Vlad Arkhipov  wrote:

> Hello,
>
> I have a constraint that requires a table to be locked before checking
> it (i.e. no more than 2 records with the same value in the same column).
> If I lock the table in the SHARE ROW EXCLUSIVE mode, any vacuuming (or
> autovacuuming) process prevents me from checking the constraint. What
> are possible solutions?
>

Maybe you would like to share:
- Table structure
- PostgreSQL version

This will help the people who try to help you.

I think you might want to consider an optimistic locking approach for your
records instead of locking the whole table, or look at advisory locks (but
that depends on your Postgres version).


>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Protect a table against concurrent data changes while allowing to vacuum it

2016-06-22 Thread Sameer Kumar
On Wed, Jun 22, 2016 at 6:08 PM Vlad Arkhipov  wrote:

> I am running PostgreSQL 9.5.
>
> CREATE TABLE t (id BIGINT NOT NULL PRIMARY KEY, name TEXT);
>
> The constraint that the data must satisfy is `there is no more than 3
> records with the same name`.
>
> I am not in control of queries that modify the table, so advisory locks
> can hardly be of help to me.
>

Define a function which does a count of the rows: if the count is 3 it
returns false, and if the count is less it returns true.

Use a check constraint with this function. I have not tried this, so I am not
sure whether you can use a function with a SELECT on the same table in a
CHECK constraint - test it out first.

If this works, any insert trying to add a 4th record with the same name would fail.

A last resort could be using triggers, as sketched below. But either of these
approaches will cause issues if you have high concurrency.
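
Just to illustrate the trigger variant, a sketch using the table definition
above (it is still racy under concurrent inserts unless you also take a lock
or run at serializable isolation):

    CREATE OR REPLACE FUNCTION check_name_limit() RETURNS trigger AS $$
    BEGIN
        -- reject the row if 3 rows with this name already exist
        IF (SELECT count(*) FROM t WHERE name = NEW.name) >= 3 THEN
            RAISE EXCEPTION 'no more than 3 rows allowed with name %', NEW.name;
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER t_name_limit
        BEFORE INSERT OR UPDATE OF name ON t
        FOR EACH ROW EXECUTE PROCEDURE check_name_limit();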



>
>
> On 06/22/2016 05:20 PM, Sameer Kumar wrote:
>
>
>
> On Wed, Jun 22, 2016 at 5:10 PM Vlad Arkhipov 
> wrote:
>
>> Hello,
>>
>> I have a constraint that requires a table to be locked before checking
>> it (i.e. no more than 2 records with the same value in the same column).
>> If I lock the table in the SHARE ROW EXCLUSIVE mode, any vacuuming (or
>> autovacuuming) process prevents me from checking the constraint. What
>> are possible solutions?
>>
>
> May be you would like to share-
> - Table Structure
> - PostgreSQL version
>
> This will help people who would try to help you.
>
> I think you might want to consider an optimistic way of locking your
> records, instead of locking them. Or look at advisory locks (but that
> depends on your Postgres version).
>
>
>>
>>
>> --
>> Sent via pgsql-general mailing list ( 
>> pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
> --
> --
> Best Regards
> Sameer Kumar | DB Solution Architect
> *ASHNIK PTE. LTD.*
>
> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>
> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>
>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


[GENERAL] What Causes Access Exclusive Lock?

2016-06-23 Thread Sameer Kumar
Hi,

I just wanted to understand which commands will acquire an Access Exclusive
lock on a table. To my knowledge, the operations below will acquire an
access exclusive lock:

1. VACUUM FULL
2. ALTER TABLE
3. DROP TABLE
4. TRUNCATE
5. REINDEX
6. LOCK command with Access Exclusive Mode (or no mode specified)

I am using PostgreSQL v9.4.



Regards
Sameer
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] What Causes Access Exclusive Lock?

2016-06-23 Thread Sameer Kumar
On Thu, Jun 23, 2016 at 11:57 PM Adrian Klaver 
wrote:

> On 06/23/2016 08:14 AM, Sameer Kumar wrote:
> >
> > Hi,
> >
> > I just wanted to understand what are the commands which will acquire
> > Access Exclusive Lock on a table? In my knowledge below operations will
> > acquire access exclusive lock:-
> >
> > 1. VACUUM FULL
> > 2. ALTER TABLE
> > 3. DROP TABLE
> > 4. TRUNCATE
> > 5. REINDEX
> > 6. LOCK command with Access Exclusive Mode (or no mode specified)
> >
> > I am using PostgreSQL v9.4.
>
> https://www.postgresql.org/docs/9.4/static/explicit-locking.html
>
> ACCESS EXCLUSIVE
>

Thanks!
I had checked that and arrived at the list above.

The reason I wanted to confirm is that I am facing a situation similar to (or
rather the same as) the one described in the two threads below -

https://www.postgresql.org/message-id/7f74c5ea-6741-44fc-b6c6-e96f18d76...@simply.name

https://www.postgresql.org/message-id/BE95C564-0D49-462A-A57C-4C9DF6238F71%40simply.name

pg_stat_database_conflicts.confl_lock is *non-zero* and connections on
standby (idle in transaction or executing SELECT) are disconnected.

I *do not* see the message -
"User query might have needed to see row versions that must be removed."

But I do see disconnections on the standby because of a "relation lock" being
held for too long.

From what I understood, if there is a LOCK conflict on the standby (between a
session and WAL replay), it might even cause disconnection of an "idle in
transaction" session (the one causing the conflict on the standby). Is this
right?

My understanding is that only Access Exclusive locks will conflict with a
read-only query. Is that right?

So I checked and confirmed that there is no operation on the master which
would result in an Access Exclusive lock.

I am using v9.4.4. Is there a bug which is hitting me, or is there any other
kind of query which might cause a lock conflict on the standby?

The threads above seem to describe the same issue, but I did not see any
conclusive explanation in the hackers or admin threads.

>
> >
> >
> > Regards
> > Sameer
> > --
> > --
> > Best Regards
> > Sameer Kumar | DB Solution Architect
> > *ASHNIK PTE. LTD.*
> >
> > 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
> >
> > T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] What Causes Access Exclusive Lock?

2016-06-23 Thread Sameer Kumar
On Fri, 24 Jun 2016, 1:47 a.m. Jeff Janes,  wrote:

> On Thu, Jun 23, 2016 at 8:14 AM, Sameer Kumar 
> wrote:
> >
> > Hi,
> >
> > I just wanted to understand what are the commands which will acquire
> Access
> > Exclusive Lock on a table? In my knowledge below operations will acquire
> > access exclusive lock:-
> >
> > 1. VACUUM FULL
> > 2. ALTER TABLE
> > 3. DROP TABLE
> > 4. TRUNCATE
> > 5. REINDEX
> > 6. LOCK command with Access Exclusive Mode (or no mode specified)
> >
> > I am using PostgreSQL v9.4.
>
> A regular VACUUM (not a FULL one), including autovac, will take an
> ACCESS EXCLUSIVE lock if it believes there are enough empty
> (truncatable) pages at the end of the table to be worth truncating and
> returning that storage to the OS. On master it will quickly abandon
> the lock if it detects someone else wants it, but that does not work
> on a standby.
>

Thanks! This is helpful. I believe that, going by this explanation, I can try
to reproduce this issue manually.

Is this part about a regular vacuum acquiring an Access Exclusive lock
documented? I did not see a reference to it on the page for Explicit Locking.


> Before version 9.6, if there are bunch of all-visible (but non-empty)
> pages at the end of the table, then every vacuum will think it can
> possibly truncate those pages, take the lock, and immediately realize
> it can't truncate anything and release the lock. On master, this is
> harmless, but on a standby it can lead to spurious cancellations.  In
> 9.6, we made it check those pages to see if they actually are
> truncatable before it takes the lock, then check again after it has
> the lock to make sure they are still truncatable.  That should greatly
> decrease the occurrence of such cancellations.
>
>
> Cheers,
>
> Jeff
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] What Causes Access Exclusive Lock?

2016-06-23 Thread Sameer Kumar
On Fri, 24 Jun 2016, 1:54 a.m. Sameer Kumar, 
wrote:

>
>
> On Fri, 24 Jun 2016, 1:47 a.m. Jeff Janes,  wrote:
>
>> On Thu, Jun 23, 2016 at 8:14 AM, Sameer Kumar 
>> wrote:
>> >
>> > Hi,
>> >
>> > I just wanted to understand what are the commands which will acquire
>> Access
>> > Exclusive Lock on a table? In my knowledge below operations will acquire
>> > access exclusive lock:-
>> >
>> > 1. VACUUM FULL
>> > 2. ALTER TABLE
>> > 3. DROP TABLE
>> > 4. TRUNCATE
>> > 5. REINDEX
>> > 6. LOCK command with Access Exclusive Mode (or no mode specified)
>> >
>> > I am using PostgreSQL v9.4.
>>
>> A regular VACUUM (not a FULL one), including autovac, will take an
>> ACCESS EXCLUSIVE lock if it believes there are enough empty
>> (truncatable) pages at the end of the table to be worth truncating and
>> returning that storage to the OS. On master it will quickly abandon
>> the lock if it detects someone else wants it, but that does not work
>> on a standby.
>>
>
> Thanks! This is helpful. I believe going by this explaination I can try to
> reproduce this issue manually.
>

Thanks!
I could reproduce this.

The test setup-

1. I have master and standby databases. To get the error I reduced
max_standby_streaming_delay to 10s
2. On the standby, start a new transaction and read data from a fairly large
table

Begin transaction;
Select count(*) from table_with10k_rows;

3. On the master, delete rows from the end of this table (i.e. the rows
inserted last)

4. Run a vacuum on the table on the master (a plain VACUUM, not VACUUM FULL).

5. Go back to the transaction on the standby and fire
Select 1;

6. You will see that the session is disconnected

I repeated this a few times, and if I don't run vacuum manually (and wait for
a while) autovacuum fires and results in a similar situation.

I repeated the same steps with the REPEATABLE READ isolation level for the
standby transaction and I got SQLSTATE 40001, but with the detail "User query
might have needed to see row versions that must be removed". I have
hot_standby_feedback on.
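
On the standby, these cancellations also show up in pg_stat_database_conflicts;
a quick (purely illustrative) check is:

select datname, confl_lock, confl_snapshot
  from pg_stat_database_conflicts
 where datname = current_database();

confl_lock counts the lock conflicts, confl_snapshot the snapshot ("row
versions that must be removed") ones.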

Thanks!


> Is this part about regular vacuum acquiring an AccessExclusive Lock
> documented? I did not see a reference to it on page for Explicit Locking.
>
>
>> Before version 9.6, if there are bunch of all-visible (but non-empty)
>> pages at the end of the table, then every vacuum will think it can
>> possibly truncate those pages, take the lock, and immediately realize
>> it can't truncate anything and release the lock. On master, this is
>> harmless, but on a standby it can lead to spurious cancellations.  In
>> 9.6, we made it check those pages to see if they actually are
>> truncatable before it takes the lock, then check again after it has
>> the lock to make sure they are still truncatable.  That should greatly
>> decrease the occurrence of such cancellations.
>>
>>
>> Cheers,
>>
>> Jeff
>>
> --
> --
> Best Regards
> Sameer Kumar | DB Solution Architect
> *ASHNIK PTE. LTD.*
>
> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>
> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] What Causes Access Exclusive Lock?

2016-06-23 Thread Sameer Kumar
On Fri, 24 Jun 2016, 6:23 a.m. Jeff Janes,  wrote:

> On Thu, Jun 23, 2016 at 10:54 AM, Sameer Kumar 
> wrote:
> >
> >
> > On Fri, 24 Jun 2016, 1:47 a.m. Jeff Janes,  wrote:
> >>
> >> On Thu, Jun 23, 2016 at 8:14 AM, Sameer Kumar 
> >> wrote:
> >> >
> >> > Hi,
> >> >
> >> > I just wanted to understand what are the commands which will acquire
> >> > Access
> >> > Exclusive Lock on a table? In my knowledge below operations will
> acquire
> >> > access exclusive lock:-
> >> >
> >> > 1. VACUUM FULL
> >> > 2. ALTER TABLE
> >> > 3. DROP TABLE
> >> > 4. TRUNCATE
> >> > 5. REINDEX
> >> > 6. LOCK command with Access Exclusive Mode (or no mode specified)
> >> >
> >> > I am using PostgreSQL v9.4.
> >>
> >> A regular VACUUM (not a FULL one), including autovac, will take an
> >> ACCESS EXCLUSIVE lock if it believes there are enough empty
> >> (truncatable) pages at the end of the table to be worth truncating and
> >> returning that storage to the OS. On master it will quickly abandon
> >> the lock if it detects someone else wants it, but that does not work
> >> on a standby.
> >
> >
> > Thanks! This is helpful. I believe going by this explaination I can try
> to
> > reproduce this issue manually.
> >
> > Is this part about regular vacuum acquiring an AccessExclusive Lock
> > documented? I did not see a reference to it on page for Explicit Locking.
>
> Not that I know of.  I don't think any part of the user documentation
> attempts to make an exhaustive list of all actions which take which
> level of locks.  It only provides some illustrative examples.
>


Thanks!
But is it something which is worth mentioning on the page about VACUUM?


> Cheers,
>
> Jeff
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] pg_dump fundenental question

2016-07-05 Thread Sameer Kumar
On Wed, 6 Jul 2016, 1:39 a.m. J. Cassidy,  wrote:

> Francisco,
>
> appreciate the tips/hints.
>
> My input (source) DB is  1TB in size, using the options as stated in my
> original email (i.e. no compression it would seem) the output file size is
> "only" 324GB.
>

That would be because indexes do not take any space in the backup, since they
are just an SQL statement there. Some space might also be saved because of
bloat in your DB (which is not copied over in the SQL backup).


I presume all of the formatting/indices have been omitted. As I said
> before, I can browse the backup file with less/head/cat/tail etc.
>

Of course you can, but consider using the custom or directory format (what is
your version? It is good practice to state that along with your query). With
the directory format you can use -j to dump with multiple parallel jobs, and
-Z to set the compression level.

pg_dump -d prod_db -Fd -j6 -f /mybackups/20160706-prod-db -Z9

If you ever want to browse the backup or look at a specific table from the
backup, pg_restore with -f will be helpful, e.g.

pg_restore -f emp-from-backup.sql -Fd /mybackups/20160706-prod-db
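
If you only want to see what is inside the archive, pg_restore -l prints its
table of contents, and -t limits the output to a single table (the table name
here is just an example):

pg_restore -l -Fd /mybackups/20160706-prod-db
pg_restore -t emp -f emp-from-backup.sql -Fd /mybackups/20160706-prod-db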




> Regards,
>
> Henk
>
>
>
> On Tue, Jul 5, 2016 at 4:54 PM, J. Cassidy  wrote:
> > I have hopefully an "easy" question.
> > If I issue the pg_dump command with no switches or options i.e.
> > /usr/local/pgsql/bin/pg_dump -v dbabcd > /data3TB/Backup/dbabcd
> > Is their any "default" compression involved or not? Does pg_dump talk to
> > zlib during the dump process given that I have not specified any
> compression
> > on the > command line? (see above).
>
> IIRC no options means you are requesting an SQL-script. Those are not
> compressed, just pipe them through your favorite compressor. ( In a
> later message you stated you were in Linux and had a 324Gb file, and
> could head/tail it, have you done so? ).
>
> > Your considered opinions would be much appreciated.
>
> OTOH, with those big backup I would recommend using custom format (
> -Fc ), it's much more flexible ( andyou can have the sql script but
> asking pg_restore to generate it if you need it, but not the other way
> round ).
>
>
> Francisco Olarte.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
>
>

-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Cascade streaming replication + wal_files - Pgsql 9.2

2016-07-06 Thread Sameer Kumar
On Thu, 7 Jul 2016, 9:20 a.m. Patrick B,  wrote:

> Hi guys,
>
> I got two slaves using Postgresql 9.2.
>
> slave01 - Streaming replication from master
> slave02 - nothing.. new server
>
> is it possible to get slave02 replicating from slave01 (Cascade streaming
> replication) this can be done with streaming ?
>

Yes, a cascaded standby can be set up with Streaming Replication in v9.2 and
above.

and also slave01 sending the wal_files to that new slave02?
>

What exactly do you mean here by "sending the wal_files"??


> So it would be on slave01
>
> archive_mode = ok
> archive_command = 
>

This will have no effect on a standby server; archive_command will be ignored
there.

Archiving from a standby is possible only from v9.5 onward, when archive_mode
is set to 'always'.

wal_level = hot_standby
> max_wal_senders = 2
> wal_keep_segments = 128
>
> Is that right?
>

If you want to set up archiving from slave01, check out pg_receivexlog.


https://www.postgresql.org/docs/9.2/static/app-pgreceivexlog.html

It might be helpful, as it does something similar to archiving but over the
streaming protocol (so I think it should work even in cascaded mode). Since it
uses the streaming protocol, max_wal_senders on slave01 will need to be 2
(what you have set now) + 1 (for pg_receivexlog)
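
A minimal invocation would look roughly like this (host, user and target
directory are placeholders), run on whichever box should keep the archives:

pg_receivexlog -h slave01 -p 5432 -U replicator -D /archive/wal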


> Cheers guys!
> Patrick
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Cascade streaming replication + wal_files - Pgsql 9.2

2016-07-06 Thread Sameer Kumar
On Thu, 7 Jul 2016, 9:51 a.m. Patrick B,  wrote:

>
>>
>> On Thu, 7 Jul 2016, 9:20 a.m. Patrick B, 
>> wrote:
>>
>>> Hi guys,
>>>
>>> I got two slaves using Postgresql 9.2.
>>>
>>> slave01 - Streaming replication from master
>>> slave02 - nothing.. new server
>>>
>>> is it possible to get slave02 replicating from slave01 (Cascade
>>> streaming replication) this can be done with streaming ?
>>>
>>
>> Yes, a cascaded standby can be setup with Streaming Replication in v9.2
>> and above.
>>
>> and also slave01 sending the wal_files to that new slave02?
>>>
>>
>> What exactly do you mean here by "sending the wal_files"??
>>
>>
>>> So it would be on slave01
>>>
>>> archive_mode = ok
>>> archive_command = 
>>>
>>
>> This will have no effect on a standby server. archive_command will be
>> neglected
>>
>> This is possible only in v9.5 onward when archive_mode is set to always.
>>
>> wal_level = hot_standby
>>> max_wal_senders = 2
>>> wal_keep_segments = 128
>>>
>>> Is that right?
>>>
>>
>> If you want to set up archiving from slave01, checkout pg_receivexlog.
>>
>>
>> https://www.postgresql.org/docs/9.2/static/app-pgreceivexlog.html
>>
>> It might be helpful as it does something similar to archiving but using
>> streaming protocol (so I think it should work even in cascaded mode). But
>> since it works using streaming protocol, max_wal_senders on slave01 will be
>> 2(what you have set now) + 1 (for pg_reveivexlog)
>>
>>
>
> ok.. got a little confused now.
>

I thought I did my best to explain :)



> the only way to archive the wal_files from a slave into another slave on
> PostgreSQL 9.2, is by using pg_reveivexlog?
>

Yes. Though I have never tried this myself, going by the theory it should
work.

> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Cascade streaming replication + wal_files - Pgsql 9.2

2016-07-06 Thread Sameer Kumar
On Thu, 7 Jul 2016, 9:52 a.m. Sameer Kumar,  wrote:

>
>
> On Thu, 7 Jul 2016, 9:51 a.m. Patrick B,  wrote:
>
>>
>>>
>>> On Thu, 7 Jul 2016, 9:20 a.m. Patrick B, 
>>> wrote:
>>>
>>>> Hi guys,
>>>>
>>>> I got two slaves using Postgresql 9.2.
>>>>
>>>> slave01 - Streaming replication from master
>>>> slave02 - nothing.. new server
>>>>
>>>> is it possible to get slave02 replicating from slave01 (Cascade
>>>> streaming replication) this can be done with streaming ?
>>>>
>>>
>>> Yes, a cascaded standby can be setup with Streaming Replication in v9.2
>>> and above.
>>>
>>> and also slave01 sending the wal_files to that new slave02?
>>>>
>>>
>>> What exactly do you mean here by "sending the wal_files"??
>>>
>>>
>>>> So it would be on slave01
>>>>
>>>> archive_mode = ok
>>>> archive_command = 
>>>>
>>>
>>> This will have no effect on a standby server. archive_command will be
>>> neglected
>>>
>>> This is possible only in v9.5 onward when archive_mode is set to always.
>>>
>>> wal_level = hot_standby
>>>> max_wal_senders = 2
>>>> wal_keep_segments = 128
>>>>
>>>> Is that right?
>>>>
>>>
>>> If you want to set up archiving from slave01, checkout pg_receivexlog.
>>>
>>>
>>> https://www.postgresql.org/docs/9.2/static/app-pgreceivexlog.html
>>>
>>> It might be helpful as it does something similar to archiving but using
>>> streaming protocol (so I think it should work even in cascaded mode). But
>>> since it works using streaming protocol, max_wal_senders on slave01 will be
>>> 2(what you have set now) + 1 (for pg_reveivexlog)
>>>
>>>
>>
>> ok.. got a little confused now.
>>
>
> I thought I did my best to explain :)
>
>
>
>> the only way to archive the wal_files from a slave into another slave on
>> PostgreSQL 9.2, is by using pg_reveivexlog?
>>
>
> Yes. Though I have never tried this myself but going by the theory, it
> should work.
>

I think that with pg_receivexlog in v9.2 you will not be able to get the
timeline switch information. So while the WAL files received using
pg_receivexlog can be used for replication or even for PITR, complex recovery
or continuing replication after promotion of slave01 will not be possible. Up
to v9.2 this timeline switch info is recorded only in archives generated via
archive_mode (on), whereas pg_receivexlog creates archives only from live WAL
using a WAL sender.

Though you may still set archive_command on slave01. When your master goes
down and slave01 (if you choose to) is promoted to be a master, the
archive_command set on slave01 becomes effective and will ship the archives to
slave02, or wherever you point archive_command. But as long as slave01 is a
standby you will have to rely on pg_receivexlog (running on slave02), and by
doing that you will miss the timeline switch info.
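
For reference, the kind of setting meant here on slave01 (the destination path
is just a placeholder) would be:

archive_mode = on
archive_command = 'rsync -a %p slave02:/archive/wal/%f'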



-- 
> --
> Best Regards
> Sameer Kumar | DB Solution Architect
> *ASHNIK PTE. LTD.*
>
> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>
> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Cascade streaming replication + wal_files - Pgsql 9.2

2016-07-06 Thread Sameer Kumar
On Thu, 7 Jul 2016, 9:59 a.m. John R Pierce,  wrote:

> On 7/6/2016 6:52 PM, Sameer Kumar wrote:
>
>
>> the only way to archive the wal_files from a slave into another slave on
>> PostgreSQL 9.2, is by using pg_reveivexlog?
>>
>
> Yes. Though I have never tried this myself but going by the theory, it
> should work.
>
>
> a slave doesn't generate wal_files at all.
>
> if your master is keeping a wal_archive, slaves, including cascaded
> streaming slaves, can all use that same archive as their wal source, these
> are only used for catchup when streaming is interrupted.
>

But a slave can send WAL to a WAL receiver, and pg_receivexlog is essentially
a WAL receiver. I will hold back from any further comments until I have tried
it.

But a good question here is: why do you want to do that? Why archive from the
standby to the cascaded standby? If it is only for catch-up, the archives from
the master are good enough


>
>
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Cascade streaming replication + wal_files - Pgsql 9.2

2016-07-06 Thread Sameer Kumar
On Thu, 7 Jul 2016, 10:10 a.m. Patrick B,  wrote:

> ok...
>
> I'll archive the wal_files locally on slave01,
>

How? I assume you are planning on doing this with scp/rsync in the
archive_command on the upstream master.

and then send them to slave02 every 10 minutes, for example.
>

Sounds good. But why? Is it just to have redundant archives?
On your upstream master you can also use a custom script which in turn would
scp the WALs to 2 or 3 servers for archiving purposes.
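
A sketch of such a script (host names and paths are made up; the point is that
archive_command must report success only if every copy succeeded):

# postgresql.conf on the master
archive_command = '/usr/local/bin/archive_wal.sh %p %f'

# /usr/local/bin/archive_wal.sh
#!/bin/sh
set -e
scp "$1" slave01:/archive/wal/"$2"
scp "$1" slave02:/archive/wal/"$2"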

This will be done by rsync or something else.
>
> I'll have still the problem with timeline, and slave02 won't be able to
> become a master in the future.
>

No, you should not have a problem, since rsync will also copy the
timeline-related files to slave02.


> Is that right?
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Cascade streaming replication + wal_files - Pgsql 9.2

2016-07-06 Thread Sameer Kumar
On Thu, 7 Jul 2016, 10:32 a.m. Patrick B,  wrote:

> The slave02 server will be a copy of the DB into Amazon. There will be a
> migration to Amazon in the future, and the company's managers want a copy
> of the current DB there to test and do all the stuffs they need (migrating
> to 9.5, too).
>

Have you checked out Amazon's DMS?


> slave01 is already working as a streaming replication server.
> The master server sends the wal_files to slave01 by archive_command.
>
> The plan below isn't my idea, I would do different but isn't my call:
>

Been there :)


>
> *Current scenario:*
>
> master stores wal_files into slave01
> slave02 does not exists
>
>
> *The plan is:*
>
> 1. setup slave02 at amazon EC2 (just for testing and future master server
> for devs)
>

Is it EC2 Classic? Or is it EC2 Virtual Pvt Cloud (VPC)?


> 2. setup postgres on slave02 (9.2)
> 3. pg_basebackup will be run from slave01. This will split the base in
> files of 50GB each (example)
> 4. Send the splitted files from slave01 to slave02
> 5. restore/join the files
> 6. start postgres on the slave02 slave
> 7. restore the DB using the wal_files from slave01
>

Given that slave02 is a standby, how do you plan on doing your regression
testing? It will be just a read only database.

You can restore the WAL files by specifying restore_command to copy from the
archive generated by the master (rsync or scp to pull from your on-premises
setup to EC2). This would be fairly simple if you are using a VPC.

Question:
>
> Is possible to make slave01 archive the wal_files?
>

If you really cannot just live with the archive generated on the master
itself, you need to try the options discussed upthread:
1. Copy the archives generated on the master to a shared location, or maybe
copy them to an S3 bucket
2. Rsync the archives generated on the master to EC2 on a schedule
3. Run pg_receivexlog on EC2 to copy WAL from slave01

can just be a archive_command and archive_mode = on?
>

Setting these parameters on slave02 will not have any effect.

> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Cascade streaming replication + wal_files - Pgsql 9.2

2016-07-06 Thread Sameer Kumar
On Thu, Jul 7, 2016 at 11:02 AM Patrick B  wrote:

> 2016-07-07 14:55 GMT+12:00 Sameer Kumar :
>
>>
>>
>> On Thu, 7 Jul 2016, 10:32 a.m. Patrick B, 
>> wrote:
>>
>>> The slave02 server will be a copy of the DB into Amazon. There will be a
>>> migration to Amazon in the future, and the company's managers want a copy
>>> of the current DB there to test and do all the stuffs they need (migrating
>>> to 9.5, too).
>>>
>>
>> Have you checked out Amazon's DMS?
>>
>
> Like I said.. it's gonna be a test server. Does not need to be powerful or
> to use tools.. a EC2 would be enough.
>

DMS is the Database Migration Service from Amazon. :)
It allows you to set up a hybrid architecture like the one you are planning to
have.


>
>>
>>
>>> slave01 is already working as a streaming replication server.
>>> The master server sends the wal_files to slave01 by archive_command.
>>>
>>> The plan below isn't my idea, I would do different but isn't my call:
>>>
>>
>> Been there :)
>>
>>
>>>
>>> *Current scenario:*
>>>
>>> master stores wal_files into slave01
>>> slave02 does not exists
>>>
>>>
>>> *The plan is:*
>>>
>>> 1. setup slave02 at amazon EC2 (just for testing and future master
>>> server for devs)
>>>
>>
>> Is it EC2 Classic? Or is it EC2 Virtual Pvt Cloud (VPC)?
>>
>
> Have no idea lol - I believe it will be classic
>

Since you plan to use this for Production later on, better to use VPC. It
also ensures a fixed IP.


>
>
>>
>>
>>> 2. setup postgres on slave02 (9.2)
>>> 3. pg_basebackup will be run from slave01. This will split the base in
>>> files of 50GB each (example)
>>> 4. Send the splitted files from slave01 to slave02
>>> 5. restore/join the files
>>> 6. start postgres on the slave02 slave
>>> 7. restore the DB using the wal_files from slave01
>>>
>>
>> Given that slave02 is a standby, how do you plan on doing your regression
>> testing? It will be just a read only database.
>>
>
> hmmm... do u mean by this, that I won't be able to turn slave02 as a
> master?
>

No, that is not what I mean. I think you are mixing it up.

Slave02, with or without archives, can always be promoted. I just had not
realized that you would be doing a promotion.

Anyway, for your case AWS DMS looks like the best option. I suggest that you
explore it.

If I understood it right, your aim is to create a standalone test DB server
restored from a backup of slave01 and recovered to the latest point in time
using archives. Getting these archives to EC2 is your challenge, and hence you
are thinking of ways to send archives from slave01 to EC2.

A. Can you not just start the server from the backup itself, or do you really
need to start the standalone EC2 server with the latest transactions? If not,
just drop the whole idea of restoring the archives.

B. If you need the latest data and transactions before the EC2 server is
promoted as master, take a look at DMS.

Everything else we have discussed is not the best way and involves
workarounds.


>
>>
>> You can restore the wal_file by specifying resotre_command to copy from
>> the archive generated by the master (rsync or scp to pull from your
>> in-premise setup to EC2). This would be fairly simple if you are using VPC
>>
>
> ok. so a RSYNC would grab the wal-files from the current folder on the
> slave01 server, and send them to slave02.
> easy
>
>
>>
>> Question:
>>>
>>> Is possible to make slave01 archive the wal_files?
>>>
>>
>> If you really can not just live with archive generated on master itself,
>> you need to try the options discussed up thread.
>>
>
> I'd prefer, but I can't lol
>
>
>> 1. Copy the archives generated on master to a shared location or may be
>> copy it to S3 bucket
>>
>
> as i said, the servers will be migrated to amazon, any change now will not
> be approved.
>
>
>> 2. Archive generated on master is rsync (schedule basis) to EC2
>> 3. pg_receivexlog running on EC2 to copy WAL from slave01
>>
>
> this is not needed, as the wal_files will be sent by RSYNC from slave01.
>
>
>>
>> can just be a archive_command and archive_mode = on?
>>>
>>
>> Setting these parameters on slave02 will have not any effect.
>>
>
>
> slave01* *not* slave02.
>

yep. typo.


>
> With this scenario, slave02 will be able to turn up to a master server in
> the future?
>

Yes. In fact, slave02 (I now prefer to call it the EC2 instance) can be
promoted to master even without archives. I assume you don't intend to run it
as a slave/standby for very long and will promote it to become a standalone
test DB server. With that, I would not worry much about archives or the
timeline switch.
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Cascade streaming replication + wal_files - Pgsql 9.2

2016-07-06 Thread Sameer Kumar
On Thu, Jul 7, 2016 at 11:37 AM Patrick B  wrote:

> 2016-07-07 15:19 GMT+12:00 Sameer Kumar :
>
>>
>>
>> On Thu, Jul 7, 2016 at 11:02 AM Patrick B 
>> wrote:
>>
>>> 2016-07-07 14:55 GMT+12:00 Sameer Kumar :
>>>
>>>>
>>>>
>>>> On Thu, 7 Jul 2016, 10:32 a.m. Patrick B, 
>>>> wrote:
>>>>
>>>>> The slave02 server will be a copy of the DB into Amazon. There will be
>>>>> a migration to Amazon in the future, and the company's managers want a 
>>>>> copy
>>>>> of the current DB there to test and do all the stuffs they need (migrating
>>>>> to 9.5, too).
>>>>>
>>>>
>>>> Have you checked out Amazon's DMS?
>>>>
>>>
>>> Like I said.. it's gonna be a test server. Does not need to be powerful
>>> or to use tools.. a EC2 would be enough.
>>>
>>
>> DMS is Database Migration Service from Amazon. :)
>> It allows you to setup a hybrid architecture like what you are planning
>> to have.
>>
>>
> oh ok thanks for the tip!
>
>
>>
>>>
>>>>
>>>>
>>>>> slave01 is already working as a streaming replication server.
>>>>> The master server sends the wal_files to slave01 by archive_command.
>>>>>
>>>>> The plan below isn't my idea, I would do different but isn't my call:
>>>>>
>>>>
>>>> Been there :)
>>>>
>>>>
>>>>>
>>>>> *Current scenario:*
>>>>>
>>>>> master stores wal_files into slave01
>>>>> slave02 does not exists
>>>>>
>>>>>
>>>>> *The plan is:*
>>>>>
>>>>> 1. setup slave02 at amazon EC2 (just for testing and future master
>>>>> server for devs)
>>>>>
>>>>
>>>> Is it EC2 Classic? Or is it EC2 Virtual Pvt Cloud (VPC)?
>>>>
>>>
>>> Have no idea lol - I believe it will be classic
>>>
>>
>> Since you plan to use this for Production later on, better to use VPC. It
>> also ensures a fixed IP.
>>
>
> I've just checked and is indeed a VPC with fixed IP.
>
>
>>
>>
>>>
>>>
>>>>
>>>>
>>>>> 2. setup postgres on slave02 (9.2)
>>>>> 3. pg_basebackup will be run from slave01. This will split the base in
>>>>> files of 50GB each (example)
>>>>> 4. Send the splitted files from slave01 to slave02
>>>>> 5. restore/join the files
>>>>> 6. start postgres on the slave02 slave
>>>>> 7. restore the DB using the wal_files from slave01
>>>>>
>>>>
>>>> Given that slave02 is a standby, how do you plan on doing your
>>>> regression testing? It will be just a read only database.
>>>>
>>>
>>> hmmm... do u mean by this, that I won't be able to turn slave02 as a
>>> master?
>>>
>>
>> No, that's is not what I mean. I think you are mixing it up.
>>
>> Slave02, with or without archives can always be promoted. I never saw
>> that you will be doing a promotion.
>>
>> Anyways, for your case, AWS DMS looks like the best option. I suggest
>> that you explore that.
>>
>> If I understood it right, your aim is to create a stand alone test DB
>> server restored using backup of slave01 and recovered to latest point in
>> time using archives. Now getting these archives to EC2 is you challenge
>> and hence you are thinking of ways to send archives from slave01 to EC2.
>>
>> A. Can you not just start the server with the backup itself or do you
>> really need to start EC2 stand alone server with latest transaction? if not
>> then just drop the whole idea of getting the archives restored
>>
>
> Nope.. The EC2 server has to be updated with latest transaction. There
> will be more ideas around here, that's why we need it updated .
>
>
>>
>> B. If you need latest data and transaction before EC2 server is promoted
>> as master, take a look at DMS
>>
>> Everything else we are discussing/discussed is not the best way and
>> involves workaround
>>
>>
>>>
>>>>
>>>> You can restore the wal_file by specifying resotre_command to copy from
>>>> the archive generated by the master (rsync or scp to pull from your
>>

Re: [GENERAL] pasting a lot of commands to psql

2016-07-07 Thread Sameer Kumar
On Fri, 8 Jul 2016, 1:23 a.m. Dmitry Shalashov,  wrote:

> Hi everyone.
>
> Let say that I have some sql file with like hundred of simple statements
> in it. I `cat` it, copy it to buffer, go to my beloved psql and insert it
> there.
> But somewhere after first few lines it screws over:
>
> b2b=> BEGIN;
> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','ati.su
> ',0,NULL,5);
> INSERT INTO oko_topsites VALUES('russian_federation'b2b=> INSERT INTO
> oko_topsites VALUES('russian_federation','atn.ua',0,NULL,5);
> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','audit-it.ru
> ',0,NULL,5);
> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','aup.ru
> ',0,NULL,5);
> '
> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','autocentre.ua
> ',0,NULL,5);
>
> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','autodoc.ru
> ',0,NULL,5);
>
> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','calend.ru'
>
> b2b(> INSERT INTO oko_topsites VALUES('russian_federation','calorizator.ru
> ',0,NULL,5)
>
> Unclosed quotes, unclosed parenthesis - anyway it wont work.
>
> How to safely insert big number of statements to psql at once?
> I am aware about "execute this file" \i option of psql, that is not the
> answer I am looking for, thanks :-)
>

What exactly are you aiming to do?

Have you tried -
psql  < myfile



> Dmitry Shalashov, surfingbird.ru & relap.io
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] pasting a lot of commands to psql

2016-07-07 Thread Sameer Kumar
On Fri, Jul 8, 2016 at 1:31 AM Dmitry Shalashov  wrote:

> Hi Sameer,
>
> I am trying to copy-paste (and execute) random snippets of SQL to psql
> console.
>

Why? Is it some migration of data? You are better off exporting the data to
CSV and using the COPY command.

>
> There is another ways to do it, which do not involve copy-paste, but I am
> wondering why is copy-paste won't work. What exactly is happening there...
>

Have you looked at this line in the file?
INSERT INTO oko_topsites VALUES('russian_federation','calend.ru'

Either the line in the file lacks a closing parenthesis, or maybe your
terminal is not able to process so many characters that fast and is therefore
missing/skipping some of them. I have experienced this behavior, not just with
psql but with a regular shell, when I use an utterly slow VPN or screen-sharing
tools like TeamViewer or WebEx over a slow network.

> Dmitry Shalashov, surfingbird.ru & relap.io
>
> 2016-07-07 20:26 GMT+03:00 Sameer Kumar :
>
>>
>>
>> On Fri, 8 Jul 2016, 1:23 a.m. Dmitry Shalashov, 
>> wrote:
>>
>>> Hi everyone.
>>>
>>> Let say that I have some sql file with like hundred of simple statements
>>> in it. I `cat` it, copy it to buffer, go to my beloved psql and insert it
>>> there.
>>> But somewhere after first few lines it screws over:
>>>
>>> b2b=> BEGIN;
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','ati.su
>>> ',0,NULL,5);
>>> INSERT INTO oko_topsites VALUES('russian_federation'b2b=> INSERT INTO
>>> oko_topsites VALUES('russian_federation','atn.ua',0,NULL,5);
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','audit-it.ru
>>> ',0,NULL,5);
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','aup.ru
>>> ',0,NULL,5);
>>> '
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','
>>> autocentre.ua',0,NULL,5);
>>>
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','autodoc.ru
>>> ',0,NULL,5);
>>>
>>>
The below is broken...


> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','calend.ru'
>>>
>>


>
>>> b2b(> INSERT INTO oko_topsites VALUES('russian_federation','
>>> calorizator.ru',0,NULL,5)
>>>
>>> Unclosed quotes, unclosed parenthesis - anyway it wont work.
>>>
>>> How to safely insert big number of statements to psql at once?
>>> I am aware about "execute this file" \i option of psql, that is not the
>>> answer I am looking for, thanks :-)
>>>
>>
>> What are you exactly aiming to do?
>>
>> Have you tried -
>> psql  < myfile
>>
>>
>>
>>> Dmitry Shalashov, surfingbird.ru & relap.io
>>>
>> --
>> --
>> Best Regards
>> Sameer Kumar | DB Solution Architect
>> *ASHNIK PTE. LTD.*
>>
>> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>>
>> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>>
>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] [BUGS] Where clause in pg_dump: need help

2016-07-08 Thread Sameer Kumar
On Fri, Jul 8, 2016 at 5:38 PM Francisco Olarte 
wrote:

> 1.- CCing to the list ( remember to hit reply-all or whatever your MUA
> uses for that, otherwise threads may get lost ).
>
> 2.- Try to avoid top-posting, it's not the style of the list ( or mine ).
>
> On Fri, Jul 8, 2016 at 4:43 AM, Prashanth Adiyodi
>  wrote:
> > Basically my requirement is, I have a live Db with certain tables and a
> backup Db at another location (both on postgressql).


Both databases are PostgreSQL (?). What version?


> I need to take a backup of this live DB every night for the previous day
> (i.e the backup script running on 07/07/2016 will take the backup of the DB
> for 06/07/2016).


Does this need to be done for one table or multiple tables?


> This backup will be then transferred to the backup DB server and will be
> inserted into that DB.


What will you be doing on the target database? Is it a read-only database?



> From what I have read pg_dump is the solution (similar to export in
> oracle), do you think of any other approach to get to this objective, have
> you come across a script or something that already does this,
>
>
Maybe you can use

psql -c "COPY (SELECT .. WHERE ..) TO STDOUT" | psql -c "COPY mytable FROM STDIN"

(a made-up, concrete example is sketched a little further below)

OR
Set up replication and have a scheduled script that sets recovery_target_time
and pause_at_recovery_target to effectively replicate changes from one DB to
the other while maintaining a gap. But then the target DB would be a read-only
replica and would need to be exactly the same as the main/source DB.
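
As a concrete (entirely made-up) illustration of the first option, assuming a
table orders with a created_at timestamp column:

psql -d livedb -c "COPY (SELECT * FROM orders WHERE created_at >= current_date - 1 AND created_at < current_date) TO STDOUT" \
  | psql -h backupserver -d backupdb -c "COPY orders FROM STDIN"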

You need to explain more about the version of the database and what exactly
you aim to do with the target DB.


> Your requirement is a bit 'understated'. I assume your problem is:
>
> 1.- You have a backup with a series of tables which get inserted WITH
> a timestamp.
> 2.- At the end of the day you want to transfer the inserted data, and
> only the inserted data, to another server and insert it ther.
>
> If BOTH servers are postgres, you can do it easily with a series of
> COPY commands easily. If the target one is not postgres I would use it
> too, but pass the COPY data through a perl script to generate whatever
> syntax the target DB needs ( I've done that to go from postgres to sql
> server and back using freebcp, IIRC, on the sql server side )
>
> You still can have problems IF you have updates to the tables, or
> deletions, or . But
> if you just have insertions, copy is easy to do.
>
> Francisco Olarte.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] [BUGS] Where clause in pg_dump: need help

2016-07-08 Thread Sameer Kumar
On Fri, 8 Jul 2016, 8:06 p.m. Prashanth Adiyodi, 
wrote:

> Hi Sameer, Please see comments inline
>
>
>
>
>
> *Prashanth Adiyodi  *
>
> *Technical Account Manager*
>
> *Skype: prashanth.adiyodi*
>
> *Mob: +91-9819488395*
>
>
>
>
>
>
>
>
> *From:* Sameer Kumar [mailto:sameer.ku...@ashnik.com]
> *Sent:* Friday, July 08, 2016 3:18 PM
> *To:* Francisco Olarte; Prashanth Adiyodi
> *Cc:* pgsql-general@postgresql.org
> *Subject:* Re: [GENERAL] [BUGS] Where clause in pg_dump: need help
>
>
>
>
>
> On Fri, Jul 8, 2016 at 5:38 PM Francisco Olarte 
> wrote:
>
> 1.- CCing to the list ( remember to hit reply-all or whatever your MUA
> uses for that, otherwise threads may get lost ).
>
> 2.- Try to avoid top-posting, it's not the style of the list ( or mine ).
>
> On Fri, Jul 8, 2016 at 4:43 AM, Prashanth Adiyodi
>  wrote:
> > Basically my requirement is, I have a live Db with certain tables and a
> backup Db at another location (both on postgressql).
>
>
>
> Both databases are PostgreSQL (?). What version?
>
> Yes, Both are postgres SQL, ver 9.3.4
>
>
>
> I need to take a backup of this live DB every night for the previous day
> (i.e the backup script running on 07/07/2016 will take the backup of the DB
> for 06/07/2016).
>
>
>
> Does this need to be done for one table or multiple tables?
>
> Multiple tables
>
>
>
> This backup will be then transferred to the backup DB server and will be
> inserted into that DB.
>
>
>
> What will you be doing on the target database? Is it a read-only database?
>
> It is not a read only database
>
>
>
> From what I have read pg_dump is the solution (similar to export in
> oracle), do you think of any other approach to get to this objective, have
> you come across a script or something that already does this,
>
>
>
> May be you can use
> psql -c "COPY (SELECT .. WHERE..) TO stdout" | psql -c "COPY (mytable)
> FROM stdin"
>
>
>
> OR
> Setup replication and have a scheduled script to set recovery_target_time
> and puase_at_recovery_target to effectively replicate changes from one DB
> to other DB and maintaining a gap. But then the targetDB would be a read
> only replica and needs to be exactly same at the main DB/sourceDB
>
>
>
> You need to explain more on version of the database, what exactly you aim
> at doing with the target DB.
>
> Hi, the requirement is this, I have multiple tables where there may not be
> a timestamp column. I need to run a script which will execute post-midnight
> say at 2 AM and create a dump file (say data.sql), which will have records
> for all the previous day. I will then transfer this file to the target
> server and dump this data there, the idea is to create two copies of the
> data in case of a disaster on the original database server.
>

So it is more like a DR server which always lags behind the master by a day,
i.e. it stays at the previous day's midnight.

Above you mentioned that this target DB server (which I assume serves the
purpose of DR) is not read-only(?). What kind of writes will you be doing on
this server?

>
>
> Your requirement is a bit 'understated'. I assume your problem is:
>
> 1.- You have a backup with a series of tables which get inserted WITH
> a timestamp.
> 2.- At the end of the day you want to transfer the inserted data, and
> only the inserted data, to another server and insert it ther.
>
> If BOTH servers are postgres, you can do it easily with a series of
> COPY commands easily. If the target one is not postgres I would use it
> too, but pass the COPY data through a perl script to generate whatever
> syntax the target DB needs ( I've done that to go from postgres to sql
> server and back using freebcp, IIRC, on the sql server side )
>
> You still can have problems IF you have updates to the tables, or
> deletions, or . But
> if you just have insertions, copy is easy to do.
>
> Francisco Olarte.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
> --
>
> --
>
> Best Regards
>
> Sameer Kumar | DB Solution Architect
>
> *ASHNIK PTE. LTD.*
>
> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>
> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] pg_restore out of memory

2016-07-12 Thread Sameer Kumar
On Tue, 12 Jul 2016, 7:25 p.m. Miguel Ramos, <
org.postgre...@miguel.ramos.name> wrote:

>
> Hi,
>
> We have backed up a database and now when trying to restore it to the
> same server we get this:
>
>  > # pg_restore -d recovery /mnt/paysdeloire2013_convertida2.1.dump
>  > pg_restore: [custom archiver] out of memory
>  > 12:09:56.58  9446.593u+1218.508s 24.3%  167+2589k  6+0io  0pf+0sw
> 6968822cs
>
>
> Some information about the application:
>
> - We have sensor data, including pictures, and number crunshing output,
> then so the large tables on this database have 319, 279, 111 and 26GB.
> Mostly on TOAST pages, but the 279GB one divides it evenly. This
> database is 765GB. We try to keep them under 4TB.
> - Transactions are large, some 100 MB at a time.
> - We also use PostGIS.
>
> About the server (dedicated):
>
> - FreeBSD 9.1-RELEASE #0 on AMD64
> - 16 GB of RAM
> - 8x3GB hardware RAID 10
> - 10TB slice for pgdata UFS-formatted and 32kB block
> - PostgreSQL 9.1.8 custom compiled to get 32kB blocks
> - Installed in 2013 with ~10 people working with it, 145 days uptime today.
>
> I found two relevant threads on the mailing-lists.
> The most recent one sugested that postgresql was being configured to use
> more memory than what's available.
> The older one sugested that the system limits on the size of the data or
> stack segments were lower than required.
>
> So here are some server parameters (relevant or otherwise):
>
>  > max_connections = 100
>  > shared_buffers = 4GB  -- 25% of RAM
>  > temp_buffers = 32MB  -- irrelevant?
>  > work_mem = 64MB
>  > maintenance_work_mem = was 1G lowered to 256M then 64M
>


Why did you lower it? I think increasing it should help more. But 1GB seems
fine.


 > wal_buffers = -1  -- should mean 1/32 of shared_buffers = 128MB
>

Increase this during the restore, maybe to 512MB

 > checkpoint_segments = 64  -- WAL segments are 16MB
>  > effective_cache_size = 8GB  -- irrelevant?
>
>
> I suspect that the restore fails when constructing the indices. After
> the process is aborted, the data appears to be all or most there, but no
> indices.
>

What is logged in the database log files? Have you checked them?

So, all I did so far, was lowering maintenance_work_mem and it didn't work.
>
> System limits, as you can see, are at defaults (32GB for data and 512MB
> for stack):
>
>  > # limit
>  > cputime  unlimited
>  > filesize unlimited
>  > datasize 33554432 kbytes
>  > stacksize524288 kbytes
>  > coredumpsize unlimited
>  > memoryuseunlimited
>  > vmemoryuse   unlimited
>  > descriptors  11095
>  > memorylocked unlimited
>  > maxproc  5547
>  > sbsize   unlimited
>  > swapsize unlimited
>
> Shared memory is configured to allow for the single shared memory
> segment postgresql appears to use, plus a bit of extra (8GB):
>
>  > # ipcs -M
>  > shminfo:
>  > shmmax:   8589934592(max shared memory segment size)
>  > shmmin:1(min shared memory segment size)
>  > shmmni:  192(max number of shared memory
> identifiers)
>  > shmseg:  128(max shared memory segments per process)
>  > shmall:  2097152(max amount of shared memory in pages)
>
> And semaphores (irrelevant?)...
>
>  > # ipcs -S
>  > seminfo:
>  > semmni:  256(# of semaphore identifiers)
>  > semmns:  512(# of semaphores in system)
>  > semmnu:  256(# of undo structures in system)
>  > semmsl:  340(max # of semaphores per id)
>  > semopm:  100(max # of operations per semop call)
>  > semume:   50(max # of undo entries per process)
>  > semusz:  632(size in bytes of undo structure)
>  > semvmx:32767(semaphore maximum value)
>  > semaem:16384(adjust on exit max value)
>

What are your vm.dirty_ratio and vm.dirty_background_ratio settings? I think
reducing them may help, but I cannot really say what exactly would help unless
you are able to find the source of the error in the DB logs.


>
>
> I don't know what else to try.
> I lowered maintenance_work_mem without restarting the server.
> In some of the attempts, but not all, the restore was done while people
> were working.
>
> Each attempt takes 12 hours...
> We couldn't use the directory -Fd dump because it's postgresql 9.1.
> The original database is still on the server, this is a test restore.
>
> We have about one or two months of slack before we really need to remove
> them from the server to recover space.
>
>
> --
> Miguel Ramos
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Slow query on primary server runs fast on hot standby

2016-07-15 Thread Sameer Kumar
On Fri, Jul 15, 2016 at 4:17 PM Kaixi Luo  wrote:

> Hello,
>
> I have a primary PostgreSQL server with 64GB of RAM that is replicated
> using streaming replication to a hot standby server with 16GB of RAM.
>

Do you have different hardware configurations for the master and the standby?
I am not sure that is the right thing to do; I usually prefer them to be
equally sized and identically configured. But maybe someone else with more
experience can comment better on that aspect.



> My problem is as follows: I've detected a query that takes a long time to
> run on my primary server but runs very fast on the standby server. I did an
> EXPLAIN ANALYZE on the query:
>
>
What is the version of PostgreSQL on both servers? Check with

select version();



EXPLAIN ANALYZE
> SELECT this_.id AS id1_31_0_,
>this_.nom AS nom2_31_0_,
>this_.nom_slug AS nom3_31_0_,
>this_.descripcio AS descripc4_31_0_,
>this_.url AS url5_31_0_,
>this_.data_captura AS data6_31_0_,
>this_.data_publicacio AS data7_31_0_,
>this_.propietari AS propieta8_31_0_,
>this_.privacitat AS privacit9_31_0_,
>this_.desnivellpujada AS desnive10_31_0_,
>this_.desnivellbaixada AS desnive11_31_0_,
>this_.longitud AS longitu13_31_0_,
>this_.beginpoint AS beginpo14_31_0_,
>this_.endpoint AS endpoin15_31_0_,
>this_.caixa3d AS caixa16_31_0_,
>this_.pic_id AS pic17_31_0_,
>this_.skill AS skill18_31_0_,
>this_.spatial_type AS spatial19_31_0_,
>this_.tags_cached AS tags20_31_0_,
>this_.images_cached AS images21_31_0_,
>this_.ncomments AS ncommen22_31_0_,
>this_.group_order AS group23_31_0_,
>this_.author AS author24_31_0_,
>this_.proper_a AS proper25_31_0_,
>this_.duration AS duratio26_31_0_,
>this_.isloop AS isloop27_31_0_,
>this_.seo_country AS seo28_31_0_,
>this_.seo_region AS seo29_31_0_,
>this_.seo_place AS seo30_31_0_,
>this_.source AS source31_31_0_,
>this_.source_name AS source32_31_0_,
>this_.api_key AS api33_31_0_,
>this_.ratingui AS ratingu34_31_0_,
>this_.nratings AS nrating35_31_0_,
>this_.trailrank AS trailra36_31_0_,
>this_.ncoords AS ncoords37_31_0_,
>this_.egeom AS egeom38_31_0_,
>this_.elevels AS elevels39_31_0_,
>this_.elevations AS elevati40_31_0_,
>this_.nphotoswpts AS nphotos41_31_0_,
>this_.nfavourited AS nfavour42_31_0_,
>this_.ncompanions AS ncompan43_31_0_,
>this_.group_id AS group44_31_0_
> FROM spatial_artifact this_
> WHERE this_.group_id IS NULL
>   AND this_.propietari=7649
> ORDER BY this_.id DESC LIMIT 20
>
>
> *--PRIMARY SERVER *(EXPLAIN ANALYZE output)
>
> "Limit  (cost=0.43..22734.71 rows=20 width=604) (actual
> time=1804.124..293469.085 rows=20 loops=1)"
> "  ->  Index Scan Backward using "PK_SPATIAL_ARTIFACT" on spatial_artifact
> this_  (cost=0.43..7776260.84 rows=6841 width=604) (actual
> time=1804.121..293469.056 rows=20 loops=1)"
> "Filter: ((group_id IS NULL) AND (propietari = 7649))"
> "Rows Removed by Filter: 2848286"
> "Total runtime: *293469.135 ms*"
>
>
> *--STANDBY SERVER *(EXPLAIN ANALYZE output)
>
> "Limit  (cost=23533.73..23533.78 rows=20 width=604) (actual
> time=2.566..2.569 rows=20 loops=1)"
> "  ->  Sort  (cost=23533.73..23550.83 rows=6841 width=604) (actual
> time=2.566..2.567 rows=20 loops=1)"
> "Sort Key: id"
> "Sort Method: top-N heapsort  Memory: 35kB"
> "->  Index Scan using idx_own_spas on spatial_artifact this_
>  (cost=0.43..23351.70 rows=6841 width=604) (actual time=0.037..2.119
> rows=618 loops=1)"
> "  Index Cond: (propietari = 7649)"
> "Total runtime: *2.612 ms*"
>
>
> I've run ANALYZE on my table and have reindexed the index idx_own_spas on
> my primary server, but it hasn't helped.
>
> Here is the postgresql config of my two servers:
>
> *--PRIMARY SERVER *(postgresql.conf)
> shared_buffers = 8GB
> work_mem = 42MB
> maintenance_work_mem = 2GB
> effective_cache_size = 44GB
>
>
What are the values of random_page_cost and seq_page_cost?

Also, what might help here is the number of rows and pages in the table -
select relpages, reltuples, relname from pg_class
 where relname in ('idx_own_spas', 'spatial_artifact', 'PK_SPATIAL_ARTIFACT');




> *--**STANDBY** SERVER *(postgresql.conf)
> shared_buffers = 800MB
> work_mem = 20MB
> maintenance_work_mem = 128MB
> effective_cache_size = 1024MB
>
>


>
> Could you shed some light into why this is happening? Thank you.
>
> Cheers,
>
> Kaixi
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] MongoDB 3.2 beating Postgres 9.5.1?

2016-07-18 Thread Sameer Kumar
On Fri, 11 Mar 2016, 9:39 p.m. Paul Jones,  wrote:

> I have been running the EDB benchmark that compares Postgres and MongoDB.
> I believe EDB ran it against PG 9.4 and Mongo 2.6.  I am running it
> against PG 9.5.1 and Mongo 3.2 with WiredTiger storage using 10,000,000
> JSON records generated by the benchmark.  It looks like Mongo is winning,
> and apparently because of its cache management.
>
> The first queries on both run in ~30 min.  And, once PG fills its cache,
> it whips Mongo on repeats of the *same* query (vmstat shows no disk
> reads for PG).
>
> However, when different query on the same table is issued to both,
> vmstat shows that PG has to read the *entire* table again, and it takes
> ~30 min.  Mongo does a lot of reads initially but after about 5 minutes,
> it stops reading and completes the query, most likely because it is
> using its cache very effectively.
>
> Host:   Virtual Machine
> 4 CPUs
> 16 Gb RAM
> 200 Gb Disk
> RHEL 6.6
>
> PG: 9.5.1 compiled from source
> shared_buffers = 7GB
> effectve_cache_size = 12GB
>
> Mongo:  3.2 installed with RPM from Mongo
>
> In PG, I created the table by:
>
> CREATE TABLE json_tables
> (
> dataJSONB
> );
>
> After loading, it creates the index:
>
> CREATE INDEX json_tables_idx ON json_tables USING GIN (data
> jsonb_path_ops);
>

This creates one GIN index, which is going to be a bit larger than a usual
btree index on a specific JSON field, and slower too. I suggest that you
create an index on the specific expression using the JSON operators. In my
opinion that index would be much closer to MongoDB's indexes.
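
Something along these lines (the index name is mine; the field name comes from
the benchmark queries below):

CREATE INDEX json_tables_name_idx ON json_tables ((data ->> 'name'));

-- and query with the matching operator so the planner can use it:
SELECT data FROM json_tables WHERE data ->> 'name' = 'AC3 Case Red';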



> After a lot of experimentation, I discovered that the benchmark was not
> using PG's index, so I modified the four queries to be:
>
> SELECT data FROM json_tables WHERE data @> '{"brand": "ACME"}';
> SELECT data FROM json_tables WHERE data @> '{"name": "Phone Service Basic
> Plan"}';
> SELECT data FROM json_tables WHERE data @> '{"name": "AC3 Case Red"}';
> SELECT data FROM json_tables WHERE data @> '{"type": "service"}';
>
> Here are two consecutive explain analyze for PG, for the same query.
> No functional difference in the plans that I can tell, but the effect
> of PG's cache on the second is dramatic.
>
> If anyone has ideas on how I can get PG to more effectively use the cache
> for subsequent queries, I would love to hear them.
>
> ---
>
> benchmark=# explain analyze select data from json_tables where data @>
> '{"name": "AC3 Case Red"}';
>
>QUERY PLAN
>
>
>
> -
>
> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=1
> width=1261)
> (actual time=2157.118..1259550.327 rows=909091 loops=1)
>Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>Rows Removed by Index Recheck: 4360296
>Heap Blocks: exact=37031 lossy=872059
>->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=1
> width =0) (actual time=2141.250..2141.250 rows=909091 loops=1)
>  Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
> Planning time: 291.932 ms
> Execution time: 1259886.920 ms
> (8 rows)
>
> Time: 1261191.844 ms
>
> benchmark=# explain analyze select data from json_tables where data @>
> '{"name": "AC3 Case Red"}';
>   QUERY PLAN
>
>
> ---
>
> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=1
> width=1261) (actual time=779.261..29815.262 rows=909091 loops=1)
>Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>    Rows Removed by Index Recheck: 4360296
>Heap Blocks: exact=37031 lossy=872059
>->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=1
> width =0) (actual time=769.081..769.081 rows=909091 loops=1)
>  Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
> Planning time: 33.967 ms
> Execution time: 29869.381 ms
>
> (8 rows)
>
> Time: 29987.122 ms
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] pg_dump without any SET command in header of output plain text sql file

2016-07-20 Thread Sameer Kumar
On Thu, 21 Jul 2016, 1:17 a.m. Alex Ignatov, 
wrote:

> Hello everyone!
>
> Is there any way to make pg_dump(9.5) to dump table (with data) without
> any SET command in the header of output plain sql file?
>

Yes, you need to use a specific switch/option to disable each SET command.


> P.S. Yeah I know about sedding =)
>
>
> --
> Alex Ignatov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] pg_dump without any SET command in header of output plain text sql file

2016-07-21 Thread Sameer Kumar
On Thu, Jul 21, 2016 at 7:14 PM Alex Ignatov 
wrote:

> And what is the options you mentioned about?
>

I stand corrected.
I think the only ones you can avoid are OWNERSHIP and TABLESPACE commands.

I think sed would be the best tool here.
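
For example (a sketch; the pg_dump switches do exist, the sed pattern is
just one way of stripping the remaining SET lines):

pg_dump --no-owner --no-tablespaces -f mydb.sql mydb
sed -i '/^SET /d' mydb.sql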

> Alex Ignatov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>
> On 20.07.2016 21:02, Sameer Kumar wrote:
>
>
>
> On Thu, 21 Jul 2016, 1:17 a.m. Alex Ignatov, 
> wrote:
>
>> Hello everyone!
>>
>> Is there any way to make pg_dump(9.5) to dump table (with data) without
>> any SET command in the header of output plain sql file?
>>
>
> Yeah you need to use specific switch/option to disable each set command
>
>
>> P.S. Yeah I know about sedding =)
>>
>>
>> --
>> Alex Ignatov
>> Postgres Professional: http://www.postgrespro.com
>> The Russian Postgres Company
>>
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
> --
> --
> Best Regards
> Sameer Kumar | DB Solution Architect
> *ASHNIK PTE. LTD.*
>
> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>
> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>
>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] PG user group in the Kuala Lumpur area?

2016-07-21 Thread Sameer Kumar
On Sun, Jan 18, 2015 at 10:14 PM Torsten Förtsch 
wrote:

> Hi,
>
> I was looking for a PG user group around KL. I know there is one in
> Singapore. As it happens, Chris Travers, the PG contact for Malaysia is
> a friend of mine. So, I asked him. He wasn't aware of one either.
> However, he very much appreciated the idea of founding one. I know there
> are lots of PG users in the area.
>
> But is there enough demand for a user group? If you are interested,
> please contact me.
>
> My idea behind this whole thing is to eventually have a regular PG
> conference South East Asia.


I picked this up a little late (blame it on my travel).
We did a pgDay Asia in Singapore last year, and we will be doing another
one next year.

We have a User Group in Singapore and the next meetup is planned on 18th
Aug. Here is a link:
http://www.meetup.com/PUGS-Postgres-Users-Group-Singapore/events/232683291/

We can do a meetup in Malaysia (register a new group or use the same one).
Let me know when you are in KL and we can liaise to see how we can host a
meetup there. I can come down (and also tap on my employer's channels to
promote this meetup group).

If you have already set up a meetup group, I would like to be a part of it.

I have been to PGconf.eu several times and I
> know from experience that it is a great opportunity to learn new stuff,
> meet people and also have much fun. I think esp. Malaysia is a good
> place for such an event. There are many people out there that could
> never come to PGconf.eu or similar in the US and in many other places
> because of their passport. Getting a visa to Malaysia is possible for
> almost everyone. I don't know about North Korea, but there are many
> Iranians around here.
>
> About myself, I am German, currently traveling back and forth between
> Germany and Malaysia.
>
> Torsten
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


[GENERAL] Database and Table stats gets reset automatically

2016-07-24 Thread Sameer Kumar
Hi,

I have PostgreSQL v9.4.4 running in my environment. It has been up for over
2 years now. I noticed that the statistics were suddenly reset and all the
stat tables/columns restarted counting from zero.

This happened 2 weeks ago. I noticed only recently after I looked at the
plot over last week (which dipped suddenly for "number of tuples returned",
"number of conflicts" etc).

The columns which got reset are from pg_stat_database, pg_stat_user_tables,
pg_stat_bgwriter

As far I know no one has fired pg_stat_reset or pg_stat_reset_shared.

I noticed that the last largest value is from pg_stat_user_tables.
tup_returned (470440261405). Do the statistics get reset automatically
when the value of one of the counters reaches the highest number supported
by int4?
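
For reference, the reset time itself is recorded per database, so a check
along these lines (a sketch) shows when it happened:

select datname, stats_reset from pg_stat_database;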

I am running on Red Hat 6.7.

Regards
Sameer
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Database and Table stats gets reset automatically

2016-07-25 Thread Sameer Kumar
On Mon, 25 Jul 2016, 10:10 p.m. Adrian Klaver, 
wrote:

> On 07/24/2016 09:58 PM, Sameer Kumar wrote:
> > Hi,
> >
> > I have PostgreSQL v9.4.4 running in my environment. It has been up for
> > over 2 years now. I noticed that suddenly the statistics have been reset
> > and all the stat tables/columns got restarted from.
>
> Any of these happen?:
>
> https://www.postgresql.org/docs/9.4/static/monitoring-stats.html
> ". When recovery is performed at server start (e.g. after immediate
> shutdown, server crash, and point-in-time recovery), all statistics
> counters are reset."
>


Nope. This has happened on the standby database. The counters on the
master database are still running fine.

There was a period of a few days about a month ago when I had to reverse
the roles, but then I switched them back to the way they were. Recently,
though, I have not done anything which would bring the server out of
recovery.


> >
> > This happened 2 weeks ago. I noticed only recently after I looked at the
> > plot over last week (which dipped suddenly for "number of tuples
> > returned", "number of conflicts" etc).
>
> Any change in procedures the last two weeks?
>

Nope. The scripts that capture this data to a CSV file are very much the same.


> >
> > The columns which got reset are from pg_stat_database,
> > pg_stat_user_tables, pg_stat_bgwriter
> >
> > As far I know no one has fired pg_stat_reset or pg_stat_reset_shared.
> >
> > I noticed that the last largest value is from pg_stat_user_tables.
> > tup_returned (470440261405). Does the statistics get reset automatically
>
> tup_returned is in pg_stat_database.
>

Sorry, my bad (a copy-paste mistake). Yes, I was referring to tup_returned
from pg_stat_database. But the various stats in pg_stat_user_tables have
also been reset.


> What does the stats_reset field show in pg_stat_database?
>

It shows a timestamp from 18th July 2016. In fact, all the other statistics
views have the same timestamp.


> > when the value for one of the statistics reaches the high number
> > supported by int4?
> >
> > I am running on Red Hat 6.7.
> >
> > Regards
> > Sameer
> > --
> > --
> > Best Regards
> > Sameer Kumar | DB Solution Architect
> > *ASHNIK PTE. LTD.*
> >
> > 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
> >
> > T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Database and Table stats gets reset automatically

2016-07-25 Thread Sameer Kumar
On Mon, 25 Jul 2016, 10:35 p.m. Adrian Klaver, 
wrote:

> On 07/25/2016 07:28 AM, Sameer Kumar wrote:
> >
> >
> > On Mon, 25 Jul 2016, 10:10 p.m. Adrian Klaver,
> > mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 07/24/2016 09:58 PM, Sameer Kumar wrote:
> > > Hi,
> > >
> > > I have PostgreSQL v9.4.4 running in my environment. It has been up
> for
> > > over 2 years now. I noticed that suddenly the statistics have been
> > reset
> > > and all the stat tables/columns got restarted from.
> >
> > Any of these happen?:
> >
> > https://www.postgresql.org/docs/9.4/static/monitoring-stats.html
> > ". When recovery is performed at server start (e.g. after immediate
> > shutdown, server crash, and point-in-time recovery), all statistics
> > counters are reset."
> >
> >
> >
> > Nope. This has happened on the standby database. The counter on the
> > master database are still running fine.
> >
> > There was a period of few days about a month ago when I had to reverse
> > the roles but then I switched them back to the way they were. But
> > recently, I have not done anything which brings the server out of
> recovery.
> >
> >
> > >
> > > This happened 2 weeks ago. I noticed only recently after I looked
> > at the
> > > plot over last week (which dipped suddenly for "number of tuples
> > > returned", "number of conflicts" etc).
> >
> > Any change in procedures the last two weeks?
> >
> >
> > Nope. The scripts to capture this data to csv file is very much the same.
> >
> >
> > >
> > > The columns which got reset are from pg_stat_database,
> > > pg_stat_user_tables, pg_stat_bgwriter
> > >
> > > As far I know no one has fired pg_stat_reset or
> pg_stat_reset_shared.
> > >
> > > I noticed that the last largest value is from pg_stat_user_tables.
> > > tup_returned (470440261405). Does the statistics get reset
> > automatically
> >
> > tup_returned is in pg_stat_database.
> >
> >
> > Sorry my bad (a copy paste mistake). Yes I was referring to tup_returned
> > from pg_stat_database. But the various stats in pg_stat_user_tables have
> > also been reset.
> >
> >
> > What does the stats_reset field show in pg_stat_database?
> >
> >
> > It says a timestamp from 18th July 2016. Infact all the other statistics
> > views have the same timestamp.
>
> Don't suppose you still have the logs from that date to see if there is
> a clue?
>

I don't have them on the server. I can fetch them from the archives; it
might take a day or two. Let me get back to you. Thanks for helping.
Any idea about probable suspects apart from the ones you listed?


> >
> >
> > > when the value for one of the statistics reaches the high number
> > > supported by int4?
> > >
> >     > I am running on Red Hat 6.7.
> > >
> > > Regards
> > > Sameer
> > > --
> > > --
> > > Best Regards
> > > Sameer Kumar | DB Solution Architect
> > > *ASHNIK PTE. LTD.*
> > >
> > > 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
> > >
> > > T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
> > <http://www.ashnik.com>
> > >
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
> > --
> > --
> > Best Regards
> > Sameer Kumar | DB Solution Architect
> > *ASHNIK PTE. LTD.*
> >
> > 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
> >
> > T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] How to stop script executions

2016-07-26 Thread Sameer Kumar
On Tue, 26 Jul 2016, 8:54 p.m. Dev Kumkar,  wrote:

> Hello Experts,
>
> I want to have my postgreSQL database to only execute SQLs and avoid
> execution of perl and python script executions.
>

You mean you don't want any routines/functions written in PL/Perl or
PL/Python to get executed?

If that is what you are looking for, then simply drop the extensions for
these languages, or deny the privilege to use them to users/PUBLIC.

In fact, from what I know, these extensions are not there by default, so
just don't create them at all.
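
A minimal sketch, assuming the languages were installed as extensions:

DROP EXTENSION IF EXISTS plperl;     -- add CASCADE if functions depend on it
DROP EXTENSION IF EXISTS plpythonu;

-- or keep plperl installed but stop ordinary users from creating functions
-- in it (plpythonu is untrusted, so only superusers can use it anyway):
REVOKE USAGE ON LANGUAGE plperl FROM PUBLIC;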


> Can you please suggest ways to achieve this?
>
> Regards...
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] How to stop script executions

2016-07-26 Thread Sameer Kumar
On Tue, 26 Jul 2016, 9:20 p.m. Dev Kumkar,  wrote:

> On Tue, Jul 26, 2016 at 6:29 PM, Sameer Kumar 
> wrote:
>
> You mean you don't want any routines/functions written in Pl/perl or
>> PL/pythin to get executed?
>>
>> If that is what you are looking for them simply drop the extension for
>> these languages or deny privilege to users/public on these extensions.
>>
>> Infact from what I know, these extensions are not there by default, so
>> don't create them at all.
>>
>
> Thanks Sameer!
> Yeah these extensions are not present, are their any chances of running OS
> commands from database?
>

What do you mean by "from database"? I think you need to lay out your
requirement and goal more clearly.


> Regards..
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] RowExclusiveLock timeout while autovacuum

2016-08-14 Thread Sameer Kumar
/pgSQL function
> activesessiontriggerfunction() line 22 at SQL statement
> Aug 10 15:26:28 DB-1 postgres[3314]: [15-4] STATEMENT:  INSERT INTO
>
> ActiveRadiussession(AccountSessionId,StationMAC,StationIP,StationIPV6,ApMAC,SSID,Username,WlanMAC,ChargeableUserIdentity,NASIPAddress,SessionStartTime,Port,
> ZoneWlanInfo, AppVMInstanceIP)
> values($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14)
> Aug 10 15:28:36 DB-1 postgres[3314]: [16-1] LOG:  process 3314 still
> waiting
> for ExclusiveLock on extension of relation 19308 of database 18363 after
> 1000.065 ms
> Aug 10 15:28:36 DB-1 postgres[3314]: [16-2] STATEMENT:  INSERT INTO
>
> ActiveRadiussession(AccountSessionId,StationMAC,StationIP,StationIPV6,ApMAC,SSID,Username,WlanMAC,ChargeableUserIdentity,NASIPAddress,SessionStartTime,Port,
> ZoneWlanInfo, AppVMInstanceIP)
> values($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14)
> Aug 10 15:28:39 DB-1 postgres[3314]: [17-1] LOG:  process 3314 acquired
> ExclusiveLock on extension of relation 19308 of database 18363 after
> 3212.061 ms
> Aug 10 15:28:39 DB-1 postgres[3314]: [17-2] STATEMENT:  INSERT INTO
>
> ActiveRadiussession(AccountSessionId,StationMAC,StationIP,StationIPV6,ApMAC,SSID,Username,WlanMAC,ChargeableUserIdentity,NASIPAddress,SessionStartTime,Port,
> ZoneWlanInfo, AppVMInstanceIP)
> values($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14)
>
>
>
>
> --
> View this message in context:
> http://postgresql.nabble.com/RowExclusiveLock-timeout-while-autovacuum-tp5916437.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Critical failure of standby

2016-08-15 Thread Sameer Kumar
On Tue, Aug 16, 2016 at 1:10 PM James Sewell 
wrote:

> Hey,
>
> I understand that.
>
> But a hot standby should always be ready to promote (given it originally
> caught up) right?
>
> I think it's a moot point really as some sort of corruption has been
> introduced, the machines still won't wouldn't start after they could see
> the archive destination again.
>

Did you have a pending basebackup on the standby, or a pg_start_backup
with no matching pg_stop_backup?

Was there a crash/immediate shutdown on the standby during this network
outage? Do you have full_page_writes or fsync disabled?


>
> Cheers,
>
> James Sewell,
> Solutions Architect
>
>
>
> Suite 112, Jones Bay Wharf, 26-32 Pirrama Road, Pyrmont NSW 2009
> *P *(+61) 2 8099 9000 <(+61)%202%208099%209000>  *W* www.jirotech.com
> *F *(+61) 2 8099 9099 <(+61)%202%208099%209000>
>
> On Tue, Aug 16, 2016 at 12:36 PM, John R Pierce 
> wrote:
>
>> On 8/15/2016 7:23 PM, James Sewell wrote:
>>
>>> Those are all good questions.
>>>
>>> Essentially this is a situation where DR is network separated from Prod
>>> - so I would expect the archive command to fail. I'll have to check the
>>> script it must not be passing the error back through to PostgreSQL.
>>>
>>> This still shouldn't cause database corruption though right? - it's just
>>> not getting WALs.
>>>
>>
>> if the slave database is asking for the WAL's, it needs them. if it needs
>> them and can't get them, then it can't catch up and start.
>>
>>
>>
>> --
>> john r pierce, recycling bits in santa cruz
>>
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>
> --
> The contents of this email are confidential and may be subject to legal or
> professional privilege and copyright. No representation is made that this
> email is free of viruses or other defects. If you have received this
> communication in error, you may not copy or distribute any part of it or
> otherwise disclose its contents to anyone. Please advise the sender of your
> incorrect receipt of this correspondence.

-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com


Re: [GENERAL] Running pg_dump from a slave server

2016-08-16 Thread Sameer Kumar
On Wed, Aug 17, 2016 at 10:34 AM Patrick B  wrote:

> Hi guys,
>
> I'm using PostgreSQL 9.2 and I got one master and one slave with streaming
> replication.
>
> Currently, I got a backup script that runs daily from the master, it
> generates a dump file with 30GB of data.
>
> I changed the script to start running from the slave instead the master,
> and I'm getting this errors now:
>
> pg_dump: Dumping the contents of table "invoices" failed: PQgetResult()
>> failed.
>> pg_dump: Error message from server: ERROR:  canceling statement due to
>> conflict with recovery
>> DETAIL:  User was holding a relation lock for too long.
>
>
Looks like while your pg_dump sessions were trying to fetch the data,
someone fired a DDL or REINDEX or VACUUM FULL on the master database.

>
> Isn't that possible? I can't run pg_dump from a slave?
>

Well, you can do that, but it has some limitations. If you do this quite
often, it would be better to have a dedicated standby for taking
backups/pg_dumps. Then you can set max_standby_streaming_delay and
max_standby_archive_delay to -1 on it. But I would not recommend doing this
if you use your standby for other read queries or for high availability.

Another option would be to avoid the kind of queries which take exclusive
locks on the master database while pg_dump is running.
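
For the dedicated-standby route, the relevant settings would look roughly
like this in the standby's postgresql.conf (a sketch; values illustrative):

max_standby_streaming_delay = -1   # never cancel standby queries because of streamed WAL
max_standby_archive_delay = -1     # never cancel standby queries because of archived WAL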

> Cheers
>
> Patrick
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com


Re: [GENERAL] Running pg_dump from a slave server

2016-08-16 Thread Sameer Kumar
On Wed, Aug 17, 2016 at 11:36 AM Patrick B  wrote:

>
>
> 2016-08-17 15:31 GMT+12:00 Sameer Kumar :
>
>>
>>
>> On Wed, Aug 17, 2016 at 10:34 AM Patrick B 
>> wrote:
>>
>>> Hi guys,
>>>
>>> I'm using PostgreSQL 9.2 and I got one master and one slave with
>>> streaming replication.
>>>
>>> Currently, I got a backup script that runs daily from the master, it
>>> generates a dump file with 30GB of data.
>>>
>>> I changed the script to start running from the slave instead the master,
>>> and I'm getting this errors now:
>>>
>>> pg_dump: Dumping the contents of table "invoices" failed: PQgetResult()
>>>> failed.
>>>> pg_dump: Error message from server: ERROR:  canceling statement due to
>>>> conflict with recovery
>>>> DETAIL:  User was holding a relation lock for too long.
>>>
>>>
>> Looks like while your pg_dump sessions were trying to fetch the data,
>> someone fired a DDL or REINDEX or VACUUM FULL on the master database.
>>
>>>
>>> Isn't that possible? I can't run pg_dump from a slave?
>>>
>>
>> Well you can do that, but it has some limitation. If you do this quite
>> often, it would be rather better to have a dedicated standby for taking
>> backups/pg_dumps. Then you can set max_standby_streaming_delay and
>> max_standby_archiving_delay to -1. But I would not recommend doing this if
>> you use your standby for other read queries or for high availability.
>>
>> Another option would be avoid such queries which causes Exclusive Lock on
>> the master database during pg_dump.
>>
>
>
> Sameer,
>
> yeah I was just reading this thread:
> https://www.postgresql.org/message-id/AANLkTinLg%2BbpzcjzdndsnGGNFC%3DD1OsVh%2BhKb85A-s%3Dn%40mail.gmail.com
>
> Well.. I thought it was possible, but as the DB is big, this dump takes a
> long time and it won't work.
>
> I also could increase those parameters you showed, but won't do that as I
> only have one slave.
>

But do you have statements which cause exclusive locks? Ignoring them in
OLTP won't make your life any easier.

(Keeping 'avoid recovery conflicts' as your sole goal:) if you decide to
run pg_dump from the master, it would instead block those exclusive-locking
statements. This could cause delays, deadlocks, livelocks etc., and it
might take a while before you can figure out what is going on.

I would say try to find out who is creating the exclusive locks, and why.


> cheers
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com


Re: [GENERAL] Running pg_dump from a slave server

2016-08-16 Thread Sameer Kumar
On Wed, Aug 17, 2016 at 12:00 PM Venkata B Nagothi 
wrote:

> On Wed, Aug 17, 2016 at 1:31 PM, Sameer Kumar 
> wrote:
>
>>
>>
>> On Wed, Aug 17, 2016 at 10:34 AM Patrick B 
>> wrote:
>>
>>> Hi guys,
>>>
>>> I'm using PostgreSQL 9.2 and I got one master and one slave with
>>> streaming replication.
>>>
>>> Currently, I got a backup script that runs daily from the master, it
>>> generates a dump file with 30GB of data.
>>>
>>> I changed the script to start running from the slave instead the master,
>>> and I'm getting this errors now:
>>>
>>> pg_dump: Dumping the contents of table "invoices" failed: PQgetResult()
>>>> failed.
>>>> pg_dump: Error message from server: ERROR:  canceling statement due to
>>>> conflict with recovery
>>>> DETAIL:  User was holding a relation lock for too long.
>>>
>>>
>> Looks like while your pg_dump sessions were trying to fetch the data,
>> someone fired a DDL or REINDEX or VACUUM FULL on the master database.
>>
>>>
>>> Isn't that possible? I can't run pg_dump from a slave?
>>>
>>
>> Well you can do that, but it has some limitation. If you do this quite
>> often, it would be rather better to have a dedicated standby for taking
>> backups/pg_dumps. Then you can set max_standby_streaming_delay and
>> max_standby_archiving_delay to -1. But I would not recommend doing this if
>> you use your standby for other read queries or for high availability.
>>
>> Another option would be avoid such queries which causes Exclusive Lock on
>> the master database during pg_dump.
>>
>
> Another work around could be to pause the recovery, execute the pg_dump
> and then, resume the recovery process. Not sure if this work around has
> been considered.
>
> You can consider executing "pg_xlog_replay_pause()" before executing
> pg_dump and then execute "pg_xlog_replay_resume()" after the pg_dump
> process completes.
>

Ideally I would not prefer that if I had only one standby. If I am right,
pausing replay would increase the time my standby takes to complete
recovery and become active during a promotion (if I need it after a failure
of the master). It may impact high availability/uptime, wouldn't it?




> Regards,
> Venkata B N
>
> Fujitsu Australia
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com


Re: [GENERAL] Running pg_dump from a slave server

2016-08-16 Thread Sameer Kumar
On Wed, Aug 17, 2016 at 11:51 AM Patrick B  wrote:

>
>>
>> But do you have statements which causes Exclusive Locks? Ignoring them in
>> OLTP won't make your life any easier.
>>
>> (Keeping avoiding to get into 'recovery conflict' as your sole goal) If
>> you decide to  run pg_dump from master, it would block such statements
>> which have Exclusive locking. This would cause delays, deadlocks, livelocks
>> etc and it might take a while for your before you can figure out what is
>> going on.
>>
>> I would say try to find out who is and why is someone creating Exclusive
>> locks.
>>
>
>
> Yeah! The pg_dump was already running on the master... it's been running
> for months.. I just wanted to change now to use the slave, but it seems I
> can't right?
>
>
Not unless you find what is causing the conflict, or you are ready to live
with a delay in promotion when the master goes down.


> Exclusive locking - I probably have statements that causes this. Is there
> any way I could "track" them?
>

Do you have any of the following? (A sketch for spotting the lock holders
follows the list.)

1. DML operations changing large portions of a table at once - these might
lead to page-level locks which can conflict with SELECT/share locks on the
standby
2. Any REINDEX operations
3. DDL operations
4. VACUUM FULL
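
A sketch for spotting the sessions holding such locks on the master
(pg_stat_activity.pid and .query exist from 9.2 onwards, which matches your
version):

SELECT l.pid, l.relation::regclass AS relation, l.mode, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.granted
  AND l.mode IN ('ExclusiveLock', 'AccessExclusiveLock');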

-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com


Re: [GENERAL] Question about performance - Postgres 9.5

2016-08-16 Thread Sameer Kumar
On Wed, 17 Aug 2016, 1:36 p.m. Venkata B Nagothi,  wrote:

> On Mon, Jun 13, 2016 at 8:37 AM, Patrick B 
> wrote:
>
>> Hi guys,
>>
>> In the db I'm working one, it will be three tables:
>>
>> visits, work, others.
>>
>> Everything the customer do, will be logged. All inserts/updates/deletes
>> will be logged.
>>
>> Option 1: Each table would have its own log table.
>> visits_logs, work_logs, others_logs
>>
>> Option 2: All the logs would be stored here...
>> log_table
>>
>> Can you please guys tell me which option would be faster in your opinion,
>> and why?
>>
>
> Did you mean that, you will be auditing the activities happening on those
> 3 tables ? If yes, can you clarify on what you will be exactly logging ?
>
> What will be the volume of transactions all the 3 tables will be receiving
> over a business day ? if the volume is manageable, then one table for
> logging all the actions across 3 tables would be good.
>

It will also depend on what you would be using the log entries for. What
kind of queries? What retention period? If you will query most often on
date ranges and also purge by date, then it is better to log everything in
one table and partition it by date.

You can log the old and new records in JSON format in a single column; that
way you don't need to worry about the structure of the underlying tables
changing.

In the triggers which you use for auditing, you can transform the rows into
a JSON document. You can have columns for table name, action
(insert/update/delete), new record (json), old record (json) and a
timestamp.
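
A minimal sketch of such a trigger (all names are illustrative; to_jsonb
needs 9.5, which matches your subject line - on older versions use json
columns and to_json instead):

CREATE TABLE audit_log (
    id          bigserial PRIMARY KEY,
    table_name  text        NOT NULL,
    action      text        NOT NULL,          -- INSERT / UPDATE / DELETE
    old_record  jsonb,
    new_record  jsonb,
    logged_at   timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION audit_row_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (table_name, action, old_record, new_record)
    VALUES (TG_TABLE_NAME, TG_OP,
            CASE WHEN TG_OP IN ('UPDATE','DELETE') THEN to_jsonb(OLD) END,
            CASE WHEN TG_OP IN ('INSERT','UPDATE') THEN to_jsonb(NEW) END);
    RETURN NULL;   -- AFTER trigger, the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER visits_audit
AFTER INSERT OR UPDATE OR DELETE ON visits
FOR EACH ROW EXECUTE PROCEDURE audit_row_change();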


> If you are auditing and size of the data is manageable then, even one
> table would also be good. A separate audit table for each table would
> generally be a good idea, which makes it easy for tracking activities.
>
> Regards,
> Venkata B N
>
> Fujitsu Australia
>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com


Re: [GENERAL] Limit Heap Fetches / Rows Removed by Filter in Index Scans

2016-08-18 Thread Sameer Kumar
On Fri, 19 Aug 2016, 1:07 p.m. Victor Blomqvist,  wrote:

> Hi,
>
> Is it possible to break/limit a query so that it returns whatever results
> found after having checked X amount of rows in a index scan?
>
> For example:
> create table a(id int primary key);
> insert into a select * from generate_series(1,10);
>
> select * from a
> where id%2 = 0
> order by id limit 10
>
> In this case the query will "visit" 20 rows and filter out 10 of them. We
> can see that in the query plan:
> "Rows Removed by Filter: 10"
> "Heap Fetches: 20"
>
> Is it somehow possible to limit this query so that it only fetches X
> amount, in my example if we limited it to 10 Heap Fetches the query could
> return the first 5 rows?
>
>
> My use case is I have a table with 35 million rows with a geo index, and I
> want to do a KNN search but also limit the query on some other parameters.
> In some cases the other parameters restrict the query so much that Heap
> Fetches becomes several 100k or more, and in those cases I would like to
> have a limit to my query.
>

Have you checked the TABLESAMPLE clause in v9.5?

https://wiki.postgresql.org/wiki/TABLESAMPLE_Implementation


> Thanks!
> /Victor
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com


Re: [GENERAL] Limit Heap Fetches / Rows Removed by Filter in Index Scans

2016-08-19 Thread Sameer Kumar
On Fri, Aug 19, 2016 at 2:25 PM Victor Blomqvist  wrote:

> On Fri, Aug 19, 2016 at 1:31 PM, Sameer Kumar 
> wrote:
>
>>
>>
>> On Fri, 19 Aug 2016, 1:07 p.m. Victor Blomqvist,  wrote:
>>
>>> Hi,
>>>
>>> Is it possible to break/limit a query so that it returns whatever
>>> results found after having checked X amount of rows in a index scan?
>>>
>>> For example:
>>> create table a(id int primary key);
>>> insert into a select * from generate_series(1,10);
>>>
>>> select * from a
>>> where id%2 = 0
>>> order by id limit 10
>>>
>>> In this case the query will "visit" 20 rows and filter out 10 of them.
>>> We can see that in the query plan:
>>> "Rows Removed by Filter: 10"
>>> "Heap Fetches: 20"
>>>
>>> Is it somehow possible to limit this query so that it only fetches X
>>> amount, in my example if we limited it to 10 Heap Fetches the query could
>>> return the first 5 rows?
>>>
>>>


>
>>> My use case is I have a table with 35 million rows with a geo index, and
>>> I want to do a KNN search but also limit the query on some other
>>> parameters. In some cases the other parameters restrict the query so much
>>> that Heap Fetches becomes several 100k or more, and in those cases I would
>>> like to have a limit to my query.
>>>
>>
>> Have you checked the TABLESAMPLE clause in v9.5?
>>
>> https://wiki.postgresql.org/wiki/TABLESAMPLE_Implementation
>>
>>
> Unless I misunderstand what you mean or how it works I cant really see
> what it would help.
>
>
I stand corrected. TABLESAMPLE will not help you.


> I want my query to still return the "best" results, and I want it to use
> the index for that. Just randomly selecting out from the whole table will
> either have to sample a too small subset of the rows, or be too slow.
>
> So, given my query above, in the normal ("slow" case) I would find the 10
> first even rows:
> 2,4,6,8,10,12,14,16,18,20
> If I could restrict the heap fetches to 10 I would find
> 2,4,6,8,10
> However, with tablesample I might end up with for example these rows:
> 15024,71914,51682,7110,61802,63390,98278,8022,34256,49220
>
>
How about using LIMIT?

SELECT column_1, column_2, ...
FROM my_table
WHERE <your filter conditions>
ORDER BY my_column
LIMIT 10;



> In my use case I want the best rows (according to the order by), so just
> a random sample is not good enough.
>
> /Victor
>
>
>>
>>> Thanks!
>>> /Victor
>>>
>> --
>> --
>> Best Regards
>> Sameer Kumar | DB Solution Architect
>> *ASHNIK PTE. LTD.*
>>
>> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>>
>> T: +65 6438 3504 | M: +65 8110 0350
>>
>> Skype: sameer.ashnik | www.ashnik.com
>>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com


Re: [GENERAL] PG vs ElasticSearch for Logs

2016-08-19 Thread Sameer Kumar
On Fri, Aug 19, 2016 at 4:58 PM Thomas Güttler 
wrote:

>
>
> Am 19.08.2016 um 09:42 schrieb John R Pierce:
> > On 8/19/2016 12:32 AM, Thomas Güttler wrote:
> >> What do you think?
> >
> > I store most of my logs in flat textfiles syslog style, and use grep for
> adhoc querying.
> >
> >  200K rows/day, thats 1.4 million/week, 6 million/month, pretty soon
> you're talking big tables.
> >
> > in fact thats several rows/second on a 24/7 basis
>
> There is no need to store them more then 6 weeks in my current use case.
>
> I think indexing in postgres is much faster than grep.
>
> And queries including json data are not possible with grep (or at least
> very hard to type)
>
> My concern is which DB (or indexing) to use ...
>

How will you be using the logs? What kind of queries? What kind of
searches?
Correlating events and logs from various sources could be really easy with
joins, count and summary operations.

The kind of volume you are anticipating should be fine with Postgres, but
before you really decide which one to use, you need to figure out what you
want to do with this data once it is in Postgres.


> Regards,
>   Thomas
>
>
> --
> Thomas Guettler http://www.thomas-guettler.de/
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com


Re: [GENERAL] PG vs ElasticSearch for Logs

2016-08-19 Thread Sameer Kumar
On Sat, 20 Aug 2016, 2:00 a.m. Andy Colson,  wrote:

> On 8/19/2016 2:32 AM, Thomas Güttler wrote:
> > I want to store logs in a simple table.
> >
> > Here my columns:
> >
> >   Primary-key (auto generated)
> >   timestamp
> >   host
> >   service-on-host
> >   loglevel
> >   msg
> >   json (optional)
> >
> > I am unsure which DB to choose: Postgres, ElasticSearch or ...?
> >
> > We don't have high traffic. About 200k rows per day.
> >
> > My heart beats for postgres. We use it since several years.
> >
> > On the other hand, the sentence "Don't store logs in a DB" is
> > somewhere in my head.
> >
> > What do you think?
> >
> >
> >
>
> I played with ElasticSearch a little, mostly because I wanted to use
> Kibana which looks really pretty.  I dumped a ton of logs into it, and
> made a pretty dashboard ... but in the end it didn't really help me, and
> wasn't that useful.  My problem is, I don't want to have to go look at
> it.  If something goes bad, then I want an email alert, at which point
> I'm going to go run top, and tail the logs.
>

There are tools in the Elastic Stack which could have helped you get email
alerts and gather the equivalent of top output or log file tailing.


> Another problem I had with kibana/ES is the syntax to search stuff is
> different than I'm used to.  It made it hard to find stuff in kibana.
>
> Right now, I have a perl script that reads apache logs and fires off
> updates into PG to keep stats.  But its an hourly summary, which the
> website turns around and queries the stats to show pretty usage graphs.
>
> In the end, PG or ES, all depends on what you want.
>
> -Andy
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com


Re: [GENERAL] PG vs ElasticSearch for Logs

2016-08-22 Thread Sameer Kumar
On Mon, 22 Aug 2016, 3:40 p.m. Thomas Güttler, 
wrote:

>
>
> Am 19.08.2016 um 19:59 schrieb Andy Colson:
> > On 8/19/2016 2:32 AM, Thomas Güttler wrote:
> >> I want to store logs in a simple table.
> >>
> >> Here my columns:
> >>
> >>   Primary-key (auto generated)
> >>   timestamp
> >>   host
> >>   service-on-host
> >>   loglevel
> >>   msg
> >>   json (optional)
> >>
> >> I am unsure which DB to choose: Postgres, ElasticSearch or ...?
> >>
> >> We don't have high traffic. About 200k rows per day.
> >>
> >> My heart beats for postgres. We use it since several years.
> >>
> >> On the other hand, the sentence "Don't store logs in a DB" is
> >> somewhere in my head.
> >>
> >> What do you think?
> >>
> >>
> >>
> >
> > I played with ElasticSearch a little, mostly because I wanted to use
> Kibana which looks really pretty.  I dumped a ton
> > of logs into it, and made a pretty dashboard ... but in the end it
> didn't really help me, and wasn't that useful.  My
> > problem is, I don't want to have to go look at it.  If something goes
> bad, then I want an email alert, at which point
> > I'm going to go run top, and tail the logs.
> >
> > Another problem I had with kibana/ES is the syntax to search stuff is
> different than I'm used to.  It made it hard to
> > find stuff in kibana.
> >
> > Right now, I have a perl script that reads apache logs and fires off
> updates into PG to keep stats.  But its an hourly
> > summary, which the website turns around and queries the stats to show
> pretty usage graphs.
>
> You use Perl to read apache logs. Does this work?
>
> Forwarding logs reliably is not easy. Logs are streams, files in unix are
> not streams. Sooner or later
> the files get rotated. RELP exists, but AFAIK it's usage is not wide
> spread:
>
>https://en.wikipedia.org/wiki/Reliable_Event_Logging_Protocol
>
> Let's see how to get the logs into postgres 
>
> > In the end, PG or ES, all depends on what you want.
>
> Most of my logs start from a http request. I want a unique id per request
> in every log line which gets created. This way I can trace the request,
> even if its impact spans to several hosts and systems which do not receive
> http requests.
>

You may decide not to use Elasticsearch, but take a look at the other
components of the Elastic Stack, like Logstash and Beats. They can be
helpful even when you use Postgres as the end point. Otherwise (IMHO) you
would spend a lot of time writing scripts and jobs to capture and stream
logs. If I were you, I would not want to do that.




> Regards,
>Thomas Güttler
>
>
> --
> Thomas Guettler http://www.thomas-guettler.de/
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com


Re: [GENERAL] Get date timestamp(3) without time zone column - PGSQL 9.5

2016-09-04 Thread Sameer Kumar
On Mon, Sep 5, 2016 at 10:59 AM Patrick B  wrote:

> Hi guys,
>
> I got the tasks table that has the tasks_start column:
>
>> tasks_start| timestamp(3) without time zone
>
>
>
> select tasks_start from tasks LIMIT 1;
>
>> tasks_start
>> ---
>> 2016-08-10 00:30:00
>
>
>  I'm trying to cast the date, using this query:
>
>> SELECT cast(tasks_start as date) FROM "jobs" WHERE "tasks"."deleted" =
>> 'f' AND "tasks"."recurrence_id" = 1 AND (Date(tasks_start) in ('2016-08-10')
>
>
You might want to share the version of PostgreSQL you are using.

You might want to try the date_trunc function and the AT TIME ZONE
operator -

https://www.postgresql.org/docs/9.4/static/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC

SELECT date_trunc('day', tasks_start at TIME ZONE 'EST')

Note: I have not tried this statement

Is this something you are going to use often? If so, consider re-modelling
your query. The moment you use an expression on a column, a plain BTree
index on that column can no longer be used.
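
If the per-day filter is something you run often, an expression index keeps
it indexable (a sketch, reusing the cast from your query; the index name is
illustrative):

CREATE INDEX tasks_start_date_idx ON tasks (cast(tasks_start AS date));

SELECT * FROM tasks WHERE cast(tasks_start AS date) = DATE '2016-08-10';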


> but it doesn't work.. I get 0 rows... what am I doing wrong?
>
> cheers
> Patrick
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com


Re: [GENERAL] Query regarding deadlock

2016-11-24 Thread Sameer Kumar
On Fri, 25 Nov 2016, 9:45 a.m. Yogesh Sharma, <
yogesh1.sha...@nectechnologies.in> wrote:

> Dear All,
>
> Thanks in advance.
> I found below deadlock in postgresql logs.
> I cannot change calling of REINDEX and insert query sequence because it is
> execute automatically through some cron script.
>

Does this mean that you reindex quite often, on a schedule? Personally I
don't prefer that. To me it is like trying to fix something that is not
broken.

Ideally, reindex only what needs to be reindexed. I would not want to
reindex a whole table in an OLTP environment.
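
If a particular index really does need rebuilding on a busy table, one
pattern that avoids the lock REINDEX takes is to build a replacement
concurrently (a sketch; names are illustrative, it does not apply to
indexes backing primary key/unique constraints, and DROP INDEX CONCURRENTLY
needs 9.2+):

CREATE INDEX CONCURRENTLY table1_unique_id_idx_new ON table1 (unique_id);
DROP INDEX CONCURRENTLY table1_unique_id_idx;
ALTER INDEX table1_unique_id_idx_new RENAME TO table1_unique_id_idx;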


> ERROR:  deadlock detected
>  DETAIL:  Process 2234 waits for AccessShareLock on relation 16459 of
> database 16385; blocked by process 4111.
>  Process 4111 waits for ShareLock on relation 16502 of database 16385;
> blocked by process 2234.
>  Process 2234: INSERT INTO table1 ( id , unique_id )VALUES( '1', '4')
>  Process 4111: REINDEX TABLE table1
>



> Could you please provide any solution to resolve this deadlock.
>
> Regards,
> Yogesh
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 

-- 

Best Regards,

*Sameer Kumar | DB Solution Architect*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

T: +65 6438 3504 | www.ashnik.com

Skype: sameer.ashnik |   T: +65 8110 0350


[image: www.ashnik.com] <http://www.ashnik.com/>​


Re: [GENERAL] Query regarding deadlock

2016-11-24 Thread Sameer Kumar
On Fri, 25 Nov 2016, 10:07 a.m. Yogesh Sharma, <
yogesh1.sha...@nectechnologies.in> wrote:

> Dear John,
>
> Thanks for your support.
> I mean to say, the REINDEX calls hourly and insert query executes every
> minute to update.
> So, it might be race condition that these queries can call at same time.
>

Reindex every hour? How did you end up with that? It looks like you got
another problem while fixing one with a dirty solution. Why do you need to
reindex every hour?


> If there is any solution like we can add some check before REINDEX
> operation performed.
> If it is possible?
>
> Regards,
> Yogesh
>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
> Sent: Friday, November 25, 2016 10:55 AM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Query regarding deadlock
>
> On 11/24/2016 5:44 PM, Yogesh Sharma wrote:
> > I cannot change calling of REINDEX and insert query sequence because it
> is execute automatically through some cron script.
>
> any cron scripts are your own doing, so this statement makes no sense at
> all.
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 

-- 

Best Regards,

*Sameer Kumar | DB Solution Architect*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

T: +65 6438 3504 | www.ashnik.com

Skype: sameer.ashnik |   T: +65 8110 0350


[image: www.ashnik.com] <http://www.ashnik.com/>​


Re: [GENERAL] Migrating data from DB2 zOS to PostgreSQL

2016-12-06 Thread Sameer Kumar
On Tue, 6 Dec 2016, 9:27 p.m. Swapnil Vaze,  wrote:

>
> Hello Julien,
>
> We created DDLs from DB2 zOS system and tried to run script in below
> format:
>
>  ./db2topg.pl -f sql1.txt -o testdir
>
> It throws below error:
>
> I don't understand  > at ./db2topg.pl line 880,  line 24.
>
> For testing we used file with below content:
>
> cat sql1.txt
> -- This CLP file was created using DB2LOOK Version "10.5"
> -- Timestamp: Tue Dec  6 04:14:28 CST 2016
> -- Database Name: DB239
> -- Database Manager Version: DB2 Version 11.1.0
> -- Database Codepage: 1208
>
>
> 
> -- DDL Statements for Table "A90DVDT"."DLR_FAM_MRKTSHR_FY_END"
> 
>
> CREATE TABLE "A90DVDT"."DLR_FAM_MRKTSHR_FY_END"
> (
>  "DEALER_ID"  CHAR(6)  NOT NULL ,
>  "MKTSHR_MONTH"  DATE NOT NULL ,
>  "L12_DP_DLR_IN_AOR"  DECIMAL(15,6) ,
>  "L12_DP_DLR_OUT_AOR"  DECIMAL(15,6) ,
>  "L12_DP_DLR_SHARE"  DECIMAL(8,5) ,
>  "L12_SA_DLR_SHARE"  DECIMAL(8,5) ,
>  "L12_CA_DLR_SHARE"  DECIMAL(8,5) ,
>  "L12_U90_DLR_IN_AOR"  DECIMAL(15,6) ,
>  "L12_U90_DLR_OUT_AOR"  DECIMAL(15,6) ,
>  "L12_U90_DLR_SHARE"  DECIMAL(8,5)
> );
>

I would strongly recommend removing the quotes around the table and column
names. Otherwise you may have challenges porting the application to PG,
since PG by default folds object names to lower case unless you put quotes
around them.

So after migration a statement like

Select * from DLR_FAM_MRKTSHR_FY_END;

would fail.
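
That is, with the quoted DDL kept as-is, only the exact-case, quoted form
works:

SELECT * FROM "A90DVDT"."DLR_FAM_MRKTSHR_FY_END";   -- works
SELECT * FROM A90DVDT.DLR_FAM_MRKTSHR_FY_END;       -- folded to a90dvdt.dlr_fam_mrktshr_fy_end, fails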

Also, about the error: see what comes before this CREATE TABLE statement.
Make sure there is a semicolon terminating the statement before the CREATE
TABLE.

How are you running it? psql?


> Thanks,
> Swapnil
>
>
> On Tue, Dec 6, 2016 at 12:23 PM, Julien Rouhaud  > wrote:
>
> On Tue, Dec 06, 2016 at 11:06:12AM +0530, Swapnil Vaze wrote:
> > Hello,
> >
>
> Hello
>
> > We need some help on how we can migrate data from DB2 zOS database to
> > postgres database.
> >
> > Are there any utilities present? Any thoughts how we should approach?
>
> You can use this utility: https://github.com/dalibo/db2topg
>
> The README should provide all needed informations.
>
> --
> Julien Rouhaud
> http://dalibo.com - http://dalibo.org
>
>
>
>
> --
> Thanks & Regards,
> Swapnil Vaze
>
-- 

-- 

Best Regards,

*Sameer Kumar | DB Solution Architect*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

T: +65 6438 3504 | www.ashnik.com

Skype: sameer.ashnik |   T: +65 8110 0350


[image: www.ashnik.com] <http://www.ashnik.com/>​


[GENERAL] [pgsql-GENERAL] AUTOVACUUM and Streaming Replication in v9.5

2016-12-15 Thread Sameer Kumar
Hi,

I was wondering if there is some change in the way
max_standby_streaming_delay and hot_standby_feedback work in v9.5.

Below is a scenario which I had built to test out something but the
behavior confused me. I would like to check if there is something/some
parameter which I am missing or if my understanding is wrong.

I have a Standby Server with hot_standby_feedback set to off.

My max_standby_streaming_delay is set to 30s.

I started a transaction on standby with repeatable read isolation level. I
fired a select

select count(*) from table_a;

This gave me result 6;

I deleted a huge chunk of rows on Primary for table_a.

I can see that autovacuum launched an autovacuum and autoanalyze (going by
the timestamps in pg_stat_user_tables for table_a) after these deletes. For
a short period of time I also saw that pg_class.reltuples was reduced to a
smaller number (I think because of the deletions).

But when I checked again, pg_stat_user_tables on the master revealed that
autovacuum has triggered a few times since the deletion on table_a. I also
noted that on the master pg_class.reltuples is back to roughly 6 (which
was the case before the deletion).
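
For reference, that state can be inspected with queries along these lines
(a sketch):

select relname, last_autovacuum, last_autoanalyze, n_dead_tup
from pg_stat_user_tables where relname = 'table_a';

select reltuples, relpages from pg_class where relname = 'table_a';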

I thought that with hot_standby_feedback off, I might face an issue with
rows being removed by vacuum, but that did not happen. Can someone help me
understand how MVCC and autovacuum work with streaming replication?



Regards
Sameer
-- 

-- 

Best Regards,

*Sameer Kumar | DB Solution Architect*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

T: +65 6438 3504 | www.ashnik.com

Skype: sameer.ashnik |   T: +65 8110 0350


[image: www.ashnik.com] <http://www.ashnik.com/>​


Re: [GENERAL] [pgsql-GENERAL] AUTOVACUUM and Streaming Replication in v9.5

2016-12-15 Thread Sameer Kumar
On Fri, 16 Dec 2016, 12:10 a.m. Sameer Kumar, 
wrote:

> Hi,
>
> I was wondering if there is some change in the way
> max_standby_streaming_delay and hot_standby_feedback work in v9.5.
>
> Below is a scenario which I had built to test out something but the
> behavior confused me. I would like to check if there is something/some
> parameter which I am missing or if my understanding is wrong.
>
> I have a Standby Server with hot_standby_feedback set to off.
>
> My max_standby_streaming_delay is set to 30s.
>
> I started a transaction on standby with repeatable read isolation level. I
> fired a select
>
> select count(*) from table_a;
>
> This gave me result 6;
>
> I deleted a huge chunk of rows on Primary for table_a.
>
> I can see that autovaccum lauched an autovacuum and autoanalyze (going by
> the timestamp in pg_stat_user_tables for table_a) after these deletes. For
> a short period of time I also saw that pg_class.reltuples was reduced to a
> smaller number (I think because of deletions).
>
> But when I checked again, pg_stat_user_tables on master revealed that
> autovacuum has triggered a few times since the deletion for table_a. I also
> noted that on master pg_class.reltuples are back to roughly 6 (which
> was the case before deletion).
>
> I thought if my hot_standby_feedback is off, I might face an issue with
> rows being removed by vacuum. But that did not happen. Can someone help me
> to understand how MVCC, autovacuum work with Streaming replication.
>
>
I am running PostgreSQL 9.5.1 on CentOS 7.2 64-bit.

Please let me know if I have missed any relevant important detail.


>
> Regards
> Sameer
> --
>
> --
>
> Best Regards,
>
> *Sameer Kumar | DB Solution Architect*
>
> *ASHNIK PTE. LTD.*
>
> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533
>
> T: +65 6438 3504 | www.ashnik.com
>
> Skype: sameer.ashnik |   T: +65 8110 0350
>
>
> [image: www.ashnik.com] <http://www.ashnik.com/>​
>
-- 

-- 

Best Regards,

*Sameer Kumar | DB Solution Architect*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

T: +65 6438 3504 | www.ashnik.com

Skype: sameer.ashnik |   T: +65 8110 0350


[image: www.ashnik.com] <http://www.ashnik.com/>​


Re: [GENERAL] pgDay Asia / talks / lightning talks

2016-02-25 Thread Sameer Kumar
On Tue, Feb 16, 2016 at 7:43 PM Daniel Pocock  wrote:

>
>
> Hi,
>
> Is this the place to ask questions about pgDay Asia[1] or is there
> another mailing list for it?  The mailing list link on the pgDay Asia
> web site just takes me to a marketing list[2].  The seasiapug list[3]
> looks very quiet.
>

Oh yeah. There has not been much activity there. We will formally use that
or a separate mailing list post-event (for preparations for next year)


>
> I was looking at the pgDay Asia schedule and I notice some gaps and you
> don't have details of the lightning talks yet.  Do you still need
> speakers for full talks or lightning talks?
>

We are looking at using the lightning slots in an unconference format,
i.e. people can pitch their idea and reserve a slot on the day of the
event. Those would not be like the longer talks which are part of the
schedule.


>
> Also, I notice pgDay Asia is missing from the events list on the
> postgresql.org main page:
> http://www.postgresql.org/about/events/


We have added it a few days ago-
http://www.postgresql.org/about/event/1958/


If you have more queries or feedback you can route them to pgday-asia [at]
googlegroups [dot] com


> Regards,
>
> Daniel
>
> 1. http://2016.pgday.asia
> 2.
>
> http://uptime.us2.list-manage.com/subscribe/post?u=8b6e2840d44be26e9f646b9f9&id=128a96a18a
> 3. http://www.postgresql.org/list/seasiapug/
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] template1 being accessed

2016-04-05 Thread Sameer Kumar
On Tue, 5 Apr 2016 14:35 Luca Ferrari,  wrote:

> (sorry, back to the list)
>
> On Tue, Apr 5, 2016 at 6:11 AM, John R Pierce  wrote:
> > its also possible some management software might use it as a default
> place
> > to connect so they can get a list of databases or whatever .
>
> This is probably the most common case for continuos access to template1.
>

Or if you are using pgpool for connection pooling/load balancing, it might
connect to template1 and do health checks.
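
A quick way to see who is sitting on template1 at a given moment (a sketch):

select datname, usename, application_name, client_addr, backend_start
from pg_stat_activity
where datname = 'template1';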


>
> Luca
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] I can't see wal receiver process in one node

2016-04-05 Thread Sameer Kumar
On Wed, 6 Apr 2016 03:13 DrakoRod,  wrote:

> Hi everybody
>
> I have a question about the replication process, I have a cluster with
> three
> nodes A,B and C in PostgreSQL 9.3 with streaming replication in cascade
> mode
> MASTER->SLAVE-SLAVE. The question is a single process that I can't see in
> Node B but yes I can see it in the Node C.
>
> This process is this:
>
> Node B:
> *
> postgres: startup process recovering 00xxx *
>

What does recovery.conf look like on the slave (Node B)? Are you also/only
using restore_command?



> Node C:
> *
> postgres: startup process recovering 00xxx
> postgres: wal receiver process streaming /XX*
>
> The wal 00xxx is the same in both.
>
> Is a problem or is the case in the process of cascade replication?
>
> Best regards
> DrakoRod
>
>
>
>
> -
> Dame un poco de fe, eso me bastará.
> Rozvo Ware Solutions
> --
> View this message in context:
> http://postgresql.nabble.com/I-can-t-see-wal-receiver-process-in-one-node-tp5896950.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Invalid data read from synchronously replicated hot standby

2016-04-21 Thread Sameer Kumar
On Thu, 21 Apr 2016 04:05 , 
wrote:

> Hi,
>
> We are getting invalid data when reading from a synchronously replicated
> hot standby node in a 2-node setup. To better understand the situation, we
> have created a document that provides an overview. We are hoping that
> someone might be able to confirm whether or not the setup makes sense,
> i.e., whether we are using PostgreSQL correctly and experiencing a bug, or
> if we are using PostgreSQL incorrectly.
>
> Link to document that contains a step-by-step description of the
> situation:
> https://docs.google.com/document/d/1MuX8rq1gKw_WZ-HVflqxFslvXNTRGKa77A4NHto4ue0/edit?usp=sharing
>
>
>
>
>
> If the setup is sane (and expected to work), we will work on setting up a
> minimal reproduce that avoids our complete system. We are thinking that a
> scripted Ansible/Vagrant setup makes sense.
>

I am not sure if that is the cause, but you are on an old minor release.
Upgrade to the latest one (I guess 9.1.21).

Once you have upgraded, re-create the standby from scratch using a base backup
and then see if the error is still there.


> Best regards,
> Martin
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] [NOVICE] Fwd: Process scheduling in postgres

2016-04-22 Thread Sameer Kumar
On Fri, Apr 22, 2016 at 5:21 PM raghu vineel  wrote:

>
> Hi,
>
> I have a 4 core CPU for postgres and I have submitted 6 queries parallely
> in 6 different sessions. But I am seeing only 1 query per CPU and the
> remainig queries are queued. I am not sure why is it happening. Do you have
> any special scheduler for postgres? I can see that CPU has been shared with
> other non postgres processes but postgres processes are not sharing any CPU
> within a core. The following is the process states.
>
>
> *lms   2868  2209 98 07:20 ?00:02:30 postgres: lms controller
> [local] SELECTlms   2869  2209 98 07:20 ?00:02:30 postgres: lms
> controller [local] SELECT*
> lms   2870  2209  0 07:20 ?00:00:00 postgres: lms controller
> [local] SELECT
> lms   2871  2209  0 07:20 ?00:00:00 postgres: lms controller
> [local] SELECT
> lms   2872  2209  0 07:20 ?00:00:00 postgres: lms controller
> [local] SELECT
> *lms   2873  2209 98 07:20 ?00:02:30 postgres: lms controller
> [local] SELECT*
>
> If you see above, three Select queries have been queued. They are
> acquiring CPU only when there is a core without any postgres query running
> on it.
>
> I know the postgres limitation of one query using only one core. But is
> there any limitation of one CPU running only one query at a time?
>
>
Are all the queries related to the same table(s)?
Which version of PostgreSQL? What's the version of OS?

Check out the locks which have not been granted yet (pg_locks.granted = false).
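
For example, a minimal sketch (assuming 9.2+ where pg_stat_activity exposes
pid and query) to list the sessions still waiting for a lock:

SELECT l.pid, l.locktype, l.relation::regclass AS relation, l.mode, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;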


> Your help is appreciated.
>
> Thanks and regards,
> Vineel.
>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Error: no connection to the server

2016-04-25 Thread Sameer Kumar
On Mon, 25 Apr 2016 15:59 Marco Bambini,  wrote:

> Hi,
> I have a multithreaded C client and sometimes I receive the "no connection
> to the server" error message.
>

Please add more details:

- version of PostgreSQL
- platform and OS details

Are you using any middleware like pgpool or pgbouncer?

> I haven't found any documentation about it and about how to fix this issue.
>
> Anyone can point me to the right direction?
> How can I fix it?
>
> Thanks a lot.
> --
> Marco Bambini
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Slow join over three tables

2016-04-26 Thread Sameer Kumar
he cluster from an SSD drive, as a traditional HDD could not
> even manage the query in under 5 minutes. The system has a total memory of
> 24 GB, runs on Debian and uses an 4Ghz 8 core i7-4790 processor.
>
> Some important postgresql.conf readouts:
>
> - shared_buffers = 4GB
> - work_mem = 64MB
> - maintenance_work_mem = 1GB
> - checkpoint_segments = 50
> - checkpoint_completion_target = 0.9
> - autovacuum = on
>
> Is there something I am missing here? Any help on getting this join faster
> is much appreciated.
>
> Cheers,
> Tim
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Issue during postgresql startup

2016-04-28 Thread Sameer Kumar
On Thu, 28 Apr 2016 15:22 Yogesh Sharma, 
wrote:

> Dear All,
>
> Thanks for your support.
>
> Could you please tell me, how to recover my system.
>
> I am facing below errors.
> ---
> could not open relation 1663/16385/1299988: No such file or directory
> ---
>
> This message is continuous occurred in system .
>


It looks like someone has been playing around with the database files, or
there is some disk issue.

Basically Postgres stores each table inside

$tablespace_dir/$database_dir/table_file

In this error it seems one of those files has been removed or lost.
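
If you can still connect, a rough way to map the path in the error (tablespace
OID 1663 is normally pg_default, 16385 is the database OID, 1299988 is the
relfilenode) back to a relation is:

SELECT datname FROM pg_database WHERE oid = 16385;
-- then, connected to that database:
SELECT relname, relkind FROM pg_class WHERE relfilenode = 1299988;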



> I have tried below procedure for the same but after recover this system,
> all DB tables are destroyed.
> http://www.hivelogik.com/blog/?p=513
>
>
You might want to post the steps and the backup file/date which you used.

> Please let me know if any solution.
>
> Regards,
> Yogesh
>
>
>
> DISCLAIMER:
>
> ---
> The contents of this e-mail and any attachment(s) are confidential and
> intended
> for the named recipient(s) only.
> It shall not attach any liability on the originator or NEC or its
> affiliates. Any views or opinions presented in
> this email are solely those of the author and may not necessarily reflect
> the
> opinions of NEC or its affiliates.
> Any form of reproduction, dissemination, copying, disclosure, modification,
> distribution and / or publication of
> this message without the prior written consent of the author of this
> e-mail is
> strictly prohibited. If you have
> received this email in error please delete it and notify the sender
> immediately. .
>
> ---
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] index question

2016-05-02 Thread Sameer Kumar
On Tue, 3 May 2016 03:46 drum.lu...@gmail.com,  wrote:

> The index that I've created and is working is:
>
> Index without typecasting:
>
>> CREATE INDEX CONCURRENTLY ix_clientids2 ON gorfs.inode_segments USING
>> btree (full_path);
>
>
> Thanks for the help, guys!
>
>
> Melvin, that Query you sent is very interesting..
>
> SELECT n.nspname as schema,
>>i.relname as table,
>>i.indexrelname as index,
>>i.idx_scan,
>>i.idx_tup_read,
>>i.idx_tup_fetch,
>>CASE WHEN idx.indisprimary
>> THEN 'pkey'
>> WHEN idx.indisunique
>> THEN 'uidx'
>> ELSE 'idx'
>> END AS type,
>>pg_get_indexdef(idx.indexrelid),
>>CASE WHEN idx.indisvalid
>> THEN 'valid'
>> ELSE 'INVALID'
>> END as statusi,
>>pg_relation_size(quote_ident(n.nspname)|| '.' ||
>> quote_ident(i.relname)) as size_in_bytes,
>>pg_size_pretty(pg_relation_size(quote_ident(n.nspname)|| '.' ||
>> quote_ident(i.relname))) as size
>>   FROM pg_stat_all_indexes i
>>   JOIN pg_class c ON (c.oid = i.relid)
>>   JOIN pg_namespace n ON (n.oid = c.relnamespace)
>>   JOIN pg_index idx ON (idx.indexrelid =  i.indexrelid )
>>  WHERE n.nspname NOT LIKE 'pg_%'
>> ORDER BY 1, 2, 3;
>
>
>
> I've found more then 100 indexes that the columns:
>

I am not sure, but if the index is very large and you suspect it is not being
used, you might want to check the bloat percentage of the index. If it is
heavily bloated, perhaps a reindex might help.



> "idx_scan", "idx_tup_read" and "idx_tup_fetch" are 0.
> So, it's safe to say that they are not being used, is that right?
>
> But some indexes have almost 100GB on the size column. This means they are
> not being used now, but they could be used in the past?
>
> - Is it safe to remove them?
>
> Cheers
> Lucas
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] index question

2016-05-02 Thread Sameer Kumar
On Tue, 3 May 2016 08:45 David G. Johnston, 
wrote:

> I am not sure, but I think if the size of index is very huge and you
>> suspect they are not being used, you might want to check the bloats
>> percentage in the index. If this is true, perhaps a reindex might help.
>>
>
> If they aren't being used just drop them.  Its when they are used and get
> bloated that you are in trouble.
>


Yes, you are right. My bad. It is when they are bloated and still being used
by the planner that the scans get slower.
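
If a bloated index is still needed by the planner, one way to rebuild it
without blocking writes on 9.x is to build a replacement concurrently and swap
it in (a sketch using the index from this thread; the temporary name is made
up):

CREATE INDEX CONCURRENTLY ix_clientids2_new
ON gorfs.inode_segments USING btree (full_path);
DROP INDEX CONCURRENTLY gorfs.ix_clientids2;
ALTER INDEX gorfs.ix_clientids2_new RENAME TO ix_clientids2;

DROP INDEX CONCURRENTLY needs 9.2 or later.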

>
> There are recipes for bloat calculation out there that are worth
> investigating.​
>
>
> ​David J.​
>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Inserting into a master table with partitions does not return rows affected.

2016-05-10 Thread Sameer Kumar
On Tue, May 10, 2016 at 3:53 AM rverghese  wrote:

> I am moving towards a partitioned schema. I use a function to insert into
> the
> table. If the INSERT fails because of duplicates I do an UPDATE. This works
> fine currently on the non-partitioned table because I can use GET
> DIAGNOSTICS to get the row count on the INSERT.
>
> But when I use the Master table to insert into the partitions, GET
> DIAGNOSTICS always returns 0. So there is no way of knowing whether a row
> was inserted (I am catching the unique violation exception to do the
> UPDATE).
>

If I were you, I would put the update logic in the trigger function which is
used for routing the insert into the specific partition.
Of course I am not aware of your exact logic and scenario, so it may not work
best for you.
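
Just to illustrate the idea, a minimal sketch with hypothetical names
(measurements as the master table, measurements_2016_05 as one partition, id
as the unique key and value as the column being updated):

CREATE OR REPLACE FUNCTION measurements_insert_trigger()
RETURNS trigger AS $$
BEGIN
    BEGIN
        INSERT INTO measurements_2016_05 VALUES (NEW.*);
    EXCEPTION WHEN unique_violation THEN
        UPDATE measurements_2016_05
           SET value = NEW.value
         WHERE id = NEW.id;
    END;
    RETURN NULL;  -- keep the row out of the master table itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert
BEFORE INSERT ON measurements
FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();

Since the fallback UPDATE now happens inside the trigger, the calling code no
longer needs GET DIAGNOSTICS to decide whether to update.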


>
> What is a good alternative? We are on 9.4, so the UPSERT is not yet
> available to me.
> There should be some way to know if data was inserted into the partition.
>
> Thanks
> RV
>
>
>
> --
> View this message in context:
> http://postgresql.nabble.com/Inserting-into-a-master-table-with-partitions-does-not-return-rows-affected-tp5902708.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] COPY command & binary format

2016-05-10 Thread Sameer Kumar
On Tue, May 10, 2016 at 4:26 PM Nicolas Paris  wrote:

> Hello,
>
> What is the way to build a binary format (instead of a csv) ? Is there
> specification for this file ?
> http://www.postgresql.org/docs/9.5/static/sql-copy.html
>

>
> Could I create such format from java ?
>

You can use the COPY API of the JDBC driver to copy to STDOUT, compress the
stream, and then write it to a file with the usual Java file operations. You
will have to follow the reverse process while reading from this file and
loading it into a table.

But why would you want to do that?


>
> I guess this would be far faster, and maybe safer than CSVs
>

I don't think that assumption is right. COPY is not meant for backup; it is
for loading and unloading data.

What you probably need is pg_dump with -Fc format.
http://www.postgresql.org/docs/current/static/app-pgdump.html


>
> Thanks by advance,
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] COPY command & binary format

2016-05-10 Thread Sameer Kumar
On Tue, May 10, 2016 at 4:36 PM Sameer Kumar 
wrote:

> On Tue, May 10, 2016 at 4:26 PM Nicolas Paris  wrote:
>
>> Hello,
>>
>> What is the way to build a binary format (instead of a csv) ? Is there
>> specification for this file ?
>> http://www.postgresql.org/docs/9.5/static/sql-copy.html
>>
>
>>
>> Could I create such format from java ?
>>
>
> You can use COPY JDBC API to copy to STDOUT and then compress it before
> you use usual Java file operations to write it to a file. You will have to
> follow the reverse process while reading from this file and LOADING to a
> table.
>
> But why would you want to do that?
>
>
>>
>> I guess this would be far faster, and maybe safer than CSVs
>>
>
> I don't think assumption is right. COPY is not meant for backup, it is for
> LOAD and UN-LOAD.
>
> What you probably need is pg_dump with -Fc format.
> http://www.postgresql.org/docs/current/static/app-pgdump.html
>
>

As someone else suggested upthread, you can use the BINARY format in the COPY
command (the default is text).
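
For example (table name hypothetical; these run server-side, from psql you can
use \copy with the same options):

COPY my_table TO '/tmp/my_table.bin' WITH (FORMAT binary);
COPY my_table FROM '/tmp/my_table.bin' WITH (FORMAT binary);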


>
>> Thanks by advance,
>>
> --
> --
> Best Regards
> Sameer Kumar | DB Solution Architect
> *ASHNIK PTE. LTD.*
>
> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>
> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Streaming replication, master recycling

2016-05-11 Thread Sameer Kumar
On Wed, May 11, 2016 at 4:35 PM  wrote:

> I apologise for the missing data.
>
> we are running 9.1.15 on debian servers.
>
>
I think there was a change in v9.3 which ensures that if the master has been
shut down properly (smart or fast mode), any pending WAL is replicated to the
standby before it shuts down. Also, timeline switches are written into the WAL
files since v9.3.

So I don't see a reason why a proper switchover, with a fast shutdown of the
master followed by promotion of the standby, would cause trouble on v9.3 or
later.

Of course I could be wrong (and naive!), and since you are on 9.1 this may not
apply to your case.


when we promote the old slave, it seems to go fine. Are you saying that it
> will cause issues down the line if the previous master is not shut down
> before promoting?
>

You might want to share the recovery.conf on the standby node and the
recovery.conf which you add on the lost node (old master) when re-adding it as
a standby.


>
> I was actually more concerned with the fact that we (some times) recycle
> the old master without doing a full basebackup.
>

I have done this with v9.2 and v9.3 and it seems to work fine, as long as you
have not missed any transactions from the master (i.e. a controlled
switchover). If the master went down before it could replicate the last
committed transaction, I don't think the lost node (old master) will be able
to join the new timeline of the standby, so replication will not work (even
though the node itself starts up).
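
A quick sanity check before recycling the old master (pre-9.6 function names)
is to compare WAL positions; after a clean switchover they should line up:

-- on the master, just before shutting it down:
SELECT pg_current_xlog_location();

-- on the standby, before promoting it:
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();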


> Again, this seems to work, but this presentation seems to indicate that
> this can cause problems (while seeming to work):
> http://hlinnaka.iki.fi/presentations/NordicPGDay2015-pg_rewind.pdf
>
> The note is on page 14, under the headline: "Naive approach".
>
>
> thank you for your support,
> Fredrik
>
> On 11 May 2016 at 12:47:13 +02:00, Venkata Balaji N 
> wrote:
>
>
> On Wed, May 11, 2016 at 2:31 PM,  wrote:
>
> Hi All,
>
> we are currently using streaming replication on multiple node pairs. We
> are seeing some issues, but I am mainly interrested in clarification.
>
> When a failover occurs, we touch the trigger file, promoting the previous
> slave to master. That works perfectly.
>
> For recycling the previous master, we create a recovery.conf (with
> recovery_target_timeline = 'latest') and *try* to start up. If postgresql
> starts up, we accept it as a new slave. If it does not, we proceed with a
> full basebackup.
>
>
> Which version of postgresql you are using ?
>
> You need to shutdown master first, then promote slave and then other way
> round, but, this can be clarified only if you let us know the postgresql
> version. This is quite tricky in 9.2.x and from 9.3.x.
>
> Regards,
> Venkata B N
>
> Fujitsu Australia
>
>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Update or Delete causes canceling of long running slave queries

2016-05-12 Thread Sameer Kumar
On Thu, 12 May 2016 21:56 Viswanath,  wrote:

> Hi,
> I have the following configurations.
>
> On master:
> autovacuum=off
>

Have you changed it recently, or has it always been off? If you changed it
recently, the server needs to pick up the change (a reload/restart) for it to
take effect.

vacuum_defer_cleanup_age=0
>
> On slave:
> hot_standby_feedback = off
> max_standby_streaming_delay = 30s
>
> Now when I try to run a long query on slave and do some update or delete on
> the table on master I am getting the following error.
>
> ERROR:  canceling statement due to conflict with recovery
> DETAIL:  User query might have needed to see row versions that must be
> removed.
>
> The error code was : 40001
>
> I am not running vacuum manually and autovacuum is 'off'
> What could possibly causing the conflict?
>

Please share more details, like the PostgreSQL version and the standby DB
configuration.


>
>
> --
> View this message in context:
> http://postgresql.nabble.com/Update-or-Delete-causes-canceling-of-long-running-slave-queries-tp5903250.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Use of array_agg and array string on inner query

2016-05-18 Thread Sameer Kumar
On Thu, May 19, 2016 at 1:09 AM shankha  wrote:

> I have the following piece of code:
>
> DROP SCHEMA IF EXISTS s CASCADE;
> CREATE SCHEMA s;
>
> CREATE TABLE "s"."t1"
> (
> "c1" BigSerial PRIMARY KEY,
> "c2" BigInt NOT NULL,
> "c3" BigInt
> )
> WITH (OIDS=FALSE);
>
> INSERT INTO s.t1 (c2) VALUES (10);
> INSERT INTO s.t1 (c2, c3) VALUES (20, 10);
> INSERT INTO s.t1 (c2, c3) VALUES (30, 10);
>
> /* 1. */ SELECT t3.c1 FROM s.t1 as t3 JOIN s.t1 as t2 ON t3.c3 = t2.c2;
>
> /* 2. */ SELECT t1.c1, ARRAY_TO_STRING(ARRAY_AGG((t2.c1)), ',')
> FROM s.t1 LEFT JOIN  s.t1 as t2
> ON t2.c3 = t1.c2 GROUP BY t1.c1;
>
> /* 3. */ SELECT c1, c2,
> ARRAY_TO_STRING(ARRAY_AGG((SELECT t3.c1 FROM s.t1 as t3 JOIN s.t1
> as t2 ON t3.c3 = t2.c2)), ',')
> FROM s.t1 t1
> GROUP BY c1;
> DROP SCHEMA s CASCADE;
>

The query

SELECT t3.c1 FROM s.t1 as t3 JOIN s.t1 as t2 ON t3.c3 = t2.c2

produces multiple rows. Since you are calling the aggregate function on the
whole subquery result and not on a column inside it, the subquery is treated
as a scalar expression and therefore cannot return more than one row.



> The output for 1 query:
>
>  c1
>  
>  2
>  3
> (2 rows)
>
> 2 Query:
>
>  c1 | array_to_string
>  +-
>1 | 2,3
>2 |
>3 |
>   (3 rows)
>
> 3 Query gives me a error:
>
>psql:/tmp/aggregate.sql:24: ERROR:  more than one row returned
> by a subquery used as an expression
>
>
> The 3 query uses 1 query as inner query. Is there a way to make Query
> 3 work with inner query as 1 rather than reverting to 2.
>
> 3 output should be same as 2.
>
> I understand that the error message says query 1 when used as sub
> query of 3 cannot return more than one row.
>
> Pardon my limited knowledge of database.
>
>
> I have tried out:
>
>  SELECT c1, c2,
> ARRAY_TO_STRING((SELECT ARRAY_AGG(t3.c1) FROM s.t1 as t3 JOIN s.t1
> as t2 ON t3.c3 = t2.c2), ',')
> FROM s.t1 t1
> GROUP BY c1;
>
>
This works because the aggregate function is applied to the column inside the
subquery.


> Output is :
>
>  c1 | c2 | array_to_string
> ++-
>   2 | 20 | 2,3
>   1 | 10 | 2,3
>   3 | 30 | 2,3
>
> Could one of you help me with the correct query.
>
>
>
Maybe you should share some more details of exactly what you are expecting and
what output/correlation you want in the result of the query.


> Thanks
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Use of array_agg and array string on inner query

2016-05-18 Thread Sameer Kumar
On Thu, 19 May 2016, 2:07 a.m. shankha,  wrote:

> The original table is :
>
> c1 c2 c3
> 110
> 220 10
> 320 10
>
> So c3 of row 3 and row 2 are equal to c2 of row 1.
>
>
> The output I am looking for is :
>  c1 | array_to_string
>  +-
>1 | 2,3
>2 |
>3 |
>   (3 rows)
>
> How Can I modify this query :
>
> SELECT c1, c2,
> ARRAY_TO_STRING(ARRAY_AGG((SELECT t3.c1 FROM s.t1 as t3 JOIN s.t1
> as t2 ON t3.c3 = t2.c2)), ',')
> FROM s.t1 t1
> GROUP BY c1;
> DROP SCHEMA s CASCADE;
>

Move the ARRAY_AGG call around the column name instead of calling it on the
SELECT output.

The 4th query you have used seems to work, except that it 'kind of' does a
cross product / lateral join. You might want to use a CTE instead of the
nested SELECT and combine it with an OUTER JOIN, or in the inner query use a
correlated WHERE clause (WHERE t2.c3 = t1.c2), as sketched below.
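
A minimal sketch of the correlated version, using the s.t1 table from this
thread (the ORDER BY inside the aggregate is optional, it just makes the
output deterministic):

SELECT t1.c1,
       ARRAY_TO_STRING((SELECT ARRAY_AGG(t2.c1 ORDER BY t2.c1)
                        FROM s.t1 AS t2
                        WHERE t2.c3 = t1.c2), ',') AS children
FROM s.t1 AS t1
ORDER BY t1.c1;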


> to get me the output desired.
>
> Thanks
> Shankha Banerjee
>
>
> On Wed, May 18, 2016 at 1:57 PM, Sameer Kumar 
> wrote:
> >
> >
> > On Thu, May 19, 2016 at 1:09 AM shankha 
> wrote:
> >>
> >> I have the following piece of code:
> >>
> >> DROP SCHEMA IF EXISTS s CASCADE;
> >> CREATE SCHEMA s;
> >>
> >> CREATE TABLE "s"."t1"
> >> (
> >> "c1" BigSerial PRIMARY KEY,
> >> "c2" BigInt NOT NULL,
> >> "c3" BigInt
> >> )
> >> WITH (OIDS=FALSE);
> >>
> >> INSERT INTO s.t1 (c2) VALUES (10);
> >> INSERT INTO s.t1 (c2, c3) VALUES (20, 10);
> >> INSERT INTO s.t1 (c2, c3) VALUES (30, 10);
> >>
> >> /* 1. */ SELECT t3.c1 FROM s.t1 as t3 JOIN s.t1 as t2 ON t3.c3 =
> >> t2.c2;
> >>
> >> /* 2. */ SELECT t1.c1, ARRAY_TO_STRING(ARRAY_AGG((t2.c1)), ',')
> >> FROM s.t1 LEFT JOIN  s.t1 as t2
> >> ON t2.c3 = t1.c2 GROUP BY t1.c1;
> >>
> >> /* 3. */ SELECT c1, c2,
> >> ARRAY_TO_STRING(ARRAY_AGG((SELECT t3.c1 FROM s.t1 as t3 JOIN s.t1
> >> as t2 ON t3.c3 = t2.c2)), ',')
> >> FROM s.t1 t1
> >> GROUP BY c1;
> >> DROP SCHEMA s CASCADE;
> >
> >
> > The query
> >
> > SELECT t3.c1 FROM s.t1 as t3 JOIN s.t1 as t2 ON t3.c3 = t2.c2 produces
> > multiple rows. Since you are calling the aggregate function on the result
> > set and not as part of the expression, you are not able to get single
> row as
> > an output.
> >
> >
> >>
> >> The output for 1 query:
> >>
> >>  c1
> >>  
> >>  2
> >>  3
> >> (2 rows)
> >>
> >> 2 Query:
> >>
> >>  c1 | array_to_string
> >>  +-
> >>1 | 2,3
> >>2 |
> >>3 |
> >>   (3 rows)
> >>
> >> 3 Query gives me a error:
> >>
> >>psql:/tmp/aggregate.sql:24: ERROR:  more than one row returned
> >> by a subquery used as an expression
> >>
> >>
> >> The 3 query uses 1 query as inner query. Is there a way to make Query
> >> 3 work with inner query as 1 rather than reverting to 2.
> >>
> >> 3 output should be same as 2.
> >>
> >> I understand that the error message says query 1 when used as sub
> >> query of 3 cannot return more than one row.
> >>
> >> Pardon my limited knowledge of database.
> >>
> >>
> >> I have tried out:
> >>
> >>  SELECT c1, c2,
> >> ARRAY_TO_STRING((SELECT ARRAY_AGG(t3.c1) FROM s.t1 as t3 JOIN s.t1
> >> as t2 ON t3.c3 = t2.c2), ',')
> >> FROM s.t1 t1
> >> GROUP BY c1;
> >>
> >
> > This would work since the aggregate function has been used on the column.
> >
> >>
> >> Output is :
> >>
> >>  c1 | c2 | array_to_string
> >> ++-
> >>   2 | 20 | 2,3
> >>   1 | 10 | 2,3
> >>   3 | 30 | 2,3
> >>
> >> Could one of you help me with the correct query.
> >>
> >>
> >
> > May you should share some more details of exactly what you are expecting
> and
> > what is the output/corelation you want in the result of the query.
> >
> >>
> >> Thanks
> >>
> >>
> >> --
> >> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> >> To make changes to your subscription:
> >> http://www.postgresql.org/mailpref/pgsql-general
> >
> > --
> > --
> > Best Regards
> > Sameer Kumar | DB Solution Architect
> > ASHNIK PTE. LTD.
> >
> > 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
> >
> > T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] How to view creation date and time of a relation

2016-05-19 Thread Sameer Kumar
On Thu, May 19, 2016 at 3:29 PM John R Pierce  wrote:

> On 5/19/2016 12:18 AM, Shrikant Bhende wrote:
>
>
> Our application executes come scripts with the code consist of  DDL which
> creates lot of objects in the database in various schemas,also there are
> lot of connections firing the same code. I am able to locate the IP from
> where the script is initiated (which is causing more load on the database
> ), but I would like to know if I can pinpoint the relations which are
> created on a specific  date and time or else I can do something through
> which I get creation date and time of the objects.
>
>
> you would need to have postgres configured to log DDL, and set the log
> prefix to include timestamping, then you could scan those logs to get that
> information.its not otherwise stored in the database.
>
>
> Logging is the best way of capturing these events.

You can probably find out the relfilenode from pg_class for a given relation
name (relname), then go to the data directory (or tablespace directory) -->
database directory (named after the OID in pg_database) --> file name, and
check the timestamp of that file.

Though I don't think this infrastructure was built for that purpose.
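
For example, to find the file for a given table (name hypothetical) and then
check its timestamp on disk:

SELECT pg_relation_filepath('myschema.mytable');
-- returns something like base/16384/16423 relative to the data directory

Keep in mind that operations such as VACUUM FULL, CLUSTER or TRUNCATE rewrite
the file, so the timestamp is only the creation time of the current file, not
necessarily of the table.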


> --
> john r pierce, recycling bits in santa cruz
>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Postgresql-MongoDb foreigndata wrapper

2016-05-22 Thread Sameer Kumar
On Mon, 23 May 2016, 1:03 p.m. aluka raju,  wrote:

> Hello,
>
> I want use my data source as  mongodb and connect through postgresql fdw.
> Is their any restrictions that the data present in mongodb to be structured
> or unstructured data?
>

For connecting to MongoDB you would need a MongoDB foreign data wrapper (e.g.
mongo_fdw), not postgres_fdw. I don't think it has any restrictions with
respect to the data model.

BTW, why do you want to do that?


> Thanks
> aluka
>
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Postgresql-MongoDb foreigndata wrapper

2016-05-22 Thread Sameer Kumar
Cc pg-list

Please avoid reaching out to members directly instead of via the list. I may
not know something which someone else on the list may know.

On Mon, 23 May 2016, 1:22 p.m. aluka raju,  wrote:

> Hello,
>
> I have data storage in flat files (structured and unstructured) . I want
> to run sql queries on that , so i am looking in to postgresql to this on
> the data that i have. I want to prepare a software for running sql queries
> for the data in files,so i am trying to have separate postgresql engine and
> write an wrappper for connecting the data that are in files.
>

Do these files change?

>
> Do you have any idea how can we do ?
>

Have you considered loading this data into PostgreSQL? It can store both
relational and JSON data.
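
A minimal sketch of what that could look like, with hypothetical file paths
and table names (jsonb needs 9.4+, use json on older versions; one JSON
document per line for the unstructured file):

CREATE TABLE sales (id int, amount numeric, sold_on date);
COPY sales FROM '/path/to/sales.csv' WITH (FORMAT csv, HEADER);

CREATE TABLE events_raw (doc jsonb);
COPY events_raw FROM '/path/to/events.ndjson';
-- text format; any backslashes inside the JSON would need escaping

Once loaded, you can query both with plain SQL, including the JSON operators.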



>
> Thanks,
> aluka
>
>
>
>
> On Mon, May 23, 2016 at 10:37 AM, Sameer Kumar 
> wrote:
>
>>
>>
>> On Mon, 23 May 2016, 1:03 p.m. aluka raju, 
>> wrote:
>>
>>> Hello,
>>>
>>> I want use my data source as  mongodb and connect through postgresql
>>> fdw. Is their any restrictions that the data present in mongodb to be
>>> structured or unstructured data?
>>>
>>
>> For connecting to mongoDB you would need mongodb_fdw and not PostgreSQL
>> fdw. I don't think they have any restrictions with respect to the data
>> model.
>>
>> BTW, why do you want to do that?
>>
>>
>>> Thanks
>>> aluka
>>>
>>>
>> --
>> --
>> Best Regards
>> Sameer Kumar | DB Solution Architect
>> *ASHNIK PTE. LTD.*
>>
>> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>>
>> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>>
>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] archive_command during database shutdown

2016-05-25 Thread Sameer Kumar
On Thu, 26 May 2016, 1:25 a.m. Jeff Janes,  wrote:

> I've recently wanted to run a different archive_command during
> database shutdown than during normal operations.  In particular, if
> the normal archive process fails during normal operations, I want it
> to be retried later (as it currently does).  But if it fails during
> shutdown, I want it to run a fallback archive_command.
>

What version of PostgreSQL are you using?


> The only way I can see to accomplish this is to have the
> archive_command try to connect back to the database and see if it gets
> an error.  That seems pretty ugly.  Is there a better way?
>

What's your goal here?

During a shutdown, if you do not care much about the final checkpoint and the
flushing of dirty buffers to disk, you can do an immediate shutdown.


> Cheers,
>
> Jeff
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] 9.6beta, parallel execution and cpu_tuple_cost

2016-05-27 Thread Sameer Kumar
On Fri, 27 May 2016, 9:26 p.m. Thomas Kellerer,  wrote:

> Hello,
>
> while playing around with the parallel aggregates and seq scan in 9.6beta
> I noticed that Postgres will stop using parallel plans when cpu_tuple_cost
> is set to a very small number.
>
> When using the defaults and max_parallel_degree = 4, the following (test)
> query will be executed with 4 workers
>
>explain (analyze, verbose)
>select o.customer_id,
>   count(*) num_orders,
>   sum(ol.price) as total_price,
>   sum(p.purchase_price) as total_purchase_price
>from orders o
>  join order_line ol on o.id = ol.order_id
>  join product p ON ol.product_id = p.id
>group by o.customer_id;
>
> The execution plan is: https://explain.depesz.com/s/C7g
>
> After setting cpu_tuple_cost to something small:
>
>set cpu_tuple_cost = 0.0001;
>
> No parallel wokers are used: https://explain.depesz.com/s/q1zb
>
>
> I am not sure I understand why this is happening. Why would lowering the
> CPU cost for a tuple result in not using a parallel plan?
>
> Is this an oversight, a bug or intended?
>
>

This is expected. You have modified the CPU cost of processing a tuple, but
you have not modified the cost of transferring a tuple from a parallel worker
to the leader (parallel_tuple_cost, if I recall it right). That makes the
optimizer evaluate parallel processing of tuples as relatively more costly, so
it chooses a plan without parallel workers.

Perhaps if you reduce both parameters by the same factor, you will see that
parallel execution is preferred when it has the lower cost.
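
For example, with the defaults being cpu_tuple_cost = 0.01 and
parallel_tuple_cost = 0.1, scaling both by the same factor keeps the ratio
intact:

SET cpu_tuple_cost = 0.0001;
SET parallel_tuple_cost = 0.001;  -- keeps the original 1:10 ratio
-- then re-run the EXPLAIN (ANALYZE, VERBOSE) from above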



> Regards
> Thomas
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


[GENERAL] Online recovery of Tablespace

2014-05-21 Thread Sameer Kumar
Hi,

I am designing a backup strategy for a PostgreSQL database (v9.3). I have a
scenario for recovery of tablespaces:

1. Backup of whole database (including individual tablespaces which are
stored on different disks) has been taken at 11AM

2. My disk which stores tablespace- tblspc1 crashed at 2:00PM

3. Can I restore the backup of 11AM (only for one tablespace) and then
recover that tablespace to 2:00PM state?


Is this possible? I have attached the steps I tried (I believe my steps are
logically wrong, since I am using recovery.conf but I am not replacing the
data directory).

Is there any way to specify, in recovery.conf or otherwise, something that
would allow me to recover the transactions of a particular tablespace only? A
custom solution which occurs to me is using the pg_xlogdump contrib module.
Has anyone tried something similar?

Best Regards,

*Sameer Kumar | Database Consultant*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: *+65 8110 0350*  T: +65 6438 3504 | www.ashnik.com




This email may contain confidential, privileged or copyright material and
is solely for the use of the intended recipient(s).
Step1:

List of Tablespace:

   Name |  Owner   | Location 
-+--+--
 acct_tbsp   | postgres | /opt/PostgreSQL/tbspc
 pg_default  | postgres | 
 pg_global   | postgres | 
 test_tblspc | postgres | /opt/PostgresPlus/9.2AS/tblspc_1


Step2:

postgres=# select pg_start_backup('online_backup');
 pg_start_backup 
-
 0/1528
(1 row)


Step3:
Take backup of each tablespace location
cd /opt/PostgresPlus/9.2AS
tar -czvf tblspc_1.tar.gz tblspc_1

cd /opt/PostgreSQL
tar -czvf tbspc.tar.gz tbspc


Step4:
Take Backup of data directory.

cd $PGDATA/..
tar -czvf data.tar.gz data


Step5:
postgres=# select pg_stop_backup()
postgres-# ;
NOTICE:  pg_stop_backup complete, all required WAL segments have been archived
 pg_stop_backup 

 0/15015228
(1 row)


Step6: Create test tables on both the tablespaces

postgres=# create table after_online_backup_at_acct_tbsp(col1 int) tablespace 
acct_tbsp;
CREATE TABLE

postgres=# create table after_online_backup_at_test_tblspc(col1 int) tablespace 
test_tblspc;
CREATE TABLE



Step7:
Remove the directory for tablespace in another window:

rm -rf /opt/PostgreSQL/tbspc


Step8: Try to access the table which points to removed file

postgres=# select * from after_online_backup_at_tblspc1;
ERROR:  relation "after_online_backup_at_tblspc1" does not exist
LINE 1: select * from after_online_backup_at_tblspc1;


Step9: Restart the server- pg_ctl restart -m fast 

Check logs: Error noted-

2014-05-20 20:57:24 SGT LOG:  database system was shut down at 2014-05-20 
20:57:23 SGT
2014-05-20 20:57:24 SGT LOG:  could not open tablespace directory 
"pg_tblspc/41918/PG_9.3_201306121": No such file or directory
2014-05-20 20:57:24 SGT LOG:  autovacuum launcher started
2014-05-20 20:57:24 SGT LOG:  database system is ready to accept connections


List the tablesapces:
postgres=# \db
List of tablespaces
Name |  Owner   | Location 
-+--+--
 acct_tbsp   | postgres | /opt/PostgreSQL/tbspc
 pg_default  | postgres | 
 pg_global   | postgres | 
 test_tblspc | postgres | /opt/PostgresPlus/9.2AS/tblspc_1


Step10: Prepare for a recovery:
Stop the server
pg_ctl stop -m fast


go to data directory
cd $PGDATA
create recovery.conf with below content

restore_command = 'cp /opt/PostgresPlus/arch_dir/%f %p'


restore the tablespace backup:
cd /opt/PostgreSQL
tar -xzvf tbspc.tar.gz 


start PostgreSQL 
pg_ctl start

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] (Relatively) Oversized Checkpoint

2014-06-17 Thread Sameer Kumar
As pointed out, you may want to look at some of the kernel parameters which
control writeback (from cache) to disk.

Additionally, you may want to take a look at the background writer parameters,
especially bgwriter_delay. I guess tuning those should help as well.

It might be useful if you share your parameter file here.
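
For reference, these are the settings I would look at first (a plain catalog
query, nothing environment-specific):

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('bgwriter_delay',
               'bgwriter_lru_maxpages',
               'bgwriter_lru_multiplier',
               'checkpoint_completion_target',
               'checkpoint_segments');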

Best Regards,

*Sameer Kumar | Database Consultant*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: *+65 8110 0350*  T: +65 6438 3504 | www.ashnik.com




This email may contain confidential, privileged or copyright material and
is solely for the use of the intended recipient(s).


Re: [GENERAL] HOT standby with ONLY WAL shipping?

2014-06-17 Thread Sameer Kumar
On Fri, Jun 13, 2014 at 1:01 AM, CS_DBA  wrote:

> Is it possible to setup a hot standby based ONLY on WAL shipping and NOT
> use streaming replication?


If I am not wrong, what you are looking for is archive (log) shipping.
Any specific reason why you want to do that?

Anyway, you can also use pg_receivexlog for building the WAL archive or for
setting up a replica. But if you are trying to avoid some issue with the
streaming mechanism in PostgreSQL (which I assume would be very specific to
your environment), then this may not be very helpful, as pg_receivexlog works
on the same streaming protocol and needs a WAL sender process.



Best Regards,

*Sameer Kumar | Database Consultant*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: *+65 8110 0350*  T: +65 6438 3504 | www.ashnik.com




This email may contain confidential, privileged or copyright material and
is solely for the use of the intended recipient(s).


[GENERAL] NOT IN and NOT EXIST

2014-07-03 Thread Sameer Kumar
Hi,

The Postgres optimizer automatically tries to convert an IN clause to a hash
join (something similar to the EXISTS equivalent of that query).

Does a similar translation happen for NOT IN as well, given that the column
used is NOT NULL?

Select * from emp where deptid not in (select deptid from dept where
deptLocation='New York');

Will the above statement be automatically converted to a plan which matches
the statement below?

Select * from emp where not exists (select 1 from dept where
deptLocation='New York' and dept.deptid=emp.deptid);

 Regards
Sameer

PS: Sent from my Mobile device. Pls ignore typo n abb


Re: [GENERAL] NOT IN and NOT EXIST

2014-07-03 Thread Sameer Kumar
On Thu, Jul 3, 2014 at 11:52 PM, Steve Crawford <
scrawf...@pinpointresearch.com> wrote:

> You can easily test this for yourself using explain.

​I tried it out.

NOT EXISTS translates to a Hash Anti Join, while NOT IN translates to a NOT
(hashed SubPlan) operation.

Given that the columns used in the NOT IN clause (outer as well as inner) are
NOT NULL, should it not translate the NOT IN plan into something similar to
the NOT EXISTS plan?
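
For anyone who wants to reproduce this, the two plans can be compared directly
(emp/dept as in my earlier example; column names assumed):

EXPLAIN
SELECT * FROM emp
WHERE deptid NOT IN (SELECT deptid FROM dept WHERE deptlocation = 'New York');

EXPLAIN
SELECT * FROM emp e
WHERE NOT EXISTS (SELECT 1 FROM dept d
                  WHERE d.deptlocation = 'New York'
                    AND d.deptid = e.deptid);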


> Also note that depending on the version of PostgreSQL and the nature of
> your data it could be preferable to use WHERE IN... or to use EXISTS.
> Fortunately as the planner has been imbued with ever increasing smarts, the
> need to test and choose between the two seems to have diminished.


Postgres already does that for IN and EXISTS; both would try to use a hash.
But that does not seem to be the case with NOT IN and NOT EXISTS.



Best Regards,

*Sameer Kumar | Database Consultant*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: *+65 8110 0350*  T: +65 6438 3504 | www.ashnik.com




This email may contain confidential, privileged or copyright material and
is solely for the use of the intended recipient(s).


Re: [GENERAL] NOT IN and NOT EXIST

2014-07-07 Thread Sameer Kumar
On Sun, Jul 6, 2014 at 7:14 AM, Vik Fearing  wrote:

> > NOT EXISTS translates to HASH ANTI JOIN and NOT IN translates to NOT
> > (HASHED) operation.
> >
> > Given that the columns used in NOT IN clause (for outer as well as
> > inner) are NOT NULL, should not it translate a NOT IN plan similar to
> > NOT EXISTS plan?
>
> It will, hopefully in 9.5.
>
> https://commitfest.postgresql.org/action/patch_view?id=1487


​Cool!
Will be looking forward to it! But it seems a long way from now...


Best Regards,

*Sameer Kumar | Database Consultant*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: *+65 8110 0350*  T: +65 6438 3504 | www.ashnik.com




This email may contain confidential, privileged or copyright material and
is solely for the use of the intended recipient(s).


[GENERAL] Monitoring PostgreSQL with BMC Patrol and Sentry Monitoring Studio

2014-07-28 Thread Sameer Kumar
Hi,

I plan to use PostgreSQL on RHEL for one of our customers. They are already
using BMC Patrol for monitoring various applications and databases (Oracle
and SQL Server).

They would like to continue using the same tool for Postgres too. It looks
like Postgres is not supported out of the box by BMC Patrol; going through a
few threads tells me that it can be integrated with BMC Patrol via Sentry
Monitoring Studio and custom scripts.

Has anyone tried it?

Best Regards,

*Sameer Kumar | Database Consultant*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: *+65 8110 0350*  T: +65 6438 3504 | www.ashnik.com




This email may contain confidential, privileged or copyright material and
is solely for the use of the intended recipient(s).

