Re: Postgres PAF setup

2018-04-24 Thread Adrien Nayrat
On 04/23/2018 08:09 PM, Andrew Edenburn wrote:
> I am having issues with my PAF setup.  I am new to Postgres and have setup the
> cluster as seen below. 
> 
> I am getting this error when trying to start my cluster resources.
> 
>  
> 
> Master/Slave Set: pgsql-ha [pgsqld]
> 
>  pgsqld (ocf::heartbeat:pgsqlms):   FAILED dcmilphlum224 
> (unmanaged)
> 
>  pgsqld (ocf::heartbeat:pgsqlms):   FAILED dcmilphlum223 
> (unmanaged)
> 
> pgsql-master-ip    (ocf::heartbeat:IPaddr2):   Started dcmilphlum223
> 
>  
> 
> Failed Actions:
> 
> * pgsqld_stop_0 on dcmilphlum224 'unknown error' (1): call=239, 
> status=complete,
> exitreason='Unexpected state for instance "pgsqld" (returned 1)',
> 
>     last-rc-change='Mon Apr 23 13:11:17 2018', queued=0ms, exec=95ms
> 
> * pgsqld_stop_0 on dcmilphlum223 'unknown error' (1): call=248, 
> status=complete,
> exitreason='Unexpected state for instance "pgsqld" (returned 1)',
> 
>     last-rc-change='Mon Apr 23 13:11:17 2018', queued=0ms, exec=89ms
> 
>  
> 
> cleanup and clear is not fixing any issues and I am not seeing anything in the
> logs.  Any help would be greatly appreciated.
> 
>  

Hello Andrew,

Could you enable debug logs in Pacemaker?

On CentOS you have to edit the PCMK_debug variable in /etc/sysconfig/pacemaker:

PCMK_debug=crmd,pengine,lrmd

This should give you more information in the logs. The monitor action in PAF should
report why the cluster doesn't start:
https://github.com/ClusterLabs/PAF/blob/master/script/pgsqlms#L1525

Regards,

-- 
Adrien NAYRAT





Re: Adding AVG to a JOIN

2018-04-24 Thread Alexander Farber
Thank you for the explanation. I have rearranged my query and it works now
(surprisingly fast too) -

On Mon, Apr 23, 2018 at 9:58 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Apr 23, 2018 at 12:47 PM, Alexander Farber <
> alexander.far...@gmail.com> wrote:
>
>> SELECT
>> u.elo,
>> AVG(c.played - c.prev_played) AS
>> avg_time_per_move,
>> (SELECT ROUND(AVG(score), 1) FROM words_moves
>> WHERE uid = u.uid) AS score,
>>
>> And I don't understand why adding a CTE has caused it, because without
>> the CTE the GROUP BY u.elo was not required...
>>
>>
> Adding "AVG(c.played - c.prev_played)" directly to the top-level select
> statement column list is what turned it into a "GROUP BY" query.  When you
> embedded the "AVG(score)" in a subquery the GROUP BY was limited to just
> that subquery, and it had no other columns besides the aggregate and so
> didn't require a GROUP BY clause.
>
>
WITH cte AS (
    SELECT
        m.gid,
        m.uid,
        m.played,
        LAG(m.played) OVER (PARTITION BY m.gid ORDER BY played) AS prev_played
    FROM words_moves m
    JOIN words_games g ON (m.gid = g.gid AND m.uid IN (g.player1, g.player2))
    WHERE m.played > CURRENT_TIMESTAMP - INTERVAL '1 month'
)
SELECT
    u.elo,
    (SELECT TO_CHAR(AVG(played - prev_played), 'HH24:MI')
     FROM cte WHERE uid = u.uid) AS avg_time,
    (SELECT ROUND(AVG(score), 1)
     FROM words_moves WHERE uid = u.uid) AS avg_score,
    s.given,
    s.photo
FROM words_users u
JOIN words_social s USING (uid)
WHERE u.elo > 1500
-- take the most recent record from words_social
AND NOT EXISTS (SELECT 1
                FROM words_social x
                WHERE s.uid = x.uid
                AND x.stamp > s.stamp)
-- only show players who were active in the last week
AND EXISTS (SELECT 1
            FROM words_moves
            WHERE played > CURRENT_TIMESTAMP - INTERVAL '1 week'
            AND action IN ('play', 'skip', 'swap', 'resign'))
ORDER BY u.elo DESC
LIMIT 10;

Best regards
Alex


Re: Strange error in Windows 10 Pro

2018-04-24 Thread Moreno Andreo

  
  
On 24/04/2018 04:09, Dale Seaburg wrote:

> Thanks to Moreno and Igor for the Event Viewer suggestions.  Here are a few
> lines of log file where they differ between a good install and a bad
> incomplete install.  The good install was from my Shop PC with Windows 10 Pro.
> BTW, I had success with another Dell PC with Win 10 Pro about 4 months ago.
> This problem appears to be a one-off.
>
> Good Install:
> Called AclCheck(C:\Program Files\PostgreSQL\9.6\data)
> Called IsVistaOrNewer()...
>     'winmgmts' object initialized...
>     Version:10.
>     MajorVersion:10
> Executing icacls to ensure the SHOP-PC\Dale account can read the path
> C:\Program Files\PostgreSQL\9.6\data
>     Executing batch file 'rad0510A.bat'...
>     processed file: C:\Program Files\PostgreSQL\9.6\data
> Successfully processed 1 files; Failed processing 0 files
>
> Bad Install:
> Called AclCheck(D:\PostgreSQL\9.6\data)
> Called IsVistaOrNewer()...
>     'winmgmts' object initialized...
>     Version:10.
>     MajorVersion:10
> Executing icacls to ensure the WINDOWS-6BEGVO1\don king account can read the
> path D:\PostgreSQL\9.6\data
>     Executing batch file 'rad6DBC7.bat'...

It would be interesting to have the next two lines in the "bad install",
as in the "good install", to know whether rad6DBC7.bat execution terminated
with or without errors; if there was an error it would be nice to see it.
I assume D: is a secondary local drive or at least a local partition, not a
network or external one...

Thanks
Moreno.-
  





Re: Rationale for aversion to the central database?

2018-04-24 Thread Tony Shelver
I have done some big contracts for large financial companies, and for most
of them, ANY changes to the DB structure required extensive 3rd party
testing and a change control process that sometimes took weeks.

But we did get a waiver for the use of DB 'code' like stored procedures and
views, which only had to follow the standard development test / acceptance
procedure by separate developer, end user and third party test teams.

For me, the database is more immutable than the application logic and
especially the GUI, so it pays to spend a lot of time up front on DB
design.  Past experience has also led me to expect that the DBMS will have
a much longer shelf life than the application language / toolsets used
against it, or at least, over time the languages / toolsets tend to
multiply.

For my part, I like to spend a lot of time getting an optimal DB design,
and also putting a lot of validation logic into the DB.

I also like making extensive use of stored procedures; my experience is
that for a data-intensive multi-tool application they are faster and more
secure...



On 23 April 2018 at 19:22, Sven R. Kunze  wrote:

> So far, I have nothing to add, but just one thing. See below:
>
>
> On 09.04.2018 00:37, g...@luxsci.net wrote:
>
>> One advantage to using logic and functions in  the db is that you can fix
>> things immediately without having to make new application builds. That in
>> itself is a huge advantage, IMO.
>>
>
> This is actually not the case. You want to have that logic tested as
> thoroughly as possible, being so close to your precious data.
>
> So, you write migration code that substitutes the old logic, test the
> whole package, if successful, deploy (and thus run the migration).
>
> Cheers,
> Sven
>
>


Questions on user defined data types

2018-04-24 Thread a
Hi there:


I want to ask if I could define an object-like type in PostgreSQL; it may be
better to illustrate with an actual example.

Background information:

1. I would like to pack a group of data into one entry in a table.

2. The table contains some financial product information, including names,
public data, etc.

3. Each product needs to be valued, and the packed entry contains all the
assumptions (interest rate, etc.) needed to perform the valuation calculation
of the product.

Furthermore, for company management purposes, the packed data would have the
following properties:

1. Elements of the entry can be easily queried using SQL;

2. Changing element values is not allowed via a simple UPDATE; only
pre-compiled "method" functions can update them;

3. Changes made through the method functions would be recorded in a dedicated
sub-object called "LOG", which records update information such as update
times, etc.

I am asking whether this can be realized and, if yes, how I should do it.


Thank you so much ~~


Shore

Re: Questions on user defined data types

2018-04-24 Thread David G. Johnston
On Tue, Apr 24, 2018 at 4:53 AM, a <372660...@qq.com> wrote:

> Hi there:
>
> I want to ask if I could define an object-like type in PostgreSQL; it
> may be better to illustrate with an actual example.
>
> Background information:
>
> 1. I would like to pack a group of data into one entry in a table.
>
> 2. The table contains some financial product information, including
> names, public data, etc.
>
> 3. Each product needs to be valued, and the packed entry contains all
> the assumptions (interest rate, etc.) needed to perform the valuation
> calculation of the product.
>
> Furthermore, for company management purposes, the packed data would have
> the following properties:
>
> 1. Elements of the entry can be easily queried using SQL;
>
> 2. Changing element values is not allowed via a simple UPDATE; only
> pre-compiled "method" functions can update them;
>
> 3. Changes made through the method functions would be recorded in a
> dedicated sub-object called "LOG", which records update information such
> as update times, etc.
>
> I am asking whether this can be realized and, if yes, how I should do it.
>
>
What you describe seems doable, at least to some degree.  See the
documentation for:

CREATE TYPE
CREATE TABLE
CREATE FUNCTION
GRANT
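
A rough sketch of one way those pieces could fit together (all type, table,
column, and function names below are made up, not taken from your message):

CREATE TYPE valuation_assumptions AS (
    interest_rate  numeric,
    discount_curve text
);

CREATE TABLE products (
    id          serial PRIMARY KEY,
    name        text NOT NULL,
    assumptions valuation_assumptions,
    log         jsonb NOT NULL DEFAULT '[]'    -- update history
);

-- block direct writes; callers must go through the functions below
REVOKE UPDATE ON products FROM PUBLIC;

CREATE FUNCTION set_interest_rate(p_id integer, p_rate numeric)
RETURNS void
LANGUAGE sql
SECURITY DEFINER   -- runs with the owner's rights, so the REVOKE above still binds callers
AS $$
    UPDATE products
    SET assumptions.interest_rate = p_rate,
        log = log || jsonb_build_object('field', 'interest_rate',
                                        'value', p_rate,
                                        'updated_at', now())
    WHERE id = p_id;
$$;

GRANT EXECUTE ON FUNCTION set_interest_rate(integer, numeric) TO PUBLIC;

An application role would then get SELECT on the table and EXECUTE on the
update functions, but no direct UPDATE.
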
​
David J.


Re: rh-postgresql96 vs community postgresql-9.6

2018-04-24 Thread Adrian Klaver

On 04/23/2018 08:59 AM, yogeshr wrote:

> Our client has provided us a Red Hat machine for setting up PostgreSQL. We are
> allowed to install Red Hat-provided packages only.


What version of RH?



> So we have to install the Red Hat-provided PostgreSQL (rh-postgresql96) instead
> of postgresql-9.6.
>
> Is it a good approach to go with the Red Hat-specific PostgreSQL in production
> instead of community PostgreSQL?


It is the same code, so that should not be a problem. The issues would be
support and version life:

So do you have RH support for longer than the 5 years of community support
for a version?

Will you be able to upgrade to a new version in the future?


> Also, we were trying to set up repmgr for this PostgreSQL in the local
> environment but we didn't find any suitable package for this PostgreSQL.

That is something you are going to have to talk over with your client. An
RH-provided-packages-only policy is going to prevent them from
participating in the diverse third-party package community that exists
around Postgres.





> How can we install any high availability solution like repmgr for the Red
> Hat-provided PostgreSQL?


That is probably something you will have to discuss with RH.



> Any help would be appreciated.



> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html





--
Adrian Klaver
adrian.kla...@aklaver.com



RE: Strange error in Windows 10 Pro

2018-04-24 Thread Igor Neyman
From: Dale Seaburg [mailto:kg...@verizon.net]
Sent: Monday, April 23, 2018 10:10 PM
To: pgsql-general@lists.postgresql.org
Subject: Re: Strange error in Windows 10 Pro

Thanks to Moreno and Igor for the Event Viewer suggestions.  Here are a few 
lines of log file where they differ between a good install and a bad incomplete 
install.  The good install was from my Shop PC with Windows 10 Pro.

BTW, I had success with another Dell PC with Win 10 Pro about 4 months ago.  
This problem appears to be a one-off.
Good Install:
Called AclCheck(C:\Program Files\PostgreSQL\9.6\data)
Called IsVistaOrNewer()...
'winmgmts' object initialized...
Version:10.
MajorVersion:10
Executing icacls to ensure the SHOP-PC\Dale account can read the path 
C:\Program Files\PostgreSQL\9.6\data
Executing batch file 'rad0510A.bat'...
processed file: C:\Program Files\PostgreSQL\9.6\data
Successfully processed 1 files; Failed processing 0 files

Bad Install:
Called AclCheck(D:\PostgreSQL\9.6\data)
Called IsVistaOrNewer()...
'winmgmts' object initialized...
Version:10.
MajorVersion:10
Executing icacls to ensure the WINDOWS-6BEGVO1\don king account can read the 
path D:\PostgreSQL\9.6\data
Executing batch file 'rad6DBC7.bat'...

Notice the last four lines of the Good Install vs the last two lines of the Bad 
Install.  There is no indication of processing, or whether it was successful or 
not in the bad install.

As you can see, I even took Igor's suggestion to install in a non-system 
(Program Files) path, to no success.

Again, not sure what to do.  Open for suggestions...  I'm almost ready to call 
Dell and complain about a "bad" Win 10 Pro install.

Dale

Dale,
The problem isn’t a “bad” Win10 Pro install.
The problem is that Win10 on this machine is configured differently from the
Win10 machine where your install worked fine.
In Windows there is a “default” association between file types/extensions and
the programs that execute these files by default.  Batch files like
'rad6DBC7.bat' shouldn’t have any such association; Windows knows that this
type should be executed by the OS, like .exe files.  But sometimes Windows
users change it and associate .bat files with Notepad (or some other text
editor) in order to be able to edit batch scripts.  By the way, the batch file
names are different in the “good” and “bad” installations because those are
temporary files created by the VBS scripts that run during the install.
So, in order to fix this you need to dissociate the .bat type/extension from
Notepad, or whatever text editor it is associated with on the “bad” Windows
machine.  If you don’t know where to look for this feature in Windows, ask
someone who’s more familiar with the Windows environment.
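
For what it's worth, the association can also be checked and reset from an
elevated Command Prompt (a sketch of the stock commands and their default
values; verify on the affected machine):

rem show the current association and the command it runs
assoc .bat
ftype batfile

rem the Windows defaults are:
rem   .bat=batfile
rem   batfile="%1" %*

rem restore the defaults if they have been changed
assoc .bat=batfile
ftype batfile="%1" %*
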
Pretty sure that’s the solution to your problem.
Regards,
Igor Neyman


Re: Rationale for aversion to the central database?

2018-04-24 Thread Sam Gendler
On Sun, Apr 8, 2018 at 15:37 g...@luxsci.net  wrote:

>
>
> On April 8, 2018 02:40:46 pm PDT, "Guyren Howe"  wrote:
>
> One advantage to using logic and functions in  the db is that you can fix
> things immediately without having to make new application builds. That in
> itself is a huge advantage, IMO.
>


I doubt most of us would consider this any kind of advantage outside of the
momentary temptation to do it when an app is completely broken and needs to
be up in a hurry. Application changes, whether in the DB or in application
logic, need to be tested, and they need to be revision controlled and
released in a manner that can be easily rolled back in an automated manner.
The fact that putting logic in the database can effectively allow
developers to make unreleased changes to production apps is specifically
one of the problems that I am trying to avoid when I keep most logic in the
app instead of the DB. It’s a whole lot harder to make arbitrary manual
changes to code in the app, whether interpreted or compiled, if it is
running inside a container that cannot be updated. Even if you go in with a
shell and update an interpreted file, the next time that container is
launched the change will be lost, which is usually sufficient motivation to
keep devs from doing that kind of thing.

I’ll put some things in the db, either for performance or because I want
that logic to be built into the data and not be part of the application,
but I choose those contexts carefully and I write them in as portable a
manner as possible. And for those who say migrations don’t happen, I’ve
certainly been through a few, usually as part of an acquisition or the
like, but sometimes simply because another DB server better meets our needs
after a time. And migrating stored procs can be really difficult. Such code
usually has less complete unit and integration tests, which makes
validating those changes more difficult, too.

But the biggest reason is that databases often have to scale up rather than
out, so keeping as much logic as possible in the application code keeps my
scaling requirements for the DB server as minimal as possible. Sure, there
are workloads where pushing raw data across the wire will be more work than
processing it in the DB, and in those cases I may do that, but I consider
it premature optimization to just assume that is necessary without hard
evidence from production examples to suggest otherwise.

Finally, there’s the consistency argument. I want to find all of the logic
in one place: either entirely in the source code or entirely in the DB.
Having to trace things from the code to the DB and back again can make it a
whole lot harder to see, at a glance, what is happening in the code. Having
logic in the DB also means it can be difficult or impossible to have two
releases talking to the same schema version at the same time - so canary
builds and rolling deployments can be difficult. Of course, schema changes
can cause this problem regardless of whether there are stored procs, but
the more of your logic that lives in the DB, the more likely it is that
your releases will conflict over the DB. So I’m more likely to be able to
do a rolling release if I keep the DB as a dumb data store and keep logic
in the application code.


Backup Strategy Advise

2018-04-24 Thread David Gauthier
Hi:  I need some advice on how best to back up a PG DB.
PG 9.5.2 on RHEL6

The requirement is to be able to restore the DB after catastrophic failure
and lose no more than the last 15 minutes worth of data.  Also, we would
like to be able to do the backups on-line (no down time).  There is no need
for PITR other than the point in time being the latest possible.

Typically, I would think of doing a weekly full backup, daily incremental
backups, and turning on journaling to capture what goes on since the last
backup.  When DB recovery is needed, restore up to the last daily, then
replay the journal to restore up to the last time the journal was flushed to
disk (checkpoint = 15 minutes).  I'm not quite sure if something like this is
possible with PG.  I've read about the WAL files and wonder if they could be
used together with the on-line logical backups (pg_dump) to achieve the
15-minute requirement without needing downtime for physical backups.

Any advice?

Thanks in Advance.


Re: Postgres PAF setup

2018-04-24 Thread Jehan-Guillaume (ioguix) de Rorthais
On Mon, 23 Apr 2018 18:09:43 +
Andrew Edenburn  wrote:

> I am having issues with my PAF setup.  I am new to Postgres and have setup
> the cluster as seen below. I am getting this error when trying to start my
> cluster resources.
> [...]
> 
> cleanup and clear is not fixing any issues and I am not seeing anything in
> the logs.  Any help would be greatly appreciated.

This lacks a lot of information.

According to the PAF resource agent, your instances are in an "unexpected
state" on both nodes while PAF was actually trying to stop them.

Pacemaker might decide to stop a resource if the start operation fails.
Stopping it when the start failed gives the resource agent a chance to
stop the resource gracefully if that is still possible.

I suspect you have some setup mistake on both nodes, maybe the exact same one...

You should probably provide your full logs from pacemaker/corosync with timing
information so we can check all the messages coming from PAF from the very
beginning of the startup attempt.


> have-watchdog=false \

You should probably consider setting up a watchdog in your cluster.

> stonith-enabled=false \

This is really bad. Your cluster will NOT work as expected. PAF **requires**
STONITH to be enabled and properly working. Without it, sooner or later, you
will experience unexpected reactions from the cluster (freezing all
actions, etc.).

> no-quorum-policy=ignore \

You should not ignore quorum, even in a two-node cluster. See the "two_node"
parameter in the corosync.conf manual.
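
Roughly, that means having something like this in corosync.conf (just a
sketch; check the votequorum man page):

quorum {
    provider: corosync_votequorum
    two_node: 1
}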

> migration-threshold=1 \
> rsc_defaults rsc_defaults-options: \
> migration-threshold=5 \

The latter is the supported way to set migration-threshold. Your
"migration-threshold=1" should not be a cluster property but a default
resource option.
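
With pcs, that would be something along these lines (a sketch; also remove
migration-threshold from the cluster properties):

pcs resource defaults migration-threshold=5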

> My pcs Config
> Corosync Nodes:
> dcmilphlum223 dcmilphlum224
> Pacemaker Nodes:
> dcmilphlum223 dcmilphlum224
> 
> Resources:
> Master: pgsql-ha
>   Meta Attrs: notify=true target-role=Stopped

This target-role might have been set by the cluster because it cannot fence
nodes (which might be easier to deal with in your situation, btw). That means
the cluster will keep this resource down because of the previous errors.

> recovery_template=/pgsql/data/pg7000/recovery.conf.pcmk

You should probably not put your recovery.conf.pcmk in your PGDATA. The files
are different on each node. As you might want to rebuild the standby or the
old master after some failures, you would have to correct it each time. Keep
it outside of PGDATA to avoid this useless step.

> dcmilphlum224: pgsqld-data-status=LATEST

I suppose this comes from the "pgsql" resource agent, definitely not from PAF...

Regards,



Re: Backup Strategy Advise

2018-04-24 Thread Adrian Klaver

On 04/24/2018 07:50 AM, David Gauthier wrote:

> Hi:  I need some advice on how best to back up a PG DB.
> PG 9.5.2 on RHEL6
>
> The requirement is to be able to restore the DB after catastrophic
> failure and lose no more than the last 15 minutes worth of data.  Also,
> we would like to be able to do the backups on-line (no down time).
> There is no need for PITR other than the point in time being the latest
> possible.
>
> Typically, I would think of doing a weekly full backup, daily incremental
> backups, and turning on journaling to capture what goes on since the last
> backup.  When DB recovery is needed, restore up to the last daily, then
> replay the journal to restore up to the last time the journal was flushed
> to disk (checkpoint = 15 minutes).  I'm not quite sure if something like
> this is possible with PG.  I've read about the WAL files and wonder if they

Pretty sure that won't work. Once you do the restore from the last daily
you will change the Postgres files to a state different from the ones
captured by the journaling.  Replaying the journal will result in a
mixture of old and new.

> could be used together with the on-line logical backups (pg_dump) to
> achieve the 15-minute requirement without needing downtime for physical
> backups.
>
> Any advice?


Take a look here:

https://www.postgresql.org/docs/9.5/static/continuous-archiving.html
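
In rough terms that means a periodic base backup plus WAL archiving, e.g.
(a sketch for 9.5; the archive directory and copy command are placeholders,
use whatever archiving mechanism fits your environment):

# postgresql.conf
wal_level = archive          # or hot_standby if a standby is also planned
archive_mode = on
archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'
archive_timeout = 900        # force a WAL segment switch at least every 15 minutes

# taken online, e.g. nightly or weekly
pg_basebackup -D /backup/base/$(date +%Y%m%d) -Ft -z -X fetch

To recover, restore the latest base backup and replay the archived WAL via
restore_command in recovery.conf.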



> Thanks in Advance.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Backup Strategy Advise

2018-04-24 Thread Vick Khera
On Tue, Apr 24, 2018 at 10:50 AM, David Gauthier 
wrote:

> Typically, I would think doing a weekly full backup, daily incremental
> backups and turn on journaling to capture what goes on since the last
> backup.
>

This is almost the whole concept of the streaming replication built into
Postgres, except you are not applying the stream but archiving it. If you
have atomic file system snapshots, you can implement this strategy along
the lines of: mark the DB for a binary backup, snapshot the file system,
then copy that snapshot off to another system (locally or off-site);
meanwhile you accumulate the WAL files just as you would for streaming
replication. Once the copy is done, you can release the file system
snapshot and continue to archive the logs, similarly to how you would send
them to a remote system to be applied. You just don't apply them until you
need to do the recovery.

Or just set up streaming replication to a hot standby, because that's the
right thing to do. For over a decade I did this with twin servers and
slony1 replication. The cost of the duplicate hardware was nothing compared
to the cost of downtime.
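
For 9.5, the standby side of that is roughly (host and user names here are
placeholders; the primary also needs wal_level, max_wal_senders and a
replication entry in pg_hba.conf set up):

# seed the standby and have pg_basebackup write recovery.conf for you
pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/pgsql/9.5/data -X stream -R

# the generated recovery.conf contains something like
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator'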


Re: Using the public schema

2018-04-24 Thread Laurenz Albe
Charlin Barak wrote:

> We will be developing three new applications in PostgreSQL, each having its 
> own
> database instance running on different hosts. We will only have one schema per
> Postgres instance. The data is read-write only by one application/schema in 
> the
> DB instance and the data is published to other applications via API calls.
> 
> In such a standalone database configuration, are there any security 
> implications
> or any downsides to creating the application in the public schema?

No, that's fine.

Just REVOKE CREATE ON SCHEMA public FROM PUBLIC.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: [ClusterLabs] Postgres PAF setup

2018-04-24 Thread 范国腾
I have met a similar issue when postgres is not stopped normally.

You could run pg_controldata to check whether your postgres cluster state is
"shut down" / "shut down in recovery".

I changed /usr/lib/ocf/resource.d/heartbeat/pgsqlms to avoid this problem:

elsif ( $pgisready_rc == 2 ) {
# The instance is not listening.
# We check the process status using pg_ctl status and check
        # if it was properly shut down using pg_controldata.
ocf_log( 'debug', 'pgsql_monitor: instance "%s" is not listening',
$OCF_RESOURCE_INSTANCE );
# return _confirm_stopped();   # remove this line
return $OCF_NOT_RUNNING; 
}


-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Adrien Nayrat
Sent: April 24, 2018 16:16
To: Andrew Edenburn ; pgsql-gene...@postgresql.org; us...@clusterlabs.org
Subject: Re: [ClusterLabs] Postgres PAF setup

On 04/23/2018 08:09 PM, Andrew Edenburn wrote:
> I am having issues with my PAF setup.  I am new to Postgres and have 
> setup the cluster as seen below.
> 
> I am getting this error when trying to start my cluster resources.
> 
>  
> 
> Master/Slave Set: pgsql-ha [pgsqld]
> 
>  pgsqld (ocf::heartbeat:pgsqlms):   FAILED dcmilphlum224 
> (unmanaged)
> 
>  pgsqld (ocf::heartbeat:pgsqlms):   FAILED dcmilphlum223 
> (unmanaged)
> 
> pgsql-master-ip    (ocf::heartbeat:IPaddr2):   Started 
> dcmilphlum223
> 
>  
> 
> Failed Actions:
> 
> * pgsqld_stop_0 on dcmilphlum224 'unknown error' (1): call=239, 
> status=complete, exitreason='Unexpected state for instance "pgsqld" 
> (returned 1)',
> 
>     last-rc-change='Mon Apr 23 13:11:17 2018', queued=0ms, exec=95ms
> 
> * pgsqld_stop_0 on dcmilphlum223 'unknown error' (1): call=248, 
> status=complete, exitreason='Unexpected state for instance "pgsqld" 
> (returned 1)',
> 
>     last-rc-change='Mon Apr 23 13:11:17 2018', queued=0ms, exec=89ms
> 
>  
> 
> cleanup and clear is not fixing any issues and I am not seeing 
> anything in the logs.  Any help would be greatly appreciated.
> 
>  

Hello Andrew,

Could you enable debug logs in Pacemaker?

On CentOS you have to edit the PCMK_debug variable in /etc/sysconfig/pacemaker:

PCMK_debug=crmd,pengine,lrmd

This should give you more information in the logs. The monitor action in PAF
should report why the cluster doesn't start:
https://github.com/ClusterLabs/PAF/blob/master/script/pgsqlms#L1525

Regards,

--
Adrien NAYRAT



Re: rh-postgresql96 vs community postgresql-9.6

2018-04-24 Thread Ian Barwick

On 04/24/2018 10:06 PM, Adrian Klaver wrote:
> On 04/23/2018 08:59 AM, yogeshr wrote:
(...)
>> Also, we were trying to set up repmgr for this PostgreSQL in the local
>> environment but we didn't find any suitable package for this PostgreSQL.
>
> That is something you are going to have to talk over with your client.
> An RH-provided-packages-only policy is going to prevent them from participating
> in the diverse third-party package community that exists around Postgres.

Indeed; the RedHat packages seem to use a completely different file system 
layout
to the community packages, so many if not most community packages are unlikely
to work even if force-installed.

>> How can we install any high availability solution like repmgr for Red Hat
>> provided PostgreSQL?
>
> That is probably something you will have to discuss with RH.

Other possibilities would be: a) compile the package from source
(not an efficient method of deployment, not desirable to do on production
machines and often not possible), b) create your own RPMs compatible with
the RedHat ones (community spec files are available here:
https://git.postgresql.org/gitweb/?p=pgrpms.git ), or c) persuade your
client to switch to a support provider who provides support based
around community packages (there are a number, including the one mentioned
in my email signature).


Regards

Ian Barwick


--
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Strange Index sizes

2018-04-24 Thread Mohsen Bande
Hi all,
I have a user table with ~8M records, with fields of different types B-tree
indexed.
By investigating the index sizes, I found two strange things:
1. All index sizes are almost the same, regardless of the field type (boolean,
string, bigint).
2. All of them are much bigger than I expected; e.g. for a boolean field, even
after reindexing, the size is 217 MB, so each record takes ~27 bytes.

Could someone shed some light on this, please? Or point me to any
user-oriented documentation about index size estimation?
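
(For context on the numbers: a B-tree leaf entry carries roughly an 8-byte
index tuple header plus a 4-byte item pointer, and the key itself is padded
for alignment, so even a boolean key costs on the order of 20 bytes per row
before page overhead and fillfactor, which is in the ballpark of the ~27
bytes per record observed above. Per-index sizes can be listed with a query
like the following; "users" is a placeholder table name.)

SELECT c.relname AS index_name,
       pg_size_pretty(pg_relation_size(c.oid)) AS index_size
FROM pg_class c
JOIN pg_index i ON i.indexrelid = c.oid
WHERE i.indrelid = 'users'::regclass;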


Questions on input function of TOAST enabled user defined types

2018-04-24 Thread a
For a normal input function for a type with properties {internallength !=
variable, storage = plain/main}, the return type will be Datum.

The definition of Datum is an unsigned 64-bit integer, which may only hold
the address of the pointed-to data.

For variable-length data, the length of the data is also needed, so the
Datum type alone would not be enough, since it only provides a pointer to
the data.

So what should I do if I want a TOAST-enabled user-defined data type?
Should I use another type, such as struct varlena, as the return type?
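
As an illustration of what I have in mind (a rough sketch only, with made-up
names, following the varlena pattern from the documentation):

#include "postgres.h"
#include "fmgr.h"
#include <string.h>

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(mytype_in);

Datum
mytype_in(PG_FUNCTION_ARGS)
{
    char           *str = PG_GETARG_CSTRING(0);
    int             len = strlen(str);
    struct varlena *result;

    /* allocate the 4-byte length header plus the payload, then set the total size */
    result = (struct varlena *) palloc(VARHDRSZ + len);
    SET_VARSIZE(result, VARHDRSZ + len);
    memcpy(VARDATA(result), str, len);

    /* the returned Datum is just a pointer to this varlena */
    PG_RETURN_POINTER(result);
}

The corresponding CREATE TYPE would then use INTERNALLENGTH = VARIABLE and a
TOAST-able STORAGE setting such as extended.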


Thanks so much!!