Re: pg9.6: no backup history file (*.backup) created on hot standby

2018-10-09 Thread Michael Paquier
On Tue, Oct 09, 2018 at 03:26:35PM +0800, magodo wrote:
> Yes, but does this differ between archiving a regular WAL segment and
> archiving the backup history file? I mean, if the user doesn't handle
> duplicate archives, pg_wal will still fill up when archiving WAL.

A WAL segment has a unique name and is archived only once.  The problem
with backup history files on standbys is that the *same* file could end
up being generated *multiple* times by base backups taken in parallel.
That's a completely different story, and the window during which backup
history files can get the same name grows the longer the interval
between two checkpoints is.  That's a problem I studied a couple of
months back.
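
For illustration, a duplicate-tolerant archive_command could look like
the sketch below ("/archive" is a placeholder directory): it succeeds
when the file is new, or when an identical copy has already been
archived, and fails only on a real name collision with different
contents.

    archive_command = 'test ! -f /archive/%f && cp %p /archive/%f || cmp -s %p /archive/%f'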
--
Michael


signature.asc
Description: PGP signature


Re: pg9.6: no backup history file (*.backup) created on hot standby

2018-10-09 Thread magodo


On Tue, 2018-10-09 at 11:45 +0900, Michael Paquier wrote:
> (Please do not forget to add the community mailing list in CC.)
> 
> On Tue, Oct 09, 2018 at 10:33:56AM +0800, magodo wrote:
> > Since the backup history file aims to help the administrator identify
> > the point from which the WAL archive should be kept, and before which
> > the archive can be cleaned up, it is very helpful in many use cases.
> > 
> > Why not also create it when doing a base backup on a standby?
> 
> The backup_label file is equally helpful, and backup history files are
> not mandatory for backups.  One of the reasons why we cannot have those
> on standbys is that there is no actual way to ensure the uniqueness of
> this file, as two backups could be started in parallel with the same
> start location and the *same* file name.  If an archive command is not
> able to handle the duplicates correctly, you could bloat pg_wal.  And
> that's a real problem.
> --
> Michael

Oh, I almost forgot backup_label... Thank you for pointing it out!

Yet I am still not clear on how the bloat of pg_wal happens. Do you
mean pg_wal will be filled up by many .backup files?

Magodo





Re: pg9.6: no backup history file (*.backup) created on hot standby

2018-10-09 Thread magodo


On Tue, 2018-10-09 at 12:53 +0900, Michael Paquier wrote:
> On Tue, Oct 09, 2018 at 11:45:50AM +0800, magodo wrote:
> > Yet I am still not clear on how the bloat of pg_wal happens. Do you
> > mean pg_wal will be filled up by many .backup files?
> 
> If your archive_command is for example a simple cp (which it should not
> be, by the way), and you try to archive the same file twice, then the
> archive command will fail continuously and prevent existing WAL
> segments from being archived.  Segments are continuously created, and
> pg_wal grows in size.
> --
> Michael

Yes, but does this differ between archiving a regular WAL segment and
archiving the backup history file? I mean, if the user doesn't handle
duplicate archives, pg_wal will still fill up when archiving WAL.

Magodo





Re: pg9.6: no backup history file (*.backup) created on hot standby

2018-10-09 Thread magodo


On Tue, 2018-10-09 at 16:55 +0900, Michael Paquier wrote:
> On Tue, Oct 09, 2018 at 03:26:35PM +0800, magodo wrote:
> > Yes, but does this differ between archiving a regular WAL segment and
> > archiving the backup history file? I mean, if the user doesn't handle
> > duplicate archives, pg_wal will still fill up when archiving WAL.
> 
> A WAL segment has a unique name and is archived only once.  The problem
> with backup history files on standbys is that the *same* file could end
> up being generated *multiple* times by base backups taken in parallel.
> That's a completely different story, and the window during which backup
> history files can get the same name grows the longer the interval
> between two checkpoints is.  That's a problem I studied a couple of
> months back.
> --
> Michael

I just realized that a base backup taken on a standby behaves
differently from one taken on the primary in other ways too; for
example, it does not switch WAL at the beginning or the end.

So do you mean that, if there is no WAL switch on the primary (suppose
it is currently on 00010002), then every base backup made on the standby
will generate a backup history file with the same name (e.g.
00010002.backup)?

---
Magodo





SELECT UNION into a text[]

2018-10-09 Thread Alexander Farber
Good evening,

I am trying to SELECT ARRAY_AGG into an array from 2 tables.

But unfortunately here is what I get in PostgreSQL 10.5:

SELECT ARRAY_AGG(hashed)
  FROM words_nouns
 WHERE added > TO_TIMESTAMP(1539100913)
UNION
SELECT ARRAY_AGG(hashed)
  FROM words_verbs
 WHERE added > TO_TIMESTAMP(1539100913);
      array_agg
---------------------
 {noun1,noun2,noun3}
 {verb1,verb2}

And thus I cannot assign it to the _added variable in my custom function:

CREATE OR REPLACE FUNCTION words_get_added(
in_visited integer,
OUT out_json jsonb
) RETURNS jsonb AS
$func$
DECLARE
_added text[];
BEGIN
-- create array with words added to dictionary since in_visited timestamp
IF in_visited > 0 THEN
_added := (
SELECT ARRAY_AGG(hashed)
FROM words_nouns
WHERE added > TO_TIMESTAMP(in_visited)
UNION
SELECT ARRAY_AGG(hashed)
FROM words_verbs
WHERE added > TO_TIMESTAMP(in_visited)
);

IF CARDINALITY(_added) > 0 THEN
out_json := jsonb_build_object('added', _added);
END IF;
END IF;
END
$func$ LANGUAGE plpgsql;

And the assignment results in the error message:

 www java[1082]: org.postgresql.util.PSQLException: ERROR: more than one
row returned by a subquery used as an expression
  Where: SQL statement "SELECT (SELECT ARRAY_AGG(hashed) FROM words_nouns
  WHERE added > TO_TIMESTAMP(in_visited) UNION SELECT ARRAY_AGG(hashed)
  FROM words_verbs WHERE added > TO_TIMESTAMP(in_visited))"
  PL/pgSQL function words_get_added(integer) line 7 at assignment

Please help me to modify my SELECT UNION so that I get just one row as the result:

 {noun1,noun2,noun3,verb1,verb2}

Regards
Alex


Re: SELECT UNION into a text[]

2018-10-09 Thread David G. Johnston
On Tue, Oct 9, 2018 at 9:32 AM Alexander Farber wrote:

>
> SELECT ARRAY_AGG(hashed)
>FROM words_nouns
>WHERE added > TO_TIMESTAMP(1539100913)
> UNION
> SELECT ARRAY_AGG(hashed)
> FROM words_verbs
> WHERE added > TO_TIMESTAMP(1539100913);
>
>
SELECT array_agg(hashed) FROM (
SELECT hashed FROM words_nouns
UNION ALL
SELECT hashed FROM words_verbs
) AS src
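
Inside the PL/pgSQL function the assignment then stays a single-row
subquery; a sketch based on the function above, with the original WHERE
clauses put back in:

_added := (
    SELECT ARRAY_AGG(hashed)
    FROM (
        SELECT hashed FROM words_nouns WHERE added > TO_TIMESTAMP(in_visited)
        UNION ALL
        SELECT hashed FROM words_verbs WHERE added > TO_TIMESTAMP(in_visited)
    ) AS src
);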

David J.


tds_fdw binary column

2018-10-09 Thread Aleš Zelený
Hello,

I have an issue with a foreign table using tds_fdw from PostgreSQL 10 to
Sybase ASE 15.7.

The issue is that the primary key, which I have to use in my predicate,
is of the Sybase binary data type.

The foreign table maps the ID column from Sybase binary to PostgreSQL BYTEA.

Simple select is smooth:
[local]:5432 postgres@postgres:7650
=# select branch_id from ase.tds_tbl limit 1;
NOTICE:  tds_fdw: Query executed correctly
NOTICE:  tds_fdw: Getting results
┌────────────────────┐
│     branch_id      │
├────────────────────┤
│ \x000246000944242d │
└────────────────────┘
(1 row)

Whereas select using the ID fails:

[local]:5432 postgres@postgres:7650
=# select * from ase.tds_tbl where branch_id =
E'\\x000246000944242d'::bytea;
ERROR:  DB-Library error: DB #: 102, DB Msg: General SQL Server error:
Check messages from the SQL Server, OS #: -1, OS Msg: , Level: 15
Time: 0.213 ms
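
One workaround I am considering (a sketch only; it assumes my tds_fdw
version supports the per-table "query" option, and "sybase_srv" is a
placeholder server name) is to embed the predicate verbatim in the
remote query, using a Sybase-side hex literal:

CREATE FOREIGN TABLE ase.tds_tbl_by_id (
    branch_id bytea
    -- ... remaining columns ...
)
SERVER sybase_srv
OPTIONS (query 'SELECT * FROM tds_tbl WHERE branch_id = 0x000246000944242d');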

Thanks for any hints.
Kind regards Ales Zeleny


Privilege mess?

2018-10-09 Thread Thiemo Kellner



Hi all

I installed pglogger (https://sourceforge.net/projects/pglogger/) and
tried to insert into the "level" table as user "act", but it fails
claiming insufficient privileges even though INSERT is granted to
PUBLIC (see below). What am I missing?


Kind regards Thiemo


thiemo @ thiemos-toshi ~/external_projects/pg-scd-code/trunk :-( % psql -U act
psql (10.5 (Debian 10.5-1.pgdg90+1))

Type "help" for help.

act=> insert into logger.LEVEL (
act(> SCOPE,
act(> LEVEL
act(> ) values (
act(> 'inline_code_block',
act(> 'INFO'
act(> );
ERROR:  permission denied for schema logger
LINE 1: insert into logger.LEVEL (
^
act=> SELECT grantor, grantee, table_catalog, table_schema,
act->        table_name, privilege_type
act->   FROM information_schema.table_privileges
act->  WHERE grantor = 'logger'
act->    AND table_schema = 'logger'
act->    AND table_name = 'level';
 grantor | grantee | table_catalog | table_schema | table_name | privilege_type
---------+---------+---------------+--------------+------------+----------------
 logger  | PUBLIC  | act           | logger       | level      | INSERT
 logger  | PUBLIC  | act           | logger       | level      | SELECT
 logger  | PUBLIC  | act           | logger       | level      | UPDATE
 logger  | PUBLIC  | act           | logger       | level      | DELETE
(4 rows)




Re: Privilege mess?

2018-10-09 Thread Christoph Moench-Tegeder
## Thiemo Kellner (thi...@gelassene-pferde.biz):

> I installed pglogger (https://sourceforge.net/projects/pglogger/) and
> try to insert into the "level" table as user "act" but it fails
> claiming insufficient privileges even though insert is granted to
> public (see below). What am I missing?

Schema privileges.

> ERROR:  permission denied for schema logger
> LINE 1: insert into logger.LEVEL (

It says "permission denied for schema", so this is not about table
privileges (GRANT INSERT/UPDATE/... ON TABLE ...), but about schema
provileges. I'd guess you miss USAGE on schema logger.
See https://www.postgresql.org/docs/current/static/sql-grant.html

AFAIK those privileges are not found in information_schema; you'll
have to use pg_catalog for that.
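
For example, a sketch using the role and schema names from your mail:

GRANT USAGE ON SCHEMA logger TO act;

-- and to check the privilege from SQL:
SELECT has_schema_privilege('act', 'logger', 'USAGE');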

Regards,
Christoph

-- 
Spare Space.



Re: tds_fdw binary column

2018-10-09 Thread Adrian Klaver

On 10/9/18 12:07 PM, Aleš Zelený wrote:

Hello,

I have an issue with a foreign table using tds_fdw from PostgreSQL 10 to
Sybase ASE 15.7.

The issue is that the primary key, which I have to use in my predicate,
is of the Sybase binary data type.

The foreign table maps the ID column from Sybase binary to PostgreSQL BYTEA.

Simple select is smooth:
[local]:5432 postgres@postgres:7650
=# select branch_id from ase.tds_tbl limit 1;
NOTICE:  tds_fdw: Query executed correctly
NOTICE:  tds_fdw: Getting results
┌────────────────────┐
│     branch_id      │
├────────────────────┤
│ \x000246000944242d │
└────────────────────┘
(1 row)

Whereas select using the ID fails:

[local]:5432 postgres@postgres:7650
=# select * from ase.tds_tbl where branch_id =
E'\\x000246000944242d'::bytea;
ERROR:  DB-Library error: DB #: 102, DB Msg: General SQL Server error: 
Check messages from the SQL Server, OS #: -1, OS Msg: , Level: 15

  ^^
Have you done the above, i.e. looked at the error log on the server?


Time: 0.213 ms

Thanks for any hints.




Kind regards Ales Zeleny



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: FTS trigger works 1 at a time, but fails with bulk insert script

2018-10-09 Thread Malik Rumi
I have several different logs. I'm not sure which is the 'right' one. None
of them show anything other than routine start and stop activity, even
after I added a single Entry document. However, I did notice that, despite
the 'incomplete startup', the last one is the only one that mentions
listening on any port. That might be just a version change. I originally
installed 9.4. I tried 9.5 but knew it didn't go right. I tried 10 when I
was still on Ubuntu 16.04, which is not supposed to be compatible with 10.
I have since gone up to Ubuntu 18.04 but have not tried to re-install pg
10. Bottom line, this seems to be a completely different problem.


malikarumi@Tetuoan2:/var/log/postgresql$ tail -f postgresql-9.4-main.log.1

2018-10-07 11:09:53 CDT [1223-1] [unknown]@[unknown] LOG:  incomplete
startup packet
2018-10-07 11:09:53 CDT [1222-1] LOG:  database system was shut down at
2018-10-06 17:41:15 CDT
2018-10-07 11:09:53 CDT [1224-1] postgres@postgres FATAL:  the database
system is starting up

2018-10-07 11:09:54 CDT [1149-1] LOG:  database system is ready to accept
connections
2018-10-07 was two days ago!

malikarumi@Tetuoan2:/var/log/postgresql$ tail -f postgresql-9.4-main.log

2018-10-08 22:22:17 CDT [1266-2] LOG:  database system is shut down
2018-10-09 07:59:49 CDT [1216-1] LOG:  database system was shut down at
2018-10-08 22:22:17 CDT
2018-10-09 07:59:49 CDT [1217-1] [unknown]@[unknown] LOG:  incomplete
startup packet
2018-10-09 07:59:50 CDT [1227-1] postgres@postgres FATAL:  the database
system is starting up

2018-10-09 07:59:51 CDT [1154-1] LOG:  database system is ready to accept
connections
What's with the fatal and the incomplete startup? Given that, how ready is
it really to accept connections?

malikarumi@Tetuoan2:/var/log/postgresql$ tail -f postgresql-9.5-main.log

2018-10-08 22:22:18 CDT [1271-2] LOG:  database system is shut down
2018-10-09 07:59:49 CDT [1210-1] LOG:  database system was shut down at
2018-10-08 22:22:18 CDT
2018-10-09 07:59:49 CDT [1211-1] [unknown]@[unknown] LOG:  incomplete
startup packet
2018-10-09 07:59:50 CDT [1226-1] postgres@postgres FATAL:  the database
system is starting up

2018-10-09 07:59:50 CDT [1153-1] LOG:  database system is ready to accept
connections
Same question / observation

malikarumi@Tetuoan2:/var/log/postgresql$ tail -f postgresql-9.5-main.log.1

2018-10-07 11:09:53 CDT [1220-1] LOG:  database system was shut down at
2018-10-06 17:41:15 CDT
2018-10-07 11:09:53 CDT [1221-1] [unknown]@[unknown] LOG:  incomplete
startup packet
2018-10-07 11:09:53 CDT [1225-1] postgres@postgres FATAL:  the database
system is starting up

2018-10-07 11:09:54 CDT [1150-1] LOG:  database system is ready to accept
connections
Same - and this was two days ago, too

malikarumi@Tetuoan2:/var/log/postgresql$ tail -f postgresql-10-main.log
2018-10-08 22:22:16.047 CDT [1159] LOG:  received fast shutdown request
2018-10-08 22:22:17.337 CDT [1159] LOG:  aborting any active transactions
2018-10-08 22:22:17.927 CDT [1159] LOG:  worker process: logical
replication launcher (PID 1281) exited with exit code 1
2018-10-08 22:22:17.988 CDT [1276] LOG:  shutting down
2018-10-08 22:22:19.327 CDT [1159] LOG:  database system is shut down
2018-10-09 07:59:48.574 CDT [1155] LOG:  listening on IPv4 address
"127.0.0.1", port 5434
2018-10-09 07:59:48.727 CDT [1155] LOG:  listening on Unix socket
"/var/run/postgresql/.s.PGSQL.5434"
2018-10-09 07:59:50.590 CDT [1223] LOG:  database system was shut down at
2018-10-08 22:22:19 CDT
2018-10-09 07:59:51.058 CDT [1155] LOG:  database system is ready to accept
connections
2018-10-09 07:59:51.617 CDT [1274] [unknown]@[unknown] LOG:  incomplete
startup packet


*“None of you has faith until he loves for his brother or his neighbor what
he loves for himself.”*


On Mon, Oct 8, 2018 at 6:36 PM Adrian Klaver wrote:

> On 10/8/18 3:54 PM, Adrian Klaver wrote:
> > On 10/8/18 1:58 PM, Malik Rumi wrote:
> >> So what is the script you used to do the bulk INSERT?
> >>
> >> There's actually three, but they are all basically the same. The
> >> differences have to do with the source material being inserted:
> >>
> >> # usr/local/bin/python3.6
> >> # coding: utf-8
> >>
> >> from os import environ
> >> environ['DJANGO_SETTINGS_MODULE'] = 'chronicle.settings'
> >> import django
> >> django.setup()
> >> from ktab.models import Entry, Tag
> >
> > So I am going to assume Entry and Tag map to the tables ktab_entry and
> > public.ktab_entry_tags respectively.
> >
> >> from django.utils.text import slugify
> >> import csv
> >>
> >>
> >> filename = '/home/malikarumi/Projects/Progress_Logs/edit_blogpost.csv'
> >>
> >> with open(filename, 'rt') as text:
> >>  reader = csv.DictReader(text, delimiter=',')
> >>  # next(reader, None)
> >>  for row in reader:
> >>  my_entry = Entry.objects.create(
> >>  title=row['title'], slug=row['slug'],
> >> chron_date=row['created'],
> >>  clock=row['clock'], content=row['content'])
> >>  my_entry.tags.a

Re: FTS trigger works 1 at a time, but fails with bulk insert script

2018-10-09 Thread Adrian Klaver

On 10/9/18 3:25 PM, Malik Rumi wrote:
I have several different logs. I'm not sure which is the 'right' one.
None of them show anything other than routine start and stop activity,
even after I added a single Entry document. However, I did notice that,
despite the 'incomplete startup', the last one is the only one that
mentions listening on any port. That might be just a version change. I
originally installed 9.4. I tried 9.5 but knew it didn't go right. I
tried 10 when I was still on Ubuntu 16.04, which is not supposed to be
compatible with 10. I have since gone up to Ubuntu 18.04 but have not
tried to re-install pg 10. Bottom line, this seems to be a completely
different problem.


The current logs will be the ones without a number suffix, so:

postgresql-9.4-main.log -- is the current log

postgresql-9.4-main.log.1 -- is a past log. The lower the number, the
more recently it was rotated out. So this is the last log before the
current one.


The information in the logs is controlled by settings in the 
postgresql.conf file:


https://www.postgresql.org/docs/10/static/runtime-config-logging.html

Assuming you are using the Debian/Ubuntu packaging of Postgres, do:

pg_lsclusters

This will return something like:

Ver Cluster Port Status Owner    Data directory               Log file
9.6 main    5422 down   postgres /var/lib/postgresql/9.6/main /var/log/postgresql/postgresql-9.6-main.log
10  main    5432 online postgres /var/lib/postgresql/10/main  /var/log/postgresql/postgresql-10-main.log



The first two fields above (Ver, Cluster) are the important ones for now.
Use them to navigate to:


/etc/postgresql/Ver/Cluster

to find the postgresql.conf file for the cluster.

Look at how your logging is configured and make adjustments using the
information in the logging configuration link above. You will need to
reload the server for the changes to take effect. At the least I would
set log_connections and log_disconnections to 'on'. This will at least
allow you to see which server you are connecting to. I would also set
log_statement to at least 'mod' so you can see your INSERTs.
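
For example, a minimal sketch of those settings in postgresql.conf:

log_connections = on
log_disconnections = on
log_statement = 'mod'

followed by a reload, e.g. as a superuser:

SELECT pg_reload_conf();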


malikarumi@Tetuoan2:/var/log/postgresql$ tail -f postgresql-9.4-main.log.1

2018-10-07 11:09:53 CDT [1223-1] [unknown]@[unknown] LOG:  incomplete 
startup packet
2018-10-07 11:09:53 CDT [1222-1] LOG:  database system was shut down at 
2018-10-06 17:41:15 CDT
2018-10-07 11:09:53 CDT [1224-1] postgres@postgres FATAL:  the database 
system is starting up


2018-10-07 11:09:54 CDT [1149-1] LOG:  database system is ready to 
accept connections

2018-10-07 was two days ago!

malikarumi@Tetuoan2:/var/log/postgresql$ tail -f postgresql-9.4-main.log

2018-10-08 22:22:17 CDT [1266-2] LOG:  database system is shut down
2018-10-09 07:59:49 CDT [1216-1] LOG:  database system was shut down at 
2018-10-08 22:22:17 CDT
2018-10-09 07:59:49 CDT [1217-1] [unknown]@[unknown] LOG:  incomplete 
startup packet
2018-10-09 07:59:50 CDT [1227-1] postgres@postgres FATAL:  the database 
system is starting up


2018-10-09 07:59:51 CDT [1154-1] LOG:  database system is ready to 
accept connections
What's with the fatal and the incomplete startup? Given that, how ready 
is it really to accept connections?


malikarumi@Tetuoan2:/var/log/postgresql$ tail -f postgresql-9.5-main.log

2018-10-08 22:22:18 CDT [1271-2] LOG:  database system is shut down
2018-10-09 07:59:49 CDT [1210-1] LOG:  database system was shut down at 
2018-10-08 22:22:18 CDT
2018-10-09 07:59:49 CDT [1211-1] [unknown]@[unknown] LOG:  incomplete 
startup packet
2018-10-09 07:59:50 CDT [1226-1] postgres@postgres FATAL:  the database 
system is starting up


2018-10-09 07:59:50 CDT [1153-1] LOG:  database system is ready to 
accept connections

Same question / observation

malikarumi@Tetuoan2:/var/log/postgresql$ tail -f postgresql-9.5-main.log.1

2018-10-07 11:09:53 CDT [1220-1] LOG:  database system was shut down at 
2018-10-06 17:41:15 CDT
2018-10-07 11:09:53 CDT [1221-1] [unknown]@[unknown] LOG:  incomplete 
startup packet
2018-10-07 11:09:53 CDT [1225-1] postgres@postgres FATAL:  the database 
system is starting up


2018-10-07 11:09:54 CDT [1150-1] LOG:  database system is ready to 
accept connections

Same - and this was two days ago, too

malikarumi@Tetuoan2:/var/log/postgresql$ tail -f postgresql-10-main.log
2018-10-08 22:22:16.047 CDT [1159] LOG:  received fast shutdown request
2018-10-08 22:22:17.337 CDT [1159] LOG:  aborting any active transactions
2018-10-08 22:22:17.927 CDT [1159] LOG:  worker process: logical 
replication launcher (PID 1281) exited with exit code 1

2018-10-08 22:22:17.988 CDT [1276] LOG:  shutting down
2018-10-08 22:22:19.327 CDT [1159] LOG:  database system is shut down
2018-10-09 07:59:48.574 CDT [1155] LOG:  listening on IPv4 address 
"127.0.0.1", port 5434
2018-10-09 07:59:48.727 CDT [1155] LOG:  listening on Unix socket 
"/var/run/postgresql/.s.PGSQL.5434"
2018-10-09 07:59:50.590 CDT [1223] LOG:  database system was shut down 
at 2018-10-08 22:22:19 CDT
201

COPY threads

2018-10-09 Thread Rob Sargent
Can anyone here tell me whether or not the CopyManager facility in JDBC
via org.postgresql:postgresql:42.1.4 is internally multithreaded?
Running on CentOS 7 (all participants), Java 8, Postgres 10.5.






Re: Privilege mess?

2018-10-09 Thread Thiemo Kellner



Quoting Christoph Moench-Tegeder:


Schema privileges. I'd guess you're missing USAGE on schema logger.


Thanks for the hint. Coming from Oracle, I did not know about the USAGE
grant, so I completely overlooked the possibility that a different grant
was missing. I just read the documentation, but I cannot quite grasp the
sense behind it:


'For schemas, ... this allows the grantee to “look up” objects within  
the schema. Without this permission, it is still possible to see the  
object names, e.g. by querying the system tables.'


Does it not say that you do not need the USAGE privilege, as you can
query the data catalog anyway to get the object's details? And indeed,
DBeaver queries the details of the object without the USAGE privilege.

To carry out actions on objects one needs the specific grant, like
SELECT, anyway. I do not see the point of the USAGE privilege.


I would be grateful if someone could shed some more light on this for me.

Kind regards Thiemo




Re: Privilege mess?

2018-10-09 Thread David G. Johnston
On Tuesday, October 9, 2018, Thiemo Kellner wrote:
>
> Does it not say you do not need the usage privilege as you can query the
> data catalog anyway to get the object's details? And in deed, DBeaver
> queries the details of the object without the usage privilege.


Basically, lacking USAGE does not prevent someone from knowing that
objects within the schema exist; it just prevents queries from
referencing them as named objects.
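
For example, a sketch with the schema and role names from this thread:
without USAGE on schema logger, the role act can still list the table,

SELECT tablename FROM pg_catalog.pg_tables WHERE schemaname = 'logger';

but it cannot reference the table by name:

SELECT * FROM logger.level;  -- ERROR: permission denied for schema logger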


> To carry out actions on objects one needs the specific grant like select
> anyway. I do not see the point of usage privilege.


Layers of security.  But yes, it is generally sufficient to simply
grant USAGE on the schema without much thought, while ensuring the
contained objects are sufficiently secured.

David J.


RHEL 7 (systemd) reboot

2018-10-09 Thread Bryce Pepper
I am running three instances (under different users) on a RHEL 7 server
to support a vendor product.

For the defined services, the start & stop scripts work fine when invoked
with systemctl {start|stop} whatever.service, but we have automated
monthly patching which does a reboot.

Looking in /var/log/messages, the stop scripts do not get invoked on
reboot, so I created a new shutdown service as described here.

It appears that PostgreSQL is receiving a signal from somewhere before my
script runs...

Oct 05 14:18:56 kccontrolmt01 NetworkManager[787]:   [1538767136.0967] 
manager: NetworkManager state is now DISCONNECTED
Oct 05 14:18:56 kccontrolmt01 dbus[740]: [system] Activating via systemd: 
service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispa
Oct 05 14:18:56 kccontrolmt01 dbus[740]: [system] Activation via systemd failed 
for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation
Oct 05 14:18:56 kccontrolmt01 network[29310]: Shutting down interface eth0:  
Device 'eth0' successfully disconnected.
Oct 05 14:18:56 kccontrolmt01 network[29310]: [  OK  ]
Oct 05 14:18:56 kccontrolmt01 stop_ctmlinux_server.sh[29185]: 

Oct 05 14:18:56 kccontrolmt01 stop_ctmlinux_server.sh[29185]: Shutting down 
CONTROL-M.
Oct 05 14:18:56 kccontrolmt01 stop_ctmlinux_server.sh[29185]: 

Oct 05 14:18:56 kccontrolmt01 stop_ctmlinux_server.sh[29185]: Waiting ...
Oct 05 14:18:56 kccontrolmt01 stop_ctmlinux_server.sh[29185]: psql action 
failed. cannot perform sql command in /data00/ctmlinux/ctm_server/tmp/upd_CMS_SY
Oct 05 14:18:56 kccontrolmt01 stop_ctmlinux_server.sh[29185]: db_execute_sql 
failed while processing /data00/ctmlinux/ctm_server/tmp/upd_CMS_SYSPRM_29448.
Oct 05 14:18:56 kccontrolmt01 stop_ctmlinux_server.sh[29185]: Failed to update 
CMS_SYSPRM table.
Oct 05 14:18:56 kccontrolmt01 stop_ctmlinux_server.sh[29185]: Be aware that the 
Configuration Agent might start the CONTROL-M/Server

The database must be available for the product to shut down in a consistent 
state.
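
One idea I am exploring (a sketch; the stop script is the one above, but
the unit name and path are assumed): since systemd stops units in the
reverse of their start order, ordering the vendor service after the
database service should make systemd stop it first on reboot:

[Unit]
Description=CONTROL-M server (shutdown ordering sketch)
After=postgresql.service
Requires=postgresql.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/path/to/stop_ctmlinux_server.sh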

I am open to suggestions.

Thanks,
Bryce

Bryce Pepper
Sr. Unix Applications Systems Engineer
The Kansas City Southern Railway Company
114 West 11th Street  |  Kansas City,  MO 64105
Office:  816.983.1512
Email:  bpep...@kcsouthern.com



Pg_logical without subscription. Can we log the operations ?

2018-10-09 Thread Dilshan
Hi Team,
 I am working on a requirement where I need to log each and every
operation on the master DB. I was thinking I could do this with pglogical
by setting up my master DB as publisher and a new DB as subscriber, and
dropping the subscription afterwards. I am wondering how to get the
operations that a subscription would receive written just to logs. Could
you guide me on that? Alternatively, is there a way to receive all the
operations without dropping the subscription, logging the details, and
deleting the subscribed tables to save space? I am planning to rotate the
logs and purge them every month. Could you please advise on the
feasibility of this approach?
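
Would something along these lines be a sane alternative? A minimal
sketch using the built-in test_decoding plugin, with no subscriber
database at all (the slot name "audit_slot" is made up):

-- create a logical replication slot on the master:
SELECT pg_create_logical_replication_slot('audit_slot', 'test_decoding');

-- periodically consume the changes as text; this call advances the slot:
SELECT lsn, xid, data
  FROM pg_logical_slot_get_changes('audit_slot', NULL, NULL);

-- drop the slot when done, so that WAL is not retained forever:
SELECT pg_drop_replication_slot('audit_slot');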
Thanks in advance,
Dilshan


Re: RHEL 7 (systemd) reboot

2018-10-09 Thread Adrian Klaver

On 10/9/18 11:06 AM, Bryce Pepper wrote:
I am running three instances (under different users) on a RHEL 7 server 
to support a vendor product.


For the defined services, the start & stop scripts work fine when invoked
with systemctl {start|stop} whatever.service, but we have automated
monthly patching which does a reboot.


Looking in /var/log/messages, the stop scripts do not get invoked on
reboot, so I created a new shutdown service as described here.


It appears that PostgreSQL is receiving a signal from somewhere before
my script runs...






The database must be available for the product to shut down in a 
consistent state.


I am open to suggestions.


What is the below doing, and where is it coming from?:

db_execute_sql failed while processing 
/data00/ctmlinux/ctm_server/tmp/upd_CMS_SYSPRM_29448.




Thanks,

Bryce

*Bryce Pepper*

Sr. Unix Applications Systems Engineer

*The Kansas City Southern Railway Company *

114 West 11th Street  |  Kansas City, MO 64105

Office:  816.983.1512

Email: bpep...@kcsouthern.com 




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Privilege mess?

2018-10-09 Thread Thiemo Kellner

 Quoting "David G. Johnston" :

Layers of security.  But yes, it is generally sufficient to simply
grant USAGE on the schema without much thought, while ensuring the
contained objects are sufficiently secured.


Thanks :-)