[GENERAL] Switching roles as a replacement for connection pooling tools

2016-05-31 Thread CN
I have a feeling that a slight enhancement to the commands "SET ROLE" or
"SET SESSION AUTHORIZATION" could obsolete and outperform external
connection pooling tools in some use cases.

Assume we are in the following situation:

- There are a million schemas each owned by a distinct role.
- No role is allowed to access any schema other than its own.

If the command "SET SESSION AUTHORIZATION" were enhanced to accept two
additional arguments

PASSWORD <password>

, then a client could establish just one connection to the server and do
jobs for a million roles.

Say I want to gain full access to "schema2"; I simply issue these two
commands:

SET SESSION AUTHORIZATION user2 PASSWORD p2;
SET SEARCH_PATH TO schema2,pg_category;

, where "p2" is the password associated with role "user2".

If the current role is superuser "postgres" and it wants to downgrade
itself to role "user3", then it simply sends these commands:

SET SESSION AUTHORIZATION user3;
SET SEARCH_PATH TO schema3,pg_category;

Do my points make sense?
Is this eligible as a feature request?

Best Regards,
CN

-- 
http://www.fastmail.com - Accessible with your email software
  or over the web



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] After replication failover: could not read block X in file Y read only 0 of 8192 bytes

2016-05-31 Thread Brian Sutherland
On Tue, May 31, 2016 at 04:49:26PM +1000, Venkata Balaji N wrote:
> On Mon, May 30, 2016 at 11:37 PM, Brian Sutherland 
> wrote:
> 
> > I'm running a streaming replication setup with PostgreSQL 9.5.2 and have
> > started seeing these errors on a few INSERTs:
> >
> > ERROR:  could not read block 8 in file "base/3884037/3885279": read
> > only 0 of 8192 bytes
> >
> 
> These errors are occurring on master or slave ?

On the master (which was previously a slave)

> > on a few tables. If I look at that specific file, it's only 6 blocks
> > long:
> >
> > # ls -la base/3884037/3885279
> > -rw--- 1 postgres postgres 49152 May 30 12:56 base/3884037/3885279
> >
> > It seems that this is the case on most tables in this state. I haven't
> > seen any error on SELECT, and I can SELECT * on all the tables I know
> > have this problem. The database machine is under reasonable load.
> >
> 
> So, the filenodes generating this error belong to a Table ? or an Index ?

So far I have found 3 tables with this issue, 2 were pg_statistic in
different databases. The one referenced above is definitely a table:
"design_file".

The usage pattern on that table is to DELETE and later INSERT a few
hundred rows at a time on an occasional basis. The table is very small,
680 rows.

> > On some tables an "ANALYZE tablename" causes the error.

I discovered why ANALYZE raised an error: it was because pg_statistic
was affected. "vacuum full verbose pg_statistic;" fixed it. I'm hoping any
missing statistics get re-generated.
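For reference, a sketch of the recovery sequence described here (the
trailing ANALYZE to repopulate the statistics is my addition; autovacuum
would eventually get there on its own):

```sql
-- Rewrite the damaged catalog relation from scratch.
VACUUM FULL VERBOSE pg_statistic;
-- Repopulate any statistics removed along the way.
ANALYZE;
```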

> > We recently had a streaming replication failover after loading a large
> > amount of data with pg_restore. The problems seem to have started after
> > that, but I'm not perfectly sure.
> 
> pg_restore has completed successfully ? 

pg_restore did complete successfully 

> When pg_restore was running, did
> you see anything suspicious in the postgresql logfiles ?

The restore happened on the old master. The logfile was long since
deleted :(

> > I have data_checksums switched on, so am suspecting a streaming
> > replication bug.  Anyone know of a recent bug which could have caused
> > this?
> >
> 
> I cannot conclude at this point. I encountered these kinds of errors with
> indexes, and re-indexing fixed them.

This is actually the second time I am seeing these kinds of errors. In
the past, after verifying that no data was lost, I used VACUUM FULL to
recover the ability to INSERT. There was no pitchfork uprising...

> Regards,
> Venkata B N
> 
> Fujitsu Australia

-- 
Brian Sutherland




[GENERAL] How to hide JDBC connection credentials from git?

2016-05-31 Thread Alexander Farber
Hello,

I work on several small Java projects (using Maven+NetBeans) and store them
in a public git repository.

I would like to use PostgreSQL JDBC in some of the projects, but don't want
to make the connection credentials of my database public.

Surely there are other developers out there who have already faced a
similar problem.

Please share your solutions.

It is probably possible to put the database name/user/password into a
*.properties file, then ignore it in the git repository but keep it in the
working copy?

Thank you
Alex


Re: [GENERAL] How to hide JDBC connection credentials from git?

2016-05-31 Thread Szymon Lipiński
On 31 May 2016 at 11:32, Alexander Farber 
wrote:

> Hello,
>
> I work on several small Java projects (using Maven+NetBeans) and store
> them in a public git repository.
>
> I would like to use PostgreSQL JDBC in some of the projects, but don't
> want to make the connection credentials of my database public.
>
> Surely there are other developers out there, who have already faced
> similar problem.
>
> Please share your solutions.
>
> It is probably possible to put the database name/user/password into a
> *.properties file and then ignore it in git repository, but keep it in the
> workarea?
>
> Thank you
> Alex
>
>
Usually I use either a separate config file, or environment variables
wrapped into a separate runner file. The properties file you mentioned is
quite a good idea.
Just remember to provide a template file, so other programmers can create
their own properties file without debugging the application. And document
how to create the properties file and how to run the app.
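A minimal sketch of the properties-file approach in plain Java (file name
and property keys are illustrative; in a real project you would load the
untracked db.properties from disk and commit only a db.properties.template):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

public class DbConfig {
    // Load JDBC settings from any Reader (a FileReader over the
    // untracked db.properties file in real use).
    public static Properties load(Reader source) throws IOException {
        Properties props = new Properties();
        props.load(source);
        return props;
    }

    public static void main(String[] args) throws IOException {
        // Contents of the committed template file; the real db.properties
        // with actual credentials stays listed in .gitignore.
        String template = "jdbc.url=jdbc:postgresql://localhost:5432/mydb\n"
                + "jdbc.user=myuser\n"
                + "jdbc.password=changeme\n";
        Properties cfg = load(new StringReader(template));
        System.out.println(cfg.getProperty("jdbc.url"));
    }
}
```

The matching .gitignore entry is then just the single line `db.properties`.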

-- 
regards Szymon Lipiński


Re: [GENERAL] BDR to ignore table exists error

2016-05-31 Thread Nikhil
Thanks a lot Martin for your replies.

On Sun, May 29, 2016 at 11:50 PM, Martín Marqués 
wrote:

> Hi,
>
> El 29/05/16 a las 06:01, Nikhil escribió:
> >
> > *​Nik>> skip_ddl_locking is set to True in my configuration. As this
> > was preventing single*
> >
> > *​node from doing DDL operation (if one is down majority is not there
> > for doing DDL on available node)*​
>
> Well, you have to be prepared to deal with burn wounds if you play with
> fire. ;)
>
> If you decide to have skip_ddl_locking on you have to be sure all DDLs
> happen on one node, else you end up with conflicts like this.
>
> I suggest you find out why the table was already created on the
> downstream node (as a forensics task so you can avoid bumping into the
> same issue).
>
> > ​Nik>> DDL used is
> >
> > ​
> > ERROR:  relation "af_npx_l3_16_146_10" already exists
> > <596802016-05-29 08:53:07 GMT%CONTEXT:  during DDL replay of ddl
> > statement: CREATE  TABLE  public.af_npx_license_l3_16_146_
> > 10 (CONSTRAINT af_npx_license_l3_16_146_10_rpt_sample_time_check CHECK
> > (((rpt_sample_time OPERATOR(pg_catalog.>=) 146417040
> > 0) AND (rpt_sample_time OPERATOR(pg_catalog.<=) 1464173999))) ) INHERITS
> > (public.af_npx_l3) WITH (oids=OFF)
> > <554132016-05-29 08:53:07 GMT%LOG:  worker process: bdr
> > (6288512113617339435,2,16384,)->bdr (6288505144157102317,1, (PID 59
> > 680) exited with exit code 1
>
> On the node where the CREATE TABLE is trying to get applied run this:
>
> BEGIN;
> SET LOCAL bdr.skip_ddl_replication TO 'on';
> SET LOCAL bdr.skip_ddl_locking TO 'on';
> DROP TABLE af_npx_l3_16_146_10;
> END;
>
> After that, the DDL that's stuck will get applied and the stream of
> changes will continue.
>
> By the looks of what you're dealing with, I wouldn't be surprised if the
> replication gets stuck again on another DDL conflict.
>
> I suggest rethinking the locking strategy, because this shows that
> there's something fishy there.
>
> Regards,
>
> --
> Martín Marquéshttp://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
>


Re: [GENERAL] swarm of processes in BIND state?

2016-05-31 Thread hubert depesz lubaczewski
On Mon, May 30, 2016 at 11:05:17AM -0700, Jeff Janes wrote:
> So my theory is that you deleted a huge number of entries off from
> either end of the index, that transaction committed, and that commit
> became visible to all.  Planning a mergejoin needs to dig through all
> those tuples to probe the true end-point.  On master, the index
> entries quickly get marked as LP_DEAD so future probes don't have to
> do all that work, but on the replicas those index hint bits are, for
> some unknown to me reason, not getting set.  So it has to scour the
> all the heap pages which might have the smallest/largest tuple, on
> every planning cycle, and that list of pages is very large leading to
> occasional IO stalls.

This I get, but why was the same backend reading data for all 3 largest
tables, while I know for sure (well, 99.9% sure) that no query touches
all of them?

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] Slides for PGCon2016; "FTS is dead ? Long live FTS !"

2016-05-31 Thread Stefan Keller
Hi Oleg

2016-05-29 19:54 GMT+02:00 Oleg Bartunov :

> We chose RUM just because there are GIN and VODKA :)
> But some people already suggested several meanings, like Really Useful
> iMdex :)
> We are open for suggestion.

iMdex LOL :-)

Ok. What's new about the index?
* AFAIK it's using methods as extension
* it's inspired by inverted index
* and uses position information to calculate rank and order results

So I propose: "Ranking UMdex" ;-)

:Stefan


2016-05-30 22:33 GMT+02:00 Andreas Joseph Krogh :

> På mandag 30. mai 2016 kl. 22:27:11, skrev Oleg Bartunov <
> obartu...@gmail.com>:
>
>
>
> On Sun, May 29, 2016 at 12:59 AM, Oleg Bartunov 
> wrote:
>>
>>
>>
>> On Thu, May 26, 2016 at 11:26 PM, Andreas Joseph Krogh <
>> andr...@visena.com> wrote:
>>>
>>> Hi.
>>>
>>> Any news about when slides for $subject will be available?
>>>
>>
>> I submitted slides to pgcon site, but it usually takes awhile, so you can
>> download our presentation directly
>> http://www.sai.msu.su/~megera/postgres/talks/pgcon-2016-fts.pdf
>>
>>
>
> Please, download new version of slides. I added CREATE INDEX commands in
> examples.
>
>
> Great!
>
> --
> *Andreas Joseph Krogh*
> CTO / Partner - Visena AS
> Mobile: +47 909 56 963
> andr...@visena.com
> www.visena.com
> 
>
>


Re: [GENERAL] empty pg_stat_replication when replication works fine?

2016-05-31 Thread Andrej Vanek
I've found the reason: inconsistent catalog data.
This state did not disappear after a restart of all postgres instances.

pg_authid.oid does not match the one in pg_stat_get_activity(NULL::integer).

I wonder how this inconsistency could happen. Maybe some error while
cloning the database (a binary database copy, some older WAL logs left
over in the archive or pg_xlog, or a corrupted index in pg_catalog?)
during several test cycles.

Any other idea how the oid of a user could differ from the one appearing
in pg_stat_get_activity()? (see details below)

Andrej
--details:
postgres=# \d+ pg_stat_replication;
  View "pg_catalog.pg_stat_replication"
  Column  |   Type   | Modifiers | Storage  |
Description
--+--+---+--+-
 pid  | integer  |   | plain|
 usesysid | oid  |   | plain|
 usename  | name |   | plain|
 application_name | text |   | extended |
 client_addr  | inet |   | main |
 client_hostname  | text |   | extended |
 client_port  | integer  |   | plain|
 backend_start| timestamp with time zone |   | plain|
 state| text |   | extended |
 sent_location| text |   | extended |
 write_location   | text |   | extended |
 flush_location   | text |   | extended |
 replay_location  | text |   | extended |
 sync_priority| integer  |   | plain|
 sync_state   | text |   | extended |
View definition:
 SELECT s.pid,
s.usesysid,
u.rolname AS usename,
s.application_name,
s.client_addr,
s.client_hostname,
s.client_port,
s.backend_start,
w.state,
w.sent_location,
w.write_location,
w.flush_location,
w.replay_location,
w.sync_priority,
w.sync_state
   FROM pg_stat_get_activity(NULL::integer) s(datid, pid, usesysid,
application_name, state, query, waiting, xact_start, query_start,
backend_start, state_change, client_addr, client_hostname, client_port),
pg_authid u,
pg_stat_get_wal_senders() w(pid, state, sent_location, write_location,
flush_location, replay_location, sync_priority, sync_state)
  WHERE s.usesysid = u.oid AND s.pid = w.pid;

postgres=# \dv+ pg_stat_replication;
 List of relations
   Schema   |Name | Type |  Owner   |  Size   | Description
+-+--+--+-+-
 pg_catalog | pg_stat_replication | view | postgres | 0 bytes |
(1 row)

postgres=# select * from pg_stat_get_wal_senders();
 pid |   state   | sent_location | write_location | flush_location |
replay_location | sync_priority | sync_state
-+---+---+++-+---+
 707 | streaming | 0/B590| 0/B590 | 0/B590 |
0/B590  | 0 | async
(1 row)

postgres=# select * from pg_stat_get_activity(NULL::integer);
 datid | pid | usesysid | application_name | state  |
query| waiting |  xact_start   |
   query_star
t  | backend_start | state_change
 |  client_addr   | client_hostname | client_port
---+-+--+--+++-+---+
---+---+---++-+-
 0 | 707 |34456 | l2amain  | idle   |
 | f   |   |
   | 2016-05-31 13:19:04.875468+02 | 2016-05-31 13:19:04.877894+02
| 192.168.101.11 | |   33329
 12896 | 709 |   10 | psql | active | select * from
pg_stat_get_activity(NULL::integer); | f   | 2016-05-31
13:22:23.090373+02 | 2016-05-31 13:22:23
.090373+02 | 2016-05-31 13:19:08.719215+02 | 2016-05-31 13:22:23.090382+02
|| |  -1
(2 rows)

postgres=# select * from pg_authid where oid = 34456;
 rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb |
rolcatupdate | rolcanlogin | rolreplication | rolconnlimit | rolpassword |
rolvaliduntil
-+--++---+-+--+-++--+-+---
(0 rows)


postgres=#  select oid, * from pg_authid;
  oid  |  rolname  | rolsuper | rolinherit | rolcreaterole | rolcrea

Re: [GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-31 Thread Francisco Olarte
Hi:

On Tue, May 31, 2016 at 8:58 AM, Daniel Westermann
 wrote:
> for completeness: same issue with data checksums enabled:
...
> => rm the table files
> => select count(*) still works

And? I would vote for not doing anything about this; after all, you are
working outside the 'envelope' here. It is like saying 'I commit with
fsync enabled, but then I zero the disk and the data is lost'. As I
said, it is like undefined behaviour in C. Currently *0 tends to SIGSEGV
on both reads and writes, but I've worked on OSs where it did so only on
writes, and on ones where it worked fine (like CP/M). For you the correct
behaviour may be to fail fast and lose a little speed; for others the
current one may be OK. Normally for these things you go for the path
with less code, as deleted code is bug free.

Francisco Olarte.




[GENERAL] Drop/Re-Creating database extremely slow + doesn't lose data

2016-05-31 Thread Thalis Kalfigkopoulos
Intention: to drop a database and recreate it.
Expectation: the newly created db should be empty
What happens: dropping is fast, creation is slow, and when I reconnect, all
the data objects are still there.

Commands (tried both through command line with dropdb/createdb and through
psql)

pgdba@template1[[local]:5952] # vacuum full;
VACUUM
Time: 61292.151 ms
pgdba@template1[[local]:5952] # \l
  List of databases
   Name| Owner | Encoding |   Collate   |Ctype| Access
privileges
---+---+--+-+-+---
 postgres  | pgdba | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
 template0 | pgdba | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/pgdba
  +
   |   |  | | | pgdba=CTc/pgdba
 template1 | pgdba | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/pgdba
  +
   |   |  | | | pgdba=CTc/pgdba
(3 rows)

pgdba@template1[[local]:5952] # CREATE DATABASE dafodb;
CREATE DATABASE
Time: 35776.047 ms
pgdba@template1[[local]:5952] #


And the corresponding lines from pg_log:

2016-05-31 15:29:46 CEST [4591]:
user=pgdba,db=template1,app=psql,client=[local] LOG:  statement: CREATE
DATABASE dafodb;
2016-05-31 15:29:46 CEST [3470]: user=,db=,app=,client= LOG:  checkpoint
starting: immediate force wait flush-all
2016-05-31 15:29:46 CEST [3470]: user=,db=,app=,client= LOG:  checkpoint
complete: wrote 241 buffers (1.5%); 0 transaction log file(s) added, 0
removed, 0 recycled; write=0.002 s, sync=0.035 s, total=0.045 s; sync
files=54, longest=0.003 s, average=0.000 s; distance=67120 kB,
estimate=67120 kB
2016-05-31 15:29:52 CEST [4596]: user=,db=,app=,client= LOG:  process 4596
still waiting for RowExclusiveLock on object 1 of class 1262 of database 0
after 1000.138 ms
2016-05-31 15:29:52 CEST [4596]: user=,db=,app=,client= DETAIL:  Process
holding the lock: 4591. Wait queue: 4596.
2016-05-31 15:30:22 CEST [3470]: user=,db=,app=,client= LOG:  checkpoint
starting: immediate force wait
2016-05-31 15:30:22 CEST [3470]: user=,db=,app=,client= LOG:  checkpoint
complete: wrote 0 buffers (0.0%); 0 transaction log file(s) added, 4
removed, 0 recycled; write=0.000 s, sync=0.000 s, total=0.004 s; sync
files=0, longest=0.000 s, average=0.000 s; distance=0 kB, estimate=60408 kB
2016-05-31 15:30:22 CEST [4591]:
user=pgdba,db=template1,app=psql,client=[local] LOG:  duration: 35775.909 ms
2016-05-31 15:30:22 CEST [4596]: user=,db=,app=,client= LOG:  process 4596
acquired RowExclusiveLock on object 1 of class 1262 of database 0 after
31471.839 ms
2016-05-31 15:30:22 CEST [4596]: user=,db=,app=,client= LOG:  automatic
vacuum of table "template1.pg_catalog.pg_statistic": index scans: 1
pages: 0 removed, 54 remain, 0 skipped due to pins
tuples: 108 removed, 724 remain, 0 are dead but not yet removable
buffer usage: 106 hits, 39 misses, 62 dirtied
avg read rate: 2.044 MB/s, avg write rate: 3.250 MB/s
system usage: CPU 0.00s/0.00u sec elapsed 0.14 sec
2016-05-31 15:30:51 CEST [4614]: user=,db=,app=,client= LOG:  automatic
analyze of table "template1.pg_catalog.pg_shdepend" system usage: CPU
0.00s/0.00u sec elapsed 0.02 sec


Then I continue to check the newly created database is there:
pgdba@template1[[local]:5952] # \l
  List of databases
   Name| Owner | Encoding |   Collate   |Ctype| Access
privileges
---+---+--+-+-+---
 dafodb| pgdba | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
 postgres  | pgdba | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
 template0 | pgdba | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/pgdba
  +
   |   |  | | | pgdba=CTc/pgdba
 template1 | pgdba | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/pgdba
  +
   |   |  | | | pgdba=CTc/pgdba
(4 rows)

pgdba@template1[[local]:5952] # \c dafodb
You are now connected to database "dafodb" as user "pgdba".
pgdba@dafodb[[local]:5952] # \d
 List of relations
 Schema | Name  |   Type
| Owner
+---+--+---
 public | XXX  | table|
pgdba
 public | Y  | sequence | pgdba




So all the data is still there.

Connected processes at the moment:

pgdba@dafodb[[local]:5952] # select * from pg_stat_activity ;
 datid  | datname | pid  | usesysid | usename | application_name |
client_addr | client_hostname | client_port | backend_start
|  xact_start   |  query_start  |
state_change  | waiting | state  | backend_xid
+-+--+--+-+--+-+-+-+-

Re: [GENERAL] Switching roles as a replacement for connection pooling tools

2016-05-31 Thread Francisco Olarte
On Tue, May 31, 2016 at 9:45 AM, CN  wrote:
...
> If command "SET SESSION AUTHORIZATION" is enhanced to accept two
> additional arguments
> PASSWORD 
...
> SET SESSION AUTHORIZATION user2 PASSWORD p2;
> SET SEARCH_PATH TO schema2,pg_category;
> Does my points make sense?

It does, but I feel it must be greatly expanded. If it does the same
as a reconnect, it must accept the same kind of checks a login does
(pg_hba.conf), which I think means putting some complicated and somewhat
critical code in another place. It must also specify how it interacts
with open transactions (i.e. does it work like the current command or
like a reconnection?). It also means you have to use passwords in your
DDL/DML code, instead of keeping them hidden in your connection setup
code (which makes it less attractive, for me at least).

> Is it eligible for feature request?

This is not for me to say, but I think it would complicate things too
much for a narrow use case (and I doubt poolers are used for this kind
of thing anyway).

Francisco Olarte.




Re: [GENERAL] Switching roles as a replacement for connection pooling tools

2016-05-31 Thread Melvin Davidson
On Tue, May 31, 2016 at 3:45 AM, CN  wrote:

> I have a feeling that slight enhancement to commands "SET ROLE" or "SET
> SESSION AUTHORIZATION" can obsolete and outperform external connection
> pooling tools in some use cases.
>
> Assume we are in the following situation:
>
> - There are a million schemas each owned by a distinct role.
> - Every role is not allowed to access any other schema except its own.
>
> If command "SET SESSION AUTHORIZATION" is enhanced to accept two
> additional arguments
>
> PASSWORD 
>
> , then a client simply establishes only one connection to server and do
> jobs for a million roles.
>
> Say I want to gain full access to "schema2", I simply issue these two
> commands
>
> SET SESSION AUTHORIZATION user2 PASSWORD p2;
> SET SEARCH_PATH TO schema2,pg_category;
>
> , where "p2" is the password associated with role "user2".
>
> If the current role is superuser "postgres" and it wants to downgrade
> itself to role "user3", then it simply sends these commands:
>
> SET SESSION AUTHORIZATION user3;
> SET SEARCH_PATH TO schema3,pg_category;
>
> Does my points make sense?
> Is it eligible for feature request?
>
> Best Regards,
> CN
>
>


Your points make no sense. You can accomplish the same with:
GRANT user2 TO user1;

Then user1 simply does
SET ROLE user2;
SET SEARCH_PATH TO schema2,pg_category;

No need to reconnect.

This has been available in PostgreSQL since 8.1
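A minimal sketch of that flow (role and schema names are illustrative;
the GRANT must be run by a role allowed to grant membership in user2):

```sql
-- One-time setup, as a sufficiently privileged role:
CREATE ROLE user2 NOLOGIN;
GRANT user2 TO user1;

-- Later, inside user1's long-lived session:
SET ROLE user2;
SET SEARCH_PATH TO schema2,pg_category;
-- ... do work as user2 ...
RESET ROLE;  -- back to user1, no reconnect needed
```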

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Drop/Re-Creating database extremely slow + doesn't lose data

2016-05-31 Thread Igor Neyman
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Thalis Kalfigkopoulos
Sent: Tuesday, May 31, 2016 9:49 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Drop/Re-Creating database extremely slow + doesn't lose data


Re: [GENERAL] Drop/Re-Creating database extremely slow + doesn't lose data

2016-05-31 Thread Francisco Olarte
Hi Thalis

On Tue, May 31, 2016 at 3:49 PM, Thalis Kalfigkopoulos 
wrote:

> Intention: to drop a database and recreate it.
> Expectation: the newly created db should be empty
> What happens: dropping is fast, creation is slow, and when I reconnect,
> all the data objects are still there.
>
> Commands (tried both through command line with dropdb/createdb and through
> psql)
>

Creation is normally slower than dropping. This is normal, as it is a
more involved process, and normally an unusual one, so I suspect it was
developed with more thought to correctness and verifiability than to speed.

Onto the other problem. I did not see the drop commands in your examples,
and neither did I see how the data got into the first dafodb in the first
place. And I saw you connect to template1. Are you aware that databases
in postgres are made by copying a template database? Maybe you modified
the default template database (this tends to be template1) and this is
the reason they are all equal (read
https://www.postgresql.org/docs/9.5/static/manage-ag-templatedbs.html
, and also note that newly created postgres databases are never empty;
they have the system catalogs inside them).

Francisco Olarte.


Re: [GENERAL] Drop/Re-Creating database extremely slow + doesn't lose data

2016-05-31 Thread David G. Johnston
On Tue, May 31, 2016 at 9:49 AM, Thalis Kalfigkopoulos 
wrote:

> Intention: to drop a database and recreate it.
> Expectation: the newly created db should be empty
> What happens: dropping is fast, creation is slow, and when I reconnect,
> all the data objects are still there.
>
[...]

Even weirder, created a new DB with a completely unrelated name. Again
> "create database" took long time, but then connected to it and it has all
> the data from the "dafodb".
>
> Also tried: renaming dafodb to dafodb_OLD and again "create database
> dafodb". Both contain the same data.
>
> All this on Pg 9.5.2 on 64bit Ubuntu with 3.13.0-74-generic.
>
> Any idea what's going on or how to recover?
>

Working as designed.  Database creation in PostgreSQL operates by copying
a template database and then making minor modifications as specified in the
options to CREATE DATABASE.

https://www.postgresql.org/docs/9.6/static/sql-createdatabase.html

Note that the default template is "template1".

Typically template0 is basically empty, so the behavior you are expecting
to see can be had by explicitly specifying template0 in the command.
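For example, using the database name from the thread:

```sql
-- Copy the pristine template0 instead of the (modified) default template1.
CREATE DATABASE dafodb TEMPLATE template0;
```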

Oddly, the notes on the aforementioned page state: "The principal
limitation is that no other sessions can be connected to the template
database while it is being copied." yet in your example you appear to be
connected to template1 when you execute the CREATE DATABASE command...

David J.


Re: [GENERAL] Drop/Re-Creating database extremely slow + doesn't lose data

2016-05-31 Thread Thalis Kalfigkopoulos
Hi all.

Ok, Igor nailed it. That was lame on my behalf. I apparently "contaminated"
my template1 db at some point (I restored into it instead of into the
target "dafodb"). A simple \d confirmed this immediately.

Apologies for the false alarm.

@DavidJohnston, I don't know why, but yes, I am doing all operations
connected from template1.

BR,
Thalis K.

On Tue, May 31, 2016 at 4:02 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, May 31, 2016 at 9:49 AM, Thalis Kalfigkopoulos  > wrote:
>
>> Intention: to drop a database and recreate it.
>> Expectation: the newly created db should be empty
>> What happens: dropping is fast, creation is slow, and when I reconnect,
>> all the data objects are still there.
>>
>  ​[...]​
>
> Even weirder, created a new DB with a completely unrelated name. Again
>> "create database" took long time, but then connected to it and it has all
>> the data from the "dafodb".
>>
>> Also tried: renaming dafodb to dafodb_OLD and again "create database
>> dafodb". Both contain the same data.
>>
>> All this on Pg 9.5.2 on 64bit Ubuntu with 3.13.0-74-generic.
>>
>> Any idea what's going on or how to recover?
>>
>
> ​Working as designed.  Database creation in PostgreSQL operates by copying
> a template database and then making minor modifications as specified in the
> options to CREATE DATABASE.
>
> https://www.postgresql.org/docs/9.6/static/sql-createdatabase.html
>
> Note that the default template is "template1"
>
> ​Typically template0 is basically empty so the behavior you are expecting
> to see can be had by explicitly specifying template0 in the command.
>
> Oddly, the notes on the aforementioned page state: "The principal
> limitation is that no other sessions can be connected to the template
> database while it is being copied." yet in your example you appear to be
> connected to template1 when you execute the CREATE DATABASE command...
>
> ​David J.​
>


Re: [GENERAL] Switching roles as an replacement of connection pooling tools

2016-05-31 Thread Francisco Olarte
Hi Melvin:

On Tue, May 31, 2016 at 3:55 PM, Melvin Davidson  wrote:
> On Tue, May 31, 2016 at 3:45 AM, CN  wrote:
>> SET SESSION AUTHORIZATION user2 PASSWORD p2;

> Your points make no sense. You can accomplish the same with:
> GRANT ROLE user2 TO user1;

I'm not discussing whether it makes sense, but you do not accomplish the
same. In his case you cannot switch to the other role unless you know
the password for it; in yours you can. Also I suspect he wants it to
work like a login, i.e., if you have N roles and you add another one,
he wants it to be like a new login user, and apps/people could have a
set of X role+password combos different for each one. I think it's a
bizarre thing, but not the same as granting some roles to others ( of
course if you have M people and N schemas ( in his example ) you can
have M login roles and grant combos of N 'schema roles' to them to
achieve this, but if N is, say, a hundred, and you have a huge M, like
ten thousand, with a different combo for each one, his solution may
make sense ( I do not think such a bizarre case justifies the bug-risk
of including the feature, but it can make sense ) )
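
The M-login-roles / N-'schema roles' workaround described above is just ordinary role grants; a minimal sketch (all role, schema, and password names hypothetical) might be:

```sql
-- Two of the N non-login "schema roles", each owning one schema.
CREATE ROLE schema1_role NOLOGIN;
CREATE ROLE schema2_role NOLOGIN;
CREATE SCHEMA schema1 AUTHORIZATION schema1_role;
CREATE SCHEMA schema2 AUTHORIZATION schema2_role;

-- One of the M login roles, granted its particular combo of schema roles.
CREATE ROLE app_user LOGIN PASSWORD 'app_secret';
GRANT schema1_role TO app_user;
GRANT schema2_role TO app_user;

-- app_user can then switch without any extra password:
SET ROLE schema2_role;
SET search_path TO schema2;
```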

Francisco Olarte.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Slides for PGCon2016; "FTS is dead ? Long live FTS !"

2016-05-31 Thread Oleg Bartunov
On Sun, May 29, 2016 at 8:53 PM, Andreas Joseph Krogh 
wrote:

> On Sunday, 29 May 2016 at 19:49:06, Oleg Bartunov <
> obartu...@gmail.com> wrote:
>
> [snip]
>>
>> I want to run 9.6 beta in production right now because of this:-)
>>
>
> wait-wait :)  We'd be happy to have feedback from production, of course,
> but please, wait a bit. We are adding support of sorting posting list/tree
> not by item pointer as in gin, but make use of additional information, for
> example, timestamp, which will provide additional speedup to the existing
> one.
>
>
> Awesome!
>
>
>
> Also, we are sure there are some bugs :)
>
>
> He he, I reported 1st issue: https://github.com/postgrespro/rum/issues/1
>
> Would be cool to see this fixed so I actually could have a sip of the
> rum:-)
>


It's not easy to fix this. We don't want rum to depend on btree_gin, so
probably the easiest way is to have a separate operator <=> in rum.

>
> --
> *Andreas Joseph Krogh*
> CTO / Partner - Visena AS
> Mobile: +47 909 56 963
> andr...@visena.com
> www.visena.com
> 
>
>


Re: [GENERAL] Switching roles as an replacement of connection pooling tools

2016-05-31 Thread Achilleas Mantzios

On 31/05/2016 10:45, CN wrote:

I have a feeling that slight enhancement to commands "SET ROLE" or "SET
SESSION AUTHORIZATION" can obsolete and outperform external connection
pooling tools in some use cases.

Assume we are in the following situation:

- There are a million schemas each owned by a distinct role.
- Every role is not allowed to access any other schema except its own.

If command "SET SESSION AUTHORIZATION" is enhanced to accept two
additional arguments

PASSWORD 

, then a client simply establishes only one connection to server and do
jobs for a million roles.

Say I want to gain full access to "schema2", I simply issue these two
commands

SET SESSION AUTHORIZATION user2 PASSWORD p2;
SET SEARCH_PATH TO schema2,pg_category;

, where "p2" is the password associated with role "user2".

If the current role is superuser "postgres" and it wants to downgrade
itself to role "user3", then it simply sends these commands:

SET SESSION AUTHORIZATION user3;
SET SEARCH_PATH TO schema3,pg_category;

Does my points make sense?
Is it eligible for feature request?


I believe your thoughts are along the same line as an idea some people had about using connection pools in Java EE environments, in a manner that does not use a generic "catch all" user, but instead uses the
individual users, sharing the security context from the app server.
This way one could have the benefits of the connection pool plus the benefits of the fine-grained and rich PostgreSQL security framework: the ability to log users' activity, debug the system more easily, and see
real users in pg_stat_activity, in ps(1), in top(1), etc.
The way we do it currently is by having personalized connection pools for pgsql in jboss. It does the job from every aspect except one: it sucks as far as performance is concerned. Every user is
tied to his/her number of connections. It creates a sandbox around each user, so that a "malicious" greedy user (with the help of a poorly designed app, of course) can only bring down his own pool,
while others run unaffected, but performance still suffers. The idea would be to use a common pool of connections, assign users on demand as connections are taken from the common pool, and later
return them to the common pool once closed.
Whenever I talked to PG people about it, they told me that redesigning the SET ROLE functionality so that it correctly applies all the security checks, and so that the effective user is reflected in all
references in logs, system views, the OS (ps, top, etc.), etc., was hard to do, and the conversation stopped right there.


With all the new and modern cloud-inspired paradigms out there, our traditional 
architecture might not be of much interest any more, but I would still love to make 
the above happen some time.


Best Regards,
CN




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: [GENERAL] Drop/Re-Creating database extremely slow + doesn't lose data

2016-05-31 Thread David G. Johnston
On Tue, May 31, 2016 at 10:06 AM, Thalis Kalfigkopoulos 
wrote:

> @DavidJohnston, I don't know why, but yes, I am doing all operations
> connected from template1.
>
>>
>> Oddly, the notes on the aforementioned page state: "The principal
>> limitation is that no other sessions can be connected to the template
>> database while it is being copied." yet in your example you appear to be
>> connected to template1 when you execute the CREATE DATABASE command...
>>
>>
​The convention on these lists is to bottom-post (or inline).  Top-posting
is greatly disliked as it makes it difficult for observers to follow and
catch up on an existing thread.

As to my observation - I failed to process the word "other"; the issuing
session can be connected to the template but that is all.  This is OK
because the current session would be unable to affect any other changes
while the CREATE DATABASE is in progress.

David J.
​


Re: [GENERAL] Switching roles as an replacement of connection pooling tools

2016-05-31 Thread Melvin Davidson
Actually, you do not need to SWITCH, you just need permission to change the
path and gain access to all user2 privs, which is exactly what SET ROLE
user2 does.
There is no need for a password, since user1 is already connected to the
DB. Any superuser can GRANT the role to any other user.


That being said, IMHO, I believe having a separate schema for every user is
poor database design

On Tue, May 31, 2016 at 10:18 AM, Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On 31/05/2016 10:45, CN wrote:
>
>> I have a feeling that slight enhancement to commands "SET ROLE" or "SET
>> SESSION AUTHORIZATION" can obsolete and outperform external connection
>> pooling tools in some use cases.
>>
>> Assume we are in the following situation:
>>
>> - There are a million schemas each owned by a distinct role.
>> - Every role is not allowed to access any other schema except its own.
>>
>> If command "SET SESSION AUTHORIZATION" is enhanced to accept two
>> additional arguments
>>
>> PASSWORD 
>>
>> , then a client simply establishes only one connection to server and do
>> jobs for a million roles.
>>
>> Say I want to gain full access to "schema2", I simply issue these two
>> commands
>>
>> SET SESSION AUTHORIZATION user2 PASSWORD p2;
>> SET SEARCH_PATH TO schema2,pg_category;
>>
>> , where "p2" is the password associated with role "user2".
>>
>> If the current role is superuser "postgres" and it wants to downgrade
>> itself to role "user3", then it simply sends these commands:
>>
>> SET SESSION AUTHORIZATION user3;
>> SET SEARCH_PATH TO schema3,pg_category;
>>
>> Does my points make sense?
>> Is it eligible for feature request?
>>
>
> I believe your thoughts are on the same line with an idea some people had
> about using connection pools on Java EE environments, in a manner that does
> not use a generic "catch all" user, but uses the individual users sharing
> the security context from the app server.
> This way one could have the benefits of the connection pool, and the
> benefits of fine-grained and rich PostgreSQL security framework, the
> ability to log user's activity, debug the system easier, see real users on
> pg_stat_activity, on ps(1), on top(1) etc etc.
> The way we do it currently is by having personalized connection pools for
> pgsql in jboss. It does the job from every aspect, except one : it sucks as
> far as performance is concerned. Every user is tied to his/her number of
> connections. It creates a sandbox around each user, so that a "malicious"
> greedy user (with the help of a poorly designed app of course) can only
> bring down his own pool, while others run unaffected, but still performance
> suffers. The idea would be to use a common pool of connections and assign
> users on demand as they are taken from the common pool, and later also
> return them to the common pool, once closed.
> Whenever I talked to PG ppl about it, they told me that redesigning the
> SET ROLE functionality so that it correctly applies all the security checks
> and also so that it results in reflecting the effective user in all
> references in logs, sys views, OS (ps, top, etc) etc.. was hard to do, and
> the convo stopped right there.
>
> With all the new and modern cloud-inspired paradigms out there, our
> traditional architecture might not of much interest any more, still I would
> love to make the above happen some time.
>
> Best Regards,
>> CN
>>
>>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Switching roles as an replacement of connection pooling tools

2016-05-31 Thread Tom Lane
CN  writes:
> If command "SET SESSION AUTHORIZATION" is enhanced to accept two
> additional arguments
> PASSWORD 
> , then a client simply establishes only one connection to server and do
> jobs for a million roles.

I'm pretty sure this has been proposed before, and rejected before.
Two big problems with it are 1) it doesn't work for installations that
use non-password authentication methods, and 2) it leaves all the
passwords exposed in the postmaster log, if log_statement is on.

There's also a bunch of issues having to do with the fact that the
semantics of SET SESSION AUTHORIZATION are defined by the SQL standard
and don't exactly match what you'd want, in many cases, for "become
this other role".  Some of them include
* You retain the original login role's abilities to issue SET SESSION
AUTHORIZATION, either back to itself or to a third role.
* You can also get back to the original role with DISCARD ALL.
* Any session-level settings specified for the new role with ALTER
USER SET don't get adopted.
While you could imagine that specific applications might be okay with
these things, they're pretty fatal for a general-purpose connection
pooler; the first two in particular would be unacceptable security
holes.
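
For instance, the first two problems can be sketched as follows (role names hypothetical; "pool_user" stands for the role the pooler logged in as):

```sql
-- Logged in as pool_user; hand the connection over to user2:
SET SESSION AUTHORIZATION user2;

-- Problem 1: the session retains the login role's ability to switch,
-- so the client can simply take the connection back (or become a
-- third role):
SET SESSION AUTHORIZATION pool_user;

-- Problem 2: DISCARD ALL (which poolers commonly issue to reset a
-- session) also restores the login role:
SET SESSION AUTHORIZATION user2;
DISCARD ALL;
SELECT session_user;   -- pool_user again
```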

regards, tom lane




Re: [GENERAL] Slides for PGCon2016; "FTS is dead ? Long live FTS !"

2016-05-31 Thread Andreas Joseph Krogh
On Tuesday, 31 May 2016 at 16:12:52, Oleg Bartunov <obartu...@gmail.com> wrote:
[snip] He he, I reported 1st issue: https://github.com/postgrespro/rum/issues/1
 
 
Would be cool to see this fixed so I actually could have a sip of the rum:-)


  
It's not easy to fix this. We don't want rum depends on  btree_gin, so 
probably the easiest way is to have separate operator <=> in rum.



 
+1 for separate operator!
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [GENERAL] Switching roles as an replacement of connection pooling tools

2016-05-31 Thread David G. Johnston
On Tue, May 31, 2016 at 10:20 AM, Tom Lane  wrote:

> CN  writes:
> > If command "SET SESSION AUTHORIZATION" is enhanced to accept two
> > additional arguments
> > PASSWORD 
> > , then a client simply establishes only one connection to server and do
> > jobs for a million roles.
>
> * Any session-level settings specified for the new role with ALTER
> USER SET don't get adopted.
> While you could imagine that specific applications might be okay with
> these things, they're pretty fatal for a general-purpose connection
> pooler; the first two in particular would be unacceptable security
> holes.
>

Is there a reason something "SET ROLE ... WITH SETTINGS" couldn't be
implemented?

David J.​


Re: [GENERAL] Switching roles as an replacement of connection pooling tools

2016-05-31 Thread Achilleas Mantzios

On 31/05/2016 17:23, Melvin Davidson wrote:

Actually, you do not need to SWITCH, you just need permission to change to path 
and gain access to all user2 privs, which is exactly what SET ROLE user2 does.
There is no need for a password, since user1 is already connected to the DB. 
Any superuser can give the GRANT ROLE to any other user.

Still, PgSQL logs report the original user everywhere. Not useful for auditing, 
debugging, etc



That being said, IMHO, I believe having a separate schema for every user is 
poor database design

I agree about this, there are much better ways to utilize schemata.


On Tue, May 31, 2016 at 10:18 AM, Achilleas Mantzios mailto:ach...@matrix.gatewaynet.com>> wrote:

On 31/05/2016 10:45, CN wrote:

I have a feeling that slight enhancement to commands "SET ROLE" or "SET
SESSION AUTHORIZATION" can obsolete and outperform external connection
pooling tools in some use cases.

Assume we are in the following situation:

- There are a million schemas each owned by a distinct role.
- Every role is not allowed to access any other schema except its own.

If command "SET SESSION AUTHORIZATION" is enhanced to accept two
additional arguments

PASSWORD 

, then a client simply establishes only one connection to server and do
jobs for a million roles.

Say I want to gain full access to "schema2", I simply issue these two
commands

SET SESSION AUTHORIZATION user2 PASSWORD p2;
SET SEARCH_PATH TO schema2,pg_category;

, where "p2" is the password associated with role "user2".

If the current role is superuser "postgres" and it wants to downgrade
itself to role "user3", then it simply sends these commands:

SET SESSION AUTHORIZATION user3;
SET SEARCH_PATH TO schema3,pg_category;

Does my points make sense?
Is it eligible for feature request?


I believe your thoughts are on the same line with an idea some people had about using 
connection pools on Java EE environments, in a manner that does not use a generic 
"catch all" user, but uses
the individual users sharing the security context from the app server.
This way one could have the benefits of the connection pool, and the 
benefits of fine-grained and rich PostgreSQL security framework, the ability to 
log user's activity, debug the system easier,
see real users on pg_stat_activity, on ps(1), on top(1) etc etc.
The way we do it currently is by having personalized connection pools for 
pgsql in jboss. It does the job from every aspect, except one : it sucks as far 
as performance is concerned. Every user
is tied to his/her number of connections. It creates a sandbox around each user, so 
that a "malicious" greedy user (with the help of a poorly designed app of 
course) can only bring down his own
pool, while others run unaffected, but still performance suffers. The idea 
would be to use a common pool of connections and assign users on demand as they 
are taken from the common pool, and
later also return them to the common pool, once closed.
Whenever I talked to PG ppl about it, they told me that redesigning the SET 
ROLE functionality so that it correctly applies all the security checks and 
also so that it results in reflecting the
effective user in all references in logs, sys views, OS (ps, top, etc) 
etc.. was hard to do, and the convo stopped right there.

With all the new and modern cloud-inspired paradigms out there, our 
traditional architecture might not of much interest any more, still I would 
love to make the above happen some time.

Best Regards,
CN



-- 
Achilleas Mantzios

IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org )

To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Switching roles as an replacement of connection pooling tools

2016-05-31 Thread Tom Lane
"David G. Johnston"  writes:
> Is there a reason something "SET ROLE ... WITH SETTINGS" couldn't be
> implemented?

Unless there's something underlying that proposal that I'm not seeing,
it only deals with one of the problems in this area.  The security-
related issues remain unsolved.

AFAICS there's a pretty fundamental tension here around the question
of how hard it is to revert to the original role.  If it's not possible
to do that then a connection pooler can't serially reuse a connection for
different users, which largely defeats the point.  If it is possible, how
do you keep that from being a security hole, ie one of the pool users can
gain privileges of another one?

(And, btw, I repeat that all of this has been discussed before on our
lists.)

regards, tom lane




Re: [GENERAL] Switching roles as an replacement of connection pooling tools

2016-05-31 Thread David G. Johnston
On Tue, May 31, 2016 at 10:48 AM, Tom Lane  wrote:

> "David G. Johnston"  writes:
> > Is there a reason something "SET ROLE ... WITH SETTINGS" couldn't be
> > implemented?
>
> Unless there's something underlying that proposal that I'm not seeing,
> it only deals with one of the problems in this area.  The security-
> related issues remain unsolved.
>
> AFAICS there's a pretty fundamental tension here around the question
> of how hard it is to revert to the original role.  If it's not possible
> to do that then a connection pooler can't serially reuse a connection for
> different users, which largely defeats the point.  If it is possible, how
> do you keep that from being a security hole, ie one of the pool users can
> gain privileges of another one?
>
> (And, btw, I repeat that all of this has been discussed before on our
> lists.)
>

​Understood.  ​

​My motivation is to at least make SET ROLE more friendly by allowing easy
access to the per-role settings (pg_db_role_setting) associated with it.  I think the
main concern is inheritance handling (or non-handling as the case may be).
This particular complaint seems like an improvement generally even if the
larger functionality has undesirable security implications.

David J.


Re: [GENERAL] recordings of pgconf us 2016

2016-05-31 Thread Igal @ Lucee.org
Funny.  I was just looking for that myself.  I would expect it to go on 
their channel at 
https://www.youtube.com/channel/UCsJkVvxwoM7R9oRbzvUhbPQ but so far 
nothing from this year.


PGCon has published their 2016 recordings on their channel: 
https://www.youtube.com/playlist?list=PLuJmmKtsV1dNE5y1gu1xpbIl3M2b7AW4D



Igal Sapir
Lucee Core Developer
Lucee.org 

On 5/28/2016 9:36 PM, Johannes wrote:

I guess I have seen all video recording from pgconf us 2015 at youtube.
Are there any recording from this year available?

Best regards Johannes





Re: [GENERAL] recordings of pgconf us 2016

2016-05-31 Thread Jim Mlodgenski
On Sun, May 29, 2016 at 12:36 AM, Johannes  wrote:

> I guess I have seen all video recording from pgconf us 2015 at youtube.
> Are there any recording from this year available?
>

We are still waiting to have them edited by the video company. Hopefully it
will be soon.


[GENERAL] Triggers not being fired with pglogical

2016-05-31 Thread Jaime Rivera
Hi,

I have a basic replica with pglogical 1.1 and postgres 9.5 on both servers
publisher and subscriber.

I have  triggers on publisher and subscriber tables, but the trigger on
subscriber table is not being fired.

Is it possible to fire the trigger with pglogical replication?

Thanks in advance


[GENERAL] Checkpoint Err on Startup of Rsynced System

2016-05-31 Thread Jim Longwill
I am trying to set up a 2nd, identical, db server (M2) for development, 
and I've run into a problem with starting up the 2nd Postgres installation.


Here's what I've done:
  1) did a 'clone' of 1st (production) machine M1 (so both machines on 
Cent OS 7.2)

  2) setup an rsync operation, did a complete 'rsync' from M1 to M2
  3) did a final 'CHECKPOINT' command on M1 postgres
  4) shutdown postgres on M1 with 'pg_ctl stop'
  5) did final 'rsync' operation  (then restarted postgres on M1 with 
'pg_ctl start')

  6) tried to startup postgres on M2

It won't start, and the log file gives this error message:
...
< 2016-05-31 09:02:52.337 PDT >LOG:  invalid primary checkpoint record
< 2016-05-31 09:02:52.337 PDT >LOG:  invalid secondary checkpoint record
< 2016-05-31 09:02:52.337 PDT >PANIC:  could not locate a valid 
checkpoint record
< 2016-05-31 09:02:53.184 PDT >LOG:  startup process (PID 26680) was 
terminated by signal 6: Aborted
< 2016-05-31 09:02:53.184 PDT >LOG:  aborting startup due to startup 
process failure


I've tried several times to do this but always get this result.  So, do 
I need to do a new 'initdb..' operation on machine M2 + restore from M1 
backups?  Or is there another way to fix this?


--o--o--o--o--o--o--o--o--o--o--o--o--
Jim Longwill
PSMFC Regional Mark Processing Center
jlongw...@psmfc.org
--o--o--o--o--o--o--o--o--o--o--o--o--




Re: [GENERAL] Checkpoint Err on Startup of Rsynced System

2016-05-31 Thread Scott Mead
On Tue, May 31, 2016 at 1:13 PM, Jim Longwill  wrote:

> I am trying to setup a 2nd, identical, db server (M2) for development and
> I've run into a problem with starting up the 2nd Postgres installation.
>
> Here's what I've done:
>   1) did a 'clone' of 1st (production) machine M1 (so both machines on
> Cent OS 7.2)
>   2) setup an rsync operation, did a complete 'rsync' from M1 to M2
>   3) did a final 'CHECKPOINT' command on M1 postgres
>   4) shutdown postgres on M1 with 'pg_ctl stop'
>   5) did final 'rsync' operation  (then restarted postgres on M1 with
> 'pg_ctl start')
>   6) tried to startup postgres on M2
>
> It won't start, & in the log file gives the error message:
> ...
> < 2016-05-31 09:02:52.337 PDT >LOG:  invalid primary checkpoint record
> < 2016-05-31 09:02:52.337 PDT >LOG:  invalid secondary checkpoint record
> < 2016-05-31 09:02:52.337 PDT >PANIC:  could not locate a valid checkpoint
> record
> < 2016-05-31 09:02:53.184 PDT >LOG:  startup process (PID 26680) was
> terminated by signal 6: Aborted
> < 2016-05-31 09:02:53.184 PDT >LOG:  aborting startup due to startup
> process failure
>
> I've tried several times to do this but always get this result.  So, do I
> need to do a new 'initdb..' operation on machine M2 + restore from M1
> backups?  Or is there another way to fix this?
>

You should have stopped M1 prior to taking the backup.  If you can't do
that, it can be done online via:

   1. Setup archiving
   2. select pg_start_backup('some label');
   3. (copy the data directory, e.g. with rsync)
   4. select pg_stop_backup();

  Without archiving and the pg_[start|stop]_backup, you're not guaranteed
anything.  You could use an atomic snapshot (LVM, storage, etc...), but
it's got to be a true snapshot.  Without that, you need archiving + start /
stop backup.

Last section of:
https://wiki.postgresql.org/wiki/Simple_Configuration_Recommendation#Physical_Database_Backups
will take you to:
https://www.postgresql.org/docs/current/static/continuous-archiving.html
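
As a rough sketch, steps 2-4 look like this in a psql session on M1 (9.5-era function names; the label and paths are only illustrative):

```sql
-- 2. Begin the online backup; this forces a checkpoint and writes a
--    backup label into the data directory.
SELECT pg_start_backup('m2-clone');

-- 3. With the cluster still running, copy the data directory from a
--    shell, e.g.:  rsync -a /var/lib/pgsql/data/ m2:/var/lib/pgsql/data/

-- 4. End the backup; M2 then needs the WAL archived between start
--    and stop to reach a consistent state.
SELECT pg_stop_backup();
```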

--Scott


--o--o--o--o--o--o--o--o--o--o--o--o--
> Jim Longwill
> PSMFC Regional Mark Processing Center
> jlongw...@psmfc.org
> --o--o--o--o--o--o--o--o--o--o--o--o--
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG *
http://openscg.com


[GENERAL] Log Shipping

2016-05-31 Thread Joseph Kregloh
It is my understanding that if PostgreSQL has log shipping enabled and for
whatever reason it cannot ship a file, the master server will hold it. But
for how long?

Secondly, I have 2 servers I ship log files to using the following script:

#!/usr/local/bin/bash

# Slave 1
rsync -a $1 pgi@192.168.1.105:archive/$2 < /dev/null;

# Slave 2
rsync -a $1 pg@192.168.1.93:archive/$2 < /dev/null;

In this case, if Slave 1 is up but Slave 2 is down, it will ship the log
file to Slave 1 but not Slave 2 and move on. Thereby Slave 2 will now be
out of sync, correct?

To allow both slaves to remain in sync when one is restarted I would need
to modify my script to return an error or false to PostgreSQL, this way it
will hold the WAL files until both Slaves are online. Correct?

Thanks,
-Joseph


Re: [GENERAL] Log Shipping

2016-05-31 Thread Alvaro Herrera
Joseph Kregloh wrote:
> It is my understanding that if PostgeSQL has log shipping enabled, if for
> whatever reason it cannot ship the file the master server will hold it. But
> for how long?

Forever (which means it dies because of running out of space in the
partition containing pg_xlog).

> Secondly, I have 2 servers I ship log files to using the following script:
> 
> #!/usr/local/bin/bash
> 
> # Slave 1
> rsync -a $1 pgi@192.168.1.105:archive/$2 < /dev/null;
> 
> # Slave 2
> rsync -a $1 pg@192.168.1.93:archive/$2 < /dev/null;
> 
> In this case if Slave 1 is up but Slave 2 is down. It will ship the log
> file to Slave 1 but not Slave 2 and move one. Thereby Slave 2 will now be
> out of sync, correct?

You could have the script return failure if either of these copies
fails, and return success once both replicas have the file
(considering that one replica might already have the file from a
previous run of your script); that way, the master will retain the file
until both replicas have it, and remove the file once both replicas have
it.  Of course, you want to avoid copying the file again to the replica
that already had the file, without getting confused by a partially
written file.
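
A minimal sketch of that logic, with the two rsync destinations from the script earlier in the thread simulated by local directories so the flow is runnable end to end (the real script would keep the rsync invocations in place of cp/mv):

```shell
#!/usr/bin/env bash
# Sketch of an archive script that reports success only once BOTH
# replicas hold the WAL file, so the master keeps retrying while either
# slave is down. The two destinations are simulated with local
# directories here purely for illustration.
set -u

SLAVE1=$(mktemp -d)   # stand-in for pgi@192.168.1.105:archive
SLAVE2=$(mktemp -d)   # stand-in for pg@192.168.1.93:archive

archive_wal() {
    local wal_path=$1 wal_name=$2 dest status=0
    for dest in "$SLAVE1" "$SLAVE2"; do
        # A replica that already has the file (from an earlier,
        # partially successful run) is skipped instead of copied again.
        [ -f "$dest/$wal_name" ] && continue
        # Copy under a temporary name, then rename: a crash can never
        # leave a partially written file under the final name.
        if cp "$wal_path" "$dest/$wal_name.tmp" &&
           mv "$dest/$wal_name.tmp" "$dest/$wal_name"; then
            :
        else
            status=1   # non-zero exit => master keeps the WAL and retries
        fi
    done
    return $status
}

# Demo: archive one fake WAL segment to both "slaves".
wal=$(mktemp)
echo "wal data" > "$wal"
archive_wal "$wal" 000000010000000000000001 && echo "archived on both slaves"
```

Used from archive_command (e.g. `archive_command = '/path/to/script %p %f'`), PostgreSQL retries the segment until the script finally exits 0, i.e. until both slaves have it.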

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] Checkpoint Err on Startup of Rsynced System

2016-05-31 Thread Alan Hodgson
On Tuesday, May 31, 2016 10:13:14 AM Jim Longwill wrote:
> I am trying to setup a 2nd, identical, db server (M2) for development
> and I've run into a problem with starting up the 2nd Postgres installation.
> 
> Here's what I've done:
>1) did a 'clone' of 1st (production) machine M1 (so both machines on
> Cent OS 7.2)
>2) setup an rsync operation, did a complete 'rsync' from M1 to M2
>3) did a final 'CHECKPOINT' command on M1 postgres
>4) shutdown postgres on M1 with 'pg_ctl stop'
>5) did final 'rsync' operation  (then restarted postgres on M1 with
> 'pg_ctl start')
>6) tried to startup postgres on M2
> 
> It won't start, & in the log file gives the error message:
> ...
> < 2016-05-31 09:02:52.337 PDT >LOG:  invalid primary checkpoint record
> < 2016-05-31 09:02:52.337 PDT >LOG:  invalid secondary checkpoint record
> < 2016-05-31 09:02:52.337 PDT >PANIC:  could not locate a valid
> checkpoint record
> < 2016-05-31 09:02:53.184 PDT >LOG:  startup process (PID 26680) was
> terminated by signal 6: Aborted
> < 2016-05-31 09:02:53.184 PDT >LOG:  aborting startup due to startup
> process failure
> 
> I've tried several times to do this but always get this result.  So, do
> I need to do a new 'initdb..' operation on machine M2 + restore from M1
> backups?  Or is there another way to fix this?

What you describe should work fine. In order of likelihood of why it doesn't, I 
could guess:

1 - you're not waiting for the database to fully shut down before running the 
last rsync
2 - you're not in fact rsync'ing the entire data directory
3 - the target server is running a different version of PostgreSQL or has a 
different machine architecture





Re: [GENERAL] Log Shipping

2016-05-31 Thread Joseph Kregloh
On Tue, May 31, 2016 at 4:12 PM, Alvaro Herrera 
wrote:

> Joseph Kregloh wrote:
> > It is my understanding that if PostgeSQL has log shipping enabled, if for
> > whatever reason it cannot ship the file the master server will hold it.
> But
> > for how long?
>
> Forever (which means it dies because of running out of space in the
> partition containing pg_xlog).
>
> > Secondly, I have 2 servers I ship log files to using the following
> script:
> >
> > #!/usr/local/bin/bash
> >
> > # Slave 1
> > rsync -a $1 pgi@192.168.1.105:archive/$2 < /dev/null;
> >
> > # Slave 2
> > rsync -a $1 pg@192.168.1.93:archive/$2 < /dev/null;
> >
> > In this case if Slave 1 is up but Slave 2 is down. It will ship the log
> > file to Slave 1 but not Slave 2 and move one. Thereby Slave 2 will now be
> > out of sync, correct?
>
> You could cause the script to return failure if either of these copies
> fail, and return success if once both replicas have the file
> (considering that one replica might already have the file from a
> previous run of your script); that way, the master will retain the file
> until both replicas have it, and remove the file once both replicas have
> it.  Of course, you want to avoid copying the file again to the replica
> that already had the file, without getting confused by a partially
> written file.
>

Excellent, thanks for the reply.

-Joseph


>
> --
> Álvaro Herrera                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


[GENERAL] Row security policies documentation question

2016-05-31 Thread Alexander M. Sauer-Budge
Hello,

Section 5.7. on Row Security Policies 
(https://www.postgresql.org/docs/current/static/ddl-rowsecurity.html) for 9.5 
says:

As a simple example, here is how to create a policy on the account relation to 
allow only members of the managers role to access rows, and only rows of their 
accounts:

CREATE TABLE accounts (manager text, company text, contact_email text);

ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

CREATE POLICY account_managers ON accounts TO managers
USING (manager = current_user);

If no role is specified, or the special user name PUBLIC is used, then the 
policy applies to all users on the system. To allow all users to access their 
own row in a users table, a simple policy can be used:

CREATE POLICY user_policy ON users
USING (user = current_user);

---

I’m trying to understand the example, as it references both an `accounts` table 
and a `users` table, which isn’t defined. Is this a mishmash of example fragments, 
or should the CREATE POLICY statement reference the `accounts` table instead of 
`users`? Specifically, what does `user` reference in the statement "CREATE 
POLICY user_policy ON users USING (user = current_user);”? Is this a column in a 
`users` table the example doesn’t define, or does PostgreSQL keep 
track of which user/role inserted a row and allow policies to use it?

Thanks!
Alex



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Row security policies documentation question

2016-05-31 Thread David G. Johnston
On Tue, May 31, 2016 at 4:59 PM, Alexander M. Sauer-Budge <
ambu...@alum.mit.edu> wrote:

> Hello,
>
> Section 5.7. on Row Security Policies (
> https://www.postgresql.org/docs/current/static/ddl-rowsecurity.html) for
> 9.5 says:
>

[...]


> ​
>
> CREATE POLICY user_policy ON users
> USING (user = current_user);
>
> ---
>
> I’m trying understand the example as it references both an `accounts`
> table and a `users` table which isn’t defined. Is this a mishmash of
> example fragments or should the CREATE POLICY statement reference the
> `accounts` table instead of `users`? Specifically, what does `user`
> reference in the statement "CREATE POLICY user_policy ON users USING (user
> = current_user);”?




> Is this a table column in a `users` table the example doesn’t define or
> does PostgreSQL keep track of what user/role inserted a row and allow
> policies to use it?
>

​It assumes the user can envision a trivial "users" table having at least a
column named "user" that represents the user's name/id and which the names
of said users are identical to those assigned to them in the PostgreSQL
database and accessible via the "pg_authid" catalog (rolname) and its
related views: namely "pg_user" (usename).

​​So, in effect the following works, and returns a single row.

SELECT *
FROM users
JOIN pg_user ON (user = usename)
WHERE user = current_user;

David J.​
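For what it's worth, the documentation fragment can be made self-contained along these lines (a sketch; the table definition is invented, not from the docs). Note that in an expression an unquoted user appears to be the SQL keyword equivalent to current_user, so the column has to be quoted to be referenced as a column at all:

```sql
-- Hypothetical self-contained version of the documentation fragment.
CREATE TABLE users ("user" text PRIMARY KEY, full_name text);

ALTER TABLE users ENABLE ROW LEVEL SECURITY;

-- Each role may only see (and modify) its own row.  "user" is quoted
-- because the bare keyword USER evaluates to current_user itself,
-- which would make the check trivially true.
CREATE POLICY user_policy ON users
    USING ("user" = current_user);
```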


[GENERAL] plql and or clausule

2016-05-31 Thread carlos
hello

This is my first question. I am new to Postgres and PL/pgSQL.

I am writing the trigger below.

I want to insert a copy of the record into the log table and, if certain fields 
of the original record have changed, update two fields in that record, but I 
get an error.

Can you help me, please?


CREATE OR REPLACE FUNCTION lst_tot_mytable_LOG() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
IF (TG_OP = 'UPDATE') THEN
INSERT INTO lst_tot_mytable_log SELECT 'U', now(), OLD.*;
IF OLD.Peticionario != NEW.Peticionario or OLD.interlocclte != NEW.interlocclte 
or OLD.Equipo != NEW.Equipo or OLD.RespTecnico != NEW.RespTecnico or 
OLD.RespOrganiz != NEW.RespOrganiz THEN
UPDATE lst_tot_mytable set fultimamodificacion = now(), 
esmodificadoerspectoaanterior = true WHERE nropeti = OLD.nropeti;
ELSIF OLD.Peticionario == NEW.Peticionario or OLD.interlocclte == 
NEW.interlocclte or OLD.Equipo == NEW.Equipo or OLD.RespTecnico == 
NEW.RespTecnico or OLD.RespOrganiz == NEW.RespOrganiz THEN
UPDATE lst_tot_mytable set NEW.fultimamodificacion = now(), 
NEW.esmodificadoerspectoaanterior = fase WHERE nropeti = OLD.nropeti;
END IF;
RETURN NULL; -- result is ignored since this is an AFTER trigger
END;
$$;

The error is this:

ERROR: operator does not exist: character varying == character varying
LINE 1: SELECT OLD.Peticionario == NEW.Peticionario or OLD.interlocc...
^
HINT: No operator matches the given name and argument types. You might need to 
add explicit type casts.
QUERY: SELECT OLD.Peticionario == NEW.Peticionario or OLD.interlocclte == 
NEW.interlocclte or OLD.Equipo == NEW.Equipo or OLD.RespTecnico == 
NEW.RespTecnico or OLD.RespOrganiz == NEW.RespOrganiz or OLD.FAlta == NEW.FAlta 
or OLD.FRequerida == NEW.FRequerida or OLD.Titulo == NEW.Titulo or OLD.Cliente 
== NEW.Cliente or OLD.Organico == NEW.Organico or OLD.Pri == NEW.Pri or 
OLD.Estado == NEW.Estado or OLD.FEstado == NEW.FEstado or OLD.CCCA == NEW.CCCA 
or OLD.Aplicacion_Actvdad == NEW.Aplicacion_Actvdad or OLD.Servicio == 
NEW.Servicio or OLD.FPrevistaInicioPeticion == NEW.FPrevistaInicioPeticion or 
OLD.FPrevistaFinPeticion == NEW.FPrevistaFinPeticion or OLD.autpet_Horas == 
NEW.autpet_Horas or OLD.autpet_Importe == NEW.autpet_Importe or 
OLD.auteje_Horas == NEW.auteje_Horas or OLD.auteje_Importe == 
NEW.auteje_Importe or OLD.aprpet_Horas == NEW.aprpet_Horas or 
OLD.aprpet_Importe == NEW.aprpet_Importe or OLD.apreje_Horas == 
NEW.apreje_Horas or OLD.apreje_Importe == NEW.apreje_Importe or 
OLD.ultprvpet_Horas == NEW.ultprvpet_Horas or OLD.ultprvpet_Importe == 
NEW.ultprvpet_Importe or OLD.ultprveje_Horas == NEW.ultprveje_Horas or 
OLD.ultprveje_Importe == NEW.ultprveje_Importe or OLD.realpet_Horas == 
NEW.realpet_Horas or OLD.realpet_Importe == NEW.realpet_Importe or 
OLD.realeje_Horas == NEW.realeje_Horas or OLD.realeje_Importe == 
NEW.realeje_Importe or OLD.CodFacturacion == NEW.CodFacturacion or 
OLD.Facturable == NEW.Facturable or OLD.ProyCliente == NEW.ProyCliente
CONTEXT: PL/pgSQL function lst_tot_mytable_log() line 12 at IF
SQL statement: "UPDATE lst_tot_mytable set fultimamodificacion = now(), 
esmodificadoerspectoaanterior = true WHERE nropeti = OLD.nropeti"
PL/pgSQL function lst_tot_mytable_log() line 13 at SQL statement
** Error **

ERROR: operator does not exist: character varying == character varying
SQL state: 42883
Hint: No operator matches the given name and argument types. You might need to 
add explicit type casts.
Context: PL/pgSQL function lst_tot_mytable_log() line 12 at IF
SQL statement: "UPDATE lst_tot_mytable set fultimamodificacion = now(), 
esmodificadoerspectoaanterior = true WHERE nropeti = OLD.nropeti"
PL/pgSQL function lst_tot_mytable_log() line 13 at SQL statement

Re: [GENERAL] plql and or clausule

2016-05-31 Thread Kevin Grittner
On Tue, May 31, 2016 at 4:18 PM,   wrote:

> ERROR: el operador no existe: character varying == character varying
> LINE 1: SELECT OLD.Peticionario == NEW.Peticionario or OLD.interlocc...

Perhaps you want the = operator?

-- 
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] plql and or clausule

2016-05-31 Thread CS DBA

Try this:


CREATE OR REPLACE FUNCTION lst_tot_mytable_LOG() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
IF (TG_OP = 'UPDATE') THEN
INSERT INTO lst_tot_mytable_log SELECT 'U', now(), OLD.*;
IF OLD.Peticionario != NEW.Peticionario
or OLD.interlocclte != NEW.interlocclte
or OLD.Equipo != NEW.Equipo
or OLD.RespTecnico != NEW.RespTecnico
or OLD.RespOrganiz != NEW.RespOrganiz THEN
UPDATE lst_tot_mytable set fultimamodificacion = now(),
esmodificadoerspectoaanterior = true
WHERE nropeti = OLD.nropeti;
ELSIF OLD.Peticionario = NEW.Peticionario
or OLD.interlocclte = NEW.interlocclte
or OLD.Equipo = NEW.Equipo
or OLD.RespTecnico = NEW.RespTecnico
or OLD.RespOrganiz = NEW.RespOrganiz THEN
UPDATE lst_tot_mytable set fultimamodificacion = now(),
esmodificadoerspectoaanterior = false
WHERE nropeti = OLD.nropeti;
END IF;
END IF;
RETURN NULL; -- result is ignored since this is an AFTER trigger
END;
$$;



However if I get your purpose I suspect the ELSIF section may need to be 
AND's:



ELSIF OLD.Peticionario = NEW.Peticionario
AND OLD.interlocclte = NEW.interlocclte
AND OLD.Equipo = NEW.Equipo
AND OLD.RespTecnico = NEW.RespTecnico
AND OLD.RespOrganiz = NEW.RespOrganiz THEN
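One further caveat (an editorial aside, not raised in the thread): if any of these columns can be NULL, both `=` and `!=` evaluate to NULL rather than true, so neither branch fires. A NULL-safe version of the change test can be written with IS DISTINCT FROM:

```sql
-- IS DISTINCT FROM treats two NULLs as equal and NULL vs. non-NULL as
-- different, so the branch fires even when a column changes to or from NULL.
IF OLD.Peticionario IS DISTINCT FROM NEW.Peticionario
   OR OLD.interlocclte IS DISTINCT FROM NEW.interlocclte
   OR OLD.Equipo       IS DISTINCT FROM NEW.Equipo
   OR OLD.RespTecnico  IS DISTINCT FROM NEW.RespTecnico
   OR OLD.RespOrganiz  IS DISTINCT FROM NEW.RespOrganiz THEN
    -- ... mark the row as modified ...
END IF;
```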




On 05/31/2016 03:18 PM, car...@lpis.com wrote:


CREATE OR REPLACE FUNCTION lst_tot_mytable_LOG() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
IF (TG_OP = 'UPDATE') THEN
INSERT INTO lst_tot_mytable_log SELECT 'U', now(), OLD.*;
IF OLD.Peticionario != NEW.Peticionario or OLD.interlocclte != 
NEW.interlocclte or OLD.Equipo != NEW.Equipo or OLD.RespTecnico != 
NEW.RespTecnico or OLD.RespOrganiz != NEW.RespOrganiz THEN
UPDATE lst_tot_mytable set fultimamodificacion = now(), 
esmodificadoerspectoaanterior = true WHERE nropeti = OLD.nropeti;
ELSIF OLD.Peticionario == NEW.Peticionario or OLD.interlocclte == 
NEW.interlocclte or OLD.Equipo == NEW.Equipo or OLD.RespTecnico == 
NEW.RespTecnico or OLD.RespOrganiz == NEW.RespOrganiz THEN
UPDATE lst_tot_mytable set NEW.fultimamodificacion = now(), 
NEW.esmodificadoerspectoaanterior = fase WHERE nropeti = OLD.nropeti;

END IF;
RETURN NULL; -- result is ignored since this is an AFTER trigger
END;
$$;




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] plql and or clausule

2016-05-31 Thread Adrian Klaver

On 05/31/2016 02:18 PM, car...@lpis.com wrote:

hello

this is my first question.

I am new in postgres and using plsql.

i am making this (bellow)

i want to insert one copy of one record into the log table but if there
is some change into the original recor to update into this record two
fields but i have one rror

can you help me please?


CREATE OR REPLACE FUNCTION lst_tot_mytable_LOG() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
IF (TG_OP = 'UPDATE') THEN
INSERT INTO lst_tot_mytable_log SELECT 'U', now(), OLD.*;
IF OLD.Peticionario != NEW.Peticionario or OLD.interlocclte !=
NEW.interlocclte or OLD.Equipo != NEW.Equipo or OLD.RespTecnico !=
NEW.RespTecnico or OLD.RespOrganiz != NEW.RespOrganiz THEN
UPDATE lst_tot_mytable set fultimamodificacion = now(),
esmodificadoerspectoaanterior = true WHERE nropeti = OLD.nropeti;
ELSIF OLD.Peticionario == NEW.Peticionario or OLD.interlocclte ==
NEW.interlocclte or OLD.Equipo == NEW.Equipo or OLD.RespTecnico ==
NEW.RespTecnico or OLD.RespOrganiz == NEW.RespOrganiz THEN
UPDATE lst_tot_mytable set NEW.fultimamodificacion = now(),
NEW.esmodificadoerspectoaanterior = fase WHERE nropeti = OLD.nropeti;
END IF;
RETURN NULL; -- result is ignored since this is an AFTER trigger
END;
$$;

the eror si this:

ERROR: operator does not exist: character varying == character varying
LINE 1: SELECT OLD.Peticionario == NEW.Peticionario or OLD.interlocc...


Change:

OLD.Peticionario == NEW.Peticionario

to

OLD.Peticionario = NEW.Peticionario


As example:

aklaver@test=> select 'one' = 'two';
 ?column?
----------
 f
(1 row)

aklaver@test=> select 'one'::varchar == 'two'::varchar;
ERROR:  operator does not exist: character varying == character varying
LINE 1: select 'one'::varchar == 'two'::varchar;



^
HINT: No operator matches the given name and argument types. You might
need to add explicit type casts.
QUERY: SELECT OLD.Peticionario == NEW.Peticionario or OLD.interlocclte
== NEW.interlocclte or OLD.Equipo == NEW.Equipo or OLD.RespTecnico ==
NEW.RespTecnico or OLD.RespOrganiz == NEW.RespOrganiz or OLD.FAlta ==
NEW.FAlta or OLD.FRequerida == NEW.FRequerida or OLD.Titulo ==
NEW.Titulo or OLD.Cliente == NEW.Cliente or OLD.Organico == NEW.Organico
or OLD.Pri == NEW.Pri or OLD.Estado == NEW.Estado or OLD.FEstado ==
NEW.FEstado or OLD.CCCA == NEW.CCCA or OLD.Aplicacion_Actvdad ==
NEW.Aplicacion_Actvdad or OLD.Servicio == NEW.Servicio or
OLD.FPrevistaInicioPeticion == NEW.FPrevistaInicioPeticion or
OLD.FPrevistaFinPeticion == NEW.FPrevistaFinPeticion or OLD.autpet_Horas
== NEW.autpet_Horas or OLD.autpet_Importe == NEW.autpet_Importe or
OLD.auteje_Horas == NEW.auteje_Horas or OLD.auteje_Importe ==
NEW.auteje_Importe or OLD.aprpet_Horas == NEW.aprpet_Horas or
OLD.aprpet_Importe == NEW.aprpet_Importe or OLD.apreje_Horas ==
NEW.apreje_Horas or OLD.apreje_Importe == NEW.apreje_Importe or
OLD.ultprvpet_Horas == NEW.ultprvpet_Horas or OLD.ultprvpet_Importe ==
NEW.ultprvpet_Importe or OLD.ultprveje_Horas == NEW.ultprveje_Horas or
OLD.ultprveje_Importe == NEW.ultprveje_Importe or OLD.realpet_Horas ==
NEW.realpet_Horas or OLD.realpet_Importe == NEW.realpet_Importe or
OLD.realeje_Horas == NEW.realeje_Horas or OLD.realeje_Importe ==
NEW.realeje_Importe or OLD.CodFacturacion == NEW.CodFacturacion or
OLD.Facturable == NEW.Facturable or OLD.ProyCliente == NEW.ProyCliente
CONTEXT: PL/pgSQL function lst_tot_mytable_log() line 12 at IF
SQL statement: "UPDATE lst_tot_mytable set fultimamodificacion = now(),
esmodificadoerspectoaanterior = true WHERE nropeti = OLD.nropeti"
PL/pgSQL function lst_tot_mytable_log() line 13 at SQL statement
** Error **

ERROR: operator does not exist: character varying == character varying
SQL state: 42883
Hint: No operator matches the given name and argument types. You might
need to add explicit type casts.
Context: PL/pgSQL function lst_tot_mytable_log() line 12 at IF
SQL statement: "UPDATE lst_tot_mytable set fultimamodificacion = now(),
esmodificadoerspectoaanterior = true WHERE nropeti = OLD.nropeti"
PL/pgSQL function lst_tot_mytable_log() line 13 at SQL statement




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Checkpoint Err on Startup of Rsynced System

2016-05-31 Thread Venkata Balaji N
On Wed, Jun 1, 2016 at 3:13 AM, Jim Longwill  wrote:

> I am trying to setup a 2nd, identical, db server (M2) for development and
> I've run into a problem with starting up the 2nd Postgres installation.
>
> Here's what I've done:
>   1) did a 'clone' of 1st (production) machine M1 (so both machines on
> Cent OS 7.2)
>   2) setup an rsync operation, did a complete 'rsync' from M1 to M2
>   3) did a final 'CHECKPOINT' command on M1 postgres
>   4) shutdown postgres on M1 with 'pg_ctl stop'
>   5) did final 'rsync' operation  (then restarted postgres on M1 with
> 'pg_ctl start')
>   6) tried to startup postgres on M2
>

If you rsync the data directory of a live, running Postgres instance, that
is not going to work. As Scott said earlier, you need to run "select
pg_start_backup('labelname');" before you initiate the rsync and "select
pg_stop_backup()" after you complete it. That way, PostgreSQL knows that
you are rsyncing and also identifies the required WAL files to be copied
over.

Or, if you can shut down M1 for some time, simply shut it down, copy
(or rsync) the data directory to M2, and then start the M2 instance. That
should work.
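The online procedure can be sketched as commands (the label is arbitrary; pg_start_backup/pg_stop_backup is the API matching the 9.x releases discussed in this thread, and archiving must already be configured):

```sql
-- On M1, before starting the rsync:
SELECT pg_start_backup('clone_for_m2');

-- ... from a shell, rsync the data directory from M1 to M2 ...

-- On M1, once the rsync has finished:
SELECT pg_stop_backup();
```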

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Row security policies documentation question

2016-05-31 Thread Adrian Klaver

On 05/31/2016 01:59 PM, Alexander M. Sauer-Budge wrote:

Hello,

Section 5.7. on Row Security Policies 
(https://www.postgresql.org/docs/current/static/ddl-rowsecurity.html) for 9.5 
says:

As a simple example, here is how to create a policy on the account relation to 
allow only members of the managers role to access rows, and only rows of their 
accounts:

CREATE TABLE accounts (manager text, company text, contact_email text);

ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

CREATE POLICY account_managers ON accounts TO managers
USING (manager = current_user);

If no role is specified, or the special user name PUBLIC is used, then the 
policy applies to all users on the system. To allow all users to access their 
own row in a users table, a simple policy can be used:

CREATE POLICY user_policy ON users
USING (user = current_user);

---

I’m trying understand the example as it references both an `accounts` table and a 
`users` table which isn’t defined. Is this a mishmash of example fragments or should 
the CREATE POLICY statement reference the `accounts` table instead of `users`? 
Specifically, what does `user` reference in the statement "CREATE POLICY 
user_policy ON users USING (user = current_user);”? Is this a table column in a 
`users` table the example doesn’t define or does PostgreSQL keep track of what 
user/role inserted a row and allow policies to use it?


For a good review of what is possible with RLS take a look at this blog:

http://blog.2ndquadrant.com/application-users-vs-row-level-security/



Thanks!
Alex






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] postgres_fdw and Kerberos authentication

2016-05-31 Thread Jean-Marc Lessard
postgres_fdw is a great feature, but several organizations disallow storing any 
kind of password as plain text.
Granting the superuser role is not an option either.

A nice way to meet the security requirements would be to provide single sign-on 
support for postgres_fdw.
As long as you have defined a user in both the source and destination databases 
and configured Kerberos authentication, you should be able to use postgres_fdw.

I tried without success as follow:
jml@dcx1-005-jml =# CREATE SERVER "dcx1-006-jml" FOREIGN DATA WRAPPER 
postgres_fdw OPTIONS (host 'dcx1-006-jml', dbname 'ibis');
jml@dcx1-005-jml =# CREATE USER MAPPING FOR CURRENT_USER SERVER "dcx1-006-jml" 
OPTIONS (user 'jml');
jml@dcx1-005-jml =# IMPORT FOREIGN SCHEMA ibisl0 FROM SERVER "dcx1-006-jml" 
INTO "dcx1-006-jml";
ERROR:  could not connect to server "dcx1-006-jml"
DETAIL:  FATAL:  SSPI authentication failed for user "jml"

Am I doing something wrong or postgres_fdw does not support Kerberos 
authentication?
Is there any plan to support Kerberos authentication?

Jean-Marc Lessard
Administrateur de base de données / Database Administrator
Ultra Electronics Forensic Technology Inc.
T +1 514 489 4247 x4164
www.ultra-forensictechnology.com


Re: [GENERAL] Checkpoint Err on Startup of Rsynced System

2016-05-31 Thread Jim Longwill

Scott,
Thanks. If I understand you correctly... Actually, we did have M1 shut 
down when the initial clone was done (some weeks ago). That was done 
using the VMware system, not rsync. My main problem is that I don't 
have WAL archiving set up yet (I've not changed the Postgres defaults on 
this so far). That's part of what the new machine M2 is for: to 
practice doing this before adjusting our production machine (M1). As 
regards doing a snapshot, I thought that the manual 'CHECKPOINT' would 
take care of it.


So, this time around I may try to do a manual (initdb.. & pg_restore 
from backup files) on the new machine in order to get a roughly 
equivalent installation going. One early goal I have is to get 
archiving set up & working beyond the minimal level.


Jim Longwill

On 05/31/2016 11:50 AM, Scott Mead wrote:



On Tue, May 31, 2016 at 1:13 PM, Jim Longwill wrote:


I am trying to setup a 2nd, identical, db server (M2) for
development and I've run into a problem with starting up the 2nd
Postgres installation.

Here's what I've done:
  1) did a 'clone' of 1st (production) machine M1 (so both
machines on Cent OS 7.2)
  2) setup an rsync operation, did a complete 'rsync' from M1 to M2
  3) did a final 'CHECKPOINT' command on M1 postgres
  4) shutdown postgres on M1 with 'pg_ctl stop'
  5) did final 'rsync' operation  (then restarted postgres on M1
with 'pg_ctl start')
  6) tried to startup postgres on M2

It won't start, & in the log file gives the error message:
...
< 2016-05-31 09:02:52.337 PDT >LOG:  invalid primary checkpoint record
< 2016-05-31 09:02:52.337 PDT >LOG:  invalid secondary checkpoint
record
< 2016-05-31 09:02:52.337 PDT >PANIC:  could not locate a valid
checkpoint record
< 2016-05-31 09:02:53.184 PDT >LOG:  startup process (PID 26680)
was terminated by signal 6: Aborted
< 2016-05-31 09:02:53.184 PDT >LOG:  aborting startup due to
startup process failure

I've tried several times to do this but always get this result.
So, do I need to do a new 'initdb..' operation on machine M2 + restore
from M1 backups?  Or is there another way to fix this?


You should have stopped M1 prior to taking the backup. If you can't do 
that, it can be done online via:


 1. Set up archiving
 2. select pg_start_backup('some label');
 3. (rsync/copy the data directory)
 4. select pg_stop_backup();

  Without archiving and the pg_[start|stop]_backup, you're not 
guaranteed anything.  You could use an atomic snapshot (LVM, storage, 
etc...), but it's got to be a true snapshot.  Without that, you need 
archiving + start / stop backup.


Last section of: 
https://wiki.postgresql.org/wiki/Simple_Configuration_Recommendation#Physical_Database_Backups 
will take you to: 
https://www.postgresql.org/docs/current/static/continuous-archiving.html


--Scott


--o--o--o--o--o--o--o--o--o--o--o--o--
Jim Longwill
PSMFC Regional Mark Processing Center
jlongw...@psmfc.org 
--o--o--o--o--o--o--o--o--o--o--o--o--


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




--
--
Scott Mead
Sr. Architect
OpenSCG
http://openscg.com




Re: [GENERAL] postgres_fdw and Kerberos authentication

2016-05-31 Thread Tom Lane
Jean-Marc Lessard  writes:
> A nice way to meet security requirements would be to provide single sign on 
> support for the postgres_fdw.
> As long as you have defined a user in the source and destination databases, 
> and configure the Kerberos authentication you should be able to use 
> postgres_fdw.

It's not really that easy, because postgres_fdw (like the server in
general) is running as the database-owner operating system user.
How will you associate a Postgres role that's responsible for a
particular connection request with some Kerberos credentials,
while keeping it away from credentials that belong to other roles?

This is certainly something that'd be useful to have, but it's not
clear how to do it in a secure fashion.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Change in order of criteria - reg

2016-05-31 Thread sri harsha
Hi,

In PostgreSQL, does the order in which the criteria are given matter?
For example:

Query 1 : Select * from TABLE where a > 5 and b < 10;

Query 2 : Select * from TABLE where b < 10 and a > 5;

Are query 1 and query 2 the same in PostgreSQL, or different? If they are
different, why?
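One way to see whether the planner treats the two forms identically is to compare their EXPLAIN output (a sketch with an invented table; both statements should produce the same plan, since the planner normalizes the WHERE clause rather than evaluating predicates in written order):

```sql
CREATE TABLE t (a int, b int);

EXPLAIN SELECT * FROM t WHERE a > 5 AND b < 10;
EXPLAIN SELECT * FROM t WHERE b < 10 AND a > 5;
```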



Thanks,

Harsha


Re: [GENERAL] After replication failover: could not read block X in file Y read only 0 of 8192 bytes

2016-05-31 Thread Venkata Balaji N
Oops, I missed including pgsql-general in my earlier replies...


> > > I have data_checksums switched on so am suspecting a streaming
> > > > > replication bug.  Anyone know of a recent bug which could have
> caused
> > > > > this?
> > > > >
> > > >
> > > > I cannot conclude at this point. I encountered these kind of errors
> with
> > > > Indexes and re-indexing fixed them.
> > >
> > > This is actually the second time I am seeing these kinds of errors, in
> > > the past, after verifying that no data was lost I used VACUUM FULL to
> > > recover the ability to INSERT. There was no pitchfork uprising...
> > >
> >
> > Did you check your disk file systems for any issues ?
>
> I'm on EC2 and praying that data_checksums takes care of that for me...
>

Please check whether there are any issues with the storage.


> > These errors only
> > happen after you do pg_restore ?
>
> they happened after the failover, I'm not convinced they were caused by
> the pg_restore.
>

Mmh, OK. Were there any issues on the slave before the failover?

>
> I still have the backups and WAL logs and could probably try use them to
> build a bad slave. But that's a few hundred GB of WAL and is not worth
> it unless I know what to look for.
>

Well, that is the workaround if you are encountering data loss due to this
bug.
The developers should be able to tell you whether this is a bug in streaming
replication. I haven't encountered such a bug so far.

Regards,
Venkata B N

Fujitsu Australia