[GENERAL] pg_upgrade not able to cope with pg_largeobject being in a different tablespace

2016-10-13 Thread Andreas Joseph Krogh
Hi all.
 
(I tried raising this on -hackers but got a "this is not supported" answer, 
which is quite unsatisfactory.)
 
I have an issue with pg_upgrade upgrading 9.5 to 9.6. (My system is 
Ubuntu 16.04, with packages from http://apt.postgresql.org/.) 

 
In short, pg_upgrade fails with:
 
Linking user relation files
 No match found in new cluster for old relation with OID 16388 in database 
"andreak": "pg_toast.pg_toast_2613" which is the TOAST table for 
"pg_catalog.pg_largeobject"
 No match found in new cluster for old relation with OID 16390 in database 
"andreak": "pg_toast.pg_toast_2613_index" which is an index on 
"pg_toast.pg_toast_2613" which is the TOAST table for 
"pg_catalog.pg_largeobject"
Failed to match up old and new tables in database "andreak"
 Failure, exiting
 
I issued the following command:
PG_NEW_VERSION=9.6
 PG_OLD_VERSION=9.5
 /usr/lib/postgresql/$PG_NEW_VERSION/bin/pg_upgrade \
           --old-bindir=/usr/lib/postgresql/$PG_OLD_VERSION/bin/ \
           --new-bindir=/usr/lib/postgresql/$PG_NEW_VERSION/bin/ \
           --old-datadir=/var/lib/postgresql/$PG_OLD_VERSION/main \
           --new-datadir=/var/lib/postgresql/$PG_NEW_VERSION/main \
           -o " -c 
config_file=/etc/postgresql/$PG_OLD_VERSION/main/postgresql.conf" \
           -O " -c 
config_file=/etc/postgresql/$PG_NEW_VERSION/main/postgresql.conf" \
           --link

 
 
My database and tablespaces are created like this:
 I have tablespaces configured outside PGDATA:
mkdir /var/lib/postgresql/9.5/tablespaces/andreak
 mkdir /var/lib/postgresql/9.5/tablespaces_lo/andreak
psql -c "create tablespace andreak OWNER andreak location 
'/var/lib/postgresql/9.5/tablespaces/andreak'" postgres;
 psql -c "create tablespace andreak_lo OWNER andreak location 
'/var/lib/postgresql/9.5/tablespaces_lo/andreak'" postgres;
 createdb --tablespace=andreak -O andreak andreak


psql -U postgres -c "alter table pg_largeobject set tablespace andreak_lo" -d 
andreak


(I've set allow_system_table_mods=on in postgresql.conf)
 
These are symlinked:
ln -s /storage/wal/9.5/pg_xlog /var/lib/postgresql/9.5/main/pg_xlog
 ln -s /storage/fast_ssd/9.5/tablespaces /var/lib/postgresql/9.5/tablespaces
 ln -s /storage/archive_disk/9.5/tablespaces_lo 
/var/lib/postgresql/9.5/tablespaces_lo

 
I would assume that having pg_largeobject in a separate tablespace is 
increasingly common these days, with real-cheap SAN storage for large objects 
versus fast SSD for normal tables/indexes/WAL.
 
So I'm wondering whether we can fund development of pg_upgrade to cope with this 
configuration, or somehow motivate getting this issue fixed?
 
Would any of the PG-companies (2ndQ, EDB, PgPro) take a stab at this?
 
Any feedback welcome, thanks.
 

-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 




Re: [GENERAL] MultiXact member wraparound protections are disabled

2016-10-13 Thread Alvaro Herrera
AnandKumar, Karthik wrote:
> root@site-db01a:/var/lib/pgsql/cmates/data # ls pg_multixact/members
>   0001  0002  0003  0004  0005  0006  0007  0008  0009  000A  000B  000C  
> 000D  000E  000F  0010  0011  0012  0013  0014  0015  0016  0017  0018  0019  
> 001A  001B
> root@site-db01a:/var/lib/pgsql/cmates/data # ls pg_multixact/offsets
> 0001  0002  0003  0004  0005  0006  0007  0008  0009  000A  000B

> postgres@site-db01a:~ $ /usr/pgsql-9.4/bin/pg_controldata 
> /var/lib/pgsql/cmates/data

> Latest checkpoint's NextMultiXactId:  784503
> Latest checkpoint's NextMultiOffset:  1445264
> Latest checkpoint's oldestMultiXid:   1
> Latest checkpoint's oldestMulti's DB: 16457

This looks perfectly normal, except that the pg_multixact/offsets/
file is gone.  oldestMultiXid is 1, so I don't see how the file could
have gotten removed.  Has this been upgraded recently from a previous
9.3 or 9.4 version?  There have been bugs in this area but they've been
fixed now for some time.

The  file could have been removed manually, perhaps?

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MultiXact member wraparound protections are disabled

2016-10-13 Thread AnandKumar, Karthik
Thanks. We started seeing this error right after a SAN FC re-cable effort - so 
yes, that would make sense. 
We’ll do a little more digging to see if the  could have gotten removed.
If that’s an older file that we have in our filesystem backups, is it safe to 
restore from there?




On 10/13/16, 3:30 PM, "Alvaro Herrera"  wrote:

>AnandKumar, Karthik wrote:
>> root@site-db01a:/var/lib/pgsql/cmates/data # ls pg_multixact/members
>>   0001  0002  0003  0004  0005  0006  0007  0008  0009  000A  000B  000C 
>>  000D  000E  000F  0010  0011  0012  0013  0014  0015  0016  0017  0018  
>> 0019  001A  001B
>> root@site-db01a:/var/lib/pgsql/cmates/data # ls pg_multixact/offsets
>> 0001  0002  0003  0004  0005  0006  0007  0008  0009  000A  000B
>
>> postgres@site-db01a:~ $ /usr/pgsql-9.4/bin/pg_controldata 
>> /var/lib/pgsql/cmates/data
>
>> Latest checkpoint's NextMultiXactId:  784503
>> Latest checkpoint's NextMultiOffset:  1445264
>> Latest checkpoint's oldestMultiXid:   1
>> Latest checkpoint's oldestMulti's DB: 16457
>
>This looks perfectly normal, except that the pg_multixact/offsets/
>file is gone.  oldestMultiXid is 1 so I don't see how could have the
>file gotten removed.  Has this been upgraded recently from a previous
>9.3 or 9.4 version?  There have been bugs in this area but they've been
>fixed now for some time.
>
>The  file could have been removed manually, perhaps?
>
>-- 
>Álvaro Herrera                https://www.2ndQuadrant.com/
>PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SERIALIZABLE and INSERTs with multiple VALUES

2016-10-13 Thread Kevin Grittner
On Wed, Oct 12, 2016 at 5:21 PM, Peter Geoghegan  wrote:
> On Wed, Oct 12, 2016 at 2:06 PM, Kevin Grittner  wrote:
>> If the "proper" fix is impossible (or just too freaking ugly) we
>> might fall back on the fix Thomas suggested, but I would like to
>> take advantage of the "special properties" of the INSERT/ON
>> CONFLICT DO NOTHING code to avoid false positives where we can.
>
> Do you intend to propose a patch to do that?

Yes, I'm working on that.

-- 
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SERIALIZABLE and INSERTs with multiple VALUES

2016-10-13 Thread Kevin Grittner
On Wed, Oct 12, 2016 at 8:06 PM, Thomas Munro
 wrote:
> On Thu, Oct 13, 2016 at 10:06 AM, Kevin Grittner  wrote:
>> On Wed, Oct 12, 2016 at 3:02 PM, Peter Geoghegan  wrote:
>>
>>> I agree that the multi-value case is a bug.
>>
>>> I think that it should be pretty obvious to you why the check exists
>>> at all, Kevin. It exists because it would be improper to decide to
>>> take the DO NOTHING path on the basis of some other committed tuple
>>> existing that is not visible to the original MVCC snapshot, at higher
>>> isolation levels.
>>
>> That's only true if it causes a cycle in the apparent order of
>> execution.  If we rip out the check what we have is behavior
>> completely consistent with the other transaction executing first;
>> in other words, it creates a read-write dependency (a/k/a
>> rw-conflict) from the current transaction to the concurrent
>> transaction which succeeds in its insert.  That may or may not
>> cause a cycle, depending on what else is happening.
>
> The "higher isolation levels" probably shouldn't be treated the same way.
>
> I think Peter's right about REPEATABLE READ.  We should definitely
> raise the error immediately as we do in that level, because our RR
> (SI) doesn't care about write skew and all that stuff, it just
> promises that you can only see data in your snapshot.

But the whole point of the special code for both RI and INSERT/ON
CONFLICT is to get "underneath" that and provide a "primitive" that
can see things an application statement can't, for better
performance and error handling.  What SERIALIZABLE promises is that
it runs exactly the same as REPEATABLE READ, but with some
additional monitoring for serialization failure errors in some
places that REPEATABLE READ does not generate them -- this would be
the first and only place that SERIALIZABLE would break that model.
The idea seems completely wrong and arbitrary.

Where do you see a problem if REPEATABLE READ handles INSERT/ON
CONFLICT without error?  In many cases it would actually be
providing a result consistent with a serial execution of the
transactions; and where it doesn't, it would be the same anomalies
that are possible with anything else under REPEATABLE READ.

> We can't allow you to take a different course of action based on
> data that your snapshot can't see,

But that is exactly what INSERT/ON CONFLICT always does!  That is
the only way to avoid the so-called "unprincipled deadlocks" the
feature aims to avoid.

> so the only reasonable thing to do is abandon ship.

I disagree.

> But yeah, the existing code raises false positive serialization
> failures under SERIALIZABLE, and that's visible in the isolation test
> I posted: there is actually a serial order of those transactions with
> the same result.

Exactly.  The error based on the write conflict with ON CONFLICT DO
NOTHING in your patch is really a false positive.  That doesn't
break correctness, but it hurts performance, so it should be
avoided if possible.

> When working on commit fcff8a57 I became suspicious of the way ON
> CONFLICT interacts with SSI, as I mentioned in passing back then[1],
> thinking mainly of false negatives.  I failed to find a
> non-serializable schedule involving ON CONFLICT that was allowed to
> run, though I didn't spend much time on it.   One thing that worries
> me is the final permutation of read-write-unique-4.spec, which
> produces an arguably spurious UCV, that is, a transaction that doesn't
> commit but raises a UCV instead of the serialization failure you might
> expect.  The ON CONFLICT equivalent might be a transaction that takes
> the ON CONFLICT path and then commits, even though it should be
> considered non-serializable.  I would really like to understand that
> case better, and until then I wouldn't bet my boots that it isn't
> possible to commit anomalies using ON CONFLICT under SERIALIZABLE
> without Peter's check (or even with it), despite the fact that it
> reaches predicate locking code via heap_fetch etc.

Hm.  With the duplicate key error I fail to see how any anomaly
could make it to a committed state in the database, although I
agree it is unfortunate that there is that one case where it really
should be considered a serialization failure that we haven't yet
coerced to yield that instead of the duplicate key error.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SERIALIZABLE and INSERTs with multiple VALUES

2016-10-13 Thread Kevin Grittner
On Wed, Oct 12, 2016 at 8:32 PM, Peter Geoghegan  wrote:
> On Wed, Oct 12, 2016 at 6:06 PM, Thomas Munro  
> wrote:
>> But yeah, the existing code raises false positive serialization
>> failures under SERIALIZABLE, and that's visible in the isolation test
>> I posted: there is actually a serial order of those transactions with
>> the same result.
>
> I was under the impression that false positives of this kind are
> allowed by SSI. Why focus on this false positive scenario in
> particular?

Every situation that generates a false positive hurts performance;
we went to great lengths to minimize those cases.  In addition, we
made sure that at the point that a serialization failure is
returned, that retrying the transaction from the start could not
fail on the same combination of transactions, by ensuring that at
least one transaction in the set had successfully committed, and
that it was a transaction which had done writes.  To generate a
serialization failure on a single transaction has to be considered
a bug, because a retry *CAN NOT SUCCEED*!  This is likely to break
many frameworks designed to work with serializable transactions.
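
For context, the single-transaction case being described looks roughly like this 
(a sketch, using a table shaped like the with_pk example later in the thread; with 
the check in question, the statement reported a serialization failure even though 
no other transaction was involved, so a retry could never succeed):

CREATE TABLE with_pk (i integer PRIMARY KEY);

START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- the second row conflicts with the first row written by this same statement,
-- which is not visible to the transaction's snapshot
INSERT INTO with_pk VALUES (1), (1) ON CONFLICT DO NOTHING;
COMMIT;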

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Passing of where clause to remote table in FDW

2016-10-13 Thread Jaisingkar, Piyush
Hello,



While using the FDW I am trying to get filtered data from a foreign table, 
filtering using a regex in the WHERE clause as follows:



EXPLAIN ANALYZE VERBOSE SELECT CAF1.uprn
  FROM XYZ as CAF1
  WHERE
 
(regexp_replace(upper(concat(CAF1.SUB_BUILDING_NAME,CAF1.BUILDING_NAME,CAF1.BUILDING_NUMBER,CAF1.STREET_DESCRIPTION
 , CAF1.TOWN_NAME)), '[^a-zA-Z0-9]','') = '84WELLINGTONROADEXETER');







Following is the query plan for the query, please observe the remote SQL 
statement:


"Foreign Scan on abc.xyz caf1  (cost=100.00..212.40 rows=13 width=8) (actual 
time=0.607..590.992 rows=2 loops=1)"
"  Output: uprn"
"  Filter: (regexp_replace(upper(concat(caf1.sub_building_name, 
caf1.building_name, caf1.building_number, caf1.street_description, 
caf1.town_name)), '[^a-zA-Z0-9]'::text, ''::text) = 
'84WELLINGTONROADEXETER'::text)"
"  Rows Removed by Filter: 115952"
"  Remote SQL: SELECT uprn, building_name, sub_building_name, building_number, 
street_description, town_name FROM lmn.xyz"
"Total runtime: 591.365 ms"



What I interpret from this is that the regex is not being sent to the remote 
side; instead the whole remote table is being fetched to the local side and 
then the regex is applied.



The remote SQL that I wish to have is as follows:



Remote SQL: SELECT uprn, building_name, sub_building_name, building_number, 
street_description, town_name FROM lmn.xyz  where 
(regexp_replace(upper(CAF1.SUB_BUILDING_NAME||CAF1.BUILDING_NAME||CAF1.BUILDING_NUMBER||CAF1.STREET_DESCRIPTION
 || CAF1.TOWN_NAME), '[^a-zA-Z0-9]','')) = '84WELLINGTONROADEXETER'"



As the functions regexp_replace and upper are built-in ones, there should not be 
any problem passing them to the remote side, as the documentation states:

"This is done by sending query WHERE clauses to the remote server for execution, 
and by not retrieving table columns that are not needed for the current query. 
To reduce the risk of misexecution of queries, WHERE clauses are not sent to 
the remote server unless they use only built-in data types, operators, and 
functions. Operators and functions in the clauses must be IMMUTABLE as well."

Ref: https://www.postgresql.org/docs/9.3/static/postgres-fdw.html

Is there any way to have this filtering executed on the remote side, as I want?





Thanks and Regards,

Piyush Jaisingkar





__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.

Re: [GENERAL] confusion about user paring with pg_hba and pg_ident

2016-10-13 Thread Adrian Klaver

On 10/12/2016 08:57 AM, arnaud gaboury wrote:



On Wed, Oct 12, 2016 at 3:41 PM Adrian Klaver 





So other than adding the mapping for the dovecot user, did anything else
change?

after a little bit of cleaning and a change in my Postgres username (now
postgres username == unix user), the various commands to connect are
working.

Now I want to be sure to have correctly understood the mapping story.
Say root is running myApp, and at one point, myApp is polling a
PostgreSQL DB as user myUser.
Run myApp as root:
# myApp

Do I have to add an entry in pg_ident.conf to map the Linux user root to the
Postgres user myUser? Or will the command above be enough, with no entries in
pg_ident.conf or pg_hba.conf?
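
For reference, entries of the kind being asked about would look roughly like this 
(a sketch; "mymap" and the database name are placeholders, and it assumes local 
connections using peer authentication):

# pg_ident.conf
# MAPNAME  SYSTEM-USERNAME  PG-USERNAME
mymap      root             myUser

# pg_hba.conf
# TYPE  DATABASE  USER    METHOD
local   mydb      myUser  peer map=mymap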



The answer depends on what result you are trying to achieve.

Are you trying to restrict access to a database by Postgres user only, 
regardless of who they logged into the system as?


Or do you want to restrict access based on their system user login as well?

If not the above, what are your criteria for determining who can log in 
to the database?




TY for your time.



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Passing of where clause to remote table in FDW

2016-10-13 Thread Tom Lane
"Jaisingkar, Piyush"  writes:
> While using fdw I am trying to get and filtered data from a foreign table. 
> Filtering using a regex in where clause as follows:

> EXPLAIN ANALYZE VERBOSE SELECT CAF1.uprn
>   FROM XYZ as CAF1
>   WHERE
>  
> (regexp_replace(upper(concat(CAF1.SUB_BUILDING_NAME,CAF1.BUILDING_NAME,CAF1.BUILDING_NUMBER,CAF1.STREET_DESCRIPTION
>  , CAF1.TOWN_NAME)), '[^a-zA-Z0-9]','') = '84WELLINGTONROADEXETER');

concat() is not immutable (it's only stable) so it can't be sent for
remote execution.  Use the || operator instead.
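
For reference, a sketch of the filter rewritten with || so that it can be shipped 
to the remote server (assuming the columns are text; COALESCE guards against a NULL 
column making the whole concatenation NULL, and actual pushdown also depends on the 
remote column types and collations matching):

SELECT CAF1.uprn
  FROM XYZ AS CAF1
 WHERE regexp_replace(
         upper(COALESCE(CAF1.SUB_BUILDING_NAME, '') ||
               COALESCE(CAF1.BUILDING_NAME, '') ||
               COALESCE(CAF1.BUILDING_NUMBER, '') ||
               COALESCE(CAF1.STREET_DESCRIPTION, '') ||
               COALESCE(CAF1.TOWN_NAME, '')),
         '[^a-zA-Z0-9]', '') = '84WELLINGTONROADEXETER';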

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade not able to cope with pg_largeobject being in a different tablespace

2016-10-13 Thread Bruce Momjian
On Thu, Oct 13, 2016 at 10:14:08AM +0200, Andreas Joseph Krogh wrote:
> I would assume that having pg_largeobject in a separate tablespace is more and
> more common these days, having real-cheap SAN vs. fast-SSD for normal tables/
> indexes/wal.

So common that no one has ever asked for this feature before?

> So - I'm wondering if we can fund development of pg_upgrade to cope with this
> configuration or somehow motivate to getting this issue fixed?
>  
> Would any of the PG-companies (2ndQ, EDB, PgPro) take a stab at this?
>  
> Any feedback welcome, thanks.

You would need to get buy-in that the community wants the relocation of
pg_largeobject to be supported via an SQL command, and at that point
pg_upgrade would be modified to support that.  It is unlikely pg_upgrade
is going to be modified to support something that isn't supported at the
SQL level.  Of course, you can create a custom version of pg_upgrade to
do that.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade not able to cope with pg_largeobject being in a different tablespace

2016-10-13 Thread Andreas Joseph Krogh
On Thursday 13 October 2016 at 16:09:34, Bruce Momjian  wrote:
On Thu, Oct 13, 2016 at 10:14:08AM +0200, Andreas Joseph Krogh wrote:
 > I would assume that having pg_largeobject in a separate tablespace is more 
and
 > more common these days, having real-cheap SAN vs. fast-SSD for normal 
tables/
 > indexes/wal.

 So common that no one has ever asked for this feature before?
 
 
Sometimes one gets the feeling that one is the only one in the universe doing 
something one considers "quite common":-)
 
> So - I'm wondering if we can fund development of pg_upgrade to cope with this
 > configuration or somehow motivate to getting this issue fixed?
 >  
 > Would any of the PG-companies (2ndQ, EDB, PgPro) take a stab at this?
 >  
 > Any feedback welcome, thanks.

 You would need to get buy-in that that community wants the relocation of
 pg_largeobject to be supported via an SQL command, and at that point
 pg_upgrade would be modified to support that.  It is unlikely pg_upgrade
 is going to be modified to support something that isn't supported at the
 SQL level.  Of course, you can create a custom version of pg_upgrade to
 do that.
 
Doesn't "ALTER TABLE pg_largeobject SET TABLESPACE myspace_lo" count as being 
"at the SQL-level"?
 
The whole problem seems to come from the fact that BLOBs are stored in 
pg_largeobject which for some reason is implemented as a system-catalogue in 
PG, which imposes all kinds of weird problems, from a DBA-perspective.
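 
A possible workaround, sketched here and not verified end-to-end, is to move 
pg_largeobject back to the default tablespace in the old cluster before running 
pg_upgrade, and move it to the dedicated tablespace again afterwards (both steps 
need allow_system_table_mods and a maintenance window):

-- in the old 9.5 cluster, before pg_upgrade
ALTER TABLE pg_largeobject SET TABLESPACE pg_default;
-- in the new 9.6 cluster, after pg_upgrade
ALTER TABLE pg_largeobject SET TABLESPACE andreak_lo;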
 
Can we pay you at EDB for making such a custom version of pg_upgrade for 9.6?
 
Thanks.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [GENERAL] MultiXact member wraparound protections are disabled

2016-10-13 Thread Alvaro Herrera
AnandKumar, Karthik wrote:
> Thanks. We started seeing this error right after a SAN FC re-cable effort - 
> so yes, that would make sense. 
> We’ll do a little more digging to see if the  could have gotten removed.
> If that’s an older file that we have in our filesystem backups, is it safe to 
> restore from there?

Sure, the files are immutable after they are completed.  I worry that if
the system removed it automatically, it would just remove it again,
though.  Shouldn't happen on 9.4.5, but it seems just too much of a
coincidence that that file was removed.

Changes such as FC recabling should not cause anything like this.  I
mean, why a pg_multixact file and not a table data file?  Very fishy.

I'd advise checking your older logs from around restarts to see whether the
"multixact protections are enabled" message has ever appeared, or whether it has
always been "protections are disabled".  Maybe you've had the problem
for ages and just never noticed ...

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MultiXact member wraparound protections are disabled

2016-10-13 Thread avi Singh
You're right, we looked back in our old logs and we do see the messages there
as well. Still, what I'm not getting is that since we restarted the database
after the SAN FC re-cable effort, auto-vacuum has been running on all the threads
continuously. I have never seen auto-vacuum using all the threads 24*7 on
this database. Any thoughts?



On Thu, Oct 13, 2016 at 8:35 AM, Alvaro Herrera 
wrote:

> AnandKumar, Karthik wrote:
> > Thanks. We started seeing this error right after a SAN FC re-cable
> effort - so yes, that would make sense.
> > We’ll do a little more digging to see if the  could have gotten
> removed.
> > If that’s an older file that we have in our filesystem backups, is it
> safe to restore from there?
>
> Sure, the files are immutable after they are completed.  I worry that if
> the system removed it automatically, it would just remove it again,
> though.  Shouldn't happen on 9.4.5, but it seems just too much of a
> coincidence that that file was removed.
>
> Changes such as FC recabling should not cause anything like this.  I
> mean, why a pg_multixact file and not a table data file?  Very fishy.
>
> I'd advise to verify your older logs at the time of restarts whether the
> "multixact protections are enabled" message has ever appeared, or it has
> always been "protections are disabled".  Maybe you've had the problem
> for ages and just never noticed ...
>
> --
> Álvaro Herrera                https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] MultiXact member wraparound protections are disabled

2016-10-13 Thread Alvaro Herrera
avi Singh wrote:
> Your right we looked back in our old logs and we do see the messages there
> as well. Still what I'm not getting is since we restarted the database
> after SAN FC re-cable effort auto-vacuum is running on all the threads
> continuous. I have never seen auto-vacuum using all the threads 24*7 on
> this database. Any thoughts ?

It's trying to ensure all tables are correctly frozen.  As I recall,
that's working per spec and you should just let it run until it's done.
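
A rough way to watch that freezing progress (a sketch; the tables with the largest 
ages are the ones autovacuum still has to get to):

SELECT relname, age(relfrozenxid) AS xid_age, relminmxid
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY age(relfrozenxid) DESC
 LIMIT 10;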

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Issue with installation of postgresql server

2016-10-13 Thread Karre, Kanakaraju (US - Hyderabad)
I have been trying to install the PostgreSQL server on my machine (Windows 8, 64-bit) 
but it is failing to install, throwing the following error:
"There has been an error. The environment variable COMSPEC does not seem to 
point to the cmd.exe or  there is a trailing semicolon present. Please fix this 
variable and restart installation ."

I searched some of the forums where the solution mentioned was to add an 
environment variable "COMSPEC" with the value - "C:\Windows\system32\cmd.exe". 
I did this but the issue persists. Also, there is no semicolon at the end in 
the value of COMSPEC variable.

I tried installing 2 versions of Postgresql but I am getting the same error for 
both. The versions are as follows :

1.   postgresql-9.3.14-2-windows-x64.exe

2.   postgresql-9.6.0-1-windows-x64.exe

Since the COMSPEC variable is present, it got me thinking whether this is a general 
exception thrown while installing PostgreSQL.
Also, are there any preconditions to be checked before installing the server? 
Could a particular software installation be hindering the PostgreSQL 
installation? (On some forums it was suggested that uninstalling the anti-virus 
solved the issue. Could this be the reason?)

Please let me know how to resolve this.

Thanks & Regards
Kanakaraju














[GENERAL] "The index is not optimal" GiST warnings

2016-10-13 Thread James Robinson
Folks,

We are using triggers and daterange-enhanced side-tables for temporal 
logging of inserts/updates/deletes of data so that we can perform arbitrary 
point-in-time reporting. Given a slightly augmented many-many relationship 
table like:

  Table "housing.unit_funding_source"
  Column   |  Type  |Modifiers  
   
---++--
 id| bigint | not null default 
nextval('housing.unit_funding_source_id_seq'::regclass)
 unit_id   | bigint | not null
 funding_source_id | bigint | not null
 funding_id| text   | 
Indexes:
"unit_funding_source_pkey" PRIMARY KEY, btree (id)
"unit_funding_source_unit_id_key" UNIQUE CONSTRAINT, btree (unit_id, 
funding_source_id) CLUSTER


("id" column is a concession to our older generation ORM)

we have a trigger-maintained historical archive table with open or 
closed date ranges documenting when data was visible in the main table:

            Table "housing.unit_funding_source_snapshot"
         Column          |   Type    | Modifiers 
-------------------------+-----------+-----------
 seen                    | daterange | not null
 id                      | bigint    | not null
 unit_id                 | bigint    | not null
 funding_source_id       | bigint    | not null
 funding_identifier      | text      | 
 listing_deleted_code_id | integer   | 
 listing_deleted_text    | text      | 

(Essentially the same table, but with 'seen' describing either the open-ended 
(row still visible in housing.unit_funding_source) or closed (row no longer 
visible) date range.  Basic temporal range logging pattern.)

My question stems from building the GiST constraint to both guard 
against 'overlapping daterange for same logical row' and to serve 
point-in-time queries (i.e. consider only the rows which contain midnight of an 
arbitrary day, or those which overlap with an entire month). When going by the 
recipes and expressing the constraint with the daterange column and the && 
operator first:

ALTER TABLE ONLY housing.unit_funding_source_snapshot
ADD CONSTRAINT no_overlapping_rows_orig EXCLUDE USING gist (seen WITH &&, 
unit_id WITH =, funding_source_id WITH =)
DEFERRABLE INITIALLY DEFERRED;

This ends up building a GiST index with 'seen' as the first column:

# \d housing.no_overlapping_rows_orig
  Index "housing.no_overlapping_rows_orig"
  Column   |Type |Definition 
---+-+---
 seen  | daterange   | seen
 unit_id   | gbtreekey16 | unit_id
 funding_source_id | gbtreekey16 | funding_source_id
gist, for table "housing.unit_funding_source_snapshot", deferrable, 
initially deferred


and provokes the following warning 14 times (on PostgreSQL 9.5):

DEBUG:  picksplit method for column 1 of index 
"no_overlapping_rows_orig" failed
HINT:  The index is not optimal. To optimize it, contact a developer, 
or try to use the column as the second one in the CREATE INDEX command.

When rephrasing to have the columns reordered favoring the primary key 
combination from the original table:

ALTER TABLE ONLY housing.unit_funding_source_snapshot
ADD CONSTRAINT no_overlapping_rows_two EXCLUDE USING gist (unit_id WITH 
=, funding_source_id WITH =, seen WITH &&)
DEFERRABLE INITIALLY DEFERRED;

we only get one such warning.

Question: 

	It seems that, contrary to expectation / experience with btree indices, 
the index backing no_overlapping_rows_two, with 'seen' as the non-leftmost 
column, is still usable / preferred for daterange-only queries:

# explain analyze select count(*) from  
housing.unit_funding_source_snapshot where seen && '[2016-09-01, 
2016-10-01)'::daterange;

QUERY PLAN  
   

---
 Aggregate  (cost=481.83..481.84 rows=1 width=0) (actual 
time=7.184..7.184 rows=1 loops=1)
   ->  Index Only Scan using no_overlapping_rows_two on 
unit_funding_source_snapshot  (cost=0.28..456.59 rows=10098 width=0) (actual 
time=0.074..6.298 rows=10095 loops=1)
Index Cond: (seen && '[2016-09-01,2016-10-01)'::daterange)
 Heap Fetches: 10095
 Planning time: 0.153 ms
 Execution time: 7.22

Re: [GENERAL] Issue with installation of postgresql server

2016-10-13 Thread George Weaver

Hi Kanakaraju,

This is a long shot, but in case "case" matters, on all my computers the 
variable is "ComSpec".


And I assume you added it as a System variable...

Cheers,
George

On 13/10/2016 6:24 AM, Karre, Kanakaraju (US - Hyderabad) wrote:


I have been trying to install Postgresql server on my machine (Windows 
8-64 bit version) but it is failing to install by throwing the 
following error –


“There has been an error. The environment variable COMSPEC does not 
seem to point to the cmd.exe or there is a trailing semicolon 
present. Please fix this variable and restart installation.”

I searched some of the forums where the solution mentioned was to add 
an environment variable “COMSPEC” with the value – 
“C:\Windows\system32\cmd.exe”. I did this but the issue persists. 
Also, there is no semicolon at the end in the value of COMSPEC variable.


I tried installing 2 versions of Postgresql but I am getting the same 
error for both. The versions are as follows :


1.postgresql-9.3.14-2-windows-x64.exe

2.postgresql-9.6.0-1-windows-x64.exe

Since the COMSPEC variable is present, it got me thinking if this is a 
general exception thrown while installing postgresql.


Also, are there any pre-conditions to be checked before installing the 
server. Could a particular software installation be hindering the 
postgresql installation (At some forums, it was suggested that the 
anti-virus uninstallation solved the issue. Could this be the reason?)


Please let me know how to resolve this.

Thanks & Regards

Kanakaraju






--
Cleartag Software, Inc.
972 McMillan Avenue
Winnipeg, MB
R3M 0V7
(204) 284-9839 phone/cell
(204) 284-9838 fax
gwea...@cleartagsoftware.com

Fast. Accurate. Easy.



Re: [GENERAL] SERIALIZABLE and INSERTs with multiple VALUES

2016-10-13 Thread Peter Geoghegan
On Thu, Oct 13, 2016 at 6:19 AM, Kevin Grittner  wrote:
>> I was under the impression that false positives of this kind are
>> allowed by SSI. Why focus on this false positive scenario in
>> particular?
>
> Every situation that generates a false positive hurts performance;
> we went to great lengths to minimize those cases.  In addition, we
> made sure that at the point that a serialization failure is
> returned, that retrying the transaction from the start could not
> fail on the same combination of transactions, by ensuring that at
> least one transaction in the set had successfully committed, and
> that it was a transaction which had done writes.  To generate a
> serialization failure on a single transaction has to be considered
> a bug, because a retry *CAN NOT SUCCEED*!  This is likely to break
> many frameworks designed to work with serializable transactions.

It sounds like you're talking about the original complaint about a
multi-value INSERT. It took me a minute to decide that that's probably
what you meant, because everyone already agrees that that isn't okay
-- you don't need to convince me.

We must still determine if a fix along the lines of the one proposed
by Thomas is basically acceptable (that is, that it does not clearly
break any documented guarantees, even if it is overly strict).
Separately, I'd be interested in seeing how specifically we could do
better with the patch that you have in the works for this.

In general, I see value in reducing false positives, but I don't
understand why your concern here isn't just about preferring to keep
them to a minimum (doing our best). In other words, I don't understand
why these false positives are special, and I'm still not even clear on
whether you are actually arguing that they are special. (Except, of
course, the multi-value case -- that's clearly not okay.)

So, with the fix proposed by Thomas applied, will there be any
remaining false positives that are qualitatively different to existing
false positive cases? And, if so, how?

-- 
Peter Geoghegan


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] "The index is not optimal" GiST warnings

2016-10-13 Thread Tom Lane
James Robinson  writes:
> ... provokes the following warning 14 times (on PostgreSQL 9.5):

>   DEBUG:  picksplit method for column 1 of index 
> "no_overlapping_rows_orig" failed
>   HINT:  The index is not optimal. To optimize it, contact a developer, 
> or try to use the column as the second one in the CREATE INDEX command.

This just indicates that you had a whole index page full of entries with
the same daterange value.  I wouldn't put a lot of concern into it
(there's a reason it's only a DEBUG message), especially if there are
only 14 occurrences in what I assume is a pretty large index.

> ... It seems that, contrary to expectation / experience with btree indices, 
> that the index backing no_overlapping_rows_two, with 'seen' as the 
> non-leftmost column, still is useable / preferred for daterange-only queries:

GiST indexes have a preference for queries on the leading column rather
than later columns, but it's less strong than is true for btree.  AFAICS
the planner doesn't account for such an effect at all.  If you're seeing
a cost estimate difference, that must come just from one index being
physically larger than the other.  That wouldn't be a surprising thing,
due to possibly different choices about page splits, but it's hard to say
whether it really corresponds to any noticeable difference in access
speed.  Your actual-runtime results suggest that the daterange-first
index is faster, and I'd tend to believe that over any other evidence.
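
If it helps, the physical size difference mentioned above can be checked directly 
(a sketch, using the two constraint/index names from earlier in the thread):

SELECT pg_size_pretty(pg_relation_size('housing.no_overlapping_rows_orig')) AS orig,
       pg_size_pretty(pg_relation_size('housing.no_overlapping_rows_two'))  AS two;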

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SERIALIZABLE and INSERTs with multiple VALUES

2016-10-13 Thread Kevin Grittner
On Thu, Oct 13, 2016 at 2:16 PM, Peter Geoghegan  wrote:
> On Thu, Oct 13, 2016 at 6:19 AM, Kevin Grittner  wrote:

>> Every situation that generates a false positive hurts performance;
>> we went to great lengths to minimize those cases.

>> To generate a
>> serialization failure on a single transaction has to be considered
>> a bug, because a retry *CAN NOT SUCCEED*!  This is likely to break
>> many frameworks designed to work with serializable transactions.
>
> It sounds like you're talking about the original complaint about a
> multi-value INSERT. It took me a minute to decide that that's probably
> what you meant, because everyone already agrees that that isn't okay
> -- you don't need to convince me.

That second part, yeah -- that's about generating a serialization
failure with one transaction.  It's pretty bad if you can't get a
set that contains one transaction to behave as though the
transactions in that set were run one at a time.  ;-)

> We must still determine if a fix along the lines of the one proposed
> by Thomas is basically acceptable (that is, that it does not clearly
> break any documented guarantees, even if it is overly strict).
> Separately, I'd be interested in seeing how specifically we could do
> better with the patch that you have in the works for this.

Basically, rather than just failing, I think we should call
CheckForSerializableConflictOut() (which determines whether the
tuple we are reading causes a rw-conflict between our current
transaction and the transaction which last wrote that tuple) and
PredicateLockTuple() (which tells later updates or deletes that
we've read the tuple).

> In general, I see value in reducing false positives, but I don't
> understand why your concern here isn't just about preferring to keep
> them to a minimum (doing our best).

That's exactly what I want to do, rather that what is the easiest
and first thing to come to mind.

> In other words, I don't understand
> why these false positives are special, and I'm still not even clear on
> whether you are actually arguing that they are special. (Except, of
> course, the multi-value case -- that's clearly not okay.)
>
> So, with the fix proposed by Thomas applied, will there be any
> remaining false positives that are qualitatively different to existing
> false positive cases? And, if so, how?

The INSERT ... ON CONFLICT DO NOTHING case does not write the
tuple, so this would be the first place we would be generating a
"write conflict" when we're not writing a tuple.  (You might argue
that "behind the scenes we write a tuple that disappears
automagically, but that's an implementation detail that might
someday change and should not be something users need to think
about a lot.)  We put a lot of effort into minimizing false
positives everywhere we could, and I'm not sure why you seem to be
arguing that we should not do so here.  If it proves impractical to
"do it right", we would not destroy logical correctness by using
the patch Thomas proposed, but we would increase the number of
transaction rollbacks and retries, which has a performance hit.

BTW, feel free to post a fix for the locking issue separately
when/if you have one.  I'm not looking at that for the moment,
since it sounded like you had already looked at it and were working
on something.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SERIALIZABLE and INSERTs with multiple VALUES

2016-10-13 Thread Kevin Grittner
On Thu, Oct 13, 2016 at 3:16 PM, Kevin Grittner  wrote:
> On Thu, Oct 13, 2016 at 2:16 PM, Peter Geoghegan  wrote:

>> We must still determine if a fix along the lines of the one proposed
>> by Thomas is basically acceptable (that is, that it does not clearly
>> break any documented guarantees, even if it is overly strict).
>> Separately, I'd be interested in seeing how specifically we could do
>> better with the patch that you have in the works for this.
>
> Basically, rather than just failing, I think we should call
> CheckForSerializableConflictOut() (which determines whether the
> tuple we are reading causes a rw-conflict between our current
> transaction and the transaction which last wrote that tuple) and
> PredicateLockTuple() (which tells later updates or deletes that
> we've read the tuple).

I'm wondering whether the error is appropriate on the INSERT ... ON
CONFLICT UPDATE case.  For example, with
ExecCheckHeapTupleVisible() commented out, this does not violate
the SERIALIZABLE requirements (which is that the behavior of any
set of concurrent serializable transactions which successfully
commit must be consistent with running them one at a time in some
order):

CREATE TABLE with_pk (i integer PRIMARY KEY, v text);

-- T1:
START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO with_pk VALUES (4) ON CONFLICT DO NOTHING;

-- T2:
START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO with_pk AS t VALUES (8)
  ON CONFLICT (i) DO UPDATE SET v = 'updated';
-- T2 blocks, waiting for T1

-- T1:
COMMIT;
-- T2 unblocks and does the "ON CONFLICT ... UPDATE"

-- T2:
COMMIT;

It seems to me that the result is consistent with T1 -> T2.  There
is no cycle in the apparent order of execution, and no error is
needed.  Can you show a case where there is a problem?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] postgres_fdw and permissions

2016-10-13 Thread Jeff Janes
Say user1 creates a foreign table, and grants select permissions on it to
user2.

When user2 selects from the table, whose USER MAPPING is going to be used,
the table owner (user1) or the executing user (user2)?

Is this part of the FDW infrastructure or SQL/MED spec, and so above the
postgres_fdw implementation, or is it a fdw implementor's choice?

Cheers,

Jeff


[GENERAL] functions - triggers cross schemas

2016-10-13 Thread Armand Pirvu (home)
Two schemas, schema1 and schema2, with the same tables: a base table and a 
tracking one. 


The search path is set to schema1 by default.  After
insert into schema2.test1 (col2, col3) values ('foo1','foo2');
I get an entry in schema1.test1_hist and not in schema2.test1_hist.
I understand that the trigger inherits the schema of its table. But does that 
mean that 
a - the trigger will be created in the same schema, 
or
b - it will use the current schema and pass it to the function?  It sure seems 
so; even if I drop the function and trigger from schema1 I still have the same 
behavior.  Even if I do 
"CREATE TRIGGER test1_audit_trig_prev
after insert or update or delete on schema2.test1
  FOR EACH ROW EXECUTE PROCEDURE tbl_audit_func_prev();
"
I still get the same behavior

The only way I could make it work was to add, in each function,
SET search_path = schema1 and SET search_path = schema2 respectively.

Since I am doing an insert into schema2.test1 from schema1, I was expecting the 
function from schema2 to be executed, which would then put data in 
schema2.test1_hist.

I did play with security definer/invoker but no luck


Any suggestion other than hardcoding SET search_path within the 
function code, and aside from plproxy?


Thanks
Armand

-- Code
In both schemas I have


create table test1 (col1 bigserial, col2 char(10), col3 char(10), primary key 
(col1, col2));

create table test1_hist (
stmt_seq bigint not null , 
stmt_type char(6) not null,
stmt_subtype char(1) not null,
stmt_date timestamp not null,
like test1);

In schema1
create sequence seq_audit_func;

CREATE OR REPLACE FUNCTION tbl_audit_func () RETURNS TRIGGER AS $$
DECLARE
  temp_new RECORD;
  temp_old RECORD;
  tgopdet1 char(1);
  tgopdet2 char(1);
  vdate timestamp;
begin
  IF (TG_OP = 'INSERT') THEN 
  temp_new := NEW;
  tgopdet2='I';
  vdate = now();
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func'), TG_OP, tgopdet2, vdate, 
temp_new;
  END IF;
  IF (TG_OP = 'UPDATE') THEN
  temp_old := OLD;
  temp_new := NEW;
  tgopdet1='D';
  tgopdet2='I';
  vdate = now();
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func'), TG_OP, tgopdet1, vdate, 
temp_old;
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func'), TG_OP, tgopdet2, vdate, 
temp_new;
  END IF;
  IF (TG_OP = 'DELETE') THEN
  temp_old := OLD;
  tgopdet1='D';
  vdate = now();
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func'), TG_OP, tgopdet1, vdate, 
temp_old;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql  ;


drop trigger test1_audit_trig on test1;
CREATE TRIGGER test1_audit_trig
after insert or update or delete on test1
  FOR EACH ROW EXECUTE PROCEDURE tbl_audit_func();


In schema2
create sequence seq_audit_func_prev;

CREATE OR REPLACE FUNCTION tbl_audit_func_prev () RETURNS TRIGGER AS $$
DECLARE
  temp_new RECORD;
  temp_old RECORD;
  tgopdet1 char(1);
  tgopdet2 char(1);
  vdate timestamp;
begin
  IF (TG_OP = 'INSERT') THEN 
  temp_new := NEW;
  tgopdet2='I';
  vdate = now();
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func_prev'), TG_OP, tgopdet2, 
vdate, temp_new;
  END IF;
  IF (TG_OP = 'UPDATE') THEN
  temp_old := OLD;
  temp_new := NEW;
  tgopdet1='D';
  tgopdet2='I';
  vdate = now();
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func_prev'), TG_OP, tgopdet1, 
vdate, temp_old;
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func_prev'), TG_OP, tgopdet2, 
vdate, temp_new;
  END IF;
  IF (TG_OP = 'DELETE') THEN
  temp_old := OLD;
  tgopdet1='D';
  vdate = now();
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func_prev'), TG_OP, tgopdet1, 
vdate, temp_old;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql ;



drop trigger test1_audit_trig_prev on test1;
CREATE TRIGGER test1_audit_trig_prev
after insert or update or delete on test1
  FOR EACH ROW EXECUTE PROCEDURE tbl_audit_func_prev();







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SERIALIZABLE and INSERTs with multiple VALUES

2016-10-13 Thread Thomas Munro
On Fri, Oct 14, 2016 at 2:04 AM, Kevin Grittner  wrote:
> On Wed, Oct 12, 2016 at 8:06 PM, Thomas Munro  
> wrote:
>> The "higher isolation levels" probably shouldn't be treated the same way.
>>
>> I think Peter's right about REPEATABLE READ.  We should definitely
>> raise the error immediately as we do in that level, because our RR
>> (SI) doesn't care about write skew and all that stuff, it just
>> promises that you can only see data in your snapshot.
>
> But the whole point of the special code for both RI and INSERT/ON
> CONFLICT is to get "underneath" that and provide a "primitive" that
> can see things an application statement can't, for better
> performance and error handling.  What SERIALIZABLE promises is that
> it runs exactly the same as REPEATABLE READ, but with some
> additional monitoring for serialization failure errors in some
> places that REPEATABLE READ does not generate them -- this would be
> the first and only place that SERIALIZABLE would break that model.
> The idea seems completely wrong and arbitrary.

Ugh, yeah.  Thanks for this reminder of the relationship between SI
and SSI, which I somehow temporarily lost sight of.

> Where do you see a problem if REPEATABLE READ handles INSERT/ON
> CONFLICT without error?  In many cases it would actually be
> providing a result consistent with a serial execution of the
> transactions; and where it doesn't, it would be the same anomalies
> that are possible with anything else under REPEATABLE READ.

I thought that there was something fishy about the idea of not running
Peter's check in the case of ON CONFLICT DO NOTHING in RR, because
then there isn't an opportunity to detect serialization failure that
the DO UPDATE variant has.  Upon reflection, DO NOTHING is not very
different from INSERT with an exception handler for unique_violation
that does nothing, and that doesn't cause RR to raise an error.  I see
now that you are right, and the check is probably bogus for RR.
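
To illustrate that analogy (a sketch using the with_pk table from elsewhere in the 
thread), the DO NOTHING form behaves much like swallowing the unique_violation by 
hand, which REPEATABLE READ has never treated as a serialization problem:

DO $$
BEGIN
    INSERT INTO with_pk VALUES (1);
EXCEPTION WHEN unique_violation THEN
    NULL;  -- ignore it, roughly what ON CONFLICT DO NOTHING does for one row
END;
$$;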

>> But yeah, the existing code raises false positive serialization
>> failures under SERIALIZABLE, and that's visible in the isolation test
>> I posted: there is actually a serial order of those transactions with
>> the same result.
>
> Exactly.  The error based on the write conflict with ON CONFLICT DO
> NOTHING in your patch is really a false positive.  That doesn't
> break correctness, but it hurts performance, so it should be
> avoided if possible.

Agreed.  The check is bogus for SERIALIZABLE too, if we have proper SSI checks.

>> When working on commit fcff8a57 I became suspicious of the way ON
>> CONFLICT interacts with SSI, as I mentioned in passing back then[1],
>> thinking mainly of false negatives.  I failed to find a
>> non-serializable schedule involving ON CONFLICT that was allowed to
>> run, though I didn't spend much time on it.   One thing that worries
>> me is the final permutation of read-write-unique-4.spec, which
>> produces an arguably spurious UCV, that is, a transaction that doesn't
>> commit but raises a UCV instead of the serialization failure you might
>> expect.  The ON CONFLICT equivalent might be a transaction that takes
>> the ON CONFLICT path and then commits, even though it should be
>> considered non-serializable.  I would really like to understand that
>> case better, and until then I wouldn't bet my boots that it isn't
>> possible to commit anomalies using ON CONFLICT under SERIALIZABLE
>> without Peter's check (or even with it), despite the fact that it
>> reaches predicate locking code via heap_fetch etc.
>
> Hm.  With the duplicate key error I fail to see how any anomaly
> could make it to a committed state in the database, although I
> agree it is unfortunate that there is that one case where it really
> should be considered a serialization failure that we haven't yet
> coerced to yield that instead of the duplicate key error.

Right, in the unique_violation case it can't commit so there's no
problem (it would just be nicer to users if we could catch that case;
you might call it a false negative but it is harmless because a
unique_violation saves the day).  What I'm wondering about though is
whether a similar ON CONFLICT schedule suffers a similar problem, but
would allow you to commit.  For example, I think the ON CONFLICT
equivalent might be something like the following (rather contrived)
schedule, which happily commits if you comment out Peter's check:

(1)  postgres=# create table bank_account (id int primary key, cash int);
(1)  CREATE TABLE
(1)  postgres=# begin transaction isolation level serializable ;
(1)  BEGIN

(2)  postgres=# begin transaction isolation level serializable ;
(2)  BEGIN

(1)  postgres=# select * from bank_account where id = 1;
(1)  ┌────┬──────┐
(1)  │ id │ cash │
(1)  ├────┼──────┤
(1)  └────┴──────┘
(1)  (0 rows)

(2)  postgres=# insert into bank_account values (1, 100);
(2)  INSERT 0 1

(1)  postgres=# insert into bank_account values (1, 200) on conflict do nothing;
(1)  ...waits for tx2...

(

Re: [GENERAL] functions - triggers cross schemas

2016-10-13 Thread David G. Johnston
On Thu, Oct 13, 2016 at 3:18 PM, Armand Pirvu (home)  wrote:

> 2 schemas , schema1 and schema2, with same tables , a base table and a
> tracking one.
>
>
> Search path is set to schema1 by default.
> insert into schema2.test1 (col2 , col3) values ('foo1','foo2')
> I get an entry in schema1.test1_hist and not in schema2.test1_hist
>

See:

https://www.postgresql.org/docs/current/static/plpgsql-trigger.html#PLPGSQL-DML-TRIGGER

TG_TABLE_SCHEMA

Any object name not schema qualified will use search_path for resolution.
Search path doesn't change upon entering a function unless the function
defines its own - and yours does not.
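
A sketch of how the dynamic INSERT in the trigger function could be schema-qualified 
via TG_TABLE_SCHEMA, so the history row always lands next to the table that fired 
the trigger (format() with %I quotes the identifiers; the sequence name would still 
resolve via search_path unless qualified as well):

  EXECUTE format('INSERT INTO %I.%I SELECT $1, $2, $3, $4, $5.*',
                 TG_TABLE_SCHEMA, TG_TABLE_NAME || '_hist')
    USING nextval('seq_audit_func'), TG_OP, tgopdet2, vdate, temp_new;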

David J.


Re: [GENERAL] functions - triggers cross schemas

2016-10-13 Thread Armand Pirvu (home)
OK, that was my oversight in understanding that the path does not change upon 
entering a function. I assumed that it does, by simply picking it up from 
schema2.test1.

Thanks a bunch

Armand

On Oct 13, 2016, at 5:31 PM, David G. Johnston  
wrote:

> On Thu, Oct 13, 2016 at 3:18 PM, Armand Pirvu (home)  
> wrote:
> 2 schemas , schema1 and schema2, with same tables , a base table and a 
> tracking one.
> 
> 
> Search path is set to schema1 by default.
> insert into schema2.test1 (col2 , col3) values ('foo1','foo2')
> I get an entry in schema1.test1_hist and not in schema2.test1_hist
> 
> See:
> 
> https://www.postgresql.org/docs/current/static/plpgsql-trigger.html#PLPGSQL-DML-TRIGGER
> 
> TG_TABLE_SCHEMA
> 
> Any object name not schema qualified will use search_path for resolution.  
> Search path doesn't change upon entering a function unless the function 
> defines its own - and your's does not.
> 
> David J.
> 



Re: [GENERAL] Issue with installation of postgresql server

2016-10-13 Thread Adrian Klaver

On 10/13/2016 04:24 AM, Karre, Kanakaraju (US - Hyderabad) wrote:

I have been trying to install Postgresql server on my machine (Windows
8-64 bit version) but it is failing to install by throwing the following
error –

“There has been an error. The environment variable COMSPEC does not
seem to point to the cmd.exe or there is a trailing semicolon present.
Please fix this variable and restart installation.”

I searched some of the forums where the solution mentioned was to add an
environment variable “COMSPEC” with the value –
“C:\Windows\system32\cmd.exe”. I did this but the issue persists. Also,
there is no semicolon at the end in the value of COMSPEC variable.


See if the below helps:

https://technet.microsoft.com/en-us/library/cc976142.aspx
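
Independently of that, a quick way to see what the installers will find, from a 
Command Prompt (the value should end in cmd.exe, with no trailing semicolon):

echo %ComSpec%
rem typically prints: C:\Windows\system32\cmd.exe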





I tried installing 2 versions of Postgresql but I am getting the same
error for both. The versions are as follows :

1.   postgresql-9.3.14-2-windows-x64.exe

2.   postgresql-9.6.0-1-windows-x64.exe


You are getting these from where?





Since the COMSPEC variable is present, it got me thinking if this is a
general exception thrown while installing postgresql.


But is it in the format shown in the technet article above, in 
particular the Data type?




Also, are there any pre-conditions to be checked before installing the
server. Could a particular software installation be hindering the
postgresql installation (At some forums, it was suggested that the
anti-virus uninstallation solved the issue. Could this be the reason?)



Please let me know how to resolve this.



Thanks & Regards

Kanakaraju

















--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MultiXact member wraparound protections are disabled

2016-10-13 Thread AnandKumar, Karthik
Thank you for your help Alvaro - we really appreciate it.
The error in fact stopped this morning - we took downtime and ran a vacuum 
across all of our tables, and saw increased auto vacuum activity as well.

It looks like it bumped up oldestMultiXid to something other than 1 now:

postgres@site-db01a:~ $ /usr/pgsql-9.4/bin/pg_controldata 
/var/lib/pgsql/cmates/data | grep -i multi 
Latest checkpoint's NextMultiXactId: 785051 
Latest checkpoint's NextMultiOffset: 1446371 
Latest checkpoint's oldestMultiXid: 575211 
Latest checkpoint's oldestMulti's DB: 12998


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Passing of where clause to remote table in FDW

2016-10-13 Thread Jaisingkar, Piyush
Thanks for the suggestion.  I used ||.  Actually, I was reluctant to use it 
because my columns could have contained null values, but I have handled that 
using COALESCE.

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Thursday, October 13, 2016 7:37 PM
To: Jaisingkar, Piyush
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Passing of where clause to remote table in FDW

"Jaisingkar, Piyush"  writes:
> While using fdw I am trying to get and filtered data from a foreign table. 
> Filtering using a regex in where clause as follows:

> EXPLAIN ANALYZE VERBOSE SELECT CAF1.uprn
>   FROM XYZ as CAF1
>   WHERE
>  
> (regexp_replace(upper(concat(CAF1.SUB_BUILDING_NAME,CAF1.BUILDING_NAME
> ,CAF1.BUILDING_NUMBER,CAF1.STREET_DESCRIPTION , CAF1.TOWN_NAME)), 
> '[^a-zA-Z0-9]','') = '84WELLINGTONROADEXETER');

concat() is not immutable (it's only stable) so it can't be sent for remote 
execution.  Use the || operator instead.

regards, tom lane

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general