Re: [HACKERS] Possible PANIC in PostPrepare_Locks

2013-01-11 Thread Heikki Linnakangas

On 11.01.2013 04:16, Tom Lane wrote:

[explanation of a race condition]


Good catch.


Also, it looks like we'll need two code paths in PostPrepare_Locks to
deal with the possibility that a conflicting entry already exists?
I'm not sure this is possible, but I'm not sure it's not, either.


If I understand this correctly, that would mean that someone else is 
holding a lock that conflicts with the lock the 
transaction-being-prepared holds. That shouldn't happen.


- Heikki




[HACKERS] Get current query in a trigger function

2013-01-11 Thread Vlad Arkhipov
Is there any simple way of getting the query for which a trigger was 
executed? debug_query_string and ActivePortal->sourceText return the 
top-level query when there are nested triggers.
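
For illustration, a minimal sketch of the behaviour being described, as a
C trigger function (the function name is hypothetical; this is only a
sketch of the problem, not a proposed fix). debug_query_string is set once
per top-level statement, so a trigger fired from a nested statement still
sees the outermost query text:

#include "postgres.h"
#include "fmgr.h"
#include "commands/trigger.h"
#include "tcop/tcopprot.h"		/* declares debug_query_string */

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(show_firing_query);

Datum
show_firing_query(PG_FUNCTION_ARGS)
{
	TriggerData *trigdata = (TriggerData *) fcinfo->context;

	if (!CALLED_AS_TRIGGER(fcinfo))
		elog(ERROR, "show_firing_query: not called by trigger manager");

	/*
	 * This reports the top-level statement, not the nested statement that
	 * actually fired the trigger, because debug_query_string is only set
	 * once per client command.
	 */
	elog(NOTICE, "firing query: %s",
		 debug_query_string ? debug_query_string : "<none>");

	return PointerGetDatum(trigdata->tg_trigtuple);
}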





Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Amit Kapila
On Wednesday, January 09, 2013 4:57 PM Simon Riggs wrote:
> On 9 January 2013 08:05, Amit kapila  wrote:
> 
> > Update patch contains handling of below Comments
> 
> Thanks
> 
> 
> > Test results with modified pgbench (1800 record size) on the latest patch:
> >
> > -Patch-             -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
> > Head                  831       4.17 GB     1416      7.13 GB
> > WAL modification      846       2.36 GB     1712      3.31 GB
> >
> > -Patch-             -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
> > Head                 2196      11.01 GB     2758     13.88 GB
> > WAL modification     3295       5.87 GB     5472      9.02 GB
> 
> And test results on normal pgbench?

configuration: 

shared_buffers = 4GB 
wal_buffers = 16MB 
checkpoint_segments = 256 
checkpoint_interval = 15min 
autovacuum = off 
server_encoding = SQL_ASCII 
client_encoding = UTF8 
lc_collate = C 
lc_ctype = C 


init: 

pgbench -s 75 -i -F 80 

run: 

pgbench -T 600 


Test results with original pgbench (synccommit off) on the latest patch: 


-Patch-             -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
Head                 1459       1.40 GB    2491       1.70 GB
WAL modification     1558       1.38 GB    2441       1.59 GB


-Patch-             -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
Head                 5139       2.49 GB   10651       4.72 GB
WAL modification     5224       2.28 GB   11329       3.96 GB



Test results with original pgbench (synccommit on) on the latest patch: 


-Patch-             -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
Head                  146       0.45 GB     167       0.49 GB
WAL modification      144       0.44 GB     166       0.49 GB


-Patch-             -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
Head                  325       0.77 GB     603       1.03 GB
WAL modification      321       0.76 GB     604       1.01 GB



The results are similar to those noted by Kyotaro-San. The WAL size is
reduced even for original pgbench.
There is a slight performance dip in some of the cases for original pgbench.

With Regards,
Amit Kapila.





Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Simon Riggs
On 11 January 2013 10:40, Amit Kapila  wrote:

> Test results with original pgbench (synccommit off) on the latest patch:
>
>
> -Patch-             -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
> Head                 1459       1.40 GB    2491       1.70 GB
> WAL modification     1558       1.38 GB    2441       1.59 GB
>
>
> -Patch-             -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
> Head                 5139       2.49 GB   10651       4.72 GB
> WAL modification     5224       2.28 GB   11329       3.96 GB

> There is a slight performance dip in some of the cases for original pgbench.

Is this just one run? Can we see 3 runs please?

Can we investigate the performance dip at c=2?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] bugfix: --echo-hidden is not supported by \sf statements

2013-01-11 Thread Pavel Stehule
Hello

this is a very simple patch - it enables hidden queries for the commands
\sf and \ef, to be consistent with the other describing commands.

bash-4.1$ ./psql postgres -E
psql (9.3devel)
Type "help" for help.

postgres=# \sf+ foo
********* QUERY **********
SELECT pg_catalog.pg_get_functiondef(16385)
**************************

CREATE OR REPLACE FUNCTION public.foo(a integer)
 RETURNS integer
 LANGUAGE plpgsql
1   AS $function$
2   begin
3 return 10/a;
4   end;
5   $function$

Regards

Pavel Stehule


sf_hidden_query.patch
Description: Binary data



[HACKERS] allowing privileges on untrusted languages

2013-01-11 Thread Peter Eisentraut
Here is a proposed patch for the issue discussed in:

I'd propose getting rid of lanpltrusted, at least for access
checking.  Instead, just don't install USAGE privileges by
default for those languages.

The reason is that there is value in having a role that can
deploy schemas, possibly containing functions in untrusted
languages, without having to be a full superuser.  Just like
you can have a user that can create roles without being a
superuser.

It turned out that actually getting rid of lanpltrusted would be too
invasive, especially because some language handlers use it to determine
their own behavior.

So instead, the lanpltrusted attribute now just determines what the
default privileges of the language are, and all the checks that require
superuserness to do anything with untrusted languages are removed.

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 9144eec..3988de3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -3838,8 +3838,8 @@ pg_language Columns
   
True if this is a trusted language, which means that it is believed
not to grant access to anything outside the normal SQL execution
-   environment.  Only superusers can create functions in untrusted
-   languages.
+   environment.  Untrusted languages don't have default privileges
+   assigned.
   
  
 
diff --git a/doc/src/sgml/ref/create_language.sgml b/doc/src/sgml/ref/create_language.sgml
index 0995b13..891f472 100644
--- a/doc/src/sgml/ref/create_language.sgml
+++ b/doc/src/sgml/ref/create_language.sgml
@@ -121,9 +121,7 @@ Parameters
   TRUSTED specifies that the language does
not grant access to data that the user would not otherwise
have.  If this key word is omitted
-   when registering the language, only users with the
-   PostgreSQL superuser privilege can
-   use this language to create new functions.
+   when registering the language, no default privileges are assigned.
   
  
 
diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c
index 0bf5356..5c90c68 100644
--- a/src/backend/catalog/aclchk.c
+++ b/src/backend/catalog/aclchk.c
@@ -2560,13 +2560,6 @@ static AclMode pg_aclmask(AclObjectKind objkind, Oid table_oid, AttrNumber attnu
 
 		pg_language_tuple = (Form_pg_language) GETSTRUCT(tuple);
 
-		if (!pg_language_tuple->lanpltrusted)
-			ereport(ERROR,
-	(errcode(ERRCODE_WRONG_OBJECT_TYPE),
-	 errmsg("language \"%s\" is not trusted",
-			NameStr(pg_language_tuple->lanname)),
-   errhint("Only superusers can use untrusted languages.")));
-
 		/*
 		 * Get owner ID and working copy of existing ACL. If there's no ACL,
 		 * substitute the proper default.
@@ -2576,7 +2569,10 @@ static AclMode pg_aclmask(AclObjectKind objkind, Oid table_oid, AttrNumber attnu
    &isNull);
 		if (isNull)
 		{
-			old_acl = acldefault(ACL_OBJECT_LANGUAGE, ownerId);
+			if (pg_language_tuple->lanpltrusted)
+				old_acl = acldefault(ACL_OBJECT_LANGUAGE, ownerId);
+			else
+				old_acl = make_empty_acl();
 			/* There are no old member roles according to the catalogs */
 			noldmembers = 0;
 			oldmembers = NULL;
@@ -3823,7 +3819,10 @@ static AclMode pg_aclmask(AclObjectKind objkind, Oid table_oid, AttrNumber attnu
 	if (isNull)
 	{
 		/* No ACL, so build default ACL */
-		acl = acldefault(ACL_OBJECT_LANGUAGE, ownerId);
+		if (((Form_pg_language) GETSTRUCT(tuple))->lanpltrusted)
+			acl = acldefault(ACL_OBJECT_LANGUAGE, ownerId);
+		else
+			acl = make_empty_acl();
 		aclDatum = (Datum) 0;
 	}
 	else
diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index c858511..897e170 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -854,23 +854,11 @@
 	languageOid = HeapTupleGetOid(languageTuple);
 	languageStruct = (Form_pg_language) GETSTRUCT(languageTuple);
 
-	if (languageStruct->lanpltrusted)
-	{
-		/* if trusted language, need USAGE privilege */
-		AclResult	aclresult;
-
-		aclresult = pg_language_aclcheck(languageOid, GetUserId(), ACL_USAGE);
-		if (aclresult != ACLCHECK_OK)
-			aclcheck_error(aclresult, ACL_KIND_LANGUAGE,
-		   NameStr(languageStruct->lanname));
-	}
-	else
-	{
-		/* if untrusted language, must be superuser */
-		if (!superuser())
-			aclcheck_error(ACLCHECK_NO_PRIV, ACL_KIND_LANGUAGE,
-		   NameStr(languageStruct->lanname));
-	}
+	/* need USAGE privilege on language */
+	aclresult = pg_language_aclcheck(languageOid, GetUserId(), ACL_USAGE);
+	if (aclresult != ACLCHECK_OK)
+		aclcheck_error(aclresult, ACL_KIND_LANGUAGE,
+	   NameStr(languageStruct->lanname));
 
 	languageValidator = languageStruct->lanvalidator;
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9b71c75..dfb8305 100644
--- a/src/bi

Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Amit Kapila
On Friday, January 11, 2013 4:28 PM Simon Riggs wrote:
> On 11 January 2013 10:40, Amit Kapila  wrote:
> 
> > Test results with original pgbench (synccommit off) on the latest patch:
> >
> >
> > -Patch-             -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
> > Head                 1459       1.40 GB    2491       1.70 GB
> > WAL modification     1558       1.38 GB    2441       1.59 GB
> >
> >
> > -Patch-             -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
> > Head                 5139       2.49 GB   10651       4.72 GB
> > WAL modification     5224       2.28 GB   11329       3.96 GB
> 
> > There is a slight performance dip in some of the cases for original pgbench.
> 
> Is this just one run? Can we see 3 runs please?

  This is the average of 3 runs.

 -Patch-               -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
  Head-1                 1648      1.47 GB     2491      1.69 GB
  Head-2                 1538      1.43 GB     2529      1.72 GB
  Head-3                 1192      1.31 GB     2453      1.70 GB

  AvgHead                1459      1.40 GB     2491      1.70 GB

  WAL modification-1     1618      1.40 GB     2351      1.56 GB
  WAL modification-2     1623      1.40 GB     2411      1.59 GB
  WAL modification-3     1435      1.34 GB     2562      1.61 GB

  WAL modification-Avg   1558      1.38 GB     2441      1.59 GB


 -Patch-               -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
  Head-1                 5285      2.53 GB    11858      5.43 GB
  Head-2                 5105      2.47 GB    10724      4.98 GB
  Head-3                 5029      2.46 GB     9372      3.75 GB

  AvgHead                5139      2.49 GB    10651      4.72 GB

  WAL modification-1     5117      2.26 GB    12092      4.42 GB
  WAL modification-2     5142      2.26 GB     9965      3.48 GB
  WAL modification-3     5413      2.33 GB    11930      3.99 GB

  WAL modification-Avg   5224      2.28 GB    11329      3.96 GB


> Can we investigate the performance dip at c=2?
  Please consider the following points for this dip:
  1. For synchronous commit = off, there is always slight variation in the data.
  2. The size of WAL is reduced.
  3. For small rows (128 bytes), the performance difference created by this
     algorithm sometimes doesn't help much, as the size is not reduced
     significantly and there is an equivalent overhead for delta compression.
     We could add a check that applies this optimization only if the row
     length is greater than some threshold (128 bytes, 200 bytes) - a sketch
     of such a check follows below - but I feel that as the performance dip
     is not much and there is also the WAL reduction gain, it should be okay
     without any check as well.
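
A rough sketch of such a check (the function name and the exact threshold
are hypothetical, not taken from the patch):

/* Assumed threshold; something in the 128-200 byte range as discussed. */
#define WAL_UPDATE_DELTA_MIN_LEN	128

/*
 * Decide whether delta-encoding the update is likely to pay off.  For
 * short rows the WAL saving rarely covers the cost of computing the
 * delta while the buffer lock is held, so fall back to logging the
 * whole new tuple.
 */
static bool
wal_update_delta_worthwhile(Size oldlen, Size newlen)
{
	return oldlen >= WAL_UPDATE_DELTA_MIN_LEN &&
		   newlen >= WAL_UPDATE_DELTA_MIN_LEN;
}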

With Regards,
Amit Kapila.





[HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Pavel Stehule
Hello

We successfully use auto_explain. What we miss is support for cancelled
queries - we need the same info for queries that we cancel after x
minutes.

A subtask of this feature could be dumping the currently executed query,
with its plan and the complete query string, to the log after receiving
some signal - maybe SIGTRAP.

What do you think about this feature?

Regards

Pavel Stehule




Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Simon Riggs
On 11 January 2013 12:30, Amit Kapila  wrote:

>> Is this just one run? Can we see 3 runs please?
>
>   This average of 3 runs.

The results are so variable it's almost impossible to draw any
conclusions at all. I think if we did harder stats on those we'd get
nothing.

Can you do something to bring that in? Or just do more tests to get a
better view?


>  -Patch-               -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
>   Head-1                 1648      1.47 GB     2491      1.69 GB
>   Head-2                 1538      1.43 GB     2529      1.72 GB
>   Head-3                 1192      1.31 GB     2453      1.70 GB
>
>   AvgHead                1459      1.40 GB     2491      1.70 GB
>
>   WAL modification-1     1618      1.40 GB     2351      1.56 GB
>   WAL modification-2     1623      1.40 GB     2411      1.59 GB
>   WAL modification-3     1435      1.34 GB     2562      1.61 GB
>
>   WAL modification-Avg   1558      1.38 GB     2441      1.59 GB
>
>
> -Patch-               -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
>   Head-1                 5285      2.53 GB    11858      5.43 GB
>   Head-2                 5105      2.47 GB    10724      4.98 GB
>   Head-3                 5029      2.46 GB     9372      3.75 GB
>
>   AvgHead                5139      2.49 GB    10651      4.72 GB
>
>   WAL modification-1     5117      2.26 GB    12092      4.42 GB
>   WAL modification-2     5142      2.26 GB     9965      3.48 GB
>   WAL modification-3     5413      2.33 GB    11930      3.99 GB
>
>   WAL modification-Avg   5224      2.28 GB    11329      3.96 GB
>
>
>> Can we investigate the performance dip at c=2?
>   Please consider the following points for this dip:
>   1. For synchronous commit = off, there is always slight variation in the data.
>   2. The size of WAL is reduced.
>   3. For small rows (128 bytes), the performance difference created by this
>      algorithm sometimes doesn't help much, as the size is not reduced
>      significantly and there is an equivalent overhead for delta compression.
>      We could add a check that applies this optimization only if the row
>      length is greater than some threshold (128 bytes, 200 bytes), but I feel
>      that as the performance dip is not much and there is also the WAL
>      reduction gain, it should be okay without any check as well.
>
> With Regards,
> Amit Kapila.
>



-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Simon Riggs
On 28 December 2012 10:21, Simon Riggs  wrote:

> * There is a fixed 75% heuristic in the patch.

I'm concerned that we're doing extra work while holding the buffer
locked, which will exacerbate any block contention that exists.

We have a list of the columns that the UPDATE is touching since we use
that to check column permissions for the UPDATE. Which means we should
be able to use that list to check only the columns actually changing
in this UPDATE statement.

That will likely save us some time during the compression check.

Can you look into that please? I don't think it will be much work.

I've moved this to the next CF. I'm planning to review this one first.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Amit Kapila
On Friday, January 11, 2013 6:18 PM Simon Riggs wrote:
> On 11 January 2013 12:30, Amit Kapila  wrote:
> 
> >> Is this just one run? Can we see 3 runs please?
> >
> >   This average of 3 runs.
> 
> The results are so variable it's almost impossible to draw any
> conclusions at all. I think if we did harder stats on those we'd get
> nothing.
> 
> Can you do something to bring that in? Or just do more tests to get a
> better view?

To be honest, I have tried this set of 3 readings 2 times and there is
similar fluctuation for sync commit = off.
What I can do early next week is:
a. run this test 10 times to see the results.
b. run the tests with record length 256 instead of 128.

However, I think my results for sync commit = on match Kyotaro-San's.

Please suggest if you have anything else in mind.

This is for sync mode = off; if you see the results for sync mode = on,
they are comparatively consistent.
I think for sync commit = off, there is always fluctuation in the results.
The sync mode = on results are as below:

 -Patch-               -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
  Head-1                  149      0.46 GB      160      0.48 GB
  Head-2                  145      0.45 GB      180      0.52 GB
  Head-3                  144      0.45 GB      161      0.48 GB

  WAL modification-1      142      0.44 GB      161      0.48 GB
  WAL modification-2      146      1.45 GB      162      0.48 GB
  WAL modification-3      144      1.44 GB      175      0.51 GB

 -Patch-               -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
  Head-1                  325      0.77 GB      602      1.03 GB
  Head-2                  328      0.77 GB      606      1.03 GB
  Head-3                  323      0.77 GB      603      1.03 GB

  WAL modification-1      324      0.76 GB      604      1.01 GB
  WAL modification-2      322      0.76 GB      604      1.01 GB
  WAL modification-3      317      0.75 GB      604      1.01 GB
> >
> >
> >> Can we investigate the performance dip at c=2?
> >   Please consider the following points for this dip:
> >   1. For synchronous commit = off, there is always slight variation in the data.
> >   2. The size of WAL is reduced.
> >   3. For small rows (128 bytes), the performance difference created by this
> >      algorithm sometimes doesn't help much, as the size is not reduced
> >      significantly and there is an equivalent overhead for delta compression.
> >      We could add a check that applies this optimization only if the row
> >      length is greater than some threshold (128 bytes, 200 bytes), but I feel
> >      that as the performance dip is not much and there is also the WAL
> >      reduction gain, it should be okay without any check as well.
> >
> > With Regards,
> > Amit Kapila.
> >

With Regards,
Amit Kapila.





Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Amit Kapila
On Friday, January 11, 2013 6:45 PM Simon Riggs wrote:
> On 28 December 2012 10:21, Simon Riggs  wrote:
> 
> > * There is a fixed 75% heuristic in the patch.
> 
> I'm concerned that we're doing extra work while holding the buffer
> locked, which will exacerbate any block contention that exists.
> 
> We have a list of the columns that the UPDATE is touching since we use
> that to check column permissions for the UPDATE. Which means we should
> be able to use that list to check only the columns actually changing
> in this UPDATE statement.
> 
> That will likely save us some time during the compression check.
> 
> Can you look into that please? I don't think it will be much work.

IIUC, I did it that way in the initial version of the patch, i.e. do the
encoding only for the modified columns.
A reference to my initial patch is below:

modifiedCols = (rt_fetch(resultRelInfo->ri_RangeTableIndex,
                         estate->es_range_table)->modifiedCols);

http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C382852DE51@szxeml509-mbs

1. However, as Heikki pointed out, it has some problems similar to the
HOT implementation, which is the reason we do a memcmp for HOT (a sketch
of that kind of check follows below).
2. Also, we found in initial readings that this doesn't make any
performance difference as compared to the current approach.
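
For reference, the memcmp-style check used for HOT looks roughly like
this (a simplified sketch modeled on heap_tuple_attr_equals() in
src/backend/access/heap/heapam.c, not the exact code):

#include "postgres.h"
#include "catalog/pg_attribute.h"
#include "utils/datum.h"

/*
 * Compare one attribute of the old and new tuple byte-wise.  Unlike the
 * statement's modified-columns list, this also catches changes made by
 * BEFORE triggers, because it looks at the actual values.
 */
static bool
attr_unchanged(Form_pg_attribute att, Datum value1, bool isnull1,
			   Datum value2, bool isnull2)
{
	Size		len1,
				len2;

	if (isnull1 || isnull2)
		return isnull1 && isnull2;	/* both NULL counts as unchanged */

	if (att->attbyval)
		return value1 == value2;

	len1 = datumGetSize(value1, false, att->attlen);
	len2 = datumGetSize(value2, false, att->attlen);

	return len1 == len2 &&
		memcmp(DatumGetPointer(value1), DatumGetPointer(value2), len1) == 0;
}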
 
> I've moved this to the next CF. I'm planning to review this one first.

Thank you.

With Regards,
Amit Kapila.





Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Alvaro Herrera
Simon Riggs wrote:
> On 28 December 2012 10:21, Simon Riggs  wrote:
> 
> > * There is a fixed 75% heuristic in the patch.
> 
> I'm concerned that we're doing extra work while holding the buffer
> locked, which will exacerbate any block contention that exists.
> 
> We have a list of the columns that the UPDATE is touching since we use
> that to check column permissions for the UPDATE. Which means we should
> be able to use that list to check only the columns actually changing
> in this UPDATE statement.

But that doesn't include columns changed by triggers, AFAIR, so you
could only use that if there weren't any triggers.

I was also worried about the high variance in the results.  Those
averages look rather meaningless.  Which would be okay, I think, because
it'd mean that performance-wise the patch is a wash, but it is still
achieving a lower WAL volume, which is good.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Simon Riggs
On 11 January 2013 14:29, Alvaro Herrera  wrote:

> But that doesn't include columns changed by triggers, AFAIR, so you
> could only use that if there weren't any triggers.

True, well spotted

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] AIX buildfarm member

2013-01-11 Thread Steve Singer
The only animal in the buildfarm running AIX is grebe 
(http://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=grebe&br=HEAD)


It is likely that the server running this animal will go away sometime 
this year and the machine replacing it isn't running AIX.


If someone else in the community is running PostgreSQL on AIX then it 
would be good if they set up a buildfarm member, perhaps with a more 
recent version of AIX.



Steve




Re: [HACKERS] foreign key locks

2013-01-11 Thread Alvaro Herrera
Andres Freund wrote:
> On 2013-01-10 18:00:40 -0300, Alvaro Herrera wrote:
> > Here's version 28 of this patch.  The only substantive change here from
> > v26 is that I've made GetTupleForTrigger() use either LockTupleExclusive
> > or LockTupleNoKeyExclusive, depending on whether the key columns are
> > being modified by the update.  This needs to go through EvalPlanQual, so
> > that function is now getting the lock mode as a parameter instead of
> > hardcoded LockTupleExclusive.  (All other users of GetTupleForTrigger
> > still use LockTupleExclusive, so there's no change for anybody other
> > than FOR EACH ROW BEFORE UPDATE triggers).
> 
> Is that enough in case of a originally non-key update in read committed
> mode that turns into a key update due to a concurrent key update?

Hm, let me try to work through your example.  You say that a transaction
T1 does a non-key update, and is working through the BEFORE UPDATE
trigger; then transaction T2 does a key update and changes the key
underneath T1?  So at that point T1 becomes a key update, because it's
now using the original key values which are no longer the key?

I don't think this can really happen, because T2 (which is requesting
TupleLockExclusive) would block on the lock that the trigger is grabbing
(TupleLockNoKeyExclusive) on the tuple.  So T2 would sleep until T1 is
committed.


Now, maybe you meant that the BEFORE UPDATE trigger changes the key
value but the user-supplied UPDATE query does not.  So the trigger turns
the no-key update into a key update.  What would happen here is that
GetTupleForTrigger would acquire TupleLockNoKeyExclusive on the tuple,
and later heap_update would acquire TupleLockExclusive.  So there is
lock escalation happening.  This could cause a deadlock against someone
else grabbing a TupleLockKeyShare on the tuple.  I think the answer is
"don't do that", i.e. don't update the key columns in a BEFORE UPDATE
trigger.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] foreign key locks

2013-01-11 Thread Andres Freund
On 2013-01-11 12:11:47 -0300, Alvaro Herrera wrote:
> Andres Freund wrote:
> > On 2013-01-10 18:00:40 -0300, Alvaro Herrera wrote:
> > > Here's version 28 of this patch.  The only substantive change here from
> > > v26 is that I've made GetTupleForTrigger() use either LockTupleExclusive
> > > or LockTupleNoKeyExclusive, depending on whether the key columns are
> > > being modified by the update.  This needs to go through EvalPlanQual, so
> > > that function is now getting the lock mode as a parameter instead of
> > > hardcoded LockTupleExclusive.  (All other users of GetTupleForTrigger
> > > still use LockTupleExclusive, so there's no change for anybody other
> > > than FOR EACH ROW BEFORE UPDATE triggers).
> >
> > Is that enough in case of a originally non-key update in read committed
> > mode that turns into a key update due to a concurrent key update?
>
> Hm, let me try to work through your example.  You say that a transaction
> T1 does a non-key update, and is working through the BEFORE UPDATE
> trigger; then transaction T2 does a key update and changes the key
> underneath T1?  So at that point T1 becomes a key update, because it's
> now using the original key values which are no longer the key?
>
> I don't think this can really happen, because T2 (which is requesting
> TupleLockExclusive) would block on the lock that the trigger is grabbing
> (TupleLockNoKeyExclusive) on the tuple.  So T2 would sleep until T1 is
> committed.

No, I was thinking about an update without triggers present.

T0: CREATE TABLE tbl(id serial primary key, name text unique, data text);
T1: BEGIN; -- read committed
T1: UPDATE tbl SET name = 'foo' WHERE name = 'blarg'; /* key update of row id = 1 */
T2: BEGIN; -- read committed
T2: UPDATE tbl SET name = 'blarg', data = 'blarg' WHERE id = 1; /* no key update, waiting */
T1: COMMIT;
T2: /* UPDATE follows to the updated row; due to the changed name it's a key update now */

Does that make sense?

Greetings,

Andres Freund

--
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] allowing privileges on untrusted languages

2013-01-11 Thread Tom Lane
Peter Eisentraut  writes:
> It turned out that actually getting rid of lanpltrusted would be too
> invasive, especially because some language handlers use it to determine
> their own behavior.

> So instead, the lanpltrusted attribute now just determines what the
> default privileges of the language are, and all the checks that require
> superuserness to do anything with untrusted languages are removed.

Hmm ... that worries me a bit.  It seems like system security will now
require being sure that the permissions on the language match the
lanpltrusted setting.  Even if the code is right today, there's a lot
of scope for future oversights with security implications.  Don't know
what we could do to mitigate that.

In particular, have you thought carefully about upgrade scenarios?
Will a dump-and-restore of a pre-9.3 installation end up with safe
language privileges?

In the same vein, I'm worried that the proposed change in pg_dump will
do the wrong thing when looking at a pre-9.3 server.  Is any
server-version-dependent behavior needed there?

regards, tom lane




Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Tom Lane
Pavel Stehule  writes:
> What do you think about this feature?

The idea of expecting an add-on module to execute operations in an
already-failed transaction seems pretty dubious to me.  I also think
it's not a great idea to add partial executions into a query's stats.
For instance, suppose query X has been done 3 times and took a minute
each time.  The fourth time, the user cancels it after one second.
If we now report that the query's average execution time is 45 seconds
or so, that seems pretty misleading to me.

regards, tom lane




Re: [HACKERS] Print b-tree tuples

2013-01-11 Thread Samuel Vogel

Am 04.01.13 21:26, schrieb Tom Lane:

Samuel Vogel  writes:

I'm trying to print out the tuples in the b-tree nodes for analysis, but
when I iterate over more than the first entry of the tuples (scanKey++), I
strangely get the error below on query execution:
ERROR:  relation "simpletest" does not exist
LINE 1: SELECT * FROM simpletest WHERE id = 50;

Is this patch the only thing you changed?  The only obvious explanation
for the above error (other than "you fat-fingered the query") seems to
be that you caused index searches in pg_class to fail, but I don't see
how this patch could be affecting the result of _bt_binsrch.
Yes, I've switched to a clean master and only applied this patch. If I 
put in the break, the query works, so it's fine as well.


BTW: I just had a discussion about whether it would make sense to recreate 
my test table with a different insertion order, as a b-tree is always 
dependent on insertion order. But as the b-tree index is newly created in 
memory after each restart, two different insertion orders (of the same 
values) would give me the same b-tree at least after a restart, right?


Regards,
Samuel




Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Pavel Stehule
2013/1/11 Tom Lane :
> Pavel Stehule  writes:
>> What do you thinking about this feature?
>
> The idea of expecting an add-on module to execute operations in an
> already-failed transaction seems pretty dubious to me.  I also think
> it's not a great idea to add partial executions into a query's stats.
> For instance, suppose query X has been done 3 times and took a minute
> each time.  The fourth time, the user cancels it after one second.
> If we now report that the query's average execution time is 45 seconds
> or so, that seems pretty misleading to me.

I don't propose logging the query time for cancelled queries - although
we have a patch that does it - but that is our specific issue.

My proposal is for a different dimension and purpose - for example, we
have a limit of 20 minutes for almost all queries, and after this limit
we kill the queries. But we need to know a little bit more about these
bad queries - and we hope the execution plan can give this additional
info. We have the same motivation as people who use auto_explain for
slow queries - but we can't wait for the query to complete.

Regards

Pavel

>
> regards, tom lane




Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Stephen Frost
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
> My proposal is for a different dimension and purpose - for example, we
> have a limit of 20 minutes for almost all queries, and after this limit
> we kill the queries. But we need to know a little bit more about these
> bad queries - and we hope the execution plan can give this additional
> info. We have the same motivation as people who use auto_explain for
> slow queries - but we can't wait for the query to complete.

Why not an option to auto_explain (or whatever) to log an execution plan
right before actually executing it?  If that was something which could
be set with a GUC or similar, you could just do that before running
whatever queries you're interested in capturing the plans for.

Thanks,

Stephen




Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Tom Lane
Pavel Stehule  writes:
> My proposal is for a different dimension and purpose - for example, we
> have a limit of 20 minutes for almost all queries, and after this limit
> we kill the queries. But we need to know a little bit more about these
> bad queries - and we hope the execution plan can give this additional
> info. We have the same motivation as people who use auto_explain for
> slow queries - but we can't wait for the query to complete.

Oh, sorry, not enough caffeine yet --- somehow I was thinking about
pg_stat_statements not auto_explain.

However, auto_explain is even worse on the other problem.  You flat out
cannot do catalog lookups in a failed transaction, but there's no way to
print a decompiled plan without such lookups.  So it won't work.  (It
would also be appropriate to be suspicious of whether the executor's
plan state tree is even fully set up at the time the error is thrown...)

regards, tom lane




Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Pavel Stehule
2013/1/11 Stephen Frost :
> * Pavel Stehule (pavel.steh...@gmail.com) wrote:
>> My proposal is for a different dimension and purpose - for example, we
>> have a limit of 20 minutes for almost all queries, and after this limit
>> we kill the queries. But we need to know a little bit more about these
>> bad queries - and we hope the execution plan can give this additional
>> info. We have the same motivation as people who use auto_explain for
>> slow queries - but we can't wait for the query to complete.
>
> Why not an option to auto_explain (or whatever) to log an execution plan
> right before actually executing it?  If that was something which could
> be set with a GUC or similar, you could just do that before running
> whatever queries you're interested in capturing the plans for.

for our OLAP usage it is probably possible, but it can be slow for OLTP usage.

Regards

Pavel

>
> Thanks,
>
> Stephen




Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Alvaro Herrera
Tom Lane escribió:
> Pavel Stehule  writes:
> > My proposal is for a different dimension and purpose - for example, we
> > have a limit of 20 minutes for almost all queries, and after this limit
> > we kill the queries. But we need to know a little bit more about these
> > bad queries - and we hope the execution plan can give this additional
> > info. We have the same motivation as people who use auto_explain for
> > slow queries - but we can't wait for the query to complete.
> 
> Oh, sorry, not enough caffeine yet --- somehow I was thinking about
> pg_stat_statements not auto_explain.
> 
> However, auto_explain is even worse on the other problem.  You flat out
> cannot do catalog lookups in a failed transaction, but there's no way to
> print a decompiled plan without such lookups.  So it won't work.  (It
> would also be appropriate to be suspicious of whether the executor's
> plan state tree is even fully set up at the time the error is thrown...)

Maybe it'd work to save the query source text and parameter values
somewhere and log an explain in a different session.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Simon Riggs
On 11 January 2013 14:24, Amit Kapila  wrote:

> http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C382852DE51@szxeml509-mbs
>
> 1. However, as Heikki pointed out, it has some problems similar to the
> HOT implementation, which is the reason we do a memcmp for HOT.
> 2. Also, we found in initial readings that this doesn't make any
> performance difference as compared to the current approach.

OK, forget that idea.

>> I've moved this to the next CF. I'm planning to review this one first.
>
> Thank you.

Just reviewing the patch now; it's making more sense with the comments added.

In heap_delta_encode() do we store which columns have changed? Do we
store the whole new column value?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] json generation enhancements

2013-01-11 Thread Andrew Dunstan


I have not had anyone follow up on this, so I have added docs and will 
add this to the commitfest.


Recap:

This adds the following:

json_agg(anyrecord) -> json
to_json(any) -> json
hstore_to_json(hstore) -> json (also used as a cast)
hstore_to_json_loose(hstore) -> json

Also, in json generation, if any non-builtin type has a cast to json, 
that function is used instead of the type's output function.


cheers

andrew




Re: [HACKERS] json generation enhancements

2013-01-11 Thread Andrew Dunstan


On 01/11/2013 11:00 AM, Andrew Dunstan wrote:


I have not had anyone follow up on this, so I have added docs and will 
add this to the commitfest.


Recap:

This adds the following:

json_agg(anyrecord) -> json
to_json(any) -> json
hstore_to_json(hstore) -> json (also used as a cast)
hstore_to_json_loose(hstore) -> json

Also, in json generation, if any non-builtin type has a cast to json, 
that function is used instead of the type's output function.





This time with a patch.

cheers

andrew


*** a/contrib/hstore/expected/hstore.out
--- b/contrib/hstore/expected/hstore.out
*************** select count(*) from testhstore where h = 'pos=>98, line=>371, node=>CBA, indexe
*** 1453,1455 ****
--- 1453,1491 ----
   1
  (1 row)
  
+ -- json
+ select hstore_to_json('"a key" =>1, b => t, c => null, d=> 12345, e => 012345, f=> 1.234, g=> 2.345e+4');
+                                          hstore_to_json                                          
+ --------------------------------------------------------------------------------------------------
+  {"b": "t", "c": null, "d": "12345", "e": "012345", "f": "1.234", "g": "2.345e+4", "a key": "1"}
+ (1 row)
+ 
+ select cast( hstore  '"a key" =>1, b => t, c => null, d=> 12345, e => 012345, f=> 1.234, g=> 2.345e+4' as json);
+                                               json                                               
+ --------------------------------------------------------------------------------------------------
+  {"b": "t", "c": null, "d": "12345", "e": "012345", "f": "1.234", "g": "2.345e+4", "a key": "1"}
+ (1 row)
+ 
+ select hstore_to_json_loose('"a key" =>1, b => t, c => null, d=> 12345, e => 012345, f=> 1.234, g=> 2.345e+4');
+                                    hstore_to_json_loose                                     
+ --------------------------------------------------------------------------------------------
+  {"b": true, "c": null, "d": 12345, "e": "012345", "f": 1.234, "g": 2.345e+4, "a key": 1}
+ (1 row)
+ 
+ create table test_json_agg (f1 text, f2 hstore);
+ insert into test_json_agg values ('rec1','"a key" =>1, b => t, c => null, d=> 12345, e => 012345, f=> 1.234, g=> 2.345e+4'),
+ ('rec2','"a key" =>2, b => f, c => "null", d=> -12345, e => 012345.6, f=> -1.234, g=> 0.345e-4');
+ select json_agg(q) from test_json_agg q;
+                                                          json_agg                                                           
+ -----------------------------------------------------------------------------------------------------------------------------
+  [{"f1":"rec1","f2":{"b": "t", "c": null, "d": "12345", "e": "012345", "f": "1.234", "g": "2.345e+4", "a key": "1"}},  +
+   {"f1":"rec2","f2":{"b": "f", "c": "null", "d": "-12345", "e": "012345.6", "f": "-1.234", "g": "0.345e-4", "a key": "2"}}]
+ (1 row)
+ 
+ select json_agg(q) from (select f1, hstore_to_json_loose(f2) as f2 from test_json_agg) q;
+                                                        json_agg                                                        
+ ------------------------------------------------------------------------------------------------------------------------
+  [{"f1":"rec1","f2":{"b": true, "c": null, "d": 12345, "e": "012345", "f": 1.234, "g": 2.345e+4, "a key": 1}},   +
+   {"f1":"rec2","f2":{"b": false, "c": "null", "d": -12345, "e": "012345.6", "f": -1.234, "g": 0.345e-4, "a key": 2}}]
+ (1 row)
+ 
+ 
*** a/contrib/hstore/hstore--1.1.sql
--- b/contrib/hstore/hstore--1.1.sql
*************** LANGUAGE C IMMUTABLE STRICT;
*** 234,239 ****
--- 234,252 ----
  CREATE CAST (text[] AS hstore)
WITH FUNCTION hstore(text[]);
  
+ CREATE FUNCTION hstore_to_json(hstore)
+ RETURNS json
+ AS 'MODULE_PATHNAME', 'hstore_to_json'
+ LANGUAGE C IMMUTABLE STRICT;
+ 
+ CREATE CAST (hstore AS json)
+   WITH FUNCTION hstore_to_json(hstore);
+ 
+ CREATE FUNCTION hstore_to_json_loose(hstore)
+ RETURNS json
+ AS 'MODULE_PATHNAME', 'hstore_to_json_loose'
+ LANGUAGE C IMMUTABLE STRICT;
+ 
  CREATE FUNCTION hstore(record)
  RETURNS hstore
  AS 'MODULE_PATHNAME', 'hstore_from_record'
*** a/contrib/hstore/hstore_io.c
--- b/contrib/hstore/hstore_io.c
***************
*** 8,14 ****
--- 8,17 ----
  #include "access/htup_details.h"
  #include "catalog/pg_type.h"
  #include "funcapi.h"
+ #include "lib/stringinfo.h"
  #include "libpq/pqformat.h"
+ #include "utils/builtins.h"
+ #include "utils/json.h"
  #include "utils/lsyscache.h"
  #include "utils/typcache.h"
  
*************** hstore_send(PG_FUNCTION_ARGS)
*** 1209,1211 ****
--- 1212,1425 ----
  
  	PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
  }
+ 
+ 
+ /*
+  * hstore_to_json_loose
+  *
+  * This is a heuristic conversion to json which treats
+  * 't' and 'f' as booleans and strings that look like numbers as numbers,
+  * as long as they don't start with a leading zero followed by another digit
+  * (think zip codes or phone numbers starting with 0).
+  */
+ PG_FUNCTION_INFO_V1(hstore_to_json_loose)

Re: [HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)

2013-01-11 Thread Andres Freund
On 2013-01-10 10:55:20 +0100, Andres Freund wrote:
> On 2013-01-10 10:31:04 +0100, Andres Freund wrote:
> > On 2013-01-10 00:05:07 +0100, Andres Freund wrote:
> > > On 2013-01-09 17:28:35 -0500, Tom Lane wrote:
> > > > (We know this doesn't
> > > > matter, but gcc doesn't; this version of gcc apparently isn't doing much
> > > > with the knowledge that elog won't return.)
> > >
> > > Afaics one reason for that is that we don't have any such annotation for
> > > elog(), just for ereport. And I don't immediately see how it could be
> > > added to elog without relying on variadic macros. Bit of a shame, there
> > > probably a good number of callsites that could actually benefit from
> > > that knowledge.
> > > Is it worth making that annotation conditional on variadic macro
> > > support?
> >
> > A quick test confirms that. With:
> >
> > diff --git a/src/include/utils/elog.h b/src/include/utils/elog.h
> > index cbbda04..39cd809 100644
> > --- a/src/include/utils/elog.h
> > +++ b/src/include/utils/elog.h
> > @@ -212,7 +212,13 @@ extern int getinternalerrposition(void);
> >   * elog(ERROR, "portal \"%s\" not found", stmt->portalname);
> >   *--
> >   */
> > +#ifdef __GNUC__
> > +#define elog(elevel, ...) elog_start(__FILE__, __LINE__, PG_FUNCNAME_MACRO), \
> > +   elog_finish(elevel, __VA_ARGS__), \
> > +   ((elevel) >= ERROR ? __builtin_unreachable() : (void) 0)
> > +#else
> >  #define elog   elog_start(__FILE__, __LINE__, PG_FUNCNAME_MACRO), elog_finish
> > +#endif
> >
> >  extern void elog_start(const char *filename, int lineno, const char *funcname);
> >  extern void
> >
> > nearly the same code is generated for both variants (this is no
> > additionally saved registers and such).
> >
> > Three unfinished things:
> > * the above only checks for gcc although all C99 compilers should
> >   support varargs macros
> > * It uses __builtin_unreachable() instead of abort() as that creates
> >   a smaller executable. That's gcc 4.5 only. Saves about 30kb in
> >   both a stripped and unstripped binary.
> > * elog(ERROR) i.e. no args would require more macro ugliness, but that
> >   seems unnecessary
> >
> > Doing the __builtin_unreachable() for ereport as well saves about 50kb.
> >
> > Given that some of those elog/ereports are in performance critical code
> > paths it seems like a good idea to remove code from the
> > callsites. Adding configure check for __VA_ARGS__ and
> > __builtin_unreachable() should be simple enough?
> 
> For whatever it's worth - I am not sure it's much - after relying on
> variadic macros we can make elog into one function call instead of
> two. That saves another 100kb.
> 
> [tinker]
> 
> The builtin_unreachable and combined elog thing bring a measurable
> performance benefit of a rather surprising 7% when running Pavel's
> testcase ontop of yesterdays HEAD. So it seems worth investigating.
> About 3% of that is the __builtin_unreachable() and about 4% the
> combining of elog_start/finish.
> 
> Now that testcase sure isn't a very representative one for this, but I
> don't really see why it should benefit much more than other cases. Seems
> to be quite the benefit for relatively minor cost.
> Although I am pretty sure just copying the whole of
> elog_start/elog_finish into one elog_all() isn't the best way to go at
> this...

The attached patch:

* adds configure checks for __VA_ARGS__ and __builtin_unreachable
  support
* provides a pg_unreachable() macro that expands to
  __builtin_unreachable() or abort()
* changes #define elog ... into #define elog(elevel, ...) if varargs are
  available

It does *not* combine elog_start and elog_finish into one function if
varargs are available although that brings a rather measurable
size/performance benefit.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
>From 94a3f93f4c8a381358a4dc52a43ec60a8f3e33f6 Mon Sep 17 00:00:00 2001
From: Andres Freund 
Date: Fri, 11 Jan 2013 16:58:17 +0100
Subject: [PATCH] Mark elog() as not returning if the capabilities of the C
 compiler allow it

To do that:
* Add a configure check for __VA_ARGS__ support
* Provide a pg_unreachable() macro that expands to __builtin_unreachable() if
  the compiler supports it, abort() otherwise

Marking it as such improves code generation and reduces executable size.
---
 config/c-compiler.m4 | 41 +
 configure.in |  2 ++
 src/include/c.h  | 10 ++
 src/include/utils/elog.h | 18 +++---
 4 files changed, 68 insertions(+), 3 deletions(-)

diff --git a/config/c-compiler.m4 b/config/c-compiler.m4
index 7cbb8ec..f3efe72 100644
--- a/config/c-compiler.m4
+++ b/config/c-compiler.m4
@@ -161,6 +161,47 @@ fi])# PGAC_C_TYPES_COMPATIBLE
 
 
 
+# PGAC_C_VA_ARGS
+# --------------
+# Check if t
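
The attachment is truncated above; as a sketch of the shape the described
pieces might take (an assumption based on the description in this thread,
not the committed text):

/* In c.h: expand to the compiler builtin when available, else abort(). */
#ifdef HAVE__BUILTIN_UNREACHABLE
#define pg_unreachable() __builtin_unreachable()
#else
#define pg_unreachable() abort()
#endif

/*
 * In elog.h: with variadic-macro support, tell the compiler that
 * elog(ERROR, ...) and above never return.
 */
#ifdef HAVE__VA_ARGS
#define elog(elevel, ...) \
	do { \
		elog_start(__FILE__, __LINE__, PG_FUNCNAME_MACRO); \
		elog_finish(elevel, __VA_ARGS__); \
		if ((elevel) >= ERROR) \
			pg_unreachable(); \
	} while (0)
#else
#define elog	elog_start(__FILE__, __LINE__, PG_FUNCNAME_MACRO), elog_finish
#endif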

Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Pavel Stehule
2013/1/11 Tom Lane :
> Pavel Stehule  writes:
>> My proposal is for a different dimension and purpose - for example, we
>> have a limit of 20 minutes for almost all queries, and after this limit
>> we kill the queries. But we need to know a little bit more about these
>> bad queries - and we hope the execution plan can give this additional
>> info. We have the same motivation as people who use auto_explain for
>> slow queries - but we can't wait for the query to complete.
>
> Oh, sorry, not enough caffeine yet --- somehow I was thinking about
> pg_stat_statements not auto_explain.
>
> However, auto_explain is even worse on the other problem.  You flat out
> cannot do catalog lookups in a failed transaction, but there's no way to
> print a decompiled plan without such lookups.  So it won't work.  (It
> would also be appropriate to be suspicious of whether the executor's
> plan state tree is even fully set up at the time the error is thrown...)

yes, it is - I have a few ideas

1) using a signal handler - we don't use SIGTRAP anywhere - so we can
use SIGTRAP (for example) - and inside the signal handler we can
arrange a dump of the plan (see the sketch after this list). It has one
advantage - we can get the plan - and maybe more - without cancelling
the query - somebody may have a significantly higher limit than us, but
would still want to know the plan.

2) creating a hook called before the query is really cancelled - this
can be called before the exception is raised, so at that moment we can
materialise the plan without problems. This hook could be used by
auto_explain, because we don't want to print plans for queries that
were cancelled sooner than the auto_explain limit. Maybe this hook
could be called before raising other kinds of exceptions too - e.g.
temp file limits. We are not interested in queries that finished with
other errors, because those errors usually come quickly.
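
A sketch of how idea 1) would have to be structured (the names are
hypothetical; the real work cannot happen inside the handler itself):

#include "postgres.h"
#include <signal.h>

/* set by the handler, checked at the next safe interrupt point */
static volatile sig_atomic_t plan_dump_pending = false;

static void
handle_plan_dump_signal(SIGNAL_ARGS)
{
	/*
	 * Only set a flag here: catalog access, memory allocation and even
	 * elog() are not safe inside a signal handler.
	 */
	plan_dump_pending = true;
}

/*
 * To be called from a safe point (e.g. via a hook reached from
 * CHECK_FOR_INTERRUPTS()) while the transaction is still alive, so the
 * catalog lookups needed to decompile the plan are still possible.
 */
static void
maybe_dump_current_plan(void)
{
	if (plan_dump_pending)
	{
		plan_dump_pending = false;
		/* explain_current_query_plan();  -- hypothetical helper */
	}
}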

From my perspective, some integration with auto_explain would be nice; I
would not want to set limits in more places.

Regards

Pavel


>
> regards, tom lane




Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Simon Riggs
On 11 January 2013 15:54, Alvaro Herrera  wrote:
> Tom Lane escribió:
>> Pavel Stehule  writes:
>> > My proposal is for a different dimension and purpose - for example, we
>> > have a limit of 20 minutes for almost all queries, and after this limit
>> > we kill the queries. But we need to know a little bit more about these
>> > bad queries - and we hope the execution plan can give this additional
>> > info. We have the same motivation as people who use auto_explain for
>> > slow queries - but we can't wait for the query to complete.
>>
>> Oh, sorry, not enough caffeine yet --- somehow I was thinking about
>> pg_stat_statements not auto_explain.
>>
>> However, auto_explain is even worse on the other problem.  You flat out
>> cannot do catalog lookups in a failed transaction, but there's no way to
>> print a decompiled plan without such lookups.  So it won't work.  (It
>> would also be appropriate to be suspicious of whether the executor's
>> plan state tree is even fully set up at the time the error is thrown...)
>
> Maybe it'd work to save the query source text and parameter values
> somewhere and log an explain in a different session.

I think this would be an important feature.

But then I also want to be able to kill a query without it doing 50
pushups and a backflip before it dies, since that will inevitably go
wrong.

Perhaps we can have a new signal that means "exit gracefully, with
info if requested". That way we can keep "kill" to mean "kill".

An even better feature would be to be able to send a signal to a
running query to log its currently executing plan. That way you can
ask "Why so slow?" before deciding to kill it. That way we don't need
to overload the kill signal at all. That's the most useful part of a
"progress indicator" tool without the complexity.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] foreign key locks

2013-01-11 Thread Alvaro Herrera
Andres Freund wrote:

> No, I was thinking about an update without triggers present.
> 
> T0: CREATE TABLE tbl(id serial primary key, name text unique, data text);
> T1: BEGIN; -- read committed
> T1: UPDATE tbl SET name = 'foo' WHERE name = 'blarg'; /* key update of row id = 1 */
> T2: BEGIN; -- read committed
> T2: UPDATE tbl SET name = 'blarg', data = 'blarg' WHERE id = 1; /* no key update, waiting */
> T1: COMMIT;
> T2: /* UPDATE follows to the updated row; due to the changed name it's a key update now */
> 
> Does that make sense?

So I guess your question is "is T2 now holding a TupleLockExclusive
lock?"  To answer it, I turned your example into an isolationtester spec:

setup
{
CREATE TABLE tbl(id serial primary key, name text unique, data text);
INSERT INTO tbl VALUES (1, 'blarg', 'no data');
}

teardown
{
DROP TABLE tbl;
}

session "s1"
step "s1b" { BEGIN; }
step "s1u" { UPDATE tbl SET name = 'foo' WHERE name = 'blarg'; }
step "s1c" { COMMIT; }

session "s2"
step "s2b" { BEGIN; }
step "s2u" { UPDATE tbl SET name = 'blarg', data = 'blarg' WHERE id = 1; }
step "s2c" { COMMIT; }

session "s3"
step "s3l" { SELECT * FROM tbl FOR KEY SHARE; }

permutation "s1b" "s1u" "s2b" "s2u" "s1c" "s3l" "s2c"


And the results:
Parsed test spec with 3 sessions

starting permutation: s1b s1u s2b s2u s1c s3l s2c
step s1b: BEGIN;
step s1u: UPDATE tbl SET name = 'foo' WHERE name = 'blarg';
step s2b: BEGIN;
step s2u: UPDATE tbl SET name = 'blarg', data = 'blarg' WHERE id = 1; <waiting ...>
step s1c: COMMIT;
step s2u: <... completed>
step s3l: SELECT * FROM tbl FOR KEY SHARE; <waiting ...>
step s2c: COMMIT;
step s3l: <... completed>
id name   data   

1  blarg  blarg  


So session 3 is correctly waiting for session 2 to finish before being
able to grab its FOR KEY SHARE lock, indicating that session 2 is
holding a FOR UPDATE lock.  Good.

If I change session 1 to update the data column instead of name, session
3 no longer needs to wait for session 2, meaning session 2 now only
grabs a FOR NO KEY UPDATE lock.  Also good.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Tom Lane
Alvaro Herrera  writes:
> Tom Lane escribió:
>> However, auto_explain is even worse on the other problem.  You flat out
>> cannot do catalog lookups in a failed transaction, but there's no way to
>> print a decompiled plan without such lookups.  So it won't work.  (It
>> would also be appropriate to be suspicious of whether the executor's
>> plan state tree is even fully set up at the time the error is thrown...)

> Maybe it'd work to save the query source text and parameter values
> somewhere and log an explain in a different session.

There wouldn't be a lot of certainty that you got the same plan.

AFAICS the only thing you could do is what Stephen suggested: run
EXPLAIN *before* starting the query.  You could stash the text somewhere
and only print it on failure, which would prevent useless log bloat.
But it'd still be awfully expensive.

regards, tom lane




Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Stephen Frost
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
> 2013/1/11 Stephen Frost :
> > Why not an option to auto_explain (or whatever) to log an execution plan
> > right before actually executing it?  If that was something which could
> > be set with a GUC or similar, you could just do that before running
> > whatever queries you're interested in capturing the plans for.
> 
>> for our OLAP usage it is probably possible, but it can be slow for OLTP
>> usage.

If it was a GUC, you could turn it on/off at appropriate places in the
OLTP scenario.

Thanks,

Stephen




Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Tom Lane
Simon Riggs  writes:
> An even better feature would be to be able to send a signal to a
> running query to log its currently executing plan. That way you can
> ask "Why so slow?" before deciding to kill it.

That could conceivably work.  At least it wouldn't require running
EXPLAIN in a failed transaction.

regards, tom lane




Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Pavel Stehule
2013/1/11 Stephen Frost :
> * Pavel Stehule (pavel.steh...@gmail.com) wrote:
>> 2013/1/11 Stephen Frost :
>> > Why not an option to auto_explain (or whatever) to log an execution plan
>> > right before actually executing it?  If that was something which could
>> > be set with a GUC or similar, you could just do that before running
>> > whatever queries you're interested in capturing the plans for.
>>
>> for our OLAP usage it is probably possible, but it can be slow for OLTP 
>> usage..
>
> If it was a GUC, you could turn it on/off at appropriate places in the
> OLTP scenario.

yes, it is possible solution

Pavel

>
> Thanks,
>
> Stephen


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Pavel Stehule
2013/1/11 Stephen Frost :
> * Pavel Stehule (pavel.steh...@gmail.com) wrote:
>> 2013/1/11 Stephen Frost :
>> > Why not an option to auto_explain (or whatever) to log an execution plan
>> > right before actually executing it?  If that was something which could
>> > be set with a GUC or similar, you could just do that before running
>> > whatever queries you're interested in capturing the plans for.
>>
>> for our OLAP usage it is probably possible, but it can be slow for OLTP 
>> usage..
>
> If it was a GUC, you could turn it on/off at appropriate places in the
> OLTP scenario.

but still - if this is an auto_explain feature - then we need to return
execution to auto_explain after a cancelled query - and that is probably
impossible now

Regards

Pavel

>
> Thanks,
>
> Stephen


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Simon Riggs  writes:
> > An even better feature would be to be able to send a signal to a
> > running query to log its currently executing plan. That way you can
> > ask "Why so slow?" before deciding to kill it.
> 
> That could conceivably work.  At least it wouldn't require running
> EXPLAIN in a failed transaction.

I like this idea, in general, also.  Taking that to the next level would
be figuring out how you can do the same kind of thing through an
interactive psql session where the user running the query doesn't need
access to the database server or PG log files...

We can send a 'cancel query', how about a 'report on query' which
returns the plan and perhaps whatever other stats are easily available?

Thanks,

Stephen




Re: [HACKERS] psql \l to accept patterns

2013-01-11 Thread Robert Haas
On Tue, Jan 8, 2013 at 11:36 PM, Peter Eisentraut  wrote:
> On Mon, 2013-01-07 at 17:37 -0500, Robert Haas wrote:
>> If we make the postgres database undroppable, unrenamable, and
>> strictly read-only, I will happily support a proposal to consider it a
>> system object.  Until then, it's no more a system object than the
>> public schema - which, you will note, \dn has no compunctions about
>> displaying, even without S.
>
> Good point.  What about the other suggestion about only displaying
> databases by default that you can connect to?

I would tend not to adopt that suggestion, on the grounds that it has
no obvious parallel with anything else psql hides by default.
However, I don't feel quite as strongly about that case.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] foreign key locks

2013-01-11 Thread Andres Freund
On 2013-01-11 13:10:49 -0300, Alvaro Herrera wrote:
> Andres Freund wrote:
>
> > No, I was thinking about an update without triggers present.
> >
> > T0: CREATE TABLE tbl(id serial pk, name text unique, data text);
> > T1: BEGIN; -- read committed
> > T1: UPDATE tbl SET name = 'foo' WHERE name = 'blarg'; /* key update of row id = 1 */
> > T2: BEGIN; -- read committed
> > T2: UPDATE tbl SET name = 'blarg', data = 'blarg' WHERE id = 1; /* no key update, waiting */
> > T1: COMMIT;
> > T2: /* UPDATE follows to updated row, due to the changed name it's a key update now */
> >
> > Does that make sense?
>
> So I guess your question is "is T2 now holding a TupleLockExclusive
> lock?"  To answer it, I turned your example into a isolationtester spec:

Great! I reread the code and it does make sense the way its implemented
now. I misremembered something...

I vote for adding that spectest including some appropriate permutations.

FWIW: Looks good to me. It could use another pair of eyes, but I guess
that will have to come by being used.

Greetings,

Andres Freund

--
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Pavel Stehule
2013/1/11 Stephen Frost :
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> Simon Riggs  writes:
>> > An even better feature would be to be able to send a signal to a
>> > running query to log its currently executing plan. That way you can
>> > ask "Why so slow?" before deciding to kill it.
>>
>> That could conceivably work.  At least it wouldn't require running
>> EXPLAIN in a failed transaction.
>
> I like this idea, in general, also.  Taking that to the next level would
> be figuring out how you can do the same kind of thing through an
> interactive psql session where the user running the query doesn't need
> access to the database server or PG log files...
>

this is simple - it can be printed via elog(WARNING, ...) to the original console

> We can send a 'cancel query', how about a 'report on query' which
> returns the plan and perhaps whatever other stats are easily available?

there is only one question - what POSIX signal can we use?

Pavel

>
> Thanks,
>
> Stephen


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-11 Thread Boszormenyi Zoltan

Hi,

On 2013-01-09 10:08, Amit Kapila wrote:

On Sunday, January 06, 2013 10:26 AM Boszormenyi Zoltan wrote:
On Saturday, January 05, 2013 12:35 PM Boszormenyi Zoltan wrote:
On 2013-01-05 05:58, Amit Kapila wrote:

On Friday, January 04, 2013 10:57 PM Boszormenyi Zoltan wrote:

Hi,
I am reviewing your patch.

Thank you very much.


All comments are addressed as below:


One specific comment about the documentation part of the patch:

  > [quoted patch hunk (*** 86,92 / --- 95,101) from the SET reference page:
  >  a whitespace-only change in the paragraph "...PL/pgSQL exception block.
  >  This behavior has been changed because it was deemed unintuitive.";
  >  the SGML markup was lost in the archive]

Fixed the white space comment.


# This includes the default configuration directory included to support
# SET PERSISTENT statement.
#
# WARNNING: Any configuration parameter values specified after this line
#   will override the values set by SET PERSISTENT statement.
include_dir = 'config_dir'

Note the typo in the above message: WARNNING, there are too many Ns.

Fixed.


Can't we add a new LWLock and use it as a critical section instead
of waiting for 10 seconds?

Added an LWLock and removed the sleep logic. After adding the LWLock, the lock
file name can be changed to a temp file, but presently the patch keeps the name
as a lock file only. Please provide suggestions.


Current state of affairs:

a.) The whole operation is protected by the LWLock so only one backend
can do this any time. Clean operation is ensured.

b.) The code creates the lock file and fails if the file exists. This protects
against nasties done externally. The current logic to bail out with an error
is OK: at least I know that there is an intruder on the system. But then
they can also remove the original .auto.conf file and anything else, and
I have lost anyway.

c.) In reaper(), the code cleans up the lock file and since there can
be only one lock file, no need to search for temp files, a straightforward
unlink() call does its job.


This may be changed in two ways to make it more comfortable:

1. Simply unlink() and retry with creat() once.

Comments:
unlink() may fail under Windows if someone keeps this file open.
Then you can either give up with ereport(ERROR) just like now.

POSIX systems simply remove the file <-> inode mapping so unlink()
doesn't fail in this case.
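
For illustration, this is the POSIX behavior in question in a self-contained
form (the file name is arbitrary):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	char	buf[2] = {0, 0};
	int		fd = open("demo.lock", O_CREAT | O_RDWR | O_TRUNC, 0600);

	if (fd < 0 || write(fd, "x", 1) != 1)
	{
		perror("setup");
		return 1;
	}

	/* The name disappears, but the open descriptor keeps the inode alive. */
	if (unlink("demo.lock") != 0)
	{
		perror("unlink");
		return 1;
	}

	lseek(fd, 0, SEEK_SET);
	if (read(fd, buf, 1) == 1)
		printf("still readable after unlink: %s\n", buf);	/* prints "x" */
	close(fd);
	return 0;
}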

2. Create the tempfile. There will be one less error case, the file creation
may only fail in case you are out of disk space.

Creating the tempfile is tempting. But then reaper() will need a little
more code to remove all the tempfiles.

I just noticed that you are using "%m" in format strings twice. man 3 printf 
says:

   m  (Glibc extension.)  Print output of strerror(errno). No argument 
is required.

This doesn't work anywhere else, only on GLIBC systems, which means Linux.
I also like the brevity of this extension but PostgreSQL is a portable system.
You should use %s and strerror(errno) as argument explicitly.


I also tried to trigger one of the errors by creating the lock file manually.
You need an extra space between the "... retry" and "or ...":

Fixed.


Also, since the SET PERSISTENT patch seems to create a single lock file,
sizeof(string) (which includes the 0 byte at the end of the string, so it
matches the whole filename) can be used instead of strlen(string) or
sizeof(string)-1, which match the filename as a prefix only.
#define PGAUTOCONFLOCK"postgresql.auto.conf.lock"

+   /* skip lock files used in postgresql.auto.conf edit */
+   if (strncmp(de->d_name,
+   "postgresql.auto.conf.lock",
+   strlen("postgresql.auto.conf.lock")) == 
0)

Added a macro and changed the check accordingly.
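
To make the sizeof-vs-strlen point above concrete, a tiny standalone
demonstration (the longer file name is invented):

#include <stdio.h>
#include <string.h>

#define PGAUTOCONFLOCK "postgresql.auto.conf.lock"

int
main(void)
{
	const char *exact  = "postgresql.auto.conf.lock";
	const char *longer = "postgresql.auto.conf.lock.12345";

	/* sizeof includes the terminating NUL, so only the exact name matches */
	printf("%d\n", strncmp(exact,  PGAUTOCONFLOCK, sizeof(PGAUTOCONFLOCK)) == 0);	/* 1 */
	printf("%d\n", strncmp(longer, PGAUTOCONFLOCK, sizeof(PGAUTOCONFLOCK)) == 0);	/* 0 */

	/* strlen stops before the NUL, so any file with this prefix matches */
	printf("%d\n", strncmp(longer, PGAUTOCONFLOCK, strlen(PGAUTOCONFLOCK)) == 0);	/* 1 */
	return 0;
}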


If the current logic stays or the modification in 1) will be chosen,
the comment needs tweaking:

+   /* skip lock files used in postgresql.auto.conf edit */

   "skip the lock file ..."

+   if (strncmp(de->d_name,
+   PGAUTOCONFLOCK,
+   sizeof(PGAUTOCONFLOCK)) == 0)
+   continue;
+

If the 2nd suggestion is chosen, sizeof()-1 or strlen() must be used again
to exclude all the possible tempfiles.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Pavel Stehule
2013/1/11 Pavel Stehule :
> 2013/1/11 Stephen Frost :
>> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>>> Simon Riggs  writes:
>>> > An even better feature would be to be able to send a signal to a
>>> > running query to log its currently executing plan. That way you can
>>> > ask "Why so slow?" before deciding to kill it.
>>>
>>> That could conceivably work.  At least it wouldn't require running
>>> EXPLAIN in a failed transaction.
>>
>> I like this idea, in general, also.  Taking that to the next level would
>> be figuring out how you can do the same kind of thing through an
>> interactive psql session where the user running the query doesn't need
>> access to the database server or PG log files...
>>
>
> this is simple - it can be printed via elog(WARNING, ...) to the original console

theoretically we can show the current state via an EXPLAIN ANALYSE result

>
>> We can send a 'cancel query', how about a 'report on query' which
>> returns the plan and perhaps whatever other stats are easily available?
>
> there is only one question - what POSIX signal can we use?
>
> Pavel
>
>>
>> Thanks,
>>
>> Stephen


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql \l to accept patterns

2013-01-11 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
> On Tue, Jan 8, 2013 at 11:36 PM, Peter Eisentraut  wrote:
> > Good point.  What about the other suggestion about only displaying
> > databases by default that you can connect to?
> 
> I would tend not to adopt that suggestion, on the grounds that it has
> no obvious parallel with anything else psql hides by default.
> However, I don't feel quite as strongly about that case.

In the past, haven't we done this through the catalog tables themselves
rather than hacking up psql..?  pg_stats being a prime example?  With
the row-level-security discussion, there was talk about if we might be
able to apply that capability to catalogs also.  That strikes me as a
better option/approach than doing any of this in one particular
application (psql in this case) which connects to PG.

tbh, I'm not entirely against excluding databases that don't allow *any*
connections (key'd off datallowconns) to clear out template0/template1
from the default list, but I see that as different from "things I don't
have permissions to".

Thanks,

Stephen




[HACKERS] Version 4.10 of buildfarm client released.

2013-01-11 Thread Andrew Dunstan

Version 4.10 of the buildfarm client has been released.

Following GitHub's abandonment of their download feature, releases will 
now be published on the buildfarm server. The latest release will always 
be available at  
This particular release is available at 



The main feature of this release is that it does better logging of 
pg_upgrade failures (which is why I hope Heikki applies it to chipmunk 
right away ;-) )


The rest is minor bug fixes and very small enhancements.

cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Stephen Frost
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
> 2013/1/11 Stephen Frost :
> > We can send a 'cancel query', how about a 'report on query' which
> > returns the plan and perhaps whatever other stats are easily available?
> 
> there is only one question - what POSIX signal can we use?

This would be a new protocol message; psql doesn't ever send any actual
process signals to the backend processes...

Or at least, that's how I was thinking it would be implemented, in an
ideal world.  It's possible we could have some backend helper function
which a user could call on another connection to send a signal to the
first, after figuring out the pid, blah, blah.

Of course, I haven't gone and looked at how cancel query really works
under the hood, so I have no idea if this is at all possible. :)

Thanks,

Stephen




Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Simon Riggs
On 11 January 2013 16:31, Pavel Stehule  wrote:
> 2013/1/11 Stephen Frost :
>> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>>> Simon Riggs  writes:
>>> > An even better feature would be to be able to send a signal to a
>>> > running query to log its currently executing plan. That way you can
>>> > ask "Why so slow?" before deciding to kill it.
>>>
>>> That could conceivably work.  At least it wouldn't require running
>>> EXPLAIN in a failed transaction.
>>
>> I like this idea, in general, also.  Taking that to the next level would
>> be figuring out how you can do the same kind of thing through an
>> interactive psql session where the user running the query doesn't need
>> access to the database server or PG log files...
>>
>
> this is simple - it can be printed via elog(WARNING, ...) to the original console
>
>> We can send a 'cancel query', how about a 'report on query' which
>> returns the plan and perhaps whatever other stats are easily available?
>
> there is only one question - what POSIX signal can we use?

We already overload the signals, so its just a new type for the signal
handler to cope with.

See procsignal_sigusr1_handler()

If we do it this way we can have time-based explain logging, so log
anything that hits 60 minutes etc..

Stephen's idea of an additional protocol message to support this is
fine, but I wouldn't want to limit this capability so it can only be
invoked from the client. I'd like a sysadmin to be able to enquire about
other sessions.
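
The shape of that, outside the backend: the handler does nothing but set a
flag, and the work happens at the next safe check point, which is what
CHECK_FOR_INTERRUPTS() provides in the server. A self-contained sketch with
invented names:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Set from the handler, inspected at the next safe point. */
static volatile sig_atomic_t explain_pending = 0;

/* Analogous to procsignal_sigusr1_handler(): record the request, no more. */
static void
sigusr1_handler(int signo)
{
	(void) signo;
	explain_pending = 1;
}

int
main(void)
{
	struct sigaction sa;

	sa.sa_handler = sigusr1_handler;
	sigemptyset(&sa.sa_mask);
	sa.sa_flags = 0;
	sigaction(SIGUSR1, &sa, NULL);

	kill(getpid(), SIGUSR1);	/* simulate another session's request */

	for (int i = 0; i < 3; i++)	/* stands in for the executor's main loop */
	{
		/* the moral equivalent of CHECK_FOR_INTERRUPTS() */
		if (explain_pending)
		{
			explain_pending = 0;
			printf("would log the currently executing plan here\n");
		}
		sleep(1);
	}
	return 0;
}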

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] Version 4.10 of buildfarm client released.

2013-01-11 Thread Heikki Linnakangas

On 11.01.2013 18:38, Andrew Dunstan wrote:

The main feature of this release is that it does better logging of
pg_upgrade failures (which is why I hope Heikki applies it to chipmunk
right away ;-) )


Heh, ok :-)

I've upgraded it, and launched a new buildfarm run, so we'll know more in 
a moment. This box has a very small disk (a 4GB sd card), so it's quite 
possible it simply ran out of disk space.


There was a stray postgres instance running on the box, which I killed:

pgbfarm@raspberrypi ~ $ ps ax | grep pg_upg
 5993 pts/0S+ 0:00 grep --color=auto pg_upg
20200 ?S  0:00 
/home/pgbfarm/buildroot/HEAD/pgsql.8210/contrib/pg_upgrade/tmp_check/install/home/pgbfarm/buildroot/HEAD/inst/bin/postgres 
-F -c listen_addresses=


The directory /home/pgbfarm/buildroot/HEAD/pgsql.8210 did not exist 
anymore when I looked. Apparently the server was running within an 
already-deleted directory.


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Amit kapila

On Friday, January 11, 2013 9:27 PM Simon Riggs wrote:
On 11 January 2013 14:24, Amit Kapila  wrote:

>> http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C382852DE51@szxeml509-mbs
>
>> 1. However Heikki has pointed, it has some problems similar to for HOT
>> implementation and that is the reason we have done memcmp for HOT.
>> 2. Also we have found in initial readings that this doesn't have any
>> performance difference as compare to current Approach.

>OK, forget that idea.

>>> I've moved this to the next CF. I'm planning to review this one first.
>
>> Thank you.

> Just reviewing the patch now, making more sense with comments added.

>In heap_delta_encode() do we store which columns have changed? 

Not the attribute bumberwise, but offsetwise it is stored.

> Do we store the whole new column value?

Yes, please refer to the else part of the code

+   else
+   {
+   data_len = new_tup_off - change_off;
+   if ((bp + (2 * data_len)) - bstart >= result_max)
+   return false;
+ 
+   /* Copy the modified column data to the output buffer if present */
+   pglz_out_add(ctrlp, ctrlb, ctrl, bp, data_len, dp);
+ 

With Regards,
Amit Kapila.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Amit kapila
On Friday, January 11, 2013 7:59 PM Alvaro Herrera wrote:
Simon Riggs wrote:
> On 28 December 2012 10:21, Simon Riggs  wrote:
>

> I was also worried about the high variance in the results.  Those
> averages look rather meaningless.  Which would be okay, I think, because
> it'd mean that performance-wise the patch is a wash, 

For larger tuple sizes (>1000 && < 1800), the performance gain will be good.
Please refer performance results by me and Kyotaro-san in below links:

http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C383BEAAE32@szxeml509-mbx
http://archives.postgresql.org/message-id/20121228.170748.90887322.horiguchi.kyot...@lab.ntt.co.jp

In fact, I believe that for all tuples with length between 200 and 1800 bytes
and changed values around 15~20%, there will be both a performance gain and a
WAL reduction.
The reason for keeping the logic the same for smaller tuples (<=128 bytes) is
that there is not much performance difference, but the WAL reduction gain is
still visible.

> but it is still achieving a lower WAL volume, which is good.

With Regards,
Amit Kapila.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Simon Riggs
On 11 January 2013 17:08, Amit kapila  wrote:

>> Just reviewing the patch now, making more sense with comments added.
>
>>In heap_delta_encode() do we store which columns have changed?
>
> Not the attribute bumberwise, but offsetwise it is stored.

(Does that mean "numberwise"??)

Can we identify which columns have changed? i.e. 1st, 3rd and 12th columns?


>> Do we store the whole new column value?
>
> Yes, please refer to the else part of the code
>
> +   else
> +   {
> +   data_len = new_tup_off - change_off;
> +   if ((bp + (2 * data_len)) - bstart >= result_max)
> +   return false;
> +
> +   /* Copy the modified column data to the output buffer if present */
> +   pglz_out_add(ctrlp, ctrlb, ctrl, bp, data_len, dp);
> +
>

"modified column data" could mean either 1) (modified column) data
i.e. the data for the modified column, or 2) modified (column data)
i.e. the modified data in the column. I read that as (2) and didn't
look at the code. ;-)

Happy now that I know its (1)

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Simon Riggs
On 11 January 2013 17:30, Amit kapila  wrote:
> On Friday, January 11, 2013 7:59 PM Alvaro Herrera wrote:
> Simon Riggs wrote:
>> On 28 December 2012 10:21, Simon Riggs  wrote:
>>
>
>> I was also worried about the high variance in the results.  Those
>> averages look rather meaningless.  Which would be okay, I think, because
>> it'd mean that performance-wise the patch is a wash,
>
> For larger tuple sizes (>1000 && < 1800), the performance gain will be good.
> Please refer performance results by me and Kyotaro-san in below links:
>
> http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C383BEAAE32@szxeml509-mbx
> http://archives.postgresql.org/message-id/20121228.170748.90887322.horiguchi.kyot...@lab.ntt.co.jp

AFAICS your tests are highly variable, but as Alvaro says, they aren't
accurate enough to tell whether there's a regression.

I'll assume not and carry on.

(BTW the rejection of the null bitmap patch because of a performance
regression may also need to be reconsidered).

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Amit kapila

On Friday, January 11, 2013 11:09 PM Simon Riggs wrote:
On 11 January 2013 17:08, Amit kapila  wrote:

>>> Just reviewing the patch now, making more sense with comments added.
>
>>>In heap_delta_encode() do we store which columns have changed?
>
>> Not the attribute bumberwise, but offsetwise it is stored.

> (Does that mean "numberwise"??)
   Yes.

> Can we identify which columns have changed? i.e. 1st, 3rd and 12th columns?
  As per the current algorithm, we can't, as it is based on offsets.
  What I mean to say is that the basic idea to reconstruct the tuple during
  recovery is to copy data from the old tuple offset-wise (offsets stored in
  the encoded tuple) and use the new data (modified column data) from the
  encoded tuple directly. So we don't need exact column numbers.
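
To illustrate the shape of that reconstruction, here is a toy standalone
model (this is not the patch's pglz-based encoding, just the
copy-from-old-plus-literal-new idea):

#include <stdio.h>
#include <string.h>

/*
 * Toy encoded tuple: "copy this many bytes from the start of the old
 * tuple", "add these literal bytes", "copy this many bytes from the end
 * of the old tuple".  The real patch uses pglz's control-byte format.
 */
typedef struct
{
	int		copy_prefix;	/* bytes copied from the start of the old tuple */
	int		copy_suffix;	/* bytes copied from the end of the old tuple */
	char	add[64];		/* literal new bytes for the middle (toy limit) */
	int		add_len;
} Delta;

static void
delta_encode(const char *oldtup, int oldlen, const char *newtup, int newlen,
			 Delta *d)
{
	int		prefix = 0,
			suffix = 0;

	while (prefix < oldlen && prefix < newlen &&
		   oldtup[prefix] == newtup[prefix])
		prefix++;
	while (suffix < oldlen - prefix && suffix < newlen - prefix &&
		   oldtup[oldlen - 1 - suffix] == newtup[newlen - 1 - suffix])
		suffix++;

	d->copy_prefix = prefix;
	d->copy_suffix = suffix;
	d->add_len = newlen - prefix - suffix;
	memcpy(d->add, newtup + prefix, d->add_len);
}

/* Reconstruction: old tuple + delta -> new tuple, no column numbers needed. */
static int
delta_decode(const char *oldtup, int oldlen, const Delta *d, char *out)
{
	int		n = 0;

	memcpy(out + n, oldtup, d->copy_prefix);
	n += d->copy_prefix;
	memcpy(out + n, d->add, d->add_len);
	n += d->add_len;
	memcpy(out + n, oldtup + oldlen - d->copy_suffix, d->copy_suffix);
	return n + d->copy_suffix;
}

int
main(void)
{
	const char *oldtup = "id=1 name=blarg data=blarg";
	const char *newtup = "id=1 name=foo data=blarg";
	Delta		d;
	char		out[64];
	int			len;

	delta_encode(oldtup, strlen(oldtup), newtup, strlen(newtup), &d);
	len = delta_decode(oldtup, strlen(oldtup), &d, out);
	printf("reconstructed: %.*s (literal bytes kept: %d)\n",
		   len, out, d.add_len);
	return 0;
}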

With Regards,
Amit Kapila.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Amit kapila

On Friday, January 11, 2013 11:12 PM Simon Riggs wrote:
On 11 January 2013 17:30, Amit kapila  wrote:
> On Friday, January 11, 2013 7:59 PM Alvaro Herrera wrote:
> Simon Riggs wrote:
>> On 28 December 2012 10:21, Simon Riggs  wrote:
>>
>
>>> I was also worried about the high variance in the results.  Those
>>> averages look rather meaningless.  Which would be okay, I think, because
>>> it'd mean that performance-wise the patch is a wash,
>
>> For larger tuple sizes (>1000 && < 1800), the performance gain will be good.
>> Please refer performance results by me and Kyotaro-san in below links:
>
>> http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C383BEAAE32@szxeml509-mbx
>> http://archives.postgresql.org/message-id/20121228.170748.90887322.horiguchi.kyot...@lab.ntt.co.jp

>AFAICS your tests are badly variable, but as Alvaro says, they aren't
>accurate enough to tell there's a regression.

>I'll assume not and carry on.

> (BTW the rejection of the null bitmap patch because of a performance
> regression may also need to be reconsidered).

  I can post detailed numbers during the next commit fest.

With Regards,
Amit Kapila.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Is this a bug of spgist index in a heavy write condition?

2013-01-11 Thread Tom Lane
李海龙  writes:
> This time I will give you the contents of the table route_raw, the download 
> link is https://www.box.com/s/yxa4yxo6rcb3dzeaefmz or  
> http://dl.dropbox.com/u/203288/route_raw_spgist.sql.tar.gz .

Thanks for the data, but I still can't reproduce the problem :-(.

Can you confirm whether loading this dump into an empty database and
running your test case (15 instances of that script) fails for you?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] Version 4.10 of buildfarm client released.

2013-01-11 Thread Tom Lane
Heikki Linnakangas  writes:
> There was a stray postgres instance running on the box, which I killed:

FWIW, we've seen an awful lot of persistent buildfarm failures that
seemed to be due to port conflicts with leftover postmasters.  I think
the buildfarm script needs to try harder to ensure that it's killed
everything after a run.  No good ideas how to go about that exactly.
You could look through "ps" output for postmasters, but what if there's
a regular Postgres installation on the same box?  Can we just document
that the buildfarm had better not be run as "postgres"?  (If so, its
attempt to kill an unowned postmaster would fail anyway; else we need
a reliable way to tell which ones to kill.)

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Simon Riggs
On 11 January 2013 18:11, Amit kapila  wrote:

>> Can we identify which columns have changed? i.e. 1st, 3rd and 12th columns?
>   As per current algorithm, we can't as it is based on offsets.
>   What I mean to say is that the basic idea to reconstruct tuple during 
> recovery
>   is copy data from old tuple offset-wise (offsets stored in encoded tuple) 
> and use new data (modified column data)
>   from encoded tuple directly. So we don't need exact column numbers.

Another patch is going through next CF related to reassembling changes
from WAL records.

To do that efficiently, we would want to store a bitmap showing which
columns had changed in each update. Would that be an easy addition, or
is that blocked by some aspect of the current design?

The idea would be that we could re-construct an UPDATE statement that
would perform exactly the same change, yet without needing to refer to
a base tuple.
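
For what it's worth, a miniature model of such a bitmap, with invented code
just to show the shape (a real version would compare datums attribute by
attribute, NULLs included, and live in the WAL record):

#include <stdio.h>
#include <string.h>

/* One bit per column, set when the old and new values differ. */
static unsigned int
changed_columns(const char **oldvals, const char **newvals, int natts)
{
	unsigned int bitmap = 0;

	for (int i = 0; i < natts; i++)
		if (strcmp(oldvals[i], newvals[i]) != 0)
			bitmap |= 1u << i;
	return bitmap;
}

int
main(void)
{
	const char *oldvals[] = {"1", "blarg", "blarg"};
	const char *newvals[] = {"1", "foo",   "blarg"};

	/* prints 0x2: only the second column (name) changed */
	printf("changed: 0x%x\n", changed_columns(oldvals, newvals, 3));
	return 0;
}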

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)

2013-01-11 Thread Tom Lane
Andres Freund  writes:
> The attached patch:

> * adds configure checks for __VA_ARGS__ and __builtin_unreachable
>   support
> * provides a pg_unreachable() macro that expands to
>   __builtin_unreachable() or abort()
> * changes #define elog ... into #define elog(elevel, ...) if varargs are
>   available

Seems like a good thing to do --- will review and commit.

> It does *not* combine elog_start and elog_finish into one function if
> varargs are available although that brings a rather measurable
> size/performance benefit.

Since you've apparently already done the measurement: how much?
It would be a bit tedious to support two different infrastructures for
elog(), but if it's a big enough win maybe we should.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)

2013-01-11 Thread Andres Freund
On 2013-01-11 14:01:40 -0500, Tom Lane wrote:
> Andres Freund  writes:
> > The attached patch:
>
> > * adds configure checks for __VA_ARGS__ and __builtin_unreachable
> >   support
> > * provides a pg_unreachable() macro that expands to
> >   __builtin_unreachable() or abort()
> > * changes #define elog ... into #define elog(elevel, ...) if varargs are
> >   available
>
> Seems like a good thing to do --- will review and commit.

Thanks.

I guess you will catch that anyway, but afaik msvc supports __VA_ARGS__
these days (since v2005 seemingly) and I didn't add HAVE__VA_ARGS to the
respective pg_config.h.win32

> > It does *not* combine elog_start and elog_finish into one function if
> > varargs are available although that brings a rather measurable
> > size/performance benefit.
>
> Since you've apparently already done the measurement: how much?
> It would be a bit tedious to support two different infrastructures for
> elog(), but if it's a big enough win maybe we should.

Imo it's pretty definitely a big enough win. So big I have a hard time
believing it can be true without negative effects somewhere else.

On top of the patch you've replied to it saves somewhere around 80kb and
between 0.8% (-S -M prepared), 2% (-S -M simple) and 4% (own stuff I had
lying around), and it consistently gives 6%-7% in Pavel's testcase. With
today's HEAD, not with the one before your fix.

I attached the absolutely ugly and unready patch I used for testing.

Greetings,

Andres Freund

--
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c
index e710f22..c1f3200 100644
--- a/src/backend/utils/error/elog.c
+++ b/src/backend/utils/error/elog.c
@@ -1151,6 +1151,43 @@ getinternalerrposition(void)
 	return edata->internalpos;
 }
 
+#if defined(HAVE__VA_ARGS) && defined(COMBINE_ELOG)
+void
+elog_all(const char *filename, int lineno, const char *funcname, int elevel, const char *fmt,...)
+{
+	elog_start(filename, lineno, funcname);
+	{
+		ErrorData  *edata = &errordata[errordata_stack_depth];
+		MemoryContext oldcontext;
+
+		CHECK_STACK_DEPTH();
+
+		/*
+		 * Do errstart() to see if we actually want to report the message.
+		 */
+		errordata_stack_depth--;
+		errno = edata->saved_errno;
+		if (!errstart(elevel, edata->filename, edata->lineno, edata->funcname, NULL))
+			return;	/* nothing to do */
+
+		/*
+		 * Format error message just like errmsg_internal().
+		 */
+		recursion_depth++;
+		oldcontext = MemoryContextSwitchTo(ErrorContext);
+
+		EVALUATE_MESSAGE(edata->domain, message, false, false);
+
+		MemoryContextSwitchTo(oldcontext);
+		recursion_depth--;
+
+		/*
+		 * And let errfinish() finish up.
+		 */
+		errfinish(0);
+	}
+}
+#endif
 
 /*
  * elog_start --- startup for old-style API
diff --git a/src/include/utils/elog.h b/src/include/utils/elog.h
index d6e1054..9af530f 100644
--- a/src/include/utils/elog.h
+++ b/src/include/utils/elog.h
@@ -216,7 +216,15 @@ extern int	getinternalerrposition(void);
  * generated.
  *--
  */
-#ifdef HAVE__VA_ARGS
+#define COMBINE_ELOG
+#if defined(HAVE__VA_ARGS) && defined(COMBINE_ELOG)
+extern void elog_all(const char *filename, int lineno, const char *funcname, int elevel, const char *fmt,...)
+__attribute__((format(PG_PRINTF_ATTRIBUTE, 5, 6)));
+
+#define elog(elevel, ...)\
+	elog_all(__FILE__, __LINE__, PG_FUNCNAME_MACRO, elevel, __VA_ARGS__), \
+		((elevel) >= ERROR ? pg_unreachable() : (void) 0)
+#elif defined(HAVE__VA_ARGS)
 #define elog(elevel, ...)\
 	elog_start(__FILE__, __LINE__, PG_FUNCNAME_MACRO),	\
 		elog_finish(elevel, __VA_ARGS__),\

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm-members] Version 4.10 of buildfarm client released.

2013-01-11 Thread Andrew Dunstan


On 01/11/2013 01:39 PM, Tom Lane wrote:

Heikki Linnakangas  writes:

There was a stray postgres instance running on the box, which I killed:

FWIW, we've seen an awful lot of persistent buildfarm failures that
seemed to be due to port conflicts with leftover postmasters.  I think
the buildfarm script needs to try harder to ensure that it's killed
everything after a run.  No good ideas how to go about that exactly.
You could look through "ps" output for postmasters, but what if there's
a regular Postgres installation on the same box?  Can we just document
that the buildfarm had better not be run as "postgres"?  (If so, its
attempt to kill an unowned postmaster would fail anyway; else we need
a reliable way to tell which ones to kill.)

The buildfarm never builds with the standard port unless someone is 
quite perverse indeed. The logic that governs it is:


   $buildport = $PGBuild::conf{base_port};
   if ($branch =~ /REL(\d+)_(\d+)/)
   {
       $buildport += (10 * ($1 - 7)) + $2;
   }

Certainly the script should not be run as the standard postgres user.

Part of the trouble with detecting rogue postmasters it might have left 
lying around is that various things like to decide what port to run on, 
so it's not always easy for the buildfarm to know what it should be 
looking for.


For branches >= 9.2 this is somewhat ameliorated by the existence of 
EXTRA_REGRESS_OPTS, although we might need a slight adjustment to 
pg_upgrade's test.sh to stop it from trampling on that willy-nilly.


I'm certainly reluctant to be trying to kill anything we aren't dead 
certain is ours. We could possibly detect very early that there is a 
suspected rogue postmaster.


One major source of these rogue processes has almost certainly been this 
piece of logic in pg_ctl:


   * The postmaster should create postmaster.pid very soon after being
   * started.  If it's not there after we've waited 5 or more seconds,
   * assume startup failed and give up waiting.

When that happens, pg_ctl fails, and thus so does the buildfarm client, 
but if it has in fact started a postmaster that was just very slow in 
writing its pid file, it has left a postmaster lying around.


ISTR we discussed this phenomenon relatively recently, but I can't find 
a reference to it readily. In any case, nothing has changed on that front.


cheers

andrew




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree mulation (was xlogreader-v4)

2013-01-11 Thread Tom Lane
Andres Freund  writes:
> [ patch to mark elog() as non-returning if elevel >= ERROR ]

It strikes me that both in this patch, and in Peter's previous patch to
do this for ereport(), there is an opportunity for improvement.  Namely,
that the added code only has any use if elevel is a constant, which in
some places it isn't.  We don't really need a call to abort() to be
executed there, but the compiler knows no better and will have to
generate code to test the value of elevel and call abort().  Which
rather defeats the purpose of saving code; plus the compiler will still
not have any knowledge that code after the ereport isn't reached.
And another thing: what if the elevel argument isn't safe for multiple
evaluation?  No such hazard ever existed before these patches, so I'm
not very comfortable with adding one.  (Even if all our own code is
safe, there's third-party code to worry about.)

When we're using recent gcc, there is an easy fix, which is that the
macros could do this:

__builtin_constant_p(elevel) && (elevel) >= ERROR ? pg_unreachable() : (void) 0

This gets rid of both useless code and the risk of undesirable multiple
evaluation, while not giving up any optimization possibility that
existed before.  So I think we should add a configure test for
__builtin_constant_p() and do it like that when possible.
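
A compilable illustration of that construct (demo_elog, report, and the level
values are invented for the example; the fallback definitions exist only so
the demo builds on non-gcc compilers):

#include <stdio.h>
#include <stdlib.h>

#define WARNING 19
#define ERROR   20

#if defined(__GNUC__)
#define pg_unreachable() __builtin_unreachable()
#else
#define pg_unreachable() abort()
#define __builtin_constant_p(x) 0	/* demo fallback: never claim constancy */
#endif

/* Stands in for elog's guts: never returns when elevel >= ERROR. */
static void
report(int elevel, const char *msg)
{
	fprintf(stderr, "%d: %s\n", elevel, msg);
	if (elevel >= ERROR)
		exit(1);
}

/*
 * The proposed guard: the unreachability hint fires only when elevel is a
 * compile-time constant, so a non-constant elevel is never evaluated twice.
 */
#define demo_elog(elevel, msg) \
	( report((elevel), (msg)), \
	  (__builtin_constant_p(elevel) && (elevel) >= ERROR) \
		  ? pg_unreachable() : (void) 0 )

int
main(int argc, char **argv)
{
	int		level = (argc > 1) ? ERROR : WARNING;	/* not a constant */

	demo_elog(WARNING, "constant, non-fatal: execution continues");
	demo_elog(level, "non-constant: hint compiled out, no double evaluation");
	demo_elog(ERROR, "constant, fatal: compiler may assume no return");
	printf("not reached\n");
	return 0;
}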

That still leaves the question of what to do when the compiler doesn't
have __builtin_constant_p().  Should we use the coding that we have,
or give up and not try to mark the macros as non-returning?  I'm rather
inclined to do the latter, because of the multiple-evaluation-risk
argument.

Comments?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] GIN over array of ENUMs

2013-01-11 Thread Rod Taylor
I wish to create this data structure but GIN does not currently support an
array of ENUM. Is intarray a good place to look into adding ENUM support,
or is there already an operator class that supports enums which I simply
don't see at the moment?

This is being done as an alternative to a very large number of boolean
columns which are rarely true (under 1%).


CREATE TYPE feature AS ENUM ('item1', 'item2', 'item3');
CREATE TABLE test (id serial PRIMARY KEY, features feature[]);

CREATE INDEX test_features_idx ON test USING GIN (features, id);

ERROR:  data type feature[] has no default operator class for access method
"gin"
HINT:  You must specify an operator class for the index or define a default
operator class for the data type.


Thanks in advance,

Rod


Re: [HACKERS] PL/perl should fail on configure, not make

2013-01-11 Thread pgbuildfarm
On Thu, 10 Jan 2013, Andrew Dunstan wrote:

>
> On 01/10/2013 11:32 AM, Tom Lane wrote:
> > Andrew Dunstan  writes:
> > >   cc -o interp interp.c `perl -MExtUtils::Embed -e ccopts -e ldopts`
> > Hm.  It would be interesting to see what ccopts expands to, but if you
> > compare the command configure issued with the previous successful link
> > of plperl, there's nothing missing that ccopts would be likely to
> > supply.

$ perl -MExtUtils::Embed -e ccopts -e ldopts
-Wl,-E -Wl,-O1 -Wl,--as-needed
-L/usr/lib64/perl5/5.12.4/x86_64-linux/CORE -lperl -lnsl -ldl -lm -lcrypt
-lutil -lc
 -fno-strict-aliasing -pipe -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
-I/usr/lib64/perl5/5.12.4/x86_64-linux/CORE

>
> Without access to the machine it's pretty hard to know, so I was just
> speculating fairly wildly. If Jeremy can find out what the problem is that
> would be good, or if he can give us access to the machine it shouldn't be
> terribly hard to get to the bottom.

I'll see what I can do.  For now, I can tell you that it works with GCC
and fails with ICC.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PL/perl should fail on configure, not make

2013-01-11 Thread Jeremy Drake
On Fri, 11 Jan 2013, Tom Lane wrote:

> pgbuildf...@jdrake.com writes:
> Well, that's darn interesting in itself, because the error message looks
> like it should be purely a linker issue.  (And I note that your other
> buildfarm animal mongoose uses icc but is working anyway, so that's
> definitely not the whole story ...)

mongoose is 32-bit, and a really old version of icc.  okapi is 64-bit, and
a merely moderately old icc.  I should set up a dedicated buildfarm
VM with the latest version...

>
> Please note Aaron Swenson's offer of help too -- he's probably a lot
> better qualified than anybody else here to figure out what is going on
> with this.

I'm sorry, I didn't see this.  It must not have been CC'd to me, I don't
subscribe to -hackers anymore, I just couldn't keep up with the traffic
after I got a new job that wasn't postgres-related.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)

2013-01-11 Thread Andres Freund
On 2013-01-11 15:05:54 -0500, Tom Lane wrote:
> Andres Freund  writes:
> > [ patch to mark elog() as non-returning if elevel >= ERROR ]
>
> It strikes me that both in this patch, and in Peter's previous patch to
> do this for ereport(), there is an opportunity for improvement.  Namely,
> that the added code only has any use if elevel is a constant, which in
> some places it isn't.  We don't really need a call to abort() to be
> executed there, but the compiler knows no better and will have to
> generate code to test the value of elevel and call abort().
> Which rather defeats the purpose of saving code; plus the compiler will still
> not have any knowledge that code after the ereport isn't reached.

Yea, I think that's why __builtin_unreachable() is a performance
benefit in comparison with abort().
Both are a benefit over not doing either, btw, in my measurements.

> And another thing: what if the elevel argument isn't safe for multiple
> evaluation?  No such hazard ever existed before these patches, so I'm
> not very comfortable with adding one.  (Even if all our own code is
> safe, there's third-party code to worry about.)

Hm. I am not really too scared about those dangers I have to admit. If
anything, I care more about ereport than elog.
Placing code with side effects as elevel arguments just seems a bit too
absurd.

> When we're using recent gcc, there is an easy fix, which is that the
> macros could do this:

Recent as in 3.1 or so ;)

Its really too bad that there's no __builtin_pure() or
__builtin_side_effect_free() or somesuch :(.

> __builtin_constant_p(elevel) && (elevel) >= ERROR ? pg_unreachable() : (void) 0
>
> This gets rid of both useless code and the risk of undesirable multiple
> evaluation, while not giving up any optimization possibility that
> existed before.  So I think we should add a configure test for
> __builtin_constant_p() and do it like that when possible.

I think there might be code somewhere that decides between FATAL and
PANIC which would not hit that path then. Imo not a good enough reason
not to do it, but I wanted to mention it anyway.

> That still leaves the question of what to do when the compiler doesn't
> have __builtin_constant_p().  Should we use the coding that we have,
> or give up and not try to mark the macros as non-returning?  I'm rather
> inclined to do the latter, because of the multiple-evaluation-risk
> argument.

I dislike the latter somewhat as it would mean not to give that
information at all to msvc and others which seems a bit sad. But I don't
feel particularly strongly.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ToDo: log plans of cancelled queries

2013-01-11 Thread Simon Riggs
On 11 January 2013 16:52, Simon Riggs  wrote:

> We already overload the signals, so its just a new type for the signal
> handler to cope with.
>
> See procsignal_sigusr1_handler()

I roughed up something to help you here... this is like 50% of a patch.

pg_explain_backend() calls a SIGUSR1 variant which then allows a call
to RunDynamicExplain() during any call to CHECK_FOR_INTERRUPTS()

That only works when something in the executor has called
SetDynamicExplain(), which later needs to be Unset...

So that's all you need to invoke a dynamic EXPLAIN via a sysadmin
function. All you need to do is generate an EXPLAIN and dump it
somewhere useful, like the server log. Over to you Pavel. There's a
patch somewhere by Greg Stark that generated a plan from a running
server for progress bar stuff, but that was like 5 years ago nearly.
But the explain invocation stuff from that might be useful as a guide.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


signal_dynamic_explain.v1.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)

2013-01-11 Thread Tom Lane
Andres Freund  writes:
> On 2013-01-11 15:05:54 -0500, Tom Lane wrote:
>> And another thing: what if the elevel argument isn't safe for multiple
>> evaluation?  No such hazard ever existed before these patches, so I'm
>> not very comfortable with adding one.  (Even if all our own code is
>> safe, there's third-party code to worry about.)

> Hm. I am not really too scared about those dangers I have to admit.

I agree the scenario doesn't seem all that probable, but what scares me
here is that if we use "__builtin_constant_p(elevel) && (elevel) >= ERROR"
in some builds, and just "(elevel) >= ERROR" in others, then if there is
any code with a multiple-evaluation hazard, it is only buggy in the
latter builds.  That's sufficiently nasty that I'm willing to give up
an optimization that we never had before 9.3 anyway.

> I dislike the latter somewhat as it would mean not to give that
> information at all to msvc and others which seems a bit sad. But I don't
> feel particularly strongly.

It's not like our code isn't rather gcc-biased anyway.  I'd keep the
optimization for other compilers if possible, but not at the cost of
introducing bugs that occur only on those compilers.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)

2013-01-11 Thread Andres Freund
On 2013-01-11 15:52:19 -0500, Tom Lane wrote:
> Andres Freund  writes:
> > On 2013-01-11 15:05:54 -0500, Tom Lane wrote:
> >> And another thing: what if the elevel argument isn't safe for multiple
> >> evaluation?  No such hazard ever existed before these patches, so I'm
> >> not very comfortable with adding one.  (Even if all our own code is
> >> safe, there's third-party code to worry about.)
> 
> > Hm. I am not really too scared about those dangers I have to admit.
> 
> I agree the scenario doesn't seem all that probable, but what scares me
> here is that if we use "__builtin_constant_p(elevel) && (elevel) >= ERROR"
> in some builds, and just "(elevel) >= ERROR" in others, then if there is
> any code with a multiple-evaluation hazard, it is only buggy in the
> latter builds.  That's sufficiently nasty that I'm willing to give up
> an optimization that we never had before 9.3 anyway.

Well, why use it at all then and not just rely on
__builtin_unreachable() in any recent gcc (and llvm fwiw) and abort()
otherwise? Then the code is small for anything recent (gcc 4.4 afair)
and always consistently buggy.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] LLVM / CLang / PostgreSQL

2013-01-11 Thread Joshua D. Drake


Hello,

Has anyone played with this? Seen any results? It looks like most 
testing is being done on Mac OSX (via buildfarm).


JD
--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC
@cmdpromptinc - 509-416-6579


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] LLVM / CLang / PostgreSQL

2013-01-11 Thread Peter Eisentraut
On 1/11/13 4:03 PM, Joshua D. Drake wrote:
> Has anyone played with this? Seen any results? It looks like most
> testing is being done on Mac OSX (via buildfarm).

Works fine.  We also have non-OSX tests on the buildfarm for it.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)

2013-01-11 Thread Tom Lane
Andres Freund  writes:
> On 2013-01-11 15:52:19 -0500, Tom Lane wrote:
>> I agree the scenario doesn't seem all that probable, but what scares me
>> here is that if we use "__builtin_constant_p(elevel) && (elevel) >= ERROR"
>> in some builds, and just "(elevel) >= ERROR" in others, then if there is
>> any code with a multiple-evaluation hazard, it is only buggy in the
>> latter builds.  That's sufficiently nasty that I'm willing to give up
>> an optimization that we never had before 9.3 anyway.

> Well, why use it at all then and not just rely on
> __builtin_unreachable() in any recent gcc (and llvm fwiw) and abort()
> otherwise? Then the code is small for anything recent (gcc 4.4 afair)
> and always consistently buggy.

Uh ... because it's *not* unreachable if elevel < ERROR.  Otherwise we'd
just mark errfinish as __attribute__((noreturn)) and be done.  Of course,
that's a gcc-ism too.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)

2013-01-11 Thread Andres Freund
On 2013-01-11 16:16:58 -0500, Tom Lane wrote:
> Andres Freund  writes:
> > On 2013-01-11 15:52:19 -0500, Tom Lane wrote:
> >> I agree the scenario doesn't seem all that probable, but what scares me
> >> here is that if we use "__builtin_constant_p(elevel) && (elevel) >= ERROR"
> >> in some builds, and just "(elevel) >= ERROR" in others, then if there is
> >> any code with a multiple-evaluation hazard, it is only buggy in the
> >> latter builds.  That's sufficiently nasty that I'm willing to give up
> >> an optimization that we never had before 9.3 anyway.
>
> > Well, why use it at all then and not just rely on
> > __builtin_unreachable() in any recent gcc (and llvm fwiw) and abort()
> > otherwise? Then the code is small for anything recent (gcc 4.4 afair)
> > and always consistently buggy.
>
> Uh ... because it's *not* unreachable if elevel < ERROR.  Otherwise we'd
> just mark errfinish as __attribute((noreturn)) and be done.  Of course,
> that's a gcc-ism too.

Well, I mean with the double evaluation risk.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Porting to Haiku

2013-01-11 Thread Mark Hellegers
Hello,

I have read the developers FAQ which directed me to this mailing list.
I am still using Postgresql on Zeta (descendant of BeOS), but it is 
becoming time to upgrade, so I installed Haiku (open source 
reimplementation of BeOS) on another machine and downloaded the 9.2.1 
source of Postgresql.
I used an older version of Postgresql which still contained the BeOS 
support to see what needed to be done to get 9.2.1 working on Haiku.
Now that I have Postgresql running (with some issues I am still 
investigating), I would like to create patches to get Haiku support 
added to the official tree of Postgresql.
I do have some questions before I submit a patch:
- Assuming all patches are applied to get Postgresql running, what does 
it require to keep it in there? I noticed you removed the BeOS port, 
because no one was maintaining it. What does it take to maintain a 
port?
- Can I submit patches for smaller parts or do I need to submit all the 
patches in one go? For example, can I submit the patch for the 
configure part first and other changes later?
- I have a problem compiling Postgresql after I installed iodbc. Iodbc 
also contains a header called sqltypes.h, which the compiler picks up 
before the one in the ecpg include directory as it is in the general 
include directory for the system. Is that something someone has seen 
before and knows what is wrong? It could be some mistake I made in my 
changes.

I am planning to get the latest sources from git and apply my patches 
on that source as I am assuming that is preferred and I don't need 
9.2.1 specifically.

Kind regards,

Mark

--
Spangalese for beginners:

`Halley mak ranfuer.'
`Your infant has swallowed my grenade.'




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)

2013-01-11 Thread Tom Lane
Andres Freund  writes:
> On 2013-01-11 16:16:58 -0500, Tom Lane wrote:
>> Uh ... because it's *not* unreachable if elevel < ERROR.  Otherwise we'd
>> just mark errfinish as __attribute__((noreturn)) and be done.  Of course,
>> that's a gcc-ism too.

> Well, I mean with the double evaluation risk.

Oh, are you saying you don't want to make the __builtin_constant_p
addition?  I'm not very satisfied with that answer.  Right now, Peter's
patch has added a class of bugs that never existed before 9.3, and yours
would add more.  It might well be that those classes are empty ... but
*we can't know that*.  I don't think that keeping a new optimization for
non-gcc compilers is worth that risk.  Postgres is already full of
gcc-only optimizations, anyway.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Porting to Haiku

2013-01-11 Thread Andrew Dunstan


On 01/11/2013 04:36 PM, Mark Hellegers wrote:

> Hello,
>
> I have read the developers FAQ which directed me to this mailing list.
> I am still using Postgresql on Zeta (descendant of BeOS), but it is
> becoming time to upgrade, so I installed Haiku (open source
> reimplementation of BeOS) on another machine and downloaded the 9.2.1
> source of Postgresql.
> I used an older version of Postgresql which still contained the BeOS
> support to see what needed to be done to get 9.2.1 working on Haiku.
> Now that I have Postgresql running (with some issues I am still
> investigating), I would like to create patches to get Haiku support
> added to the official tree of Postgresql.
> I do have some questions before I submit a patch:
> - Assuming all patches are applied to get Postgresql running, what does
> it require to keep it in there? I noticed you removed the BeOS port,
> because no one was maintaining it. What does it take to maintain a
> port?
> - Can I submit patches for smaller parts or do I need to submit all the
> patches in one go? For example, can I submit the patch for the
> configure part first and other changes later?
> - I have a problem compiling Postgresql after I installed iodbc. Iodbc
> also contains a header called sqltypes.h, which the compiler picks up
> before the one in the ecpg include directory as it is in the general
> include directory for the system. Is that something someone has seen
> before and knows what is wrong? It could be some mistake I made in my
> changes.
>
> I am planning to get the latest sources from git and apply my patches
> on that source as I am assuming that is preferred and I don't need
> 9.2.1 specifically.




I think we'd only support a new platform prospectively, so yes, use git 
master.


You can submit patches in small parts, but I would really hope that the 
changes required are not going to be very big, so there's not a lot of 
point. If they are going to be huge it might in fact make us think twice 
about the value of supporting it.


You will make us a lot more comfortable about supporting Haiku if you 
will undertake to set up and maintain a buildfarm animal running it. One 
thing we don't want is a new port that we have no way of making sure stays 
current.


cheers

andrew




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Porting to Haiku

2013-01-11 Thread Tom Lane
"Mark Hellegers"  writes:
> - Assuming all patches are applied to get Postgresql running, what does 
> it require to keep it in there? I noticed you removed the BeOS port, 
> because no one was maintaining it. What does it take to maintain a 
> port?

These days, the expectation is that somebody runs a buildfarm member
on that platform:
http://buildfarm.postgresql.org/index.html
You might want to look into whether you can get the buildfarm script
to run before you go too far.

> - Can I submit patches for smaller parts or do I need to submit all the 
> patches in one go? For example, can I submit the patch for the 
> configure part first and other changes later?

Well, it'd depend on how invasive the earlier patches are.  We might
want to see some evidence that the project will reach completion before
we hack up the code too much.

> - I have a problem compiling Postgresql after I installed iodbc. Iodbc 
> also contains a header called sqltypes.h, which the compiler picks up 
> before the one in the ecpg include directory as it is in the general 
> include directory for the system. Is that something someone has seen 
> before and knows what is wrong? It could be some mistake I made in my 
> changes.

Hm, I don't recall this having been reported before.  Check the order
of -I switches in the ecpg makefiles.  It certainly sounds like a risk
that we might want to fix independently of any particular port.

> I am planning to get the latest sources from git and apply my patches 
> on that source as I am assuming that is preferred and I don't need 
> 9.2.1 specifically.

Yeah, we would probably not consider back-patching such changes, so you
may as well work against master.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)

2013-01-11 Thread Andres Freund
On 2013-01-11 16:28:13 -0500, Tom Lane wrote:
> Andres Freund  writes:
> > On 2013-01-11 16:16:58 -0500, Tom Lane wrote:
> >> Uh ... because it's *not* unreachable if elevel < ERROR.  Otherwise we'd
> >> just mark errfinish as __attribute((noreturn)) and be done.  Of course,
> >> that's a gcc-ism too.
> 
> > Well, I mean with the double evaluation risk.
> 
> Oh, are you saying you don't want to make the __builtin_constant_p
> addition?

Yes, that was what I wanted to say.

> I'm not very satisfied with that answer.  Right now, Peter's
> patch has added a class of bugs that never existed before 9.3, and yours
> would add more.  It might well be that those classes are empty ... but
> *we can't know that*.  I don't think that keeping a new optimization for
> non-gcc compilers is worth that risk.  Postgres is already full of
> gcc-only optimizations, anyway.

ISTM that ereport() already has such strange behaviour wrt evaluation of
its arguments (i.e. evaluating them only when the message is really
logged) that this doesn't seem to add a real additional risk.
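
To illustrate that existing behaviour (the expensive function in this
example is hypothetical):

/*
 * The auxiliary-report arguments live in the "rest" list, which is only
 * evaluated when errstart() decides the message will actually be
 * emitted.  The hypothetical count_tuples_slowly() below therefore never
 * runs unless DEBUG2 is being logged.
 */
ereport(DEBUG2,
		(errmsg("scanned %d tuples", count_tuples_slowly(rel))));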
It also means that we would potentially need to add more warning-quieting
returns et al to keep the warning level low on non-gcc-compatible
compilers (probably only MSVC matters here), warnings which none of us
see on our primary compilers.

Anyway, you've got a strong opinion and I don't, so ...

Andres

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)

2013-01-11 Thread Tom Lane
Andres Freund  writes:
> On 2013-01-11 16:28:13 -0500, Tom Lane wrote:
>> I'm not very satisfied with that answer.  Right now, Peter's
>> patch has added a class of bugs that never existed before 9.3, and yours
>> would add more.  It might well be that those classes are empty ... but
>> *we can't know that*.  I don't think that keeping a new optimization for
>> non-gcc compilers is worth that risk.  Postgres is already full of
>> gcc-only optimizations, anyway.

> ISTM that ereport() already has such strange behaviour wrt evaluation of
> its arguments (i.e. evaluating them only when the message is really
> logged) that this doesn't seem to add a real additional risk.

Hm ... well, that's a point.  I may be putting too much weight on the
fact that any such bug for elevel would be a new one that never existed
before.  What do others think?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Database object names and libpq in UTF-8 locale on Windows

2013-01-11 Thread Andrew Dunstan


On 11/21/2012 12:07 PM, Tom Lane wrote:
> Andrew Dunstan  writes:
>> On 11/21/2012 11:11 AM, Tom Lane wrote:
>>> I'm not sure that's the only place we're doing this ...
>> Oh, Hmm, darn. Where else do you think we might?
> Dunno, but grepping for isupper and/or tolower should find any such
> places.





I've eliminated some false positives from the grep results, and here's a
list of the remaining files doing things that could be suspect. Haven't
had time to dig more.


   src/backend/regex/regc_locale.c
   src/backend/regex/regcomp.c
   src/backend/regex/regc_pg_locale.c
   src/backend/tsearch/ts_locale.c
   src/backend/utils/adt/datetime.c
   src/backend/utils/adt/formatting.c
   src/backend/utils/adt/inet_net_pton.c
   src/backend/utils/adt/like.c
   src/backend/utils/misc/tzparser.c


I'd be tempted to say we should fix up the identifier issue regardless 
of the rest, though.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Porting to Haiku

2013-01-11 Thread Mark Hellegers
> "Mark Hellegers"  writes:
> > - Assuming all patches are applied to get Postgresql running, what 
> > does 
> > it require to keep it in there? I noticed you removed the BeOS 
> > port, 
> > because no one was maintaining it. What does it take to maintain a 
> > port?
> 
> These days, the expectation is that somebody runs a buildfarm member
> on that platform:
> http://buildfarm.postgresql.org/index.html
> You might want to look into whether you can get the buildfarm script
> to run before you go too far.

I will check that this weekend. Thanks.
Does this buildfarm member need to run continuously?

> > - Can I submit patches for smaller parts or do I need to submit all 
> > the 
> > patches in one go? For example, can I submit the patch for the 
> > configure part first and other changes later?
> 
> Well, it'd depend on how invasive the earlier patches are.  We might
> want to see some evidence that the project will reach completion 
> before
> we hack up the code too much.

I just did a quick check of my changes and the changes seem minor.  The 
only major changes are the port specific parts in src/backend/port.
Postgresql is running. I can login with psql and create a table, insert 
data and query it. ODBC access is also working. There does seem to be a 
problem with starting autovacuum, which I am still investigating.
 
> > - I have a problem compiling Postgresql after I installed iodbc. 
> > Iodbc 
> > also contains a header called sqltypes.h, which the compiler picks 
> > up 
> > before the one in the ecpg include directory as it is in the 
> > general 
> > include directory for the system. Is that something someone has 
> > seen 
> > before and knows what is wrong? It could be some mistake I made in 
> > my 
> > changes.
> 
> Hm, I don't recall this having been reported before.  Check the order
> of -I switches in the ecpg makefiles.  It certainly sounds like a 
> risk
> that we might want to fix independently of any particular port.

I will try to check on that this weekend.
I also found (I think) two places where the implementation of a 
function contained slightly different parameters from the declaration 
which breaks on Haiku as an int is not the same as int32 (see src/
backend/catalog/dependency.c function deleteOneObject for one of 
those).
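
To illustrate what I mean (the names below are made up; one real instance 
is in deleteOneObject):

/*
 * Hypothetical illustration: if int32 is a typedef for a type distinct
 * from plain int (as with the system headers here), the declaration and
 * the definition below no longer match and the build fails, even though
 * both types are 32 bits wide.
 */
typedef long my_int32;			/* stand-in for the system's int32 */

extern void delete_one_object(my_int32 flags);	/* declaration */

void
delete_one_object(int flags)	/* definition: int != my_int32 */
{
	(void) flags;
}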
 
> > I am planning to get the latest sources from git and apply my 
> > patches 
> > on that source as I am assuming that is preferred and I don't need 
> > 9.2.1 specifically.
> 
> Yeah, we would probably not consider back-patching such changes, so 
> you
> may as well work against master.
> 


--
Spangalese for beginners:

`Halley mak ranfuer.'
`Your infant has swallowed my grenade.'




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Porting to Haiku

2013-01-11 Thread Tom Lane
"Mark Hellegers"  writes:
>> You might want to look into whether you can get the buildfarm script
>> to run before you go too far.

> I will check that this weekend. Thanks.
> Does this buildfarm member need to run continuously?

Once a day is sufficient if that's all you can manage, though several
times a day is nicer.

> I just did a quick check of my changes and the changes seem minor.

Yeah, as Andrew remarked, it shouldn't be that hard given that the code
used to run on BeOS.  You should check the commits around where that
support was removed, if you didn't already.

> I also found (I think) two places where the implementation of a 
> function contained slightly different parameters from the declaration 
> which breaks on Haiku as an int is not the same as int32 (see src/
> backend/catalog/dependency.c function deleteOneObject for one of 
> those).

This is the sort of thing that gets noticed much quicker if there's a
buildfarm member that complains about it ...

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] AIX buildfarm member

2013-01-11 Thread John R Pierce

On 1/11/2013 6:56 AM, Steve Singer wrote:
> If someone else in the community is running PostgreSQL on AIX then it
> would be good if they setup a buildfarm member, perhaps with a more
> recent version of AIX.


I am and I'd love to, however, sigh, it's deep behind corporate firewalls 
and any attempt at doing something like that would land me in all kinda 
hurt. I run AIX 6.1 and 7 LPARs (IBM's equivalent of a VM) on our 
AIX dev box, which is a 2-core Power 6.   Another complexity is that the 
licensing for XL C is per 'seat' (named user).


I'm really surprised IBM doesn't have some program to support a worthy 
project like this.





--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] AIX buildfarm member

2013-01-11 Thread Josh Berkus

> I'm really surprised IBM doesn't have some program to support a worthy
> project like this.

They probably do.  The problem is that we don't know who to contact.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-11 Thread Amit kapila
On Saturday, January 12, 2013 12:23 AM Simon Riggs wrote:
On 11 January 2013 18:11, Amit kapila  wrote:

>>> Can we identify which columns have changed? i.e. 1st, 3rd and 12th columns?
>>   As per current algorithm, we can't as it is based on offsets.
>>   What I mean to say is that the basic idea to reconstruct tuple during 
>> recovery
>>   is copy data from old tuple offset-wise (offsets stored in encoded tuple) 
>> and use new data (modified column data)
>>   from encoded tuple directly. So we don't need exact column numbers.

> Another patch is going through next CF related to reassembling changes
> from WAL records.

> To do that efficiently, we would want to store a bitmap showing which
> columns had changed in each update. Would that be an easy addition, or
> is that blocked by some aspect of the current design?

I don't think it should be a problem, as it can follow the current way of
WAL tuple construction that we use in this patch when the old and new
buffers are different. This differentiation is done in log_heap_update.

IMO, for now we can leave this optimization out for the bitmap-storing
patch (the way we have already done when the updated tuple is not on the
same page), and later evaluate whether the optimization can be done for
that patch's feature.
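
For concreteness, a rough sketch of how such a bitmap could be collected
(the helper usage here is an assumption, not the patch's actual code):

#include "postgres.h"
#include "access/htup_details.h"
#include "nodes/bitmapset.h"
#include "utils/datum.h"

/*
 * Collect the attribute numbers whose values differ between the old and
 * new versions of an updated tuple.  Toast considerations are ignored
 * here; datumIsEqual() compares the raw datums.
 */
static Bitmapset *
changed_columns(TupleDesc tupdesc, HeapTuple oldtup, HeapTuple newtup)
{
	Bitmapset  *modified = NULL;
	int			attnum;

	for (attnum = 1; attnum <= tupdesc->natts; attnum++)
	{
		bool		old_isnull;
		bool		new_isnull;
		Datum		old_val = heap_getattr(oldtup, attnum, tupdesc, &old_isnull);
		Datum		new_val = heap_getattr(newtup, attnum, tupdesc, &new_isnull);

		if (old_isnull != new_isnull ||
			(!old_isnull &&
			 !datumIsEqual(old_val, new_val,
						   tupdesc->attrs[attnum - 1]->attbyval,
						   tupdesc->attrs[attnum - 1]->attlen)))
			modified = bms_add_member(modified, attnum);
	}

	return modified;
}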
  

> The idea would be that we could re-construct an UPDATE statement that
> would perform exactly the same change, yet without needing to refer to
> a base tuple.

I understand that such functionality would be needed by logical
replication.


With Regards,
Amit Kapila. 

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Wide area replication postgres 9.1.6 slon 2.1.2 large table failure.

2013-01-11 Thread Tory M Blue
So I started this thread on the slon forum, and they mentioned that I/we
should ask here.

Postgres 9.1.4 slon 2.1.1
-and-
Postgres 9.1.6 slon 2.1.2

Scenario:

Node 1 is on a gig circuit and is the master (West Coast)

Node 2 is also on a gig circuit and is the slave (Georgia)

Symptoms: slon immediately dies after transferring the biggest table in
the set (this happens with 2 of 3 sets; the set that actually completes
has no large tables).

Set 1 has a table that takes just under 6000 seconds to copy, and set 2
has a table that takes double that; in both cases the copy of the large
table itself completes.

1224459-2013-01-11 14:21:10 PST CONFIG remoteWorkerThread_1: 5760.913
seconds to copy table "cls"."listings"
1224560-2013-01-11 14:21:10 PST CONFIG remoteWorkerThread_1: copy table
"cls"."customers"
1224642-2013-01-11 14:21:10 PST CONFIG remoteWorkerThread_1: Begin COPY of
table "cls"."customers"
1224733-2013-01-11 14:21:10 PST ERROR  remoteWorkerThread_1: "select
"_admissioncls".copyFields(8);"  <--- this has the proper data
1224827:2013-01-11 14:21:10 PST WARN   remoteWorkerThread_1: data copy for
set 1 failed 1 times - sleep 15 seconds

Now in terms of postgres, if I do a copy from node 1 to node 2, the large
table (<2 hours) completes without issue.

From Node 2:
-bash-4.1$ psql -h idb02 -d admissionclsdb -c "copy cls.listings to stdout" | wc
 4199441 600742784 6621887401

This worked fine.

I get no errors in the postgres logs, there is no network disconnect, and
since I can do a copy over the wire that completes, I'm at a loss. I don't
know what to look at, what to look for, or what to do. Obviously this is
the wrong place for slon issues.

One of the slon developers stated;
"I wonder if there's something here that should get bounced over to
pgsql-hackers or such; we're poking at a scenario here where the use
of COPY to stream data between systems is proving troublesome, and
perhaps there may be meaningful opinions over there on that."

If the same table that sits at the end of a failed slon attempt completes
fine with a plain COPY, I'm just not sure what is going on.

Any suggestions? Please ask for more data. I can do anything to the slave
node; it's a bit tougher on the source, but I can arrange to make changes
to it if need be.


I just upgraded to 9.1.6 and slon 2.1.2, but prior tests were on 9.1.4
and slon 2.1.1, and on a mix of postgres 9.1.4/slon 2.1.1 and postgres
9.1.6/slon 2.1.1 (node 2).

The other difference is that node 1 is running on Fedora 12 and node 2 is
running CentOS 6.2.

Thanks in advance
Tory


Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-11 Thread Amit kapila

On Friday, January 11, 2013 10:03 PM Boszormenyi Zoltan wrote:
> Hi,

On 2013-01-09 10:08, Amit kapila wrote:
> On Sunday, January 06, 2013 10:26 AM Boszormenyi Zoltan wrote:
> On Saturday, January 05, 2013 12:35 PM Boszormenyi Zoltan wrote:
> On 2013-01-05 05:58, Amit kapila wrote:
>> On Friday, January 04, 2013 10:57 PM Boszormenyi Zoltan wrote:
>>> Hi,
>>> I am reviewing your patch.
>> Thank you very much.
>>
> All comments are addressed as below:
>


>>> Can't we add a new LWLock and use it as a critical section instead
>>> of waiting for 10 seconds?
>> Added LWLock and removed sleep logic. After adding the LWLock the lock file 
>> name can be changed
>> as temp file. but presently the patch contains the name as lock file only. 
>> please provide the
>> suggestions.

> Current state of affairs:

> a.) The whole operation is protected by the LWLock so only one backend
> can do this any time. Clean operation is ensured.

> b.) The code creates the lock file and fails if the file exists. This 
> protects
> against nasties done externally. The current logic to bail out with an error
> is OK, I can know that there is a stupid intruder on the system. But then
> they can also remove the original .auto.conf file too and anything else and
> I lost anyway.

> c.) In reaper(), the code cleans up the lock file and since there can
> be only one lock file, no need to search for temp files, a straightforward
> unlink() call does its job.


> This may be changed in two ways to make it more comfortable:

> 1. Simply unlink() and retry with creat() once.

> Comments:
> unlink() may fail under Windows if someone keeps this file open.
> Then you can either give up with ereport(ERROR) just like now.

I think, as this is an internal file, the user is not supposed to play
with it, and in case he does so, the current implementation is okay:
open() with O_CREAT gives an error and the message suggests as much.
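
For illustration, a rough sketch of the intended flow (the LWLock name
and error wording here are assumptions, not the patch's actual code):

#include "postgres.h"

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

#include "storage/lwlock.h"

static void
write_auto_conf_locked(const char *LockFileName)
{
	int			fd;

	/* Serialize writers; AutoFileLock is a hypothetical dedicated LWLock */
	LWLockAcquire(AutoFileLock, LW_EXCLUSIVE);

	/* O_EXCL makes a leftover or tampered-with lock file fail the open */
	fd = open(LockFileName, O_CREAT | O_EXCL | O_WRONLY, S_IRUSR | S_IWUSR);
	if (fd < 0)
	{
		LWLockRelease(AutoFileLock);
		ereport(ERROR,
				(errcode_for_file_access(),
				 errmsg("could not create lock file \"%s\": %m",
						LockFileName)));
	}

	/* ... write the new postgresql.auto.conf contents, fsync, rename ... */

	close(fd);
	unlink(LockFileName);
	LWLockRelease(AutoFileLock);
}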



> 2. Create the tempfile. There will be one less error case, the file creation
> may only fail in case you are out of disk space.

> Creating the tempfile is tempting. But then reaper() will need a little
> more code to remove all the tempfiles.

By reaper, do you mean the CATCH block?

In that case, I would prefer to keep the current implementation as it is.

Actually, I was thinking of just changing the extension from the current
.lock to .tmp, in which case the same problems can happen with that as
well.

> I just noticed that you are using "%m" in format strings twice. man 3 printf 
> says:

> m  (Glibc extension.)  Print output of strerror(errno). No argument
> is required.

> This doesn't work anywhere else, only on GLIBC systems, it means Linux.
> I also like the brevity of this extension but PostgreSQL is a portable system.
> You should use %s and strerror(errno) as argument explicitly.

%m is used at other places in the code as well; elog.c expands %m itself
before the format string reaches the system printf, so within
ereport()/elog() it is portable.

Thank you for feedback.

With Regards,
Amit Kapila.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Validation in to_date()

2013-01-11 Thread Hitoshi Harada
to_date() doesn't check the date range, which results in unreadable
data like this.

foo=# create table t as select to_date('20000-12-10 BC', 'YYYY-MM-DD
BC')::timestamp;
SELECT 1
foo=# table t;
ERROR:  timestamp out of range

Attached is a patch to add an IS_VALID_JULIAN() check, as we do in
date_in().  I think to_date() should do the same, because it is the entry
point where sanity should be checked.
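
For reference, the check in to_date() would look roughly like this,
mirroring date_in() (the exact message wording is my assumption):

/*
 * Reject dates outside the supported Julian range, as date_in() does.
 * tm has just been filled in from the parsed input string.
 */
if (!IS_VALID_JULIAN(tm.tm_year, tm.tm_mon, tm.tm_mday))
	ereport(ERROR,
			(errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
			 errmsg("date out of range: \"%s\"",
					text_to_cstring(date_txt))));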

Thanks,
-- 
Hitoshi Harada


to_date_check.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-11 Thread Boszormenyi Zoltan

On 2013-01-12 06:51, Amit kapila wrote:

> On Friday, January 11, 2013 10:03 PM Boszormenyi Zoltan wrote:
>> Hi,
>
> On 2013-01-09 10:08, Amit kapila wrote:
>> On Sunday, January 06, 2013 10:26 AM Boszormenyi Zoltan wrote:
>> On Saturday, January 05, 2013 12:35 PM Boszormenyi Zoltan wrote:
>> On 2013-01-05 05:58, Amit kapila wrote:
>>> On Friday, January 04, 2013 10:57 PM Boszormenyi Zoltan wrote:
>>>> Hi,
>>>> I am reviewing your patch.
>>> Thank you very much.
>>
>> All comments are addressed as below:
>
>>>> Can't we add a new LWLock and use it as a critical section instead
>>>> of waiting for 10 seconds?
>>> Added LWLock and removed sleep logic. After adding the LWLock the lock
>>> file name can be changed as temp file. but presently the patch contains
>>> the name as lock file only. please provide the suggestions.
>
>> Current state of affairs:
>> a.) The whole operation is protected by the LWLock so only one backend
>> can do this any time. Clean operation is ensured.
>> b.) The code creates the lock file and fails if the file exists. This
>> protects against nasties done externally. The current logic to bail out
>> with an error is OK, I can know that there is a stupid intruder on the
>> system. But then they can also remove the original .auto.conf file too
>> and anything else and I lost anyway.
>> c.) In reaper(), the code cleans up the lock file and since there can
>> be only one lock file, no need to search for temp files, a
>> straightforward unlink() call does its job.
>
>> This may be changed in two ways to make it more comfortable:
>> 1. Simply unlink() and retry with creat() once.
>> Comments:
>> unlink() may fail under Windows if someone keeps this file open.
>> Then you can either give up with ereport(ERROR) just like now.
>
> I think, as this is an internal file, the user is not supposed to play
> with it, and in case he does so, the current implementation is okay:
> open() with O_CREAT gives an error and the message suggests as much.

That's what I meant in b) above.

>> 2. Create the tempfile. There will be one less error case, the file
>> creation may only fail in case you are out of disk space.
>> Creating the tempfile is tempting. But then reaper() will need a little
>> more code to remove all the tempfiles.
>
> By reaper, do you mean the CATCH block?

No, I mean the reaper(SIGNAL_ARGS) function in
src/backend/postmaster/postmaster.c where your patch has this:

*** a/src/backend/postmaster/postmaster.c
--- b/src/backend/postmaster/postmaster.c
***************
*** 2552,2557 **** reaper(SIGNAL_ARGS)
--- 2552,2569 ----
  			continue;
  		}
  
+ 		/* Delete the postgresql.auto.conf.lock file if exists */
+ 		{
+ 			char	LockFileName[MAXPGPATH];
+ 
+ 			strlcpy(LockFileName, ConfigFileName, sizeof(LockFileName));
+ 			get_parent_directory(LockFileName);
+ 			join_path_components(LockFileName, LockFileName, AutoConfigLockFilename);
+ 			canonicalize_path(LockFileName);
+ 
+ 			unlink(LockFileName);
+ 		}
+ 
  		/*
  		 * Startup succeeded, commence normal operations
  		 */

> In that case, I would prefer to keep the current implementation as it is.

I also said above the current logic is OK for me.
I just gave food for thought to provoke discussion from others.

> Actually, I was thinking of just changing the extension from the current
> .lock to .tmp, in which case the same problems can happen with that as
> well.

Indeed.

>> I just noticed that you are using "%m" in format strings twice. man 3
>> printf says:
>> m  (Glibc extension.)  Print output of strerror(errno). No argument
>> is required.
>> This doesn't work anywhere else, only on GLIBC systems, it means Linux.
>> I also like the brevity of this extension but PostgreSQL is a portable
>> system. You should use %s and strerror(errno) as argument explicitly.
>
> %m is used at other places in the code as well; elog.c expands %m itself
> before the format string reaches the system printf, so within
> ereport()/elog() it is portable.
>
> Thank you for feedback.

You're welcome.

> With Regards,
> Amit Kapila.






-- 
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers