Re: [HACKERS] dynamic background workers

2013-06-24 Thread Michael Paquier
Hi,

Please find some review comments for the 2nd patch, with the 1st patch
applied as well.

On Sat, Jun 15, 2013 at 6:00 AM, Robert Haas  wrote:
> The second patch, dynamic-bgworkers-v1.patch, revises the background
> worker API to allow background workers to be started dynamically.
> This requires some communication channel from ordinary workers to the
> postmaster, because it is the postmaster that must ultimately start
> the newly-registered workers.  However, that communication channel has
> to be designed pretty carefully, lest a shared memory corruption take
> out the postmaster and lead to inadvertent failure to restart after a
> crash.  Here's how I implemented that: there's an array in shared
> memory of a size equal to max_worker_processes.  This array is
> separate from the backend-private list of workers maintained by the
> postmaster, but the two are kept in sync.  When a new background
> worker registration is added to the shared data structure, the backend
> adding it uses the existing pmsignal mechanism to kick the postmaster,
> which then scans the array for new registrations.  I have attempted to
> make the code that transfers the shared memory state into the
> postmaster's private state as paranoid as humanly possible.  The
> precautions taken are documented in the comments.  Conversely, when a
> background worker flagged as BGW_NEVER_RESTART is considered for
> restart (and we decide against it), the corresponding slot in the
> shared memory array is marked as no longer in use, allowing it to be
> reused for a new registration.
>
> Since the postmaster cannot take locks, synchronization between the
> postmaster and other backends using the shared memory segment has to
> be lockless.  This mechanism is also documented in the comments.  An
> lwlock is used to prevent two backends that are both registering a new
> worker at about the same time from stomping on each other, but the
> postmaster need not care about that lwlock.
>
> This patch also extends worker_spi as a demonstration of the new
> interface.  With this patch, you can CREATE EXTENSION worker_spi and
> then call worker_spi_launch(int4) to launch a new background worker,
> or combine it with generate_series() to launch a bunch at once.  Then
> you can kill them off with pg_terminate_backend() and start some new
> ones.  That, in my humble opinion, is pretty cool.
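
For readers following along, the registration handoff described above
boils down to something like the following sketch. This is only a
hypothetical illustration, not code from the patch: the slot struct, the
BackgroundWorkerLock name and the pmsignal reason are my guesses, while
BackgroundWorkerData->slot and max_worker_processes are taken from the
patch itself.

/* hypothetical, simplified layout of one entry of the shared array */
typedef struct BackgroundWorkerSlot
{
	bool				in_use;
	BackgroundWorker	worker;
} BackgroundWorkerSlot;

static bool
register_dynamic_worker(BackgroundWorker *worker)
{
	bool		found = false;
	int			i;

	LWLockAcquire(BackgroundWorkerLock, LW_EXCLUSIVE);
	for (i = 0; i < max_worker_processes; i++)
	{
		BackgroundWorkerSlot *slot = &BackgroundWorkerData->slot[i];

		if (!slot->in_use)
		{
			memcpy(&slot->worker, worker, sizeof(BackgroundWorker));
			slot->in_use = true;	/* publish last; the postmaster reads lock-free */
			found = true;
			break;
		}
	}
	LWLockRelease(BackgroundWorkerLock);

	/* kick the postmaster so it rescans the array for new registrations */
	if (found)
		SendPostmasterSignal(PMSIGNAL_BACKGROUND_WORKER_CHANGE);

	return found;
}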

The patch applies cleanly, though I found a couple of whitespace issues:
/home/ioltas/download/dynamic-bgworkers-v1.patch:452: space before tab
in indent.
 slot = &BackgroundWorkerData->slot[rw->rw_shmem_slot];
/home/ioltas/download/dynamic-bgworkers-v1.patch:474: space before tab
in indent.
 slot = &BackgroundWorkerData->slot[slotno];
/home/ioltas/download/dynamic-bgworkers-v1.patch:639: trailing whitespace.
success = true;
warning: 3 lines add whitespace errors.

The code compiles, has no warnings, and make check passes.

Then, here are some impressions after reading the code. It is good to
see that all the bgworker APIs are now gathered in a single file,
bgworker.c.
1) Just for clarity, I think that this code in worker_spi.c deserves a
comment mentioning that this code path cannot be taken for a bgworker
that is not loaded via shared_preload_libraries.
+
+   if (!process_shared_preload_libraries_in_progress)
+   return;
+
2) s/NUL-terminated/NULL-terminated @ bgworker.c
3) Why not add another function in worker_spi.c that does the opposite
of worker_spi_launch and stops the dynamic bgworker with a given index
number? It would only be a wrapper around pg_terminate_backend, OK, but
at least it would give the user all the information needed to start
and stop a dynamic bgworker with a single extension, here
worker_spi. It can be painful to stop workers otherwise.
4) Not completely related to this patch, but one sanity check in
SanityCheckBackgroundWorker in bgworker.c is not listed in the
documentation: when requesting a database connection, a bgworker needs
to have access to shared memory. It looks like this should also be
fixed in REL9_3_STABLE.
5) Why not add some documentation? Both dynamic and static bgworkers
share the same infrastructure, so a few lines in the existing chapter
might be enough.
6) Just wondering: it looks like the current code is not able to
identify how a given bgworker was started. Would it be interesting to
be able to tell whether a bgworker has been registered through
RegisterBackgroundWorker or RegisterDynamicBackgroundWorker?

I have also done some tests, and the infrastructure is working nicely.
Workers started dynamically are able to receive SIGHUP and SIGTERM,
and workers are not started once the authorized maximum number of
bgworkers has been reached. It is really a nice feature!

Also, I will try to do some more testing to check the robustness of the
slist handling and of the protocol used.

Regards,
--
Michael



Re: [Review] Re: [HACKERS] minor patch submission: CREATE CAST ... AS EXPLICIT

2013-06-24 Thread Cédric Villemain
Hello Fabien,

> > I flag it 'return with feedback', please update the patch so it builds.
> > Everything else is ok.
> 
> Here it is.

The patch does not apply and git also whines about trailing space.
It needs a v3...
Please note that community agreement on the behavior of this patch has
not yet been reached; you should consider that too.
-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation


signature.asc
Description: This is a digitally signed message part.


Re: [HACKERS] proposal 9.4 plpgsql: allows access to call stack from GET DIAGNOSTICS statement

2013-06-24 Thread Rushabh Lathia
Hi,

The use of this feature is to get the call stack for debugging without
raising an exception. This definitely seems very useful.

Here are my comments about the submitted patch:

The patch applies cleanly (there are a couple of whitespace warnings,
but those are in the test case file). make and make install went
smoothly, and make check passed too. I did some manual testing of the
feature and it also went smoothly.

However, I would like to share a couple of points:

1) I noticed that you initialize edata manually; just wondering,
can't we directly use CopyErrorData(), which would do that job?

I found that inside CopyErrorData() there is an Assert:

Assert(CurrentMemoryContext != ErrorContext);

And in the patch we are using ErrorContext as the short-lived context;
is that the reason for not using CopyErrorData()?


2) To reset the stack to empty, FlushErrorState() can be used.

3) I was thinking about the usability of the feature and wondering:
how about adding some header to the stack output, so the user can
differentiate between the STACK and a normal NOTICE message?

Something like:

postgres=# select outer_outer_func(10);
NOTICE:  - Call Stack -
PL/pgSQL function inner_func(integer) line 4 at GET DIAGNOSTICS
PL/pgSQL function outer_func(integer) line 3 at RETURN
PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
CONTEXT:  PL/pgSQL function outer_func(integer) line 3 at RETURN
PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
 outer_outer_func
--
   20
(1 row)

I worked on points 2) and 3); PFA a patch for reference.
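
For context on points 1) and 2), the usual pattern elsewhere in the
backend looks roughly like the sketch below (not code from the patch).
CopyErrorData() allocates its copy in CurrentMemoryContext, which is why
it asserts that ErrorContext is not the current context; since the patch
uses ErrorContext as its short-lived working context, that is presumably
why it cannot call CopyErrorData() directly.

MemoryContext oldcontext = CurrentMemoryContext;
ErrorData  *edata;

PG_TRY();
{
	/* ... something that may ereport(ERROR) ... */
}
PG_CATCH();
{
	/* switch back to a longer-lived context before copying the error */
	MemoryContextSwitchTo(oldcontext);
	edata = CopyErrorData();	/* copy is allocated in CurrentMemoryContext */
	FlushErrorState();			/* resets ErrorContext and the error stack */

	/* ... inspect or re-throw edata ... */
}
PG_END_TRY();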

Thanks,
Rushabh



On Sat, Feb 2, 2013 at 2:53 PM, Pavel Stehule wrote:

> Hello
>
> I propose enhancing the GET DIAGNOSTICS statement with a new field,
> PG_CONTEXT. It is similar to GET STACKED DIAGNOSTICS'
> PG_EXCEPTION_CONTEXT.
>
> The motivation for this proposal is the possibility to get the call
> stack for debugging without raising an exception.
>
> This code is based on cleaned-up code from Orafce, where it has been
> used for four years without any error reports.
>
> CREATE OR REPLACE FUNCTION public."inner"(integer)
>  RETURNS integer
>  LANGUAGE plpgsql
> AS $function$
> declare _context text;
> begin
>   get diagnostics _context = pg_context;
>   raise notice '***%***', _context;
>   return 2 * $1;
> end;
> $function$
>
> postgres=# select outer_outer(10);
> NOTICE:  ***PL/pgSQL function "inner"(integer) line 4 at GET DIAGNOSTICS
> PL/pgSQL function "outer"(integer) line 3 at RETURN
> PL/pgSQL function outer_outer(integer) line 3 at RETURN***
> CONTEXT:  PL/pgSQL function "outer"(integer) line 3 at RETURN
> PL/pgSQL function outer_outer(integer) line 3 at RETURN
>  outer_outer
> ─
>   20
> (1 row)
>
> Ideas, comments?
>
> Regards
>
> Pavel Stehule
>
>


-- 
Rushabh Lathia


get_diagnostics_context_initial_v2.patch
Description: Binary data



Re: [HACKERS] COPY and Volatile default expressions

2013-06-24 Thread Kohei KaiGai
Hi Simon,

I checked this patch. One thing I would comment on: do you think it is
a good idea to have a list of exception function OIDs in
contain_volatile_functions_walker()?

The walker function is static, thus there is no impact on other
callers, and its "context" argument is currently unused.
My proposal is to enhance the 2nd argument of
contain_volatile_functions_walker() to deliver a list of exceptional
functions; contain_volatile_functions_not_nextval() would then call
contain_volatile_functions_walker() with list_make1_oid(F_NEXTVAL_OID)
to handle nextval() as an exception. Otherwise, all we need to do is
pass NIL as the 2nd argument.

It removes code duplication and reduces the future maintenance burden.
What is your opinion?
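
For illustration, the change could look roughly like the untested sketch
below. Most of the existing walker body is elided, and the real function
in clauses.c checks more node types than shown here; the wrapper names
follow the description above.

/* "context" becomes a list of function OIDs that are exempted */
static bool
contain_volatile_functions_walker(Node *node, void *context)
{
	List	   *allowed_funcs = (List *) context;

	if (node == NULL)
		return false;
	if (IsA(node, FuncExpr))
	{
		FuncExpr   *expr = (FuncExpr *) node;

		if (func_volatile(expr->funcid) == PROVOLATILE_VOLATILE &&
			!list_member_oid(allowed_funcs, expr->funcid))
			return true;
	}
	/* ... checks for OpExpr and other volatile constructs, as today ... */
	return expression_tree_walker(node, contain_volatile_functions_walker,
								  context);
}

bool
contain_volatile_functions(Node *clause)
{
	return contain_volatile_functions_walker(clause, NIL);
}

bool
contain_volatile_functions_not_nextval(Node *clause)
{
	return contain_volatile_functions_walker(clause,
										list_make1_oid(F_NEXTVAL_OID));
}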

Thanks,

2013/4/16 Simon Riggs :
> On 16 April 2013 13:57, Heikki Linnakangas  wrote:
>
>> You still need to check the args, if the function is nextval, otherwise you
>> incorrectly perform the optimization for something like
>> "nextval(myvolatilefunc())".
>
> Guess so. At least it's an easy change.
>
> Thanks for checking.
>
> --
>  Simon Riggs   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>
>



-- 
KaiGai Kohei 




Re: [Review] Re: [HACKERS] minor patch submission: CREATE CAST ... AS EXPLICIT

2013-06-24 Thread Fabien COELHO



Here it is.


The patch does not apply and git also whines about trailing space.
It needs a v3...


The attachment here works for me.
Could you be more precise about the issue?

 postgresql> git branch test master
 postgresql> git checkout test
 Switched to branch 'test'
 postgresql> patch -p1 < ../as_explicit-v2.patch
 patching file doc/src/sgml/ref/create_cast.sgml
 patching file src/backend/parser/gram.y
 patching file src/include/parser/kwlist.h
 patching file src/test/regress/expected/create_cast.out
 patching file src/test/regress/sql/create_cast.sql


Please note that community agreement on the behavior of this patch has
not yet been reached; you should consider that too.


Sure. I've sent my argumentation against Robert's objections.

--
Fabien.

diff --git a/doc/src/sgml/ref/create_cast.sgml b/doc/src/sgml/ref/create_cast.sgml
index 29ea298..0ace996 100644
--- a/doc/src/sgml/ref/create_cast.sgml
+++ b/doc/src/sgml/ref/create_cast.sgml
@@ -20,15 +20,15 @@
 
 CREATE CAST (source_type AS target_type)
 WITH FUNCTION function_name (argument_type [, ...])
-[ AS ASSIGNMENT | AS IMPLICIT ]
+[ AS ASSIGNMENT | AS EXPLICIT | AS IMPLICIT ]
 
 CREATE CAST (source_type AS target_type)
 WITHOUT FUNCTION
-[ AS ASSIGNMENT | AS IMPLICIT ]
+[ AS ASSIGNMENT | AS EXPLICIT | AS IMPLICIT ]
 
 CREATE CAST (source_type AS target_type)
 WITH INOUT
-[ AS ASSIGNMENT | AS IMPLICIT ]
+[ AS ASSIGNMENT | AS EXPLICIT | AS IMPLICIT ]
 
  
 
@@ -74,7 +74,8 @@ SELECT CAST(42 AS float8);
   
 
   
-   By default, a cast can be invoked only by an explicit cast request,
+   By default, or if the cast is declared AS EXPLICIT,
+   a cast can be invoked only by an explicit cast request,
that is an explicit CAST(x AS
typename) or
x::typename
@@ -239,6 +240,21 @@ SELECT CAST ( 2 AS numeric ) + 4.0;
 
 
 
+ AS EXPLICIT
+
+ 
+  
+   Indicates that the cast can be invoked only with an explicit
+   cast request, that is an explicit CAST(x AS
+   typename) or
+   x::typename
+   construct.
+   This is the default.
+  
+ 
+
+
+
  AS IMPLICIT
 
  
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 5094226..2c0694f 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -533,7 +533,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	DICTIONARY DISABLE_P DISCARD DISTINCT DO DOCUMENT_P DOMAIN_P DOUBLE_P DROP
 
 	EACH ELSE ENABLE_P ENCODING ENCRYPTED END_P ENUM_P ESCAPE EVENT EXCEPT
-	EXCLUDE EXCLUDING EXCLUSIVE EXECUTE EXISTS EXPLAIN
+	EXCLUDE EXCLUDING EXCLUSIVE EXECUTE EXISTS EXPLAIN EXPLICIT
 	EXTENSION EXTERNAL EXTRACT
 
 	FALSE_P FAMILY FETCH FIRST_P FLOAT_P FOLLOWING FOR FORCE FOREIGN FORWARD
@@ -6720,6 +6720,7 @@ CreateCastStmt: CREATE CAST '(' Typename AS Typename ')'
 
 cast_context:  AS IMPLICIT_P	{ $$ = COERCION_IMPLICIT; }
 		| AS ASSIGNMENT			{ $$ = COERCION_ASSIGNMENT; }
+		| AS EXPLICIT			{ $$ = COERCION_EXPLICIT; }
 		| /*EMPTY*/{ $$ = COERCION_EXPLICIT; }
 		;
 
@@ -12723,6 +12724,7 @@ unreserved_keyword:
 			| EXCLUSIVE
 			| EXECUTE
 			| EXPLAIN
+			| EXPLICIT
 			| EXTENSION
 			| EXTERNAL
 			| FAMILY
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 68a13b7..f97389b 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -149,6 +149,7 @@ PG_KEYWORD("exclusive", EXCLUSIVE, UNRESERVED_KEYWORD)
 PG_KEYWORD("execute", EXECUTE, UNRESERVED_KEYWORD)
 PG_KEYWORD("exists", EXISTS, COL_NAME_KEYWORD)
 PG_KEYWORD("explain", EXPLAIN, UNRESERVED_KEYWORD)
+PG_KEYWORD("explicit", EXPLICIT, UNRESERVED_KEYWORD)
 PG_KEYWORD("extension", EXTENSION, UNRESERVED_KEYWORD)
 PG_KEYWORD("external", EXTERNAL, UNRESERVED_KEYWORD)
 PG_KEYWORD("extract", EXTRACT, COL_NAME_KEYWORD)
diff --git a/src/test/regress/expected/create_cast.out b/src/test/regress/expected/create_cast.out
index 56cd86e..a8858fa 100644
--- a/src/test/regress/expected/create_cast.out
+++ b/src/test/regress/expected/create_cast.out
@@ -27,8 +27,8 @@ ERROR:  function casttestfunc(text) does not exist
 LINE 1: SELECT casttestfunc('foo'::text);
^
 HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
--- Try binary coercion cast
-CREATE CAST (text AS casttesttype) WITHOUT FUNCTION;
+-- Try binary coercion cast and verbose AS EXPLICIT
+CREATE CAST (text AS casttesttype) WITHOUT FUNCTION AS EXPLICIT;
 SELECT casttestfunc('foo'::text); -- doesn't work, as the cast is explicit
 ERROR:  function casttestfunc(text) does not exist
 LINE 1: SELECT casttestfunc('foo'::text);
@@ -54,7 +54,7 @@ SELECT 1234::int4::casttesttype; -- No cast yet, should fail
 ERROR:  cannot cast type integer to casttesttype
 LINE 1: SELECT 1234::int4::casttesttype;
  ^
-CREATE CAST (int4 AS casttesttype) WITH INOUT;
+CREATE CAST (int4 AS casttesttype) WITH INOUT; -- default AS EXPLICIT
 SEL

Re: [Review] Re: [HACKERS] minor patch submission: CREATE CAST ... AS EXPLICIT

2013-06-24 Thread Andres Freund
On 2013-06-22 15:10:07 -0400, Robert Haas wrote:
> On Sat, Jun 22, 2013 at 9:16 AM, Cédric Villemain
>  wrote:
> > patch is in unified format and apply on HEAD.
> > patch contains documentation, however I believe 'AS IMPLICIT' is a 
> > PostgreSQL
> > extension with special behavior and 'AS EXPLICIT' respect the standard 
> > except
> > that PostgreSQL adds only the expression 'AS EXPLICIT' (it is also the 
> > default
> > in the standard).
> 
> I object to this patch.  This patch adds a new keyword, EXPLICIT, for
> reasons that are strictly cosmetic.  Everything that you can do with
> this patch can also be done without this patch.  It is not a good idea
> to slow down parsing of every SQL statement we have just so that
> someone can write CREATE CAST .. AS EXPLICIT.  Granted, the parsing
> slowdown for just one keyword is probably not noticeable, but it's
> cumulative with every new keyword we add.  Adding them to support new
> features is one thing, but adding them to support purely optional
> syntax is, I think, going too far.

What about simply not using a keyword at that location at all? Something
like the attached hack?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 5094226..8021f96 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -6718,8 +6718,15 @@ CreateCastStmt: CREATE CAST '(' Typename AS Typename ')'
 }
 		;
 
-cast_context:  AS IMPLICIT_P	{ $$ = COERCION_IMPLICIT; }
-		| AS ASSIGNMENT			{ $$ = COERCION_ASSIGNMENT; }
+cast_context:  AS name
+{
+	if (pg_strcasecmp($2, "EXPLICIT") == 0)
+		$$ = COERCION_EXPLICIT;
+	else if (pg_strcasecmp($2, "IMPLICIT") == 0)
+		$$ = COERCION_IMPLICIT;
+	else
+		elog(ERROR, "frak!");
+}
 		| /*EMPTY*/{ $$ = COERCION_EXPLICIT; }
 		;
 



Re: [HACKERS] [GENERAL] Floating point error

2013-06-24 Thread Albe Laurenz
Abhijit Menon-Sen wrote:
> Sorry to nitpick, but I don't like that either, on the grounds that if I
> had been in Tom Duffey's place, this addition to the docs wouldn't help
> me to understand and resolve the problem.
> 
> I'm not entirely convinced that any brief mention of extra_float_digits
> would suffice, but here's a proposed rewording:
> 
> The extra_float_digits setting controls the
> number of extra significant digits included when a floating point
> value is converted to text for output. With the default value of
> 0, the output is portable to every platform
> supported by PostgreSQL. Increasing it will produce output that
> is closer to the stored value, but may be unportable.
> 
> It's a pretty generic sort of warning, but maybe it would help?

If I understood the original complaint right, anything that gives you
a hint that the textual representation is "truncated" would be helpful.

I like your suggestion and have elaborated on it a little.

What do you think of the attached version?

Yours,
Laurenz Albe


float-3.patch
Description: float-3.patch



Re: [Review] Re: [HACKERS] minor patch submission: CREATE CAST ... AS EXPLICIT

2013-06-24 Thread Cédric Villemain
Le lundi 24 juin 2013 11:44:21, Fabien COELHO a écrit :
> >> Here it is.
> > 
> > The patch does not apply and git also whines about trailing space.
> > It needs a v3...
> 
> The attachement here works for me.
> Could you be more precise about the issue?
> 
>   postgresql> git branch test master
>   postgresql> git checkout test
>   Switched to branch 'test'
>   postgresql> patch -p1 < ../as_explicit-v2.patch
>   patching file doc/src/sgml/ref/create_cast.sgml
>   patching file src/backend/parser/gram.y
>   patching file src/include/parser/kwlist.h
>   patching file src/test/regress/expected/create_cast.out
>   patching file src/test/regress/sql/create_cast.sql

Ah, got it. 'git apply' is stricter. The patch applies with patch -p1
(I thought I had tried that, but it seems not).

==
patch -p1 < as_explicit-v2.patch 
(Stripping trailing CRs from patch.)
patching file doc/src/sgml/ref/create_cast.sgml
(Stripping trailing CRs from patch.)
patching file src/backend/parser/gram.y
(Stripping trailing CRs from patch.)
patching file src/include/parser/kwlist.h
(Stripping trailing CRs from patch.)
patching file src/test/regress/expected/create_cast.out
(Stripping trailing CRs from patch.)
patching file src/test/regress/sql/create_cast.sql
==

-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation


signature.asc
Description: This is a digitally signed message part.


Re: [HACKERS] Support for REINDEX CONCURRENTLY

2013-06-24 Thread Andres Freund
On 2013-06-24 07:46:34 +0900, Michael Paquier wrote:
> On Mon, Jun 24, 2013 at 7:22 AM, Fujii Masao  wrote:
> > Compile error ;)
> It looks like filterdiff did not work correctly when generating the
> latest patch with context diffs; I cannot apply it cleanly either.
> This is perhaps due to a wrong manipulation from me. Please try the
> attached that has been generated as a raw git output. It works
> correctly with a git apply. I just checked.

Did you check whether that introduces a performance regression?


>  /* --
> + * toast_get_valid_index
> + *
> + *   Get the valid index of given toast relation. A toast relation can only
> + *   have one valid index at the same time. The lock taken on the index
> + *   relations is released at the end of this function call.
> + */
> +Oid
> +toast_get_valid_index(Oid toastoid, LOCKMODE lock)
> +{
> + ListCell   *lc;
> + List   *indexlist;
> + int num_indexes, i = 0;
> + Oid validIndexOid;
> + RelationvalidIndexRel;
> + Relation   *toastidxs;
> + Relationtoastrel;
> +
> + /* Get the index list of relation */
> + toastrel = heap_open(toastoid, lock);
> + indexlist = RelationGetIndexList(toastrel);
> + num_indexes = list_length(indexlist);
> +
> + /* Open all the index relations */
> + toastidxs = (Relation *) palloc(num_indexes * sizeof(Relation));
> + foreach(lc, indexlist)
> + toastidxs[i++] = index_open(lfirst_oid(lc), lock);
> +
> + /* Fetch valid toast index */
> + validIndexRel = toast_index_fetch_valid(toastidxs, num_indexes);
> + validIndexOid = RelationGetRelid(validIndexRel);
> +
> + /* Close all the index relations */
> + for (i = 0; i < num_indexes; i++)
> + index_close(toastidxs[i], lock);
> + pfree(toastidxs);
> + list_free(indexlist);
> +
> + heap_close(toastrel, lock);
> + return validIndexOid;
> +}

Just to make sure, could you check we've found a valid index?

>  static bool
> -toastrel_valueid_exists(Relation toastrel, Oid valueid)
> +toastrel_valueid_exists(Relation toastrel, Oid valueid, LOCKMODE lockmode)
>  {
>   boolresult = false;
>   ScanKeyData toastkey;
>   SysScanDesc toastscan;
> + int i = 0;
> + int num_indexes;
> + Relation   *toastidxs;
> + Relationvalidtoastidx;
> + ListCell   *lc;
> + List   *indexlist;
> +
> + /* Ensure that the list of indexes of toast relation is computed */
> + indexlist = RelationGetIndexList(toastrel);
> + num_indexes = list_length(indexlist);
> +
> + /* Open each index relation necessary */
> + toastidxs = (Relation *) palloc(num_indexes * sizeof(Relation));
> + foreach(lc, indexlist)
> + toastidxs[i++] = index_open(lfirst_oid(lc), lockmode);
> +
> + /* Fetch a valid index relation */
> + validtoastidx = toast_index_fetch_valid(toastidxs, num_indexes);

Those 10 lines are repeated multiple times, in different
functions. Maybe move them into toast_index_fetch_valid and rename that
to
Relation *
toast_open_indexes(Relation toastrel, LOCKMODE mode, size_t *numindexes, size_t 
valididx);

That way we also wouldn't fetch/copy the indexlist twice in some
functions.
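
Something along these lines, perhaps (a quick, untested sketch; it uses
the signature suggested above, except that the valid index position is
returned through an output argument):

/* Open all indexes of a toast relation and report which one is valid. */
static Relation *
toast_open_indexes(Relation toastrel, LOCKMODE lock,
				   int *num_indexes, int *valid_index)
{
	List	   *indexlist = RelationGetIndexList(toastrel);
	int			nindexes = list_length(indexlist);
	Relation   *toastidxs;
	ListCell   *lc;
	int			i = 0;

	toastidxs = (Relation *) palloc(nindexes * sizeof(Relation));
	foreach(lc, indexlist)
		toastidxs[i++] = index_open(lfirst_oid(lc), lock);
	list_free(indexlist);		/* callers no longer need the OID list */

	/* a toast relation has only one valid index at a time */
	*valid_index = 0;
	for (i = 0; i < nindexes; i++)
	{
		if (toastidxs[i]->rd_index->indisvalid)
		{
			*valid_index = i;
			break;
		}
	}
	*num_indexes = nindexes;
	return toastidxs;
}

A matching toast_close_indexes() could then also absorb the repeated
index_close()/pfree() cleanup.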

> + /* Clean up */
> + for (i = 0; i < num_indexes; i++)
> + index_close(toastidxs[i], lockmode);
> + list_free(indexlist);
> + pfree(toastidxs);

The indexlist could already be freed inside the function proposed
above...


> diff --git a/src/backend/commands/tablecmds.c 
> b/src/backend/commands/tablecmds.c
> index 8294b29..2b777da 100644
> --- a/src/backend/commands/tablecmds.c
> +++ b/src/backend/commands/tablecmds.c
> @@ -8782,7 +8783,13 @@ ATExecSetTableSpace(Oid tableOid, Oid newTableSpace, 
> LOCKMODE lockmode)
>errmsg("cannot move temporary tables of other 
> sessions")));
>  

> + foreach(lc, reltoastidxids)
> + {
> + Oid toastidxid = lfirst_oid(lc);
> + if (OidIsValid(toastidxid))
> + ATExecSetTableSpace(toastidxid, newTableSpace, 
> lockmode);
> + }

Copy & pasted OidIsValid(), shouldn't be necessary anymore.


Otherwise I think there's not really much left to be done. Fujii?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Patch to add support of "IF NOT EXISTS" to others "CREATE" statements

2013-06-24 Thread Andres Freund
On 2013-06-12 14:29:59 -0300, Fabrízio de Royes Mello wrote:
> On Fri, May 24, 2013 at 12:22 PM, Fabrízio de Royes Mello <
> fabriziome...@gmail.com> wrote:
> 
> > Hi all,
> >
> > I am working on a patch to add support for "IF NOT EXISTS" to the
> > "CREATE" statements that do not have it yet.
> >
> > I started with "DefineStmt" section from "src/backend/parser/gram.y":
> > - CREATE AGGREGATE [ IF NOT EXISTS ] ...
> > - CREATE OPERATOR [ IF NOT EXISTS ] ...
> > - CREATE TYPE [ IF NOT EXISTS ] ... [AS [{ENUM | RANGE}] (...)]
> > - CREATE TEXT SEARCH {PARSER | DICTIONARY | TEMPLATE | CONFIGURATION} [ IF
> > NOT EXISTS ] ...
> > - CREATE COLLATION [ IF NOT EXISTS ] ...
> >
> >
> The attached patch adds support for "IF NOT EXISTS" to the "CREATE"
> statements listed below:
> 
> - CREATE AGGREGATE [ IF NOT EXISTS ] ...
> - CREATE CAST [ IF NOT EXISTS ] ...
> - CREATE COLLATION [ IF NOT EXISTS ] ...
> - CREATE OPERATOR [ IF NOT EXISTS ] ...
> - CREATE TEXT SEARCH {PARSER | DICTIONARY | TEMPLATE | CONFIGURATION} [ IF
> NOT EXISTS ] ...
> - CREATE TYPE [ IF NOT EXISTS ] ... [AS [{ENUM | RANGE}] (...)]

I'd argue that if we go that way - which seems to be a good idea - we
really ought to make a complete pass and add it to all commands where
it's currently missing.

* CREATE DOMAIN
* CREATE GROUP
* CREATE TABLE AS
* CREATE MATERIALIZED VIEW
* CREATE SEQUENCE (we have ALTER but not CREATE?)
* CREATE TABLESPACE (arguably slightly harder)
* CREATE FOREIGN DATA WRAPPER
* CREATE SERVER
* CREATE DATABASE
* CREATE USER MAPPING
* CREATE TRIGGER
* CREATE EVENT TRIGGER
* CREATE INDEX
* CLUSTER

Cases that seem useful, even though we have OR REPLACE:
* CREATE VIEW
* CREATE FUNCTION

Of dubious use:
* CREATE OPERATOR CLASS
* CREATE OPERATOR FAMILY
* CREATE RULE
* CREATE CONVERSION

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] in-catalog Extension Scripts and Control parameters (templates?)

2013-06-24 Thread Dimitri Fontaine
Jaime Casanova  writes:
> just tried to build this one, but it doesn't apply cleanly anymore...
> especially the ColId_or_Sconst construct in gram.y

Please find attached a new version of the patch, v7, rebased onto the
current master tree and with some more cleanup. I've been using the new
grammar entry NonReservedWord_or_Sconst; I'm not sure about that.

In particular it seems I had forgotten to fix the owner support in
pg_dump as noted by Heikki, it's now properly addressed in the attached
version.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



templates.v7.patch.gz
Description: Binary data



Re: [HACKERS] BUG #7493: Postmaster messages unreadable in a Windows console

2013-06-24 Thread Alexander Law

Hello Noah,

Thanks for your work; your patch is definitely better. I agree that
this approach is much more generic.

23.06.2013 20:53, Noah Misch wrote:

The attached revision fixes all above points.  Would you look it over?  The
area was painfully light on comments, so I added some.  I renamed
pgwin32_toUTF16(), which ceases to be a good name now that it converts from
message encoding, not database encoding.  GetPlatformEncoding() became unused,
so I removed it.  (If we have cause to reintroduce the exact same concept later,
GetTTYEncoding() would name it more accurately.)

Yes, the patch works for me. I have just a little question about
pgwin32_message_to_UTF16. Do we need to convert SQL_ASCII through UTF8 
or should SQL_ASCII be mapped to 20127 (US-ASCII (7-bit))?

What should we do for the back branches, if anything?  Fixes for garbled
display on consoles and event logs are fair to back-patch, but users might be
accustomed to the present situation for SQL_ASCII databases.  Given the low
incidence of complaints and the workaround of using logging_collector, I am
inclined to put the whole thing in master only.

I thought that this change could be a first step toward PostgreSQL log
encoding normalization. Today the log may contain messages in
different encodings (we had a long discussion a year ago:
http://www.postgresql.org/message-id/5007c399.6000...@gmail.com).
Now the new function GetMessageEncoding allows all the messages to be
converted consistently. If a future log encoding fix is considered
important enough to be backported, then this patch should be
backported too.


Thanks again!

Best regards,
Alexander





Re: [HACKERS] preserving forensic information when we freeze

2013-06-24 Thread Andres Freund
On 2013-05-28 09:21:27 -0400, Robert Haas wrote:
> As a general statement, I view this work as something that is likely
> needed no matter which one of the "remove freezing" approaches that
> have been proposed we choose to adopt.  It does not fix anything in
> and of itself, but it (hopefully) removes an objection to the entire
> line of inquiry.

Agreed. And it also improves the status quo when debugging. I wish this
had been the representation from the beginning.

Some remarks:
* I don't really like that HeapTupleHeaderXminFrozen() is now frequently
  performed without checking for FrozenTransactionId. I think the places
  where that's done are currently safe, but it seems likely that we will
  introduce bugs due to people writing similar code.
  I think replacing *GetXmin by a wrapper that returns
  FrozenTransactionId if the hint bits tell us so would be a good idea,
  and then adding *GetRawXmin for the rest (probably mostly display);
  see the sketch after this list. It seems like that would actually make
  the patch smaller.

* The PLs imo should set fn_xmin to FrozenTransactionId if the hint bits
  tell us the tuple is frozen.

* I think rewrite_heap_dead_tuple needs to check for a frozen xmin and
  store that. We might be looking at a chain which was partially done in
  <9.4. Not sure if that's a realistic scenario, but I'd rather be safe.
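
To illustrate the first point, the wrappers could look roughly like this
(just a sketch; the macro names are only suggestions and rely on the
HeapTupleHeaderXminFrozen() test the patch already provides):

/* raw value, mostly for display/forensic use */
#define HeapTupleHeaderGetRawXmin(tup) \
	((tup)->t_choice.t_heap.t_xmin)

/* what almost all callers should use instead */
#define HeapTupleHeaderGetXmin(tup) \
	(HeapTupleHeaderXminFrozen(tup) ? \
	 FrozenTransactionId : HeapTupleHeaderGetRawXmin(tup))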

Otherwise I don't see much that needs to be done before this can get
committed.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Andrew Dunstan


On 06/24/2013 12:41 AM, Josh Berkus wrote:

Folks,

For 9.2, we adopted it as policy that anyone submitting a patch to a
commitfest is expected to review at least one patch submitted by someone
else.  And that failure to do so would affect the attention your patches
received in the future.  For that reason, I'm publishing the list below
of people who submitted patches and have not yet claimed any patch in
the commitfest to review.

For those of you who are contributing patches for your company, please
let your boss know that reviewing is part of contributing, and that if
you don't do the one you may not be able to do the other.

The two lists below, idle submitters and committers, constitute 26
people.  This *outnumbers* the list of contributors who are busy
reviewing patches -- some of them four or more patches.  If each of
these people took just *one* patch to review, it would almost entirely
clear the list of patches which do not have a reviewer.  If these folks
continue not to do reviews, this commitfest will drag on for at least 2
weeks past its closure date.

Andrew Geirth
Nick White
Peter Eisentraut
Alexander Korotkov
Etsuro Fujita
Hari Babu
Jameison Martin
Jon Nelson
Oleg Bartunov
Chris Farmiloe
Samrat Revagade
Alexander Lakhin
Mark Kirkwood
Liming Hu
Maciej Gajewski
Josh Kuperschmidt
Mark Wong
Gurjeet Singh
Robins Tharakan
Tatsuo Ishii
Karl O Pinc

Additionally, the following committers are not listed as reviewers on
any patch.  Note that I have no way to search which ones might be
*committers* on a patch, so these folks are not necessarily slackers
(although with Bruce, we know for sure):

Bruce Momjian (momjian)
Michael Meskes (meskes)
Teodor Sigaev (teodor)
Andrew Dunstan (adunstan)
Magnus Hagander (mha)
Itagaki Takahiro (itagaki)



I think we maybe need to be a bit more careful about a name and shame 
policy, or it will be ignored. I actually reviewed Peter's emacs config 
patch just yesterday, although I didn't note that on the Commitfest app. 
(Incidentally, based on Tom's later comments I think we should probably 
mark that one as rejected.)


Anyway, I have claimed the VPATH patches for review, and will look at 
them today.


cheers

andrew





[HACKERS] Re: patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap [Review]

2013-06-24 Thread Andres Freund
On 2013-01-25 09:06:12 +0530, Amit Kapila wrote:
> On 01/25/2013 03:43 AM, Jameison Martin wrote:
> > there have been a lot of different threads on this patch. i'm going to
> > take a stab at teasing them out, summarizing them, and addressing them
> > individually.
> 
> > Is this patch on the CF app? I can't seem to find it in 2013-01 or
> > 2013-next, and I wanted to add your summary.
> 
> It is in previous CF (2012-11) in Returned with Feedback section
> https://commitfest.postgresql.org/action/commitfest_view?id=16

I'd argue that the patch ought to be still marked as "Returned with
Feedback" instead of again being marked as "Needs Review". I don't
really see anything new that has come up?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Add more regression tests for dbcommands

2013-06-24 Thread Andres Freund
On 2013-05-21 02:58:25 +0530, Robins Tharakan wrote:
> Attached is an updated patch that does only 1 CREATE DATABASE and reuses
> that for all other tests.
> The code-coverage with this patch goes up from 36% to 70%.

Even creating one database seems superfluous. The plain CREATE DATABASE
has been tested due to the creation of the regression database itself
anyway?
All the other tests should be doable using the normal regression db?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] changeset generation v5-01 - Patches & git tree

2013-06-24 Thread Kevin Grittner
Andres Freund  wrote:
> On 2013-06-23 08:27:32 -0700, Kevin Grittner wrote:

>> make maintainer-clean ; ./configure --prefix=$PWD/Debug --enable-debug
>> --enable-cassert --enable-depend --with-libxml --with-libxslt --with-openssl
>> --with-perl --with-python && make -j4 world

>> [ build failure referencing pg_receivellog.o ]

> I have seen that once as well. It's really rather strange since
> pg_receivellog.o is a clear prerequisite for pg_receivellog. I couldn't
> reproduce it reliably though, even after doing some dozen rebuilds or so.
>
>> It works with this patch-on-patch:

>>  clean distclean maintainer-clean:
>> -   rm -f pg_basebackup$(X) pg_receivexlog$(X) $(OBJS) pg_basebackup.o
>> pg_receivexlog.o pg_receivellog.o
>> +   rm -f pg_basebackup$(X) pg_receivexlog$(X) pg_receivellog$(X) $(OBJS)
>> pg_basebackup.o pg_receivexlog.o pg_receivellog.o

>> > +  rm -f '$(DESTDIR)$(bindir)/pg_receivellog$(X)'
>> Oops.  That part is not needed.
>
> Hm. Why not?

Well, I could easily be wrong on just about anything to do with
make files, but on a second look that appeared to be dealing with
eliminating an installed pg_receivellog binary, which is not
created.

> I don't think either hunk has anything to do with that buildfailure
> though - can you reproduce the error without?

I tried that scenario three times and it failed three times.  Then
I made the above changes and it worked.  Then I eliminated the one
on the uninstall target and tried a couple more times and it worked
on both attempts.  The scenario is to have a `make world` build in
the source tree, and run the above line starting with `make
maintainer-clean` and going to `make -j4 world`.

I did notice that without that change to the maintainer-clean
target I did not get a pg_receivellog.Po file in
src/bin/pg_basebackup/.deps/ -- and with it I do.  I admit to being
at about a 1.5 on a 10 point scale of make file competence -- I
just look for patterns used for things similar to what I want to do
and copy without much understanding of what it all means.  :-(  So 
when I got an error on pg_receivellog which didn't happen on
pg_receivexlog, I looked for differences -- my suggestion has no
more basis than that and the fact that empirical testing seemed to
show that it worked.

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Patch for fail-back without fresh backup

2013-06-24 Thread Sawada Masahiko
On Mon, Jun 17, 2013 at 8:48 PM, Simon Riggs  wrote:
> On 17 June 2013 09:03, Pavan Deolasee  wrote:
>
>> I agree. We should probably find a better name for this. Any suggestions ?
>
> err, I already made one...
>
>>> But that's not the whole story. I can see some utility in a patch that
>>> makes all WAL transfer synchronous, rather than just commits. Some
>>> name like synchronous_transfer might be appropriate. e.g.
>>> synchronous_transfer = all | commit (default).
>
>> Since commits are more foreground in nature and this feature
>> does not require us to wait during common foreground activities, we want a
>> configuration where master can wait for synchronous transfers at other than
>> commits. May we can solve that by having more granular control to the said
>> parameter ?
>>
>>>
>>> The idea of another slew of parameters that are very similar to
>>> synchronous replication but yet somehow different seems weird. I can't
>>> see a reason why we'd want a second lot of parameters. Why not just
>>> use the existing ones for sync rep? (I'm surprised the Parameter
>>> Police haven't visited you in the night...) Sure, we might want to
>>> expand the design for how we specify multi-node sync rep, but that is
>>> a different patch.
>>
>>
>> How would we then distinguish between synchronous and the new kind of
>> standby ?
>
> That's not the point. The point is "Why would we have a new kind of
> standby?" and therefore why do we need new parameters?
>
>> I am told, one of the very popular setups for DR is to have one
>> local sync standby and one async (may be cascaded by the local sync). Since
>> this new feature is more useful for DR because taking a fresh backup on a
>> slower link is even more challenging, IMHO we should support such setups.
>
> ...which still doesn't make sense to me. Lets look at that in detail.
>
> Take 3 servers, A, B, C with A and B being linked by sync rep, and C
> being safety standby at a distance.
>
> Either A or B is master, except in disaster. So if A is master, then B
> would be the failover target. If A fails, then you want to failover to
> B. Once B is the target, you want to failback to A as the master. C
> needs to follow the new master, whichever it is.
>
> If you set up sync rep between A and B and this new mode between A and
> C. When B becomes the master, you need to failback from B to A, but
> you can't because the new mode applies between A and C only, so you
> have to failback from C to A. So having the new mode not match with
> sync rep means you are forcing people to failback using the slow link
> in the common case.
>
> You might observe that having the two modes match causes problems if A
> and B fail, so you are forced to go to C as master and then eventually
> failback to A or B across a slow link. That case is less common and
> could be solved by extending sync transfer to more/multi nodes.
>
> It definitely doesn't make sense to have sync rep on anything other
> than a subset of sync transfer. So while it may be sensible in the
> future to make sync transfer a superset of sync rep nodes, it makes
> sense to make them the same config for now.
>
When two servers use synchronous replication, they are in the same
location in many cases (e.g., the same server room), so taking a full
backup and sending it to the old master is not an issue.
This proposal is for the situation where the servers are in remote
locations and the main site goes down, for example due to a power
failure or a natural disaster.
As you said, we can control the replication of files (e.g., CLOG,
pg_control, etc.) by adding a synchronous_transfer option.
However, if we add only this parameter, we can handle only the
following 2 cases:

1. a synchronous standby that is also the failback-safe standby
2. an asynchronous standby that is also the failback-safe standby

In the above case, adding the new parameter alone might be meaningless,
but I think that we should handle not only cases 1 and 2 but also the
following cases 3 and 4 for DR:

3. a synchronous standby plus a different, asynchronous failback-safe standby
4. an asynchronous standby plus a different, asynchronous failback-safe standby

To handle cases 3 and 4, we should be able to set the parameter for each
standby, so we need to add a new parameter. If we can structure
replication this way, it would be more useful for users on slow links.

One idea for improving the parameters is to extend an ini file so that
parameters can be set for each standby. For example:


[Server]
standby_name = 'slave1'
synchronous_transfer = commit
wal_sender_timeout = 30
[Server]
standby_name = 'slave2'
synchronous_transfer = all
wal_sender_timeout = 50
---

There have been discussions about such an ini file in the past. With it,
we could set each parameter for each standby.

Please give me feedback.

Regards,
---
Sawada Masahiko




Re: [HACKERS] Possible bug in CASE evaluation

2013-06-24 Thread Albe Laurenz
Noah Misch wrote:
>> If fixing the behaviour is undesirable, at least the documentation
>> should be fixed.
> 
> A brief documentation mention sounds fine.  Perhaps add a paragraph on
> constant folding in general and reference that from the CASE page.

How about the attached?

Yours,
Laurenz Albe


case-doc.patch
Description: case-doc.patch



Re: [Review] Re: [HACKERS] minor patch submission: CREATE CAST ... AS EXPLICIT

2013-06-24 Thread Tom Lane
Andres Freund  writes:
> What about simply not using a keyword at that location at all? Something
> like the attached hack?

"Hack" is much too polite a word for that.  This will for example fail
to respect the difference between quoted and unquoted words.  If the
argument for this patch is to make the syntax more regular and less
surprising, I hardly think that we should add surprise of a different
sort.

Generally speaking, I agree with Robert's objection.  The patch in
itself adds only one unnecessary keyword, which probably wouldn't be
noticeable, but the argument for it implies that we should be willing
to add a lot more equally-unnecessary keywords, which I'm not.  gram.o
is already about 10% of the entire postgres executable, which probably
goes far towards explaining why its inner loop always shows up high in
profiling: cache misses are routine.  And the size of those tables is
at least linear in the number of keywords --- perhaps worse than linear,
I'm not sure.  Adding a bunch of keywords *will* cost us in performance.
I'm not willing to pay that cost for something that adds neither
features nor spec compliance.

regards, tom lane




Re: [HACKERS] Support for REINDEX CONCURRENTLY

2013-06-24 Thread Tom Lane
Andres Freund  writes:
> Otherwise I think there's not really much left to be done. Fujii?

Well, other than the fact that we've not got MVCC catalog scans yet.

regards, tom lane




Re: [HACKERS] Review: UNNEST (and other functions) WITH ORDINALITY

2013-06-24 Thread Dean Rasheed
On 21 June 2013 08:31, Dean Rasheed  wrote:
> On 21 June 2013 08:02, Dean Rasheed  wrote:
>> On 21 June 2013 06:54, David Fetter  wrote:
>>>> For example "SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS file"
>>>
>>> The spec is pretty specific about the "all or none" nature of naming
>>> in the AS clause...unless we can figure out a way of getting around it
>>> somehow.
>>
>> We already support and document the ability to provide a subset of the
>> columns in the alias. I didn't realise that was beyond the spec, but
>> since we already have it...
>>
>>
>>> I'm weighing in on the side of a name that's like ?columnN? elsewhere
>>> in the code, i.e. one that couldn't sanely be an actual column name,
>>> whether in table, view, or SRF.
>>
>> I don't think we need to be overly concerned with coming up with a
>> unique name for the column. Duplicate column names are fine, except if
>> the user wants to refer to them elsewhere in the query, in which case
>> they need to provide aliases to distinguish them, otherwise the query
>> won't be accepted.
>>
>> I'd be happy if you just replaced "?column?" with ordinality in a
>> couple of places in your original patch.
>>
>
> Expanding on that, I think it's perfectly acceptable to allow
> potentially duplicate column names in this context. For the majority
> of simple queries the user wouldn't have to (and wouldn't feel
> compelled to) supply aliases. Where there was ambiguity they would be
> forced to do so, but people are already used to that.
>
> If I write a simple query that selects from a single unnest() with or
> without ordinality, I probably won't want to supply aliases.
>
> If I select from 2 unnest()'s *without* ordinality, I already have to
> provide aliases.
>
> If I select from 2 SRF's functions with ordinality, I won't be too
> surprised if I am also forced to provide aliases.
>

No one else has expressed an opinion about the naming of the new
column. All other review comments have been addressed, and the patch
looks to be in good shape, so I'm marking this as ready for committer.

IMO it's a very useful piece of new functionality. Good job!

Regards,
Dean




Re: [HACKERS] Support for REINDEX CONCURRENTLY

2013-06-24 Thread Andres Freund
On 2013-06-24 09:57:24 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > Otherwise I think there's not really much left to be done. Fujii?
> 
> Well, other than the fact that we've not got MVCC catalog scans yet.

That statement was only about the patch dealing with the removal of
reltoastidxid.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [Review] Re: [HACKERS] minor patch submission: CREATE CAST ... AS EXPLICIT

2013-06-24 Thread Andres Freund
On 2013-06-24 09:55:22 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > What about simply not using a keyword at that location at all? Something
> > like the attached hack?
> 
> "Hack" is much too polite a word for that.  This will for example fail
> to respect the difference between quoted and unquoted words.

Well. If you have a better name for a quick proof-of-concept
patch...

Anyway: the point of the patch is not to suggest using 'name' for that
- although we already do that in some places - but to prove that we can
get away with sort of "undeclared" keywords in a bunch of places. I
think doing so can reduce the number of keywords in a number of
places. E.g. EXPLICIT wouldn't need to be one if we invented
infrastructure for it.
The scanner obviously cannot discern those from real keywords and
literals, but we can easily do a recheck in code in the specific bison
rules as long as we are sure the syntax is unambiguous. Which it is in
a good part of the DDL support, which in turn is a good-sized part of
the overall grammar.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap [Review]

2013-06-24 Thread Amit Kapila
On Monday, June 24, 2013 5:48 PM Andres Freund wrote:
> On 2013-01-25 09:06:12 +0530, Amit Kapila wrote:
> > On 01/25/2013 03:43 AM, Jameison Martin wrote:
> > > there have been a lot of different threads on this patch. i'm going
> to
> > take a stab at > teasing them out, summarizing them, and addressing
> them
> > individually.
> >
> > > Is this patch on the CF app? I can't seem to find it in 2013-01 or
> > 2013-next, and I
> > > wanted to add your summary.
> >
> > It is in previous CF (2012-11) in Returned with Feedback section
> > https://commitfest.postgresql.org/action/commitfest_view?id=16
> 
> I'd argue that the patch ought to be still marked as "Returned with
> Feedback" instead of again being marked as "Needs Review". I don't
> really see anything new that has come up?

You are right that nothing new has come up. The major reason why this patch
could not go into 9.3 was that it is not clearly evident whether the
performance gains of this patch are sufficient to justify the risks of
committing this code (Tom points out that bugs caused by such changes can
be difficult to identify).

I will summarize the results, and if most of us feel that they are not good
enough, then we can return this patch.

Observations from performance results for tables with fewer columns

1. Approximately 13% space savings for a table with 12 columns, of which the
last 4 are NULLs.
   The table schema is such that the first 3 columns are integers and the
last 9 are booleans.

Refer to the link below for detailed results:
http://www.postgresql.org/message-id/00b401cde1d8$746c90f0$5d45b2d0$@kapila@huawei.com


Observations from performance results for tables with a large number of columns

1. There is a visible performance increase when more than 60~70% of the
columns contain NULLs in a table with a large number of columns:
   approximately 12% for a table having 800 columns of which 600 are NULLs,
or having 300 columns of which 250 are NULLs.
2. There are visible space savings when more than 60~70% of the columns
contain NULLs in a table with a large number of columns:
   approximately 11% for a table having 800 columns of which 600 are NULLs,
or having 300 columns of which 250 are NULLs.

Refer to the link below for detailed results:
http://www.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C382853AA40@szxeml509-mbs


Observations from the original pgbench
--

1. There is a < 1% performance dip for the original pgbench; this could be
due to fluctuation in the readings or to some consequence of the added code
that is difficult to reason out.
Refer to the link below for detailed results:
http://www.postgresql.org/message-id/00bc01cde1f7$be4b4cb0$3ae1e610$@kapila@huawei.com


With Regards,
Amit Kapila.





Re: [HACKERS] changeset generation v5-01 - Patches & git tree

2013-06-24 Thread Andres Freund
On 2013-06-24 06:44:53 -0700, Kevin Grittner wrote:
> Andres Freund  wrote:
> > On 2013-06-23 08:27:32 -0700, Kevin Grittner wrote:
> 
> >> make maintainer-clean ; ./configure --prefix=$PWD/Debug --enable-debug
> >> --enable-cassert --enable-depend --with-libxml --with-libxslt 
> >> --with-openssl
> >> --with-perl --with-python && make -j4 world
> 
> >> [ build failure referencing pg_receivellog.o ]
> 
> > I have seen that once as well. It's really rather strange since
> > pg_receivellog.o is a clear prerequisite for pg_receivellog. I couldn't
> > reproduce it reliably though, even after doing some dozen rebuilds or so.
> >
> >> It works with this patch-on-patch:
> 
> >>  clean distclean maintainer-clean:
> >> -   rm -f pg_basebackup$(X) pg_receivexlog$(X) $(OBJS) pg_basebackup.o
> >> pg_receivexlog.o pg_receivellog.o
> >> +   rm -f pg_basebackup$(X) pg_receivexlog$(X) pg_receivellog$(X) $(OBJS)
> >> pg_basebackup.o pg_receivexlog.o pg_receivellog.o
> 
> >> > +  rm -f '$(DESTDIR)$(bindir)/pg_receivellog$(X)'
> >> Oops.  That part is not needed.
> >
> > Hm. Why not?
> 
> Well, I could easily be wrong on just about anything to do with
> make files, but on a second look that appeared to be dealing with
> eliminating an installed pg_receivellog binary, which is not
> created.

I think it actually is?

install: all installdirs
	$(INSTALL_PROGRAM) pg_basebackup$(X) '$(DESTDIR)$(bindir)/pg_basebackup$(X)'
	$(INSTALL_PROGRAM) pg_receivexlog$(X) '$(DESTDIR)$(bindir)/pg_receivexlog$(X)'
	$(INSTALL_PROGRAM) pg_receivellog$(X) '$(DESTDIR)$(bindir)/pg_receivellog$(X)'

> > I don't think either hunk has anything to do with that buildfailure
> > though - can you reproduce the error without?
> 
> I tried that scenario three times and it failed three times.  Then
> I made the above changes and it worked.  Then I eliminated the one
> on the uninstall target and tried a couple more times and it worked
> on both attempts.  The scenario is to have a `make world` build in
> the source tree, and run the above line starting with `make
> maintainer-clean` and going to `make -j4 world`.

Hm. I think it might be something in make's intermediate target logic
biting us. Anyway, if the patch fixes that: great ;). Merged it locally
since it's obviously missing.

> I did notice that without that change to the maintainer-clean
> target I did not get a pg_receivellog.Po file in
> src/bin/pg_basebackup/.deps/ -- and with it I do.

Yea, according to your log it's not even built before pg_receivellog is
linked.

Thanks,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] changeset generation v5-01 - Patches & git tree

2013-06-24 Thread Kevin Grittner
Andres Freund  wrote:
> On 2013-06-23 10:32:05 -0700, Kevin Grittner wrote:

>> The contrib/test_logical_decoding/sql/ddl.sql script is generating
>> unexpected results.  For both table_with_pkey and
>> table_with_unique_not_null, updates of the primary key column are
>> showing:
>>
>> old-pkey: id[int4]:0
>>
>> ... instead of the expected value of 2 or -2.
>>
>> See attached.
>
> Hm. Any chance this was an incomplete rebuild?

With my hack on the pg_basebackup Makefile, `make -j4 world` is
finishing with no errors and:

PostgreSQL, contrib, and documentation successfully made. Ready to install.

> I seem to remember having seen that once because some header
> dependency wasn't recognized correctly after applying some patch.

I wonder whether this is related to the build problems we've been
discussing on the other fork of this thread.  I was surprised to
see this error when I got past the maintainer-clean full build
problems, because I thought I had seen clean `make check-world`
regression tests after applying each incremental patch file.  Until
I read this I had been assuming that somehow I missed the error on
the 16th and 17th iterations; but now I'm suspecting that I didn't
miss anything after all -- it may just be another symptom of the
build problems.

> Otherwise, could you give me:
> * the version you applied the patch on

7dfd5cd21c0091e467b16b31a10e20bbedd0a836

> * os/compiler

Linux Kevin-Desktop 3.5.0-34-generic #55-Ubuntu SMP Thu Jun 6 20:18:19 UTC 2013 
x86_64 x86_64 x86_64 GNU/Linux
gcc (Ubuntu/Linaro 4.7.2-2ubuntu1) 4.7.2

> Because I can't reproduce it, despite some playing around...

Maybe if you can reproduce the build problems I'm seeing...

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: FILTER for aggregates [was Re: Department of Redundancy Department: makeNode(FuncCall) division]

2013-06-24 Thread Greg Stark
On Mon, Jun 24, 2013 at 3:50 AM, Tom Lane  wrote:
> Or maybe they really don't give a damn about breaking
> applications every time they invent a new reserved word?

I think this is the obvious conclusion. In the standard the reserved
words are pretty explicitly reserved and not legal column names, no?

I think their model is that applications work with a certain version
of SQL and they're not expected to work with a new version without
extensive updating.

-- 
greg


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap [Review]

2013-06-24 Thread Tom Lane
Amit Kapila  writes:
> I will summarize the results, and if most of us feel that they are not good
> enough, then we can return this patch.

Aside from the question of whether there's really any generally-useful
performance improvement from this patch, it strikes me that this change
forecloses other games that we might want to play with interpretation of
the value of a tuple's natts.

In particular, when I was visiting Salesforce last week, the point came
up that they'd really like ALTER TABLE ADD COLUMN to be "free" even when
the column had a non-null DEFAULT.  It's not too difficult to imagine
how we might support that, at least when the default expression is a
constant: decree that if the requested attribute number is beyond natts,
look aside at the tuple descriptor to see if the column is marked as
having a magic default value, and if so, substitute that rather than
just returning NULL.  (This has to be a "magic" value separate from
the current default, else a subsequent DROP or SET DEFAULT would do
the wrong thing.)
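
(For concreteness, here is a minimal standalone sketch of the
look-aside rule described above.  Every type and field name in it --
SketchTuple, has_magic_default, and so on -- is hypothetical and none
of it is PostgreSQL code; it only spells out the rule: an attribute
number beyond the tuple's stored natts plus a recorded constant default
yields that default instead of NULL.)

#include <stdbool.h>

/* Hypothetical, simplified stand-ins for a tuple and its descriptor. */
typedef struct SketchAttr
{
    bool        has_magic_default;  /* column added with a constant DEFAULT */
    long        magic_default;      /* the frozen default value */
} SketchAttr;

typedef struct SketchTupleDesc
{
    int         natts_desc;         /* attributes known to the catalog */
    SketchAttr *attrs;
} SketchTupleDesc;

typedef struct SketchTuple
{
    int         natts_stored;       /* attributes physically stored in the tuple */
    long       *values;
    bool       *nulls;
} SketchTuple;

/*
 * Fetch attribute attnum (1-based).  If the tuple predates the column
 * (attnum > natts_stored), substitute the magic default recorded in the
 * descriptor instead of just returning NULL.
 */
long
sketch_getattr(const SketchTuple *tup, const SketchTupleDesc *desc,
               int attnum, bool *isnull)
{
    if (attnum > tup->natts_stored)
    {
        const SketchAttr *att = &desc->attrs[attnum - 1];

        if (att->has_magic_default)
        {
            *isnull = false;
            return att->magic_default;
        }
        *isnull = true;
        return 0;
    }
    *isnull = tup->nulls[attnum - 1];
    return tup->values[attnum - 1];
}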

However, this idea conflicts with an optimization that supposes it can
reduce natts to suppress null columns: if the column was actually stored
as null, you'd lose that information, and would incorrectly return the
magic default on subsequent reads.

I think it might be possible to make both ideas play together, by
not reducing natts further than the last column with a magic default.
However, that means extra complexity in heap_form_tuple, which would
invalidate the performance measurements done in support of this patch.

In short, there's no free lunch ...

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Support for REINDEX CONCURRENTLY

2013-06-24 Thread Fujii Masao
On Mon, Jun 24, 2013 at 7:39 PM, Andres Freund  wrote:
> On 2013-06-24 07:46:34 +0900, Michael Paquier wrote:
>> On Mon, Jun 24, 2013 at 7:22 AM, Fujii Masao  wrote:
>> > Compile error ;)
>> It looks like filterdiff did not work correctly when generating the
>> latest patch with context diffs, I cannot apply it cleanly either.
>> This is perhaps due to a wrong manipulation from me. Please try the
>> attached that has been generated as a raw git output. It works
>> correctly with a git apply. I just checked.
>
> Did you check whether that introduces a performance regression?
>
>
>>  /* --
>> + * toast_get_valid_index
>> + *
>> + *   Get the valid index of given toast relation. A toast relation can only
>> + *   have one valid index at the same time. The lock taken on the index
>> + *   relations is released at the end of this function call.
>> + */
>> +Oid
>> +toast_get_valid_index(Oid toastoid, LOCKMODE lock)
>> +{
>> + ListCell   *lc;
>> + List   *indexlist;
>> + int num_indexes, i = 0;
>> + Oid validIndexOid;
>> + Relation   validIndexRel;
>> + Relation   *toastidxs;
>> + Relation   toastrel;
>> +
>> + /* Get the index list of relation */
>> + toastrel = heap_open(toastoid, lock);
>> + indexlist = RelationGetIndexList(toastrel);
>> + num_indexes = list_length(indexlist);
>> +
>> + /* Open all the index relations */
>> + toastidxs = (Relation *) palloc(num_indexes * sizeof(Relation));
>> + foreach(lc, indexlist)
>> + toastidxs[i++] = index_open(lfirst_oid(lc), lock);
>> +
>> + /* Fetch valid toast index */
>> + validIndexRel = toast_index_fetch_valid(toastidxs, num_indexes);
>> + validIndexOid = RelationGetRelid(validIndexRel);
>> +
>> + /* Close all the index relations */
>> + for (i = 0; i < num_indexes; i++)
>> + index_close(toastidxs[i], lock);
>> + pfree(toastidxs);
>> + list_free(indexlist);
>> +
>> + heap_close(toastrel, lock);
>> + return validIndexOid;
>> +}
>
> Just to make sure, could you check we've found a valid index?
>
>>  static bool
>> -toastrel_valueid_exists(Relation toastrel, Oid valueid)
>> +toastrel_valueid_exists(Relation toastrel, Oid valueid, LOCKMODE lockmode)
>>  {
>>   bool result = false;
>>   ScanKeyData toastkey;
>>   SysScanDesc toastscan;
>> + int i = 0;
>> + int num_indexes;
>> + Relation   *toastidxs;
>> + Relation   validtoastidx;
>> + ListCell   *lc;
>> + List   *indexlist;
>> +
>> + /* Ensure that the list of indexes of toast relation is computed */
>> + indexlist = RelationGetIndexList(toastrel);
>> + num_indexes = list_length(indexlist);
>> +
>> + /* Open each index relation necessary */
>> + toastidxs = (Relation *) palloc(num_indexes * sizeof(Relation));
>> + foreach(lc, indexlist)
>> + toastidxs[i++] = index_open(lfirst_oid(lc), lockmode);
>> +
>> + /* Fetch a valid index relation */
>> + validtoastidx = toast_index_fetch_valid(toastidxs, num_indexes);
>
> Those 10 lines are repeated multiple times, in different
> functions. Maybe move them into toast_index_fetch_valid and rename that
> to
> Relation *
> toast_open_indexes(Relation toastrel, LOCKMODE mode, size_t *numindexes, 
> size_t valididx);
>
> That way we also wouldn't fetch/copy the indexlist twice in some
> functions.
>
>> + /* Clean up */
>> + for (i = 0; i < num_indexes; i++)
>> + index_close(toastidxs[i], lockmode);
>> + list_free(indexlist);
>> + pfree(toastidxs);
>
> The indexlist could already be freed inside the function proposed
> above...
>
>
>> diff --git a/src/backend/commands/tablecmds.c 
>> b/src/backend/commands/tablecmds.c
>> index 8294b29..2b777da 100644
>> --- a/src/backend/commands/tablecmds.c
>> +++ b/src/backend/commands/tablecmds.c
>> @@ -8782,7 +8783,13 @@ ATExecSetTableSpace(Oid tableOid, Oid newTableSpace, 
>> LOCKMODE lockmode)
>>errmsg("cannot move temporary tables of other 
>> sessions")));
>>
>
>> + foreach(lc, reltoastidxids)
>> + {
>> + Oid toastidxid = lfirst_oid(lc);
>> + if (OidIsValid(toastidxid))
>> + ATExecSetTableSpace(toastidxid, newTableSpace, 
>> lockmode);
>> + }
>
> Copy & pasted OidIsValid(), shouldn't be necessary anymore.
>
>
> Otherwise I think there's not really much left to be done. Fujii?

Yep, will check.
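
For reference, a rough sketch of the combined helper suggested above,
built only from the calls that already appear in the quoted patch.  The
exact signature (int rather than size_t outputs), the placeholder for
the valid-index position, and the omission of error handling are
assumptions of this sketch, not a settled interface:

/*
 * Sketch only: open all indexes of a toast relation under 'lock' and
 * report how many there are and which one is valid.  This folds the
 * block that the quoted patch repeats in several functions; the body of
 * toast_index_fetch_valid() is elided here.
 */
static Relation *
toast_open_indexes(Relation toastrel, LOCKMODE lock,
                   int *num_indexes, int *valid_index)
{
    List       *indexlist = RelationGetIndexList(toastrel);
    Relation   *toastidxs;
    ListCell   *lc;
    int         i = 0;

    *num_indexes = list_length(indexlist);
    toastidxs = (Relation *) palloc(*num_indexes * sizeof(Relation));
    foreach(lc, indexlist)
        toastidxs[i++] = index_open(lfirst_oid(lc), lock);

    /* ... pick the valid index here, as toast_index_fetch_valid() does ... */
    *valid_index = 0;           /* placeholder */

    list_free(indexlist);
    return toastidxs;
}

Callers would then just loop over the returned array for the matching
index_close() calls, as the existing code already does.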

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] changeset generation v5-01 - Patches & git tree

2013-06-24 Thread Kevin Grittner
Andres Freund  wrote:
> On 2013-06-24 06:44:53 -0700, Kevin Grittner wrote:
>> Andres Freund  wrote:
>>> On 2013-06-23 08:27:32 -0700, Kevin Grittner wrote:

> +  rm -f '$(DESTDIR)$(bindir)/pg_receivellog$(X)'
 Oops.  That part is not needed.
>>>
>>> Hm. Why not?
>>
>> Well, I could easily be wrong on just about anything to do with
>> make files, but on a second look that appeared to be dealing with
>> eliminating an installed pg_receivellog binary, which is not
>> created.
>
> I think it actually is?

Oh, yeah  I see it now.  I warned you I could be wrong.  :-/

I just had a thought -- perhaps the dependency information
is being calculated incorrectly.  Attached is the dependency file
from the successful build (with the adjusted Makefile), which still
fails the test_logical_decoding regression test, with the same diff.

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

pg_receivellog.o: pg_receivellog.c ../../../src/include/postgres.h \
 ../../../src/include/c.h ../../../src/include/postgres_ext.h \
 ../../../src/include/pg_config_ext.h ../../../src/include/pg_config.h \
 ../../../src/include/pg_config_manual.h \
 ../../../src/include/pg_config_os.h ../../../src/include/port.h \
 ../../../src/include/utils/elog.h ../../../src/include/utils/errcodes.h \
 ../../../src/include/utils/palloc.h \
 ../../../src/include/common/fe_memutils.h \
 ../../../src/include/utils/palloc.h \
 ../../../src/interfaces/libpq/libpq-fe.h \
 ../../../src/include/postgres_ext.h \
 ../../../src/include/libpq/pqsignal.h \
 ../../../src/include/access/xlog_internal.h \
 ../../../src/include/access/xlogdefs.h \
 ../../../src/include/datatype/timestamp.h \
 ../../../src/include/lib/stringinfo.h ../../../src/include/pgtime.h \
 ../../../src/include/storage/block.h \
 ../../../src/include/storage/relfilenode.h \
 ../../../src/include/storage/backendid.h \
 ../../../src/include/utils/datetime.h ../../../src/include/nodes/nodes.h \
 ../../../src/include/utils/timestamp.h ../../../src/include/fmgr.h \
 receivelog.h streamutil.h ../../../src/include/getopt_long.h

../../../src/include/postgres.h:

../../../src/include/c.h:

../../../src/include/postgres_ext.h:

../../../src/include/pg_config_ext.h:

../../../src/include/pg_config.h:

../../../src/include/pg_config_manual.h:

../../../src/include/pg_config_os.h:

../../../src/include/port.h:

../../../src/include/utils/elog.h:

../../../src/include/utils/errcodes.h:

../../../src/include/utils/palloc.h:

../../../src/include/common/fe_memutils.h:

../../../src/include/utils/palloc.h:

../../../src/interfaces/libpq/libpq-fe.h:

../../../src/include/postgres_ext.h:

../../../src/include/libpq/pqsignal.h:

../../../src/include/access/xlog_internal.h:

../../../src/include/access/xlogdefs.h:

../../../src/include/datatype/timestamp.h:

../../../src/include/lib/stringinfo.h:

../../../src/include/pgtime.h:

../../../src/include/storage/block.h:

../../../src/include/storage/relfilenode.h:

../../../src/include/storage/backendid.h:

../../../src/include/utils/datetime.h:

../../../src/include/nodes/nodes.h:

../../../src/include/utils/timestamp.h:

../../../src/include/fmgr.h:

receivelog.h:

streamutil.h:

../../../src/include/getopt_long.h:

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: FILTER for aggregates [was Re: Department of Redundancy Department: makeNode(FuncCall) division]

2013-06-24 Thread Tom Lane
Greg Stark  writes:
> I think their model is that applications work with a certain version
> of SQL and they're not expected to work with a new version without
> extensive updating.

Hm.  We could invent a "sql_version" parameter and tweak the lexer to
return keywords added in spec versions later than that as IDENT.
However, I fear such a parameter would be a major PITA from the user's
standpoint, just like most of our backwards-compatibility GUCs have
proven to be.
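
(As a toy illustration of the idea -- emphatically not PostgreSQL's
lexer, and with made-up token codes; the FILTER entry is only there
because it is the keyword this thread started from:)

#include <stddef.h>
#include <string.h>

/* Hypothetical keyword table: each entry records the spec version
 * (as a year) in which the word became reserved. */
typedef struct VersionedKeyword
{
    const char *word;
    int         token;          /* token code the lexer would return */
    int         since_version;  /* SQL spec year that reserved it */
} VersionedKeyword;

static const VersionedKeyword keywords[] = {
    {"filter", 1001, 2003},
    /* ... further entries, one per keyword and spec version ... */
};

#define IDENT_TOKEN 999         /* hypothetical token code for identifiers */

/*
 * Return the keyword's token only if the configured sql_version already
 * reserves it; otherwise treat the word as a plain identifier.
 */
int
keyword_token(const char *word, int sql_version)
{
    for (size_t i = 0; i < sizeof(keywords) / sizeof(keywords[0]); i++)
    {
        if (strcmp(word, keywords[i].word) == 0)
            return (sql_version >= keywords[i].since_version)
                ? keywords[i].token
                : IDENT_TOKEN;
    }
    return IDENT_TOKEN;
}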

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Dimitri Fontaine
Andrew Dunstan  writes:
> I think we maybe need to be a bit more careful about a name and shame
> policy, or it will be ignored.

I very much don't like that idea of publishing a list of names either.
Editing the reviewer field and sending personal notices is fine by me,
but name and shame is walking the line…

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] changeset generation v5-01 - Patches & git tree

2013-06-24 Thread Andres Freund
On 2013-06-24 07:29:43 -0700, Kevin Grittner wrote:
> Andres Freund  wrote:
> > On 2013-06-23 10:32:05 -0700, Kevin Grittner wrote:
> 
> >> The contrib/test_logical_decoding/sql/ddl.sql script is generating
> >> unexpected results.  For both table_with_pkey and
> >> table_with_unique_not_null, updates of the primary key column are
> >> showing:
> >>
> >> old-pkey: id[int4]:0
> >>
> >> ... instead of the expected value of 2 or -2.
> >>
> >> See attached.
> >
> > Hm. Any chance this was an incomplete rebuild?
> 
> With my hack on the pg_basebackup Makefile, `make -j4 world` is
> finishing with no errors and:

Hm. There were some issues with the test_logical_decoding Makefile not
cleaning up the regression installation properly, which might have
caused the issue.

Could you try after applying the patches and executing a clean and then
rebuild?

Otherwise, could you try applying my git tree so we are sure we test the
same thing?

$ git remote add af git://git.postgresql.org/git/users/andresfreund/postgres.git
$ git fetch af
$ git checkout -b xlog-decoding af/xlog-decoding-rebasing-cf4
$ ./configure ...
$ make

> > Because I can't reproduce it, despite some playing around...
> 
> Maybe if you can reproduce the build problems I'm seeing

Tried your recipe but still couldn't...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From cdd0ed46ab75768f8a2e82394b04e6392d8ed32a Mon Sep 17 00:00:00 2001
From: Andres Freund 
Date: Mon, 24 Jun 2013 11:52:23 +0200
Subject: [PATCH 1/2] wal_decoding: mergme: Fix pg_basebackup makefile

---
 src/bin/pg_basebackup/Makefile | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/src/bin/pg_basebackup/Makefile b/src/bin/pg_basebackup/Makefile
index a41b73c..c251249 100644
--- a/src/bin/pg_basebackup/Makefile
+++ b/src/bin/pg_basebackup/Makefile
@@ -42,6 +42,9 @@ installdirs:
 uninstall:
 	rm -f '$(DESTDIR)$(bindir)/pg_basebackup$(X)'
 	rm -f '$(DESTDIR)$(bindir)/pg_receivexlog$(X)'
+	rm -f '$(DESTDIR)$(bindir)/pg_receivellog$(X)'
 
 clean distclean maintainer-clean:
-	rm -f pg_basebackup$(X) pg_receivexlog$(X) $(OBJS) pg_basebackup.o pg_receivexlog.o pg_receivellog.o
+	rm -f pg_basebackup$(X) pg_receivexlog$(X) pg_receivellog$(X) \
+		pg_basebackup.o pg_receivexlog.o pg_receivellog.o \
+		$(OBJS)
-- 
1.8.2.rc2.4.g7799588.dirty

From 022c2da1873de2fbc93ae524819932719ca41bdb Mon Sep 17 00:00:00 2001
From: Andres Freund 
Date: Mon, 24 Jun 2013 16:47:48 +0200
Subject: [PATCH 2/2] wal_decoding: mergme: Fix test_logical_decoding Makefile

---
 contrib/test_logical_decoding/Makefile | 12 
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/contrib/test_logical_decoding/Makefile b/contrib/test_logical_decoding/Makefile
index 0e7d5d3..3850d44 100644
--- a/contrib/test_logical_decoding/Makefile
+++ b/contrib/test_logical_decoding/Makefile
@@ -4,18 +4,14 @@ OBJS = test_logical_decoding.o
 EXTENSION = test_logical_decoding
 DATA = test_logical_decoding--1.0.sql
 
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
+# Note: because we don't tell the Makefile there are any regression tests,
+# we have to clean those result files explicitly
+EXTRA_CLEAN = -r $(pg_regress_clean_files)
+
 subdir = contrib/test_logical_decoding
 top_builddir = ../..
 include $(top_builddir)/src/Makefile.global
 include $(top_srcdir)/contrib/contrib-global.mk
-endif
-
-test_logical_decoding.o: test_logical_decoding.c
 
 # Disabled because these tests require "wal_level=logical", which
 # typical installcheck users do not have (e.g. buildfarm clients).
-- 
1.8.2.rc2.4.g7799588.dirty


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Maciej Gajewski
Maybe this policy should be mentioned on the Wiki, so newbies like myself
(who wouldn't even dare to review patches submitted by seasoned hackers)
are not surprised by seeing their own name on a shame wall?

M


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Andreas Karlsson

On 06/24/2013 05:40 PM, Maciej Gajewski wrote:

Maybe this policy should be mentioned on the Wiki, so newbies like
myself (who wouldn't even dare to review patches submitted by seasoned
hackers) are not surprised by seeing their own name on a shame wall?


I personally would prefer if the email was sent to those who should be 
reminded instead to the list. My personal experience is that personal 
reminders are more effective than public naming and shaming.


--
Andreas Karlsson


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Joshua D. Drake


On 06/24/2013 08:40 AM, Maciej Gajewski wrote:

Maybe this policy should be mentioned on the Wiki, so newbies like
myself (who wouldn't even dare to review patches submitted by seasoned
hackers) are not surprised by seeing their own name on a shame wall?


It is mentioned. Of course now I can't find it but it is there.

However, I believe you are taking the wrong perspective on this. This is 
not a shame wall. It is a transparent reminder of the policy and those 
who have not assisted in reviewing a patch but have submitted a patch 
themselves.


In short, leave the ego at the door.

Sincerely,

JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Dimitri Fontaine
"Joshua D. Drake"  writes:
> In short, leave the ego at the door.

That's not the problem. Let's welcome those who are able to contribute
their time and skills without making it harder for them. Motivation here
shouldn't be how to avoid getting listed on the shame wall.

My opinion is that if we play the game that way, we will only succeed
in pushing contributors out of the community and the review process.
Please kindly reconsider and don't publish any such backward list again.

Instead, I don't know, fetch some SPI money to offer a special poster or
unique one-time-edition only hoodie or a signed mug or whatever to extra
proficient contributors and turn that into a game people want to win.

-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Hannu Krosing
On 06/24/2013 05:54 PM, Joshua D. Drake wrote:
>
> On 06/24/2013 08:40 AM, Maciej Gajewski wrote:
>> Maybe this policy should be mentioned on the Wiki, so newbies like
>> myself (who wouldn't even dare to review patches submitted by seasoned
>> hackers) are not surprised by seeing their own name on a shame wall?
>
> It is mentioned. Of course now I can't find it but it is there.
>
> However, I believe you are taking the wrong perspective on this. This
> is not a shame wall. It is a transparent reminder of the policy and
> those who have not assisted in reviewing a patch but have submitted a
> patch themselves.
>
> In short, leave the ego at the door.
Would be much easier to do if you did not post this to the list :)

I guess you can also extract the e-mails and post only to the "slackers".

-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] MemoryContextAllocHuge(): selectively bypassing MaxAllocSize

2013-06-24 Thread Noah Misch
On Sat, Jun 22, 2013 at 03:46:49AM -0400, Stephen Frost wrote:
> * Noah Misch (n...@leadboat.com) wrote:
> > The next limit faced by sorts is
> > INT_MAX concurrent tuples in memory, which limits helpful work_mem to about
> > 150 GiB when sorting int4.
> 
> That's frustratingly small. :(

I could appreciate a desire to remove that limit.  The way to do that is to
audit all uses of "int" variables in tuplesort.c and tuplestore.c, changing
them to Size where they can be used as indexes into the memtuples array.
Nonetheless, this new limit is about 50x the current limit; you need an
(unpartitioned) table of 2B+ rows to encounter it.  I'm happy with that.

> > !   if (memtupsize * grow_ratio < INT_MAX)
> > !   newmemtupsize = (int) (memtupsize * grow_ratio);
> > !   else
> > !   newmemtupsize = INT_MAX;
> >   
> > /* We won't make any further enlargement attempts */
> > state->growmemtuples = false;
> 
> I'm not a huge fan of moving directly to INT_MAX.  Are we confident that
> everything can handle that cleanly..?  I feel like it might be a bit
> safer to shy a bit short of INT_MAX (say, by 1K).  Perhaps that's overly
> paranoid, but there's an awful lot of callers and some loop which +2's
> and then overflows would suck, eg:

Where are you seeing "an awful lot of callers"?  The code that needs to be
correct with respect to the INT_MAX limit is all in tuplesort.c/tuplestore.c.
Consequently, I chose to verify that code rather than add a safety factor.  (I
did add an unrelated safety factor to repalloc_huge() itself.)

> Also, could this be used to support hashing larger sets..?  If we change
> NTUP_PER_BUCKET to one, we could end up wanting to create a hash table
> larger than INT_MAX since, with 8-byte pointers, that'd only be around
> 134M tuples.

The INT_MAX limit is an internal limit of tuplesort/tuplestore; other
consumers of the huge allocation APIs are only subject to that limit if they
find reasons to enforce it on themselves.  (Incidentally, the internal limit
in question is INT_MAX tuples, not INT_MAX bytes.)
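
For reference, a minimal self-contained sketch of the clamping idiom in
the quoted hunk.  The function name is made up and the real code in
tuplesort.c tracks more state than this; the point is only that the
product is computed as a double, so it cannot overflow int before the
INT_MAX check:

#include <limits.h>

/*
 * Illustrative only: compute the next memtuples allocation size, capped
 * at INT_MAX slots.  grow_ratio is a double, so memtupsize * grow_ratio
 * is evaluated in floating point; conversion to int happens only after
 * the cap has been checked.
 */
int
next_memtupsize(int memtupsize, double grow_ratio)
{
    double      target = (double) memtupsize * grow_ratio;

    if (target < (double) INT_MAX)
        return (int) target;
    return INT_MAX;             /* and make no further enlargement attempts */
}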

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] MemoryContextAllocHuge(): selectively bypassing MaxAllocSize

2013-06-24 Thread Noah Misch
On Sat, Jun 22, 2013 at 11:36:45AM +0100, Simon Riggs wrote:
> On 13 May 2013 15:26, Noah Misch  wrote:

> I'm concerned that people will accidentally use MaxAllocSize. Can we
> put in a runtime warning if someone tests AllocSizeIsValid() with a
> larger value?

I don't see how we could.  To preempt a repalloc() failure, you test with
AllocSizeIsValid(); testing a larger value is not a programming error.

> >  To demonstrate, I put this to use in
> > tuplesort.c; the patch also updates tuplestore.c to keep them similar.  
> > Here's
> > the trace_sort from building the pgbench_accounts primary key at scale 
> > factor
> > 7500, maintenance_work_mem = '56GB'; memtuples itself consumed 17.2 GiB:
> >
> > LOG:  internal sort ended, 48603324 KB used: CPU 75.65s/305.46u sec elapsed 
> > 391.21 sec
> >
> > Compare:
> >
> > LOG:  external sort ended, 1832846 disk blocks used: CPU 77.45s/988.11u sec 
> > elapsed 1146.05 sec
> 
> Cool.
> 
> I'd like to put in an explicit test for this somewhere. Obviously not
> part of normal regression, but somewhere, at least, so we have
> automated testing that we all agree on. (yes, I know we don't have
> that for replication/recovery yet, but thats why I don't want to
> repeat that mistake).

Probably the easiest way to test from nothing is to run "pgbench -i -s 7500"
under a high work_mem.  I agree that an automated test suite dedicated to
coverage of scale-dependent matters would be valuable, though I'm disinclined
to start one in conjunction with this particular patch.

> > The comment at MaxAllocSize said that aset.c expects doubling the size of an
> > arbitrary allocation to never overflow, but I couldn't find the code in
> > question.  AllocSetAlloc() does double sizes of blocks used to aggregate 
> > small
> > allocations, so maxBlockSize had better stay under SIZE_MAX/2.  Nonetheless,
> > that expectation does apply to dozens of repalloc() users outside aset.c, 
> > and
> > I preserved it for repalloc_huge().  64-bit builds will never notice, and I
> > won't cry for the resulting 2 GiB limit on 32-bit.
> 
> Agreed. Can we document this for the relevant parameters?

I attempted to cover most of that in the comment above MaxAllocHugeSize, but I
did not mention the maxBlockSize constraint.  I'll add an
Assert(AllocHugeSizeIsValid(maxBlockSize)) and a comment to
AllocSetContextCreate().  Did I miss documenting anything else notable?

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Josh Berkus
On 06/24/2013 08:01 AM, Dimitri Fontaine wrote:
> Andrew Dunstan  writes:
>> I think we maybe need to be a bit more careful about a name and shame
>> policy, or it will be ignored.
> 
> I very much don't like that idea of publishing a list of names either.
> Editing the reviewer field and sending personal notices is fine by me,
> but name and shame is walking the line…

Actually, every submitter on that list -- including Maciej -- was sent a
personal, private email a week ago.  A few (3) chose to take the
opportunity to review things, or promised to do so, including a brand
new Chinese contributor who needed help with English to review his
patch.  The vast majority chose not to respond to my email to them at
all.  When private email fails, the next step is public email.

Maciej is correct that this policy also belongs on the "how to submit a
patch" wiki page.  I will remedy that.

Doing the patch counts yesterday, it became clear to me that the reason
for the patch review pileups was that many people were submitting
patches but not participating in the review process at all.  That is, we
have 100 to 150 people a year submitting patches, but relying entirely
on the committers and a few heroic uber-reviewers to do 90% of the patch
review.  This is the commitfest problem in a nutshell.

The purpose of the list was to make it completely apparent where the
problem in clearing the patch queue lies, and to get some of our
submitters to do patch review.

Per both of my emails yesterday, I am trying to make sure that this CF
finishes on time.  Following the rules passed at the developer meetings
for how CFs are to be run, I am doing so.  If the result is
unsatisfactory, I can always resign as CFM.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Dimitri Fontaine
Josh Berkus  writes:
> patch.  The vast majority chose not to respond to my email to them at
> all.  When private email fails, the next step is public email.

The only problem I have here is that I don't remember about deciding to
publish a list of failures by public email at all. I hope it's not my
memory failing me here, because then I would have to remember why I
didn't speak up against that idea at the time.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Request for Patch Feedback: Lag & Lead Window Functions Can Ignore Nulls

2013-06-24 Thread Nicholas White
Good catch - I've attached a patch to address your point 1. It now returns
the below (i.e. correctly doesn't fill in the saved value if the index is
out of the window. However, I'm not sure whether (e.g.) lead-2-ignore-nulls
means count forwards two rows, and if that's null use the last one you've
seen (the current implementation) or count forwards two non-null rows (as
you suggest). The behaviour isn't specified in a (free) draft of the 2003
standard (http://www.wiscorp.com/sql_2003_standard.zip), and I don't have
access to the (non-free) final version. Could someone who does have access
to it clarify this? I've also added your example to the regression test
cases.


select val, lead(val, 2) ignore nulls over (order by id) from test_table;
 val | lead
-----+------
   1 |    3
   2 |    4
   3 |    4
   4 |    4
     |    4
     |    5
     |    6
   5 |    7
   6 |
   7 |
(10 rows)
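
(To make the two candidate semantics concrete, here is a small
self-contained sketch over a plain array with a null mask.  It does not
use the window-function API; NO_VALUE, lead_fallback and lead_skip_nulls
are made-up names, and NO_VALUE simply stands in for SQL NULL.
Interpretation A is written to reproduce the output above; B is the
"count forward two non-null rows" reading.)

#include <stdbool.h>
#include <stdio.h>

#define NO_VALUE (-1)           /* stand-in for SQL NULL in the result */

/*
 * Interpretation A (current behaviour): step forward exactly `offset`
 * rows; if that row is NULL, fall back to the most recent non-NULL
 * value at or before it.  A target beyond the partition yields NULL.
 */
static int
lead_fallback(const int *vals, const bool *nulls, int nrows,
              int row, int offset)
{
    int         target = row + offset;
    int         saved = NO_VALUE;

    if (target >= nrows)
        return NO_VALUE;        /* out of the window: no fallback */
    for (int i = 0; i <= target; i++)
        if (!nulls[i])
            saved = vals[i];    /* remember the latest non-NULL value */
    return saved;               /* equals vals[target] when it is non-NULL */
}

/*
 * Interpretation B: step forward until `offset` non-NULL rows have been
 * passed, and return that row's value.
 */
static int
lead_skip_nulls(const int *vals, const bool *nulls, int nrows,
                int row, int offset)
{
    int         seen = 0;

    for (int i = row + 1; i < nrows; i++)
    {
        if (nulls[i])
            continue;
        if (++seen == offset)
            return vals[i];
    }
    return NO_VALUE;
}

int
main(void)
{
    /* Mirrors the test_table data behind the query above: 1..4, three NULLs, 5..7 */
    int         vals[]  = {1, 2, 3, 4, 0, 0, 0, 5, 6, 7};
    bool        nulls[] = {false, false, false, false, true, true, true,
                           false, false, false};

    for (int row = 0; row < 10; row++)
        printf("row %d: A=%d  B=%d\n", row,
               lead_fallback(vals, nulls, 10, row, 2),
               lead_skip_nulls(vals, nulls, 10, row, 2));
    return 0;
}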

If the other reviewers are happy with your grammar changes then I'll merge
them into the patch. Alternatively, if departing from the standard is OK
then we could reorder the keywords so that a window function is like SELECT
lag(x,1) OVER RESPECT NULLS (ORDER BY y) - i.e. putting the respect /
ignore tokens after the OVER reserved keyword. Although non-standard it'd
make the grammar change trivial.


> Also, I think someone mentioned this already, but what about
> first_value() and last_value()? Shouldn't we do those at the same time?

I didn't include this functionality for the first / last value window
functions as their implementation is currently a bit different; they just
call WinGetFuncArgInFrame to pick out a single value. Making these
functions respect nulls would involve changing the single lookup to a walk
through the tuples to find the first non-null version, and keeping track of
this index in a struct in the context. As this change is reasonably
orthogonal I was going to submit it as a separate patch.

Thanks -


lead-lag-ignore-nulls.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Josh Berkus
On 06/24/2013 10:02 AM, Dimitri Fontaine wrote:
> Josh Berkus  writes:
>> patch.  The vast majority chose not to respond to my email to them at
>> all.  When private email fails, the next step is public email.
> 
> The only problem I have here is that I don't remember about deciding to
> publish a list of failures by public email at all. I hope it's not my
> memory failing me here, because then I would have to remember why I
> didn't speak up against that idea at the time.

You didn't decide anything.  As the CFM, I did.  My job for this month
is to make sure that 100% of patches in that queue get reviewed and
either committed or bounced by July 15th.  I'm doing my job.

I will be more than happy to resign as CFM and turn it over to someone
else if people have a problem with it.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] MemoryContextAllocHuge(): selectively bypassing MaxAllocSize

2013-06-24 Thread Stephen Frost
On Monday, June 24, 2013, Noah Misch wrote:

> On Sat, Jun 22, 2013 at 03:46:49AM -0400, Stephen Frost wrote:
> > * Noah Misch (n...@leadboat.com ) wrote:
> > > The next limit faced by sorts is
> > > INT_MAX concurrent tuples in memory, which limits helpful work_mem to
> about
> > > 150 GiB when sorting int4.
> >
> > That's frustratingly small. :(
>
> I could appreciate a desire to remove that limit.  The way to do that is to
> audit all uses of "int" variables in tuplesort.c and tuplestore.c, changing
> them to Size where they can be used as indexes into the memtuples array.


Right, that's about what I figured would need to be done.


> Nonetheless, this new limit is about 50x the current limit; you need an
> (unpartitioned) table of 2B+ rows to encounter it.  I'm happy with that.


Definitely better but I could see cases with that many tuples in the
not-too-distant future, esp. when used with MinMax indexes...


> > > !   if (memtupsize * grow_ratio < INT_MAX)
> > > !   newmemtupsize = (int) (memtupsize * grow_ratio);
> > > !   else
> > > !   newmemtupsize = INT_MAX;
> > >
> > > /* We won't make any further enlargement attempts */
> > > state->growmemtuples = false;
> >
> > I'm not a huge fan of moving directly to INT_MAX.  Are we confident that
> > everything can handle that cleanly..?  I feel like it might be a bit
> > safer to shy a bit short of INT_MAX (say, by 1K).  Perhaps that's overly
> > paranoid, but there's an awful lot of callers and some loop which +2's
> > and then overflows would suck, eg:
>
> Where are you seeing "an awful lot of callers"?  The code that needs to be
> correct with respect to the INT_MAX limit is all in
> tuplesort.c/tuplestore.c.
> Consequently, I chose to verify that code rather than add a safety factor.
>  (I
> did add an unrelated safety factor to repalloc_huge() itself.)


Ok, I was thinking this code was used beyond tuplesort (I was thinking it
was actually associated with palloc). Apologies for the confusion. :)


> > Also, could this be used to support hashing larger sets..?  If we change
> > NTUP_PER_BUCKET to one, we could end up wanting to create a hash table
> > larger than INT_MAX since, with 8-byte pointers, that'd only be around
> > 134M tuples.
>
> The INT_MAX limit is an internal limit of tuplesort/tuplestore; other
> consumers of the huge allocation APIs are only subject to that limit if
> they
> find reasons to enforce it on themselves.  (Incidentally, the internal
> limit
> in question is INT_MAX tuples, not INT_MAX bytes.)


There are other places where we use integers as indexes into arrays of
tuples (hashing is at least one other area...), and those are then also
subject to INT_MAX, which was really what I was getting at.  We might
move the hashing code to use the _huge functions and would then need to
adjust that code to use Size for the index into the hash table's array
of pointers.

Thanks,

Stephen


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Andres Freund
On 2013-06-24 10:10:11 -0700, Josh Berkus wrote:
> On 06/24/2013 10:02 AM, Dimitri Fontaine wrote:
> > Josh Berkus  writes:
> >> patch.  The vast majority chose not to respond to my email to them at
> >> all.  When private email fails, the next step is public email.
> > 
> > The only problem I have here is that I don't remember about deciding to
> > publish a list of failures by public email at all. I hope it's not my
> > memory failing me here, because then I would have to remember why I
> > didn't speak up against that idea at the time.
> 
> You didn't decide anything.  As the CFM, I did.  My job for this month
> is to make sure that 100% of patches in that queue get reviewed and
> either committed or bounced by July 15th.  I'm doing my job.
> 
> I will be more than happy to resign as CFM and turn it over to someone
> else if people have a problem with it.

Heck, Josh. People have to be allowed to criticize *a small part* of your
work without you understanding it as a fundamental request to step back
from being CFM.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Josh Berkus

> Instead, I don't know, fetch some SPI money to offer a special poster or
> unique one-time-edition only hoodie or a signed mug or whatever to extra
> proficient contributors and turn that into a game people want to win.

I like that idea too.  Provided that we allocate enough funding that I
can have a paid admin handle the shipping etc.  Frankly, I'd be up for
the idea that patch reviewers get a special t-shirt for each PostgresQL
release, for reviewing even *one* patch.

Mind you, we wouldn't be able to reward a few reviewers, because they
live in countries to which it's impossible to ship from abroad.

I have previously proposed that all of the reviewers of a given
PostgreSQL release be honored in the release notes as a positive
incentive, and was turned down.  Not coincidentally, we don't seem to
have any reviewers-at-large anymore.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Joshua D. Drake


On 06/24/2013 10:10 AM, Josh Berkus wrote:


On 06/24/2013 10:02 AM, Dimitri Fontaine wrote:

Josh Berkus  writes:

patch.  The vast majority chose not to respond to my email to them at
all.  When private email fails, the next step is public email.


The only problem I have here is that I don't remember about deciding to
publish a list of failures by public email at all. I hope it's not my
memory failing me here, because then I would have to remember why I
didn't speak up against that idea at the time.


You didn't decide anything.  As the CFM, I did.  My job for this month
is to make sure that 100% of patches in that queue get reviewed and
either committed or bounced by July 15th.  I'm doing my job.

I will be more than happy to resign as CFM and turn it over to someone
else if people have a problem with it.


Let's not be hasty :)

I think JoshB is spot on in this. He sent previous private emails, and a 
week later opened up the transparency so that everyone understood what 
was going on.


What I find unfortunate is that people are spending a bunch of time on
this argument, which has been clearly thought out by Josh, instead of
reviewing patches.


I repeat:

Leave your ego at the door. Josh is doing what could be considered one 
of the most thankless (public) jobs in this project. How about we 
support him in getting these patches taken care of instead of whining 
about the fact that he called us out for not doing our jobs (reviewing 
patches) in the first place.


Sincerely,

Joshua D. Drake

--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Josh Berkus

>> I will be more than happy to resign as CFM and turn it over to someone
>> else if people have a problem with it.
> 
> Heck, Josh. People have to be allowed to criticize *a small part* of your
> work without you understanding it as a fundamental request to step back
> from being CFM.

Criticize, yes.  Which I'm responding to.

But if this group votes that I am not to publish a public list of
submitters who aren't reviewing again, or otherwise votes to restrict my
ability to enforce the rules which the Developer Meetings have decided
on for CFs, I will resign as CFM.  The job is too hard to do with one
hand tied behind my back. That's what I'm saying.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Atri Sharma
On Mon, Jun 24, 2013 at 10:54 PM, Josh Berkus  wrote:
>
>>> I will be more than happy to resign as CFM and turn it over to someone
>>> else if people have a problem with it.
>>
>> Heck, Josh. People have to be allowed to critize *a small part* of your
>> work without you understanding it as a fundamental request to step back
>> from being CFM.
>
> Criticize, yes.  Which I'm responding to.
>
> But if this group votes that I am not to publish a public list of
> submitters who aren't reviewing again, or otherwise votes to restrict my
> ability to enforce the rules which the Developer Meetings have decided
> on for CFs, I will resign as CFM.  The job is too hard to do with one
> hand tied behind my back. That's what I'm saying.

Hi Josh,

Not sure if this is out of line, but I would be glad to help you out
in any way possible for this CF. Please let me know if I can lighten
your burden in any way.

Regards,

Atri


--
Regards,

Atri
l'apprenant


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Joshua D. Drake


On 06/24/2013 10:22 AM, Josh Berkus wrote:


Mind you, we wouldn't be able to reward a few reviewers, because they
live in countries to which it's impossible to ship from abroad.

I have previously proposed that all of the reviewers of a given
PostgreSQL release be honored in the release notes as a positive
incentive, and was denied on this from doing so.  Not coincidentally, we
don't seem to have any reviewers-at-large anymore.


I don't like the idea of sending gifts. I do like the idea of public
thanks.  We should put full recognition in the release notes for anyone
who reviews a patch.  If they hadn't reviewed the patch, the person who
wrote it would not have gotten it committed anyway.  Writing the patch
is only half the battle.


Heck, think about the FKLocks patch, Alvaro wrote that patch but I know 
that Noah (as well as others) put a herculean effort into helping get it 
committed.


Reviewer recognition should be on the same level as the submitter.

JD







--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Move unused buffers to freelist

2013-06-24 Thread Robert Haas
On Thu, Jun 6, 2013 at 3:01 AM, Amit Kapila  wrote:
> To avoid above 3 factors in test readings, I used below steps:
> 1. Initialize the database with scale factor such that database size +
> shared_buffers = RAM (shared_buffers = 1/4 of RAM).
>For example:
>Example -1
> if RAM = 128G, then initialize db with scale factor = 6700
> and shared_buffers = 32GB.
> Database size (98 GB) + shared_buffers (32GB) = 130 (which
> is approximately equal to total RAM)
>Example -2 (this is based on your test m/c)
> If RAM = 64GB, then initialize db with scale factor = 3400
> and shared_buffers = 16GB.
> 2. reboot m/c
> 3. Load all buffers with data (tables/indexes of pgbench) using pg_prewarm.
> I had loaded 3 times, so that usage count of buffers will be approximately
> 3.

Hmm.  I don't think the usage count will actually end up being 3,
though, because the amount of data you're loading is sized to 3/4 of
RAM, and shared_buffers is just 1/4 of RAM, so I think that each run
of pg_prewarm will end up turning over the entire cache and you'll
never get any usage counts more than 1 this way.  Am I confused?

I wonder if it would be beneficial to test the case where the database
size is just a little more than shared_buffers.  I think that would
lead to a situation where the usage counts are high most of the time,
which - now that you mention it - seems like the sweet spot for this
patch.
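
For reference, plugging the thread's own numbers into a quick sketch
(the 98 GB at scale 6700 figure comes from Amit's mail quoted above;
everything else is derived from it):

#include <stdio.h>

int
main(void)
{
    /* Scale 6700 gave a ~98 GB database in the quoted setup. */
    const double gb_per_scale_unit = 98.0 / 6700.0;

    /* Example 2 above: a 64 GB box, scale 3400, shared_buffers = RAM/4. */
    double      ram_gb = 64.0;
    double      shared_buffers_gb = ram_gb / 4.0;         /* 16 GB */
    double      db_gb = 3400 * gb_per_scale_unit;         /* ~49.7 GB */

    printf("db + shared_buffers = %.1f GB (vs %.0f GB RAM)\n",
           db_gb + shared_buffers_gb, ram_gb);

    /*
     * Each pg_prewarm pass pulls roughly 3x the cache size through
     * shared_buffers, so every buffer is evicted before it can be
     * touched again and usage counts stay around 1.
     */
    printf("data : shared_buffers = %.1f : 1\n", db_gb / shared_buffers_gb);
    return 0;
}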

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Claudio Freire
On Mon, Jun 24, 2013 at 2:22 PM, Josh Berkus  wrote:
> I have previously proposed that all of the reviewers of a given
> PostgreSQL release be honored in the release notes as a positive
> incentive, and was turned down.  Not coincidentally, we don't seem to
> have any reviewers-at-large anymore.


You know... that's a very good idea.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bugfix and new feature for PGXS

2013-06-24 Thread Andrew Dunstan



On 06/18/2013 09:52 AM, Cédric Villemain wrote:

I've rebased the current set of pending patches I had to fix pgxs, and
added 2 new patches.

Bugfixes have already been presented and sent in another thread, except
the last one, which only fixes a comment in pgxs.mk.

The new feature consists of a new variable that allows installing
contrib header files.
A new variable, MODULEHEADER, can be set in an extension's Makefile to
list the header files to install.

The installation path defaults to $includedir/$includedir_contrib/$extension.
Two new variables define the directories: $includedir_contrib and
$includedir_extension; the former defaults to include/contrib and the
latter to $includedir_contrib/$extension ($extension is the name of the
extension).

This makes it possible, for example, to install the hstore headers and
include them from another extension like this:

   # include "contrib/hstore/hstore.h"

For packagers of PostgreSQL, this allows a postgresql-X.Y-contrib-dev
package.
For extension developers, it kills the copy-and-paste-this-function
dance previously required.

I've not updated the contrib Makefiles: I'm not sure what we want to expose.

I have 2 other patches to write to further automate the detection of
what to do when building with USE_PGXS, based on the layout.  Better to
get a good consensus on this before writing them.


Bugfix:
0001-fix-SHLIB_PREREQS-when-building-with-USE_PGXS.patch
0002-Create-data-directory-if-extension-when-required.patch
0003-set-VPATH-for-out-of-tree-extension-builds.patch
0004-adds-support-for-VPATH-with-USE_PGXS.patch
0006-Fix-suggested-layout-for-extension.patch

New feature:
0005-Allows-extensions-to-install-header-file.patch

I'll do a documentation patch based on what is accepted in HEAD and/or
previous branches.


I have just spent an hour or two wrestling with the first four of these. 
Items 5 and six are really separate items, which I'm not considering at 
the moment.


The trouble I have is that there are no instructions on how to modify 
your Makefile or prepare a vpath tree, so I have been flying blind. I 
started out doing what Postgres does, namely to prepare the tree using 
config/prep_buildtree. This doesn't work at all - the paths don't get 
set properly unless you invoke make with the path to the real makefile, 
and I'm not sure they all get set correctly then. But that's not what we 
expect of a vpath setup, or at least not what I expect. I expect that 
with an appropriately set up make file and a correctly set up build tree 
I can use an identical make invocation to the one I would use for an 
in-tree build. That is, after setting up I should be able to ignore the 
fact that it's a vpath build.


FYI I deliberately chose a non-core extension with an uncommon layout 
for testing: json_build 


So, patches 1, 2 and 6 look sane enough, and I'm inclined to commit them 
just to get them out of the way. That means two of the commitfest items 
I am signed up for will be committed and two marked "Returned with 
comments". I think we need some more discussion about how VPATH builds 
should work with extensions, and subject to what limitations if any.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Andres Freund
On 2013-06-24 10:37:02 -0700, Joshua D. Drake wrote:
> 
> On 06/24/2013 10:22 AM, Josh Berkus wrote:
> 
> >Mind you, we wouldn't be able to reward a few reviewers, because they
> >live in countries to which it's impossible to ship from abroad.
> >
> >I have previously proposed that all of the reviewers of a given
> >PostgreSQL release be honored in the release notes as a positive
> >incentive, and was turned down.  Not coincidentally, we don't seem to
> >have any reviewers-at-large anymore.
> 
> I don't like the idea of sending gifts. I do like the idea of public
> thanks. We should put full recognition in the release notes for anyone
> who reviews a patch. If they hadn't reviewed the patch, the person who
> wrote it would not have gotten it committed anyway. Writing the patch
> is only half the battle.
> 
> Heck, think about the FKLocks patch, Alvaro wrote that patch but I know that
> Noah (as well as others) put a herculean effort into helping get it
> committed.
> 
> Reviewer recognition should be on the same level as the submitter.

The problem with that is that it HUGELY depends on the patch and the
review. There are patches where reviewers do a good percentage of the
work and others where they mostly report that it "compiles & runs".

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Josh Berkus
JD said:
> Leave your ego at the door. Josh is doing what could be considered one
> of the most thankless (public) jobs in this project. How about we
> support him in getting these patches taken care of instead of whining
> about the fact that he called us out for not doing our jobs (reviewing
> patches) in the first place.

Actually, I think this is a very important thing for us to discuss, in
fact more important than reviewing any individual patch.  9.3 CF4 took
almost **4 months** so it's clear that the system isn't working as
designed.  So let's hash this out during CF1 rather than during CF4.

We've had the policy of "submit one, review one" since the 9.2 dev
cycle.  However, to my knowledge nobody has actually tried to enforce
this before, or even published stats on it.  Once I did that for this
CF, it became readily apparent to me that the failure of this policy is
at least 50% of the problem with finishing commitfests -- and releases
-- on time.

The vast majority of submitters aren't reviewing other people's patches,
even ones who have the time and resources to do so.  You'll notice that
most of the people on the List aren't new contributors to PostgreSQL; if
anything, the new contributors have been exemplary in responding to
private email that they need to do some review.

More, on the slacker list are 6-8 people who I happen to know are paid
by their employers to work on PostgreSQL.   Those are the folks I'm
particularly targeting with the Slacker list; I want to make it
transparently clear to those folks' bosses that they have to give their
staff time for patch review if they expect to get the features *they*
want into PostgreSQL.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Josh Kupershmidt
On Mon, Jun 24, 2013 at 12:57 PM, Josh Berkus  wrote:
> Actually, every submitter on that list -- including Maciej -- was sent a
> personal, private email a week ago.  A few (3) chose to take the
> opportunity to review things, or promised to do so, including a brand
> new Chinese contributor who needed help with English to review his
> patch.  The vast majority chose not to respond to my email to them at
> all.  When private email fails, the next step is public email.

Hrm, I'm on the slackers list, and I didn't see an email directed to
me from JB in the last week about the CF.

Anyway, I am hoping to take at least one patch this CF, though the
recent "review it within 5 days or else" policy combined with a ton of
my own work has kept me back so far.

Josh


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Simon Riggs
On 24 June 2013 18:10, Josh Berkus  wrote:

> I will be more than happy to resign as CFM and turn it over to someone
> else if people have a problem with it.

Please don't do that (until at least the end of the CF ;-) )

It's a difficult job and I'm happy you're doing it, though I suggest
an optimal setting of 2-3 is likely to work best:
http://en.wikipedia.org/wiki/Mean_Machine_Angel

I would guess that many people probably don't actually understand the
problems or the policies that have emerged as a result.

Anyway, I won't say any more because I'll probably be on some list or
other myself sometime.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Josh Berkus

> Hrm, I'm on the slackers list, and I didn't see an email directed to
> me from JB in the last week about the CF.

Really?  Hmmm.  I'm going to send you a test email privately; please
verify whether or not you get it.

> Anyway, I am hoping to take at least one patch this CF, though the
> recent "review it within 5 days or else" policy combined with a ton of
> my own work has kept me back so far.

Thanks!  Your help is much appreciated, on whatever schedule you can do it!

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Claudio Freire
On Mon, Jun 24, 2013 at 2:41 PM, Andres Freund  wrote:
>> I don't like idea of sending gifts. I do like the idea of public thanks. We
>> should put full recognition in the release notes for someone who reviews a
>> patch. If they didn't review the patch, the person that wrote the patch
>> would not have gotten the patch committed anyway. Writing the patch is only
>> half the battle.
>>
>> Heck, think about the FKLocks patch, Alvaro wrote that patch but I know that
>> Noah (as well as others) put a herculean effort into helping get it
>> committed.
>>
>> Reviewer recognition should be on the same level as the submitter.
>
> The problem with that is that that HUGELY depends on the patch and the
> review. There are patches where reviewers do a good percentage of the
> work and others where they mostly tell that "compiles & runs".


Well, you can't so arbitrarily pick who you're recognizing as
contributor and who you aren't. So why not mention them all? They did
work for it, some more than others, but they all worked. And since
whoever submitted a patch (and got it committed) must have reviewed
something as well, they'd be recognized for both reviewing and
submitting.




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Josh Berkus

> The problem with that is that that HUGELY depends on the patch and the
> review. There are patches where reviewers do a good percentage of the
> work and others where they mostly tell that "compiles & runs".

This project is enormously stingy with giving credit to people.  It's
not like it costs us money, you know.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Joshua D. Drake


On 06/24/2013 10:48 AM, Claudio Freire wrote:


Reviewer recognition should be on the same level as the submitter.


The problem with that is that that HUGELY depends on the patch and the
review. There are patches where reviewers do a good percentage of the
work and others where they mostly tell that "compiles & runs".



Well, you can't so arbitrarily pick who you're recognizing as
contributor and who you aren't. So why not mention them all? They did
work for it, some more than others, but they all worked. And since
whoever submitted a patch (and got it committed) must have reviewed
something as well, they'd be recognized for both reviewing and
submitting.



Exactly. Just make it a simple policy:

Submitters and Reviewers are listed in that order:

Submitter, reviewer, reviewer

That way the submitter gets first billing, satisfying the ego (as well as 
professional consideration), but reviewers are also fully recognized.


Sincerely,

Joshua D. Drake

--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Andres Freund
On 2013-06-24 14:48:32 -0300, Claudio Freire wrote:
> On Mon, Jun 24, 2013 at 2:41 PM, Andres Freund  wrote:
> >> I don't like idea of sending gifts. I do like the idea of public thanks. We
> >> should put full recognition in the release notes for someone who reviews a
> >> patch. If they didn't review the patch, the person that wrote the patch
> >> would not have gotten the patch committed anyway. Writing the patch is only
> >> half the battle.
> >>
> >> Heck, think about the FKLocks patch, Alvaro wrote that patch but I know 
> >> that
> >> Noah (as well as others) put a herculean effort into helping get it
> >> committed.
> >>
> >> Reviewer recognition should be on the same level as the submitter.
> >
> > The problem with that is that that HUGELY depends on the patch and the
> > review. There are patches where reviewers do a good percentage of the
> > work and others where they mostly tell that "compiles & runs".
> 
> 
> Well, you can't so arbitrarily pick who you're recognizing as
> contributor and who you aren't. So why not mention them all? They did
> work for it, some more than others, but they all worked. And since
> whoever submitted a patch (and got it committed) must have reviewed
> something as well, they'd be recognized for both reviewing and
> submitting.

Because spending a year working on a feature isn't the same as spending
an hour or day on it. And the proposal was to generally list them at the
same level.
At least the 9.3 release notes seem to list people that reviewed
extensively prominently on the patches...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Josh Berkus

> Because spending a year working on a feature isn't the same as spending
> an hour or day on it. And the proposal was to generally list them at the
> same level.
> At least the 9.3 release notes seem to list people that reviewed
> extensively prominently on the patches...

My proposal was to have a compiled list of reviewers at the end of the
release notes, in the form of:

"Reviewers and Testers for #.# Included: name, name, name, name"

That way we can pick up the trivial reviewers as well, and even testers
who are not otherwise contributors.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Andres Freund
On 2013-06-24 10:50:42 -0700, Josh Berkus wrote:
> 
> > The problem with that is that that HUGELY depends on the patch and the
> > review. There are patches where reviewers do a good percentage of the
> > work and others where they mostly tell that "compiles & runs".
> 
> This project is enormously stingy with giving credit to people.  It's
> not like it costs us money, you know.

Listing a reviewer that didn't do all that much at the same level as the
author or as somebody who has done an extensive review will cost you
contributors in the long run.

I am all for introducing a "Contributed by reviewing: ..." section in
the release notes.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] XLogInsert scaling, revisited

2013-06-24 Thread Andres Freund
On 2013-06-22 14:32:46 +0300, Heikki Linnakangas wrote:
> Attached is a new version that fixes at least the problem I saw. Not sure if
> it fixes what you saw, but it's worth a try. How easily can you reproduce
> that?

Ok, I started to look at this:

* Could you document the way slots prevent checkpoints from occurring
  when XLogInsert rechecks for full page writes? I think it's correct -
  but not very obvious at a glance.

* The read of Insert->RedoRecPtr while rechecking whether it's out of
  date now is unlocked, is that correct? And why?

* Have you measured whether it works to just keep as many slots as
  max_backends requires around and not bother with dynamically
  allocating them to inserters?
  That seems to require keeping the actually-waiting slots in a separate
  list, which very well might make that too expensive.

  Correctness-wise the biggest problem for that is probably the exclusive
  acquisition of all slots that CreateCheckpoint() does...

* What about using some sort of linked list of unused slots for
  WALInsertSlotAcquireOne? Every time we're done we put it at the *end*
  of the list so it's less likely to have been grabbed by somebody else
  and we can reuse it.
  a) To grab a new one, go to the head of the linked list, spinlock it, and
  recheck whether it's still free. If not, restart. Otherwise, remove it
  from the list.
  b) To reuse a previously used slot

  That way we only have to do the PGSemaphoreLock() dance if there
  really aren't any free slots.

* The queuing logic around slots seems to lack documentation. It's
  complex enough to warrant that imo.

* Not a big fan of the "holdingAll" variable name, for a file-global one
  that seems a bit too generic.

* Could you add a #define or comment for the 64 used in
  XLogInsertSlotPadded? Not everyone might recognize that immediately as
  the most common cacheline size.
  Commenting on the reason we pad in general would be a good idea as
  well.

* Is it correct that WALInsertSlotAcquireOne() resets xlogInsertingAt of
  all slots *before* it has acquired locks in all of them? If yes, why
  haven't we already reset it to something invalid?

* Is GetXLogBuffer()'s unlocked read correct without a read barrier?

* XLogBytePosToEndRecPtr() seems to have a confusing name to me. At
  least the comment needs to better explain that it's named that way
  because of the way it's used.
  Also, doesn't
seg_offset += fullpages * XLOG_BLCKSZ + bytesleft + SizeOfXLogShortPHD;
  potentially point into the middle of a page?

* I wish we could get rid of the bytepos notion, it - while rather
  clever - complicates things. But that's probably not going to happen
  unless we get rid of short/long page headers :/

Cool stuff!

Greetings,

Andres Freund

PS: Btw, git diff|... -w might be more helpful than not indenting a
block.

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Joshua D. Drake


On 06/24/2013 10:59 AM, Andres Freund wrote:


On 2013-06-24 10:50:42 -0700, Josh Berkus wrote:



The problem with that is that that HUGELY depends on the patch and the
review. There are patches where reviewers do a good percentage of the
work and others where they mostly tell that "compiles & runs".


This project is enormously stingy with giving credit to people.  It's
not like it costs us money, you know.


Listing a reviewer that didn't do all that much at the same level as the
author or as somebody who has done an extensive review will cost you
contributors in the long run.

I am all for introducing a "Contributed by reviewing: ..." section in
the release notes.


It should be listed with the specific feature.

JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Robert Haas
On Mon, Jun 24, 2013 at 1:24 PM, Joshua D. Drake  wrote:
> Leave your ego at the door. Josh is doing what could be considered one of
> the most thankless (public) jobs in this project. How about we support him
> in getting these patches taken care of instead of whining about the fact
> that he called us out for not doing our jobs (reviewing patches) in the
> first place.

Quite.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Request for Patch Feedback: Lag & Lead Window Functions Can Ignore Nulls

2013-06-24 Thread Robert Haas
On Fri, Jun 21, 2013 at 6:29 PM, Troels Nielsen  wrote:
> The grammar conflict appears to be because of ambiguities in:
> 1. table_ref (used exclusively in FROM clauses)
> 2. index_elem (used exclusively in INDEX creation statements).
>
> Now, this doesn't seem to make much sense, as AFAICT window functions
> are explicitly disallowed in these contexts (transformWindowFuncCall
> will yield errors, and I can't really wrap my head around what a
> window function call would mean there).
>
> I therefore propose a simple rearrangement of the grammar,
> syntactically disallowing window functions in the outer part of those
> contexts (a_expr's inside can't and shouldn't be done much about)
> which will allow both RESPECT and IGNORE to become unreserved
> keywords, without doing any lexer hacking or abusing the grammar.

I reviewed this today and I think this is a very nice approach.
Thanks for working on it!
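
For anyone following along, the syntax under discussion looks roughly
like this (a sketch only; the feature is still a proposal, and the
"quotes" table is invented for illustration):

SELECT ts,
       lag(price)  IGNORE NULLS  OVER (ORDER BY ts) AS last_known_price,
       lead(price) RESPECT NULLS OVER (ORDER BY ts) AS next_price
FROM quotes;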

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Szymon Guz
I'm just wondering about newbies...

I've created my first patch, so I'm one of them, I think.

I've reviewed some patches, but only some easier ones, like pure regression
tests. Unfortunately my knowledge is not enough to review patches making
very deep internal changes, or some efficiency tweaks. I'm not sure if
those patches should be reviewed by newbies like me, as I just don't know
what is good and what is bad, even if a patch looks OK to me. What's the
use of my review if I don't understand the internals well enough? I can apply
the patch, run the tests, and look inside, and I'm sure I won't find any problems.

Maybe this is the reason why there are not so many reviewers?

I'm not sure if such a strict policy will bring anything good. If newbies
aren't able to review patches, they won't be contributing simple patches,
as they won't be able to review others'.

If this policy is going to be so strict, I will have to spend a huge amount of
time understanding the whole Postgres code base before sending my next patch,
as most probably I will have problems doing the reviews.


Szymon


Re: [HACKERS] dump difference between 9.3 and master after upgrade

2013-06-24 Thread Robert Haas
On Sat, Jun 22, 2013 at 3:55 PM, Andrew Dunstan  wrote:
>> Essentially, cross version upgrade testing runs pg_dumpall from the new
>> version on the old cluster, runs pg_upgrade, and then runs pg_dumpall on the
>> upgraded cluster, and compares the two outputs. This is what we get when the
>> new version is HEAD and the old version is 9.3.
>>
>> The reason this hasn't been caught by the standard same-version upgrade
>> tests is that this module uses a more extensive cluster, which has had not
>> only the core regression tests run but also all the contrib and pl
>> regression tests, and this problem seems to be exposed by the postgres_fdw
>> tests.
>>
>> At first glance to me like pg_dump in binary-upgrade mode is not
>> suppressing something that it should be suppressing.
>
> Yeah, after examination I don't see why we should output anything for
> dropped columns of a foreign table in binary upgrade mode. This looks to me
> like it's been a bug back to 9.1 where we introduced foreign tables.
>
> I think something like the attached should fix it, although I'm not sure if
> that's the right place for the fix.

We probably do need to preserve attribute numbers across an upgrade,
even for foreign tables.  I think those could make their way into
other places.  Consider:

rhaas=# create foreign data wrapper dummy;
CREATE FOREIGN DATA WRAPPER
rhaas=# create server s1 foreign data wrapper dummy;
CREATE SERVER
rhaas=# create foreign table ft1 (a int, b text) server s1;
CREATE FOREIGN TABLE
rhaas=# create table sam (x ft1);
CREATE TABLE

If the new and old clusters don't agree on the attribute numbers for
ft1, post-upgrade attempts to access sam.x will likely crash the
server.
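
The attribute numbers in question are easy to inspect; a catalog query
along these lines (just a sketch) shows what has to line up between the
old and new clusters, including any dropped columns:

SELECT attrelid::regclass, attname, attnum, attisdropped
  FROM pg_attribute
 WHERE attrelid = 'ft1'::regclass AND attnum > 0
 ORDER BY attnum;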

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Josh Berkus
Szymon,

> I've reviewed some patches, but only some easier ones, like pure regression
> tests. 

Actually, you were one of the people I was thinking of when I said
"mostly the new submitters have been exemplary in claiming some review
work". You're helping a lot.

> Unfortunately my knowledge is not enough to review patches making
> very deep internal changes, or some efficiency tweaks. I'm not sure if
> those patches should be reviewed by newbies like me, as I just don't know
> what is good and what is bad, even if a patch looks OK for me. What's the
> use of my review, if I don't understand the internals enough, I can apply
> the patch, run tests, look inside and I'm sure I won't find any problems?

Actually, something like 50% of all patches get sent back to the
submitter on the basis of a build / test / functionality / completeness /
standards-compliance / "do we really want this?" check.  The fact that you
are doing those steps instead of a committer, and thus letting the
committer look at 50% fewer patches, *does* help.

In fact, the single most important part of the regression test reviews
is "does this new regression test test anything worthwhile?" and "does
this regression test return the same results on different machines?".
You already know enough to address both of those questions, at least
enough to bring up any potential problems on this list.

In the future, I would like to have automated systems do the
apply/build/regression test check, leaving new reviewers to check only
functionality and completeness and other things which require a human.
But we don't have the technology for that yet.

> Maybe this is the reason why there are not so many reviewers?
> 
> I'm not sure if such a strict policy will bring anything good. If newbies
> won't be able to review patches, they won't be committing simple patches,
> as they won't be able to review others.

Commit a simple patch, review a simple patch.  If we have 20 submitters
of simple patches, then we have 20 simple patches to review, no?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-24 Thread Bruce Momjian
On Mon, Jun 24, 2013 at 10:40:48AM -0700, Josh Berkus wrote:
> More, on the slacker list are 6-8 people who I happen to know are paid
> by their employers to work on PostgreSQL.   Those are the folks I'm
> particularly targeting with the Slacker list; I want to make it
> transparently clear to those folks' bosses that they have to give their
> staff time for patch review if they expect to get the features *they*
> want into PostgreSQL.

I am confused.  Why is everyone interpreting "slacker" negatively?  ;-)  LOL

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




[HACKERS] PLR does not install language templates

2013-06-24 Thread Josh Berkus
Joe, all:

Ok, this is weird.  This is PostgreSQL 9.2.4, on CentOS 6, installed
using the packages at yum.postgresql.org.  Is the below an issue with
PL/R, the packages, or PostgreSQL?

Seems like if a language is actually *installed*, it needs to have
templates ...


analytics=# \dL
 List of languages
   Name    |  Owner   | Trusted |               Description
-----------+----------+---------+------------------------------------------
 plpgsql   | postgres | t       | PL/pgSQL procedural language
 plpythonu | postgres | f       | PL/PythonU untrusted procedural language
 plr       | postgres | f       | 


analytics=# select * from pg_pltemplate ;
  tmplname  | tmpltrusted | tmpldbacreate |      tmplhandler       |        tmplinline        |    tmplvalidator    |    tmpllibrary    | tmplacl
------------+-------------+---------------+------------------------+--------------------------+---------------------+-------------------+---------
 plpgsql    | t           | t             | plpgsql_call_handler   | plpgsql_inline_handler   | plpgsql_validator   | $libdir/plpgsql   |
 pltcl      | t           | t             | pltcl_call_handler     |                          |                     | $libdir/pltcl     |
 pltclu     | f           | f             | pltclu_call_handler    |                          |                     | $libdir/pltcl     |
 plperl     | t           | t             | plperl_call_handler    | plperl_inline_handler    | plperl_validator    | $libdir/plperl    |
 plperlu    | f           | f             | plperlu_call_handler   | plperlu_inline_handler   | plperlu_validator   | $libdir/plperl    |
 plpythonu  | f           | f             | plpython_call_handler  | plpython_inline_handler  | plpython_validator  | $libdir/plpython2 |
 plpython2u | f           | f             | plpython2_call_handler | plpython2_inline_handler | plpython2_validator | $libdir/plpython2 |
 plpython3u | f           | f             | plpython3_call_handler | plpython3_inline_handler | plpython3_validator | $libdir/plpython3 |
(8 rows)



-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)

2013-06-24 Thread Alvaro Herrera
MauMau wrote:
> From: "Alvaro Herrera" 

> >Actually, in further testing I noticed that the fast-path you introduced
> >in BackendCleanup (or was it HandleChildCrash?) in the immediate
> >shutdown case caused postmaster to fail to clean up properly after
> >sending the SIGKILL signal, so I had to remove that as well.  Was there
> >any reason for that?
> 
> You are talking about the code below at the beginning of
> HandleChildCrash(), aren't you?
> 
> /* Do nothing if the child terminated due to immediate shutdown */
> if (Shutdown == ImmediateShutdown)
>  return;
> 
> If my memory is correct, this was necessary; without this,
> HandleChildCrash() or LogChildExit() left extra messages in the
> server log.
> 
> LOG:  %s (PID %d) exited with exit code %d
> LOG:  terminating any other active server processes
> 
> These messages are output because postmaster sends SIGQUIT to its
> children and wait for them to terminate.  The children exit with
> non-zero status when they receive SIGQUIT.

Yeah, I see that --- after removing that early exit, there are unwanted
messages.  And in fact there are some signals sent that weren't
previously sent.  Clearly we need something here: if we're in immediate
shutdown handler, don't signal anyone (because they have already been
signalled) and don't log any more messages; but the cleaning up of
postmaster's process list must still be carried out.

Would you please add that on top of the attached cleaned up version of
your patch?


Noah Misch wrote:
> On Sun, Jun 23, 2013 at 01:55:19PM +0900, MauMau wrote:

> > the clients at the immediate shutdown.  The code gets much simpler.  In  
> > addition, it may be better that we similarly send SIGKILL in backend 
> > crash (FatalError) case, eliminate the use of SIGQUIT and remove 
> > quickdie() and other SIGQUIT handlers.
> 
> My take is that the client message has some value, and we shouldn't just
> discard it to simplify the code slightly.  Finishing the shutdown quickly has
> value, of course.  The relative importance of those things should guide the
> choice of a timeout under method #2, but I don't see a rigorous way to draw
> the line.  I feel five seconds is, if anything, too high.

There's obviously a lot of disagreement on 5 seconds being too high or
too low.  We have just followed SysV init's precedent of waiting 5
seconds by default between sending signals TERM and QUIT during a
shutdown.  I will note that during a normal shutdown, services are
entitled to do much more work than just signal all their children to
exit immediately; and yet I don't find much evidence that this period is
inordinately short.  I don't feel strongly that it couldn't be shorter,
though.

> What about using deadlock_timeout?  It constitutes a rough barometer on the
> system's tolerance for failure detection latency, and we already overload it
> by having it guide log_lock_waits.  The default of 1s makes sense to me for
> this purpose, too.  We can always add a separate immediate_shutdown_timeout if
> there's demand, but I don't expect such demand.  (If we did add such a GUC,
> setting it to zero could be the way to request method 1.  If some folks
> strongly desire method 1, that's probably the way to offer it.)

I dunno.  Having this be configurable seems overkill to me.  But perhaps
that's the way to satisfy most people: some people could set it very
high so that they could have postmaster wait longer if they believe
their server is going to be overloaded; people wishing immediate SIGKILL
could get that too, as you describe.

I think this should be a separate patch, however.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
*** a/doc/src/sgml/runtime.sgml
--- b/doc/src/sgml/runtime.sgml
***
*** 1362,1372  echo -1000 > /proc/self/oom_score_adj
   

This is the Immediate Shutdown mode.
!   The master postgres process will send a
!   SIGQUIT to all child processes and exit
!   immediately, without properly shutting itself down. The child processes
!   likewise exit immediately upon receiving
!   SIGQUIT. This will lead to recovery (by
replaying the WAL log) upon next start-up. This is recommended
only in emergencies.

--- 1362,1372 
   

This is the Immediate Shutdown mode.
!   The server will send SIGQUIT to all child
!   processes and wait for them to terminate.  Those that don't terminate
!   within 5 seconds, will be sent SIGKILL by the
!   master postgres process, which will then terminate
!   without further waiting.  This will lead to recovery (by
replaying the WAL log) upon next start-up. This is recommended
only in emergencies.

*** a/src/backend/postmaster/postmaster.c
--- b/src/backend/postmaster/postmaster.c
***
*** 275,280  static pid_t StartupPID = 0,
--- 275,281 
  #define			NoShutdown		0
  #de

Re: [HACKERS] dump difference between 9.3 and master after upgrade

2013-06-24 Thread Tom Lane
Robert Haas  writes:
> We probably do need to preserve attribute numbers across an upgrade,
> even for foreign tables.  I think those could make their way into
> other places.

Hm ... this argument would also apply to composite types; do we get it
right for those?

regards, tom lane




Re: [HACKERS] PLR does not install language templates

2013-06-24 Thread Andres Freund
On 2013-06-24 12:24:29 -0700, Josh Berkus wrote:
> Joe, all:
> 
> Ok, this is weird.  This is PostgreSQL 9.2.4, on CentOS 6, installed
> using the packages at yum.postgresql.org.  Is the below an issue with
> PL/R, the packages, or PostgreSQL?
> 
> Seems like if a language is actually *installed*, it needs to have
> templates ...

No. Why? We don't really need templates at all anymore since the advent
of extensions.

The original reason for pg_pltemplate was that the user shouldn't have to
know by hand the handler names you need to pass to CREATE LANGUAGE. Now
we have CREATE EXTENSION making that unnecessary. So it's only kept for
backward compatibility with older dumps that contain CREATE LANGUAGE
verbatim instead of CREATE EXTENSION.
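
For instance, assuming PL/R's extension files are installed on the
server, the extension route needs no template entry at all:

CREATE EXTENSION plr;
-- the extension's control and script files supply the handler details
-- that CREATE LANGUAGE would otherwise have looked up in pg_pltemplate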

How should pl/r automagically be installed into pg_pltemplate anyway? The
initdb may very well have been executed before installing pl/r.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PLR does not install language templates

2013-06-24 Thread Tom Lane
Josh Berkus  writes:
> Ok, this is weird.  This is PostgreSQL 9.2.4, on CentOS 6, installed
> using the packages at yum.postgresql.org.  Is the below an issue with
> PL/R, the packages, or PostgreSQL?

> Seems like if a language is actually *installed*, it needs to have
> templates ...

Not necessarily --- that's an optional feature.  In fact, I am not eager
to encourage third-party PLs to start installing pg_pltemplate entries
anymore, because that's mostly vestigial in the extensions universe.
We should be encouraging use of CREATE EXTENSION not CREATE LANGUAGE to
install PLs, and for that you don't need a pltemplate entry.  It's
likely that pg_pltemplate will disappear entirely before long.

regards, tom lane




Re: [HACKERS] PLR does not install language templates

2013-06-24 Thread Josh Berkus

> Not necessarily --- that's an optional feature.  In fact, I am not eager
> to encourage third-party PLs to start installing pg_pltemplate entries
> anymore, because that's mostly vestigial in the extensions universe.
> We should be encouraging use of CREATE EXTENSION not CREATE LANGUAGE to
> install PLs, and for that you don't need a pltemplate entry.  It's
> likely that pg_pltemplate will disappear entirely before long.

Ah, ok.  Thanks!

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Bugfix and new feature for PGXS

2013-06-24 Thread Cédric Villemain
On Monday 24 June 2013 at 19:40:19, Andrew Dunstan wrote:
> On 06/18/2013 09:52 AM, Cédric Villemain wrote:
> > I've rebased the current set of pending patches I had, to fix pgxs and
> > added 2 new patches.
> > 
> > Bugfixes have already been presented and sent in another thread, except
> > the last one which only fix comment in pgxs.mk.
> > 
> > The new feature consists in a new variable to allow the installation of
> > contrib header files.
> > A new variable MODULEHEADER can be used in extension Makefile to list the
> > header files to install.
> > 
> > The installation path default to
> > $includedir/$includedir_contrib/$extension. 2 new variables are set to
> > define directories: $includedir_contrib and $includedir_extension, the
> > former default to include/contrib and the later to
> > $includedir_contrib/$extension ($extension is the name of the
> > extension).
> > 
> > This allows for example to install hstore header and be able to include
> > them
> > 
> > in another extension like that:
> ># include "contrib/hstore/hstore.h"
> > 
> > For packagers of PostgreSQL, this allows to have a
> > postgresql-X.Y-contrib-dev package.
> > For developers of extension it's killing the copy-and-paste-this-function
> > dance previously required.
> > 
> > I've not updated contribs' Makfefile: I'm not sure what we want to
> > expose.
> > 
> > I've 2 other patches to write to automatize a bit more the detection of
> > things to do when building with USE_PGXS, based on the layout. Better
> > get a good consensus on this before writing them.
> > 
> > 
> > Bugfix:
> > 0001-fix-SHLIB_PREREQS-when-building-with-USE_PGXS.patch
> > 0002-Create-data-directory-if-extension-when-required.patch
> > 0003-set-VPATH-for-out-of-tree-extension-builds.patch
> > 0004-adds-support-for-VPATH-with-USE_PGXS.patch
> > 0006-Fix-suggested-layout-for-extension.patch
> > 
> > New feature:
> > 0005-Allows-extensions-to-install-header-file.patch
> > 
> > I'll do a documentation patch based on what is accepted in HEAD and/or
> > previous branches.
> 
> I have just spent an hour or two wrestling with the first four of these.
> Items 5 and six are really separate items, which I'm not considering at
> the moment.
> 
> The trouble I have is that there are no instructions on how to modify
> your Makefile or prepare a vpath tree, so I have been flying blind. I
> started out doing what Postgres does, namely to prepare the tree using
> config/prep_buildtree. This doesn't work at all - the paths don't get
> set properly unless you invoke make with the path to the real makefile,
> and I'm not sure they all get set correctly then. But that's not what we
> expect of a vpath setup, or at least not what I expect. I expect that
> with an appropriately set up make file and a correctly set up build tree
> I can use an identical make invocation to the one I would use for an
> in-tree build. That is, after setting up I should be able to ignore the
> fact that it's a vpath build.
> 
> FYI I deliberately chose a non-core extension with an uncommon layout
> for testing: json_build 
> 
> So, patches 1, 2 and 6 look sane enough, and I'm inclined to commit them
> just to get them out of the way. That means two of the commitfest items
> I am signed up for will be committed and two marked "Returned with
> comments". I think we need some more discussion about how VPATH builds
> should work with extensions, and subject to what limitations if any.

Thank you for reviewing those patches.
I'm sorry I haven't documented or explained in much detail what is expected
with VPATH.

We have 2 main options with core PostgreSQL: in-tree build or out-of-tree. The 
former is the classical build process; the latter is what's frequently used for 
packaging because it avoids polluting the source tree with intermediate 
or fully built files.

You're right that we don't need to set VPATH explicitly; it is set by 
'configure' when you run configure out of the source tree. It is transparent 
to the user.

Those 2 options are working as expected.

Now, the extensions/contribs. We have 2^2 ways to build them.

Either with PGXS or not, either with VPATH or not.
NO_PGXS is how we build contribs shipped with postgresql.
USE_PGXS is used by most of the others extensions (and contribs).

With extensions, we do have to set VPATH explicitly if we want to use VPATH 
(note that contribs/extensions must not care whether PostgreSQL itself was 
built with or without VPATH set). My patches try to fix that.

So the point is to be able to do exactly that :

$ mkdir /tmp/json_build && cd /tmp/json_build
$ make USE_PGXS=1 -f /path/to/json_build/Makefile

There is another way to do the same thing which is:
$ mkdir /tmp/json_build
$ make USE_PGXS=1 -C /tmp/json_build -f /path/to/json_build/Makefile

Thus packagers can now use : 
$ make USE_PGXS=1 -C $vtarget -f /path/to/Makefile 
DESTDIR=/my/packaging/area/json_build

instead of (short version):
$ make USE_PGXS=1 -C $vtarge

Re: [HACKERS] patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap [Review]

2013-06-24 Thread Simon Riggs
On 24 June 2013 15:49, Tom Lane  wrote:
> Amit Kapila  writes:
>> I will summarize the results, and if most of us feel that they are not good
>> enough, then we can return this patch.
>
> Aside from the question of whether there's really any generally-useful
> performance improvement from this patch, it strikes me that this change
> forecloses other games that we might want to play with interpretation of
> the value of a tuple's natts.
>
> In particular, when I was visiting Salesforce last week, the point came
> up that they'd really like ALTER TABLE ADD COLUMN to be "free" even when
> the column had a non-null DEFAULT.  It's not too difficult to imagine
> how we might support that, at least when the default expression is a
> constant: decree that if the requested attribute number is beyond natts,
> look aside at the tuple descriptor to see if the column is marked as
> having a magic default value, and if so, substitute that rather than
> just returning NULL.  (This has to be a "magic" value separate from
> the current default, else a subsequent DROP or SET DEFAULT would do
> the wrong thing.)

Now that is a mighty fine idea.

> However, this idea conflicts with an optimization that supposes it can
> reduce natts to suppress null columns: if the column was actually stored
> as null, you'd lose that information, and would incorrectly return the
> magic default on subsequent reads.
>
> I think it might be possible to make both ideas play together, by
> not reducing natts further than the last column with a magic default.
> However, that means extra complexity in heap_form_tuple, which would
> invalidate the performance measurements done in support of this patch.
>
> In short, there's no free lunch ...

Agreed.

I think its rather a shame that the proponents of this patch have
tried so hard to push this through without working variations on the
theme. Please guys, go away and get creative; rethink the approach so
that you can have your lunch without anybody else losing theirs. I
would add that I have seen the use case and want to support it, but
we're looking to add to the codebase not just steal small bites of
performance from existing use cases.

As a practical suggestion, why not change the macro so it doesn't even
try to do anything different unless the number of columns is > 72. A
constant comparison should go very quickly and isolate the code better
from the more typical code path. If not, let's try some other ideas,
like Tom just did.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] proposal 9.4 plpgsql: allows access to call stack from GET DIAGNOSTICS statement

2013-06-24 Thread Pavel Stehule
Hello

This is a fragment of my old code from the orafce package - it is functional,
but it is written a little more generically because it cannot access
static variables (in elog.c):

static char*
dbms_utility_format_call_stack(char mode)
{
   MemoryContext oldcontext = CurrentMemoryContext;
   ErrorData *edata;
   ErrorContextCallback *econtext;
   StringInfo sinfo;

   #if PG_VERSION_NUM >= 80400
  errstart(ERROR, __FILE__, __LINE__, PG_FUNCNAME_MACRO, TEXTDOMAIN);
   #else
  errstart(ERROR, __FILE__, __LINE__, PG_FUNCNAME_MACRO);
#endif

   MemoryContextSwitchTo(oldcontext);

   for (econtext = error_context_stack;
 econtext != NULL;
 econtext = econtext->previous)
(*econtext->callback) (econtext->arg);

   edata = CopyErrorData();
   FlushErrorState();

https://github.com/orafce/orafce/blob/master/utility.c

2013/6/24 Rushabh Lathia :
> Hi,
>
> Use of this feature is to get  call stack for debugging without raising
> exception. This definitely seems very useful.
>
> Here are my comments about the submitted patch:
>
> Patch gets applied cleanly (there are couple of white space warning but
> that's
> into test case file). make and make install too went smooth. make check
> was smooth too. Did some manual testing about feature and its went smooth.
>
> However would like to share couple of points:
>

My main motivation is to concentrate on the stack info string only. So I didn't
use the errstart function, and I didn't use CopyErrorData either. These
routines do a lot of other work.


> 1) I noticed that you did the initialization of edata manually, just
> wondering
> can't we directly use CopyErrorData() which will do that job ?
>

yes, we can, but in this case only the "context" field is interesting for us.

> I found that inside CopyErrorData() there is Assert:
>
> Assert(CurrentMemoryContext != ErrorContext);
>
> And in the patch we are using ErrorContext as a short-lived context; is
> that the reason for not using CopyErrorData()?

it was not the primary reason, but sure - this routine is not designed
for direct use from the elog module. Also, we can save one palloc call.

>
>
> 2) To reset stack to empty, FlushErrorState() can be used.
>

yes, it can be. I used explicit code due to the different semantics, although
it does the same thing. FlushErrorState resets some global error state and is
used from other modules; I only reset ErrorContext. I feel there is a small
difference there - so FlushErrorState is not the best name for my purpose.
What about this modification:

static void
resetErrorStack()
{
errordata_stack_depth = -1;
recursion_depth = 0;
/* Delete all data in ErrorContext */
MemoryContextResetAndDeleteChildren(ErrorContext);
}

and then call in  InvokeErrorCallbacks -- resetErrorStack()

and rewrote flushErrorState like

void FlushErrorState(void)
{
  /* reset ErrorStack is enough */
  resetErrorStack();
}

???

but I can live with a direct call of FlushErrorState() - it is only a
minor change.


> 3) I was think about the usability of the feature and wondering how about if
> we add some header to the stack, so user can differentiate between STACK and
> normal NOTICE message ?
>
> Something like:
>
> postgres=# select outer_outer_func(10);
> NOTICE:  - Call Stack -
> PL/pgSQL function inner_func(integer) line 4 at GET DIAGNOSTICS
> PL/pgSQL function outer_func(integer) line 3 at RETURN
> PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
> CONTEXT:  PL/pgSQL function outer_func(integer) line 3 at RETURN
> PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
>  outer_outer_func
> --
>20
> (1 row)

I understand, but I don't think it is a good idea. Sometimes you want
to use the context for purposes other than a debug log - for example,
to identify the topmost call or a nearby call - and any additional lines
make parsing a little more difficult. You can add this line yourself
easily (if you want):

RAISE NOTICE e'--- Call Stack ---\n%', stack

but there are other use cases where you would not want this information
(or where you would use your own header, perhaps in a different language),
so it is better to return only the clean data.
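
For instance, a caller that only wants the nearest frame can do the
slicing itself (a minimal sketch of the intended usage; the function
name is invented):

CREATE OR REPLACE FUNCTION top_frame() RETURNS text AS $$
DECLARE
    stack text;
BEGIN
    GET DIAGNOSTICS stack = PG_CONTEXT;
    -- the first line of the context is the nearest frame; with no extra
    -- header there is nothing to skip over before parsing
    RETURN split_part(stack, E'\n', 1);
END;
$$ LANGUAGE plpgsql;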

>
> I worked on point 2) and 3) and PFA patch for reference.

so

*1 CopyErrorData does one unnecessary extra palloc and it is too generic,

*2 it is possible - I have not strong opinion

*3 no, I would rather return the data in the simplest and cleanest form, without any sugar

What do you think?

Regards

Pavel

>
> Thanks,
> Rushabh
>
>
>
> On Sat, Feb 2, 2013 at 2:53 PM, Pavel Stehule 
> wrote:
>>
>> Hello
>>
>> I propose enhancing GET DIAGNOSTICS statement about new field
>> PG_CONTEXT. It is similar to GET STACKED DIAGNOSTICS'
>> PG_EXCEPTION_CONTEXT.
>>
>> Motivation for this proposal is possibility to get  call stack for
>> debugging without raising exception.
>>
>> This code is based on cleaned code from Orafce, where is used four
>> years without any error reports.
>>
>> CREATE OR REPLACE FUNCTION public."inner"(integer)
>>  RETUR

Re: [HACKERS] patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap [Review]

2013-06-24 Thread Robert Haas
On Mon, Jun 24, 2013 at 4:05 PM, Simon Riggs  wrote:
> I think its rather a shame that the proponents of this patch have
> tried so hard to push this through without working variations on the
> theme. Please guys, go away and get creative; rethink the approach so
> that you can have your lunch without anybody else losing theirs. I
> would add that I have seen the use case and want to support it, but
> we're looking to add to the codebase not just steal small bites of
> performance from existing use cases.

If there's an actual performance consequence of applying this patch,
then I think that's a good reason for rejecting it.  But if the best
argument we can come up with is that we might someday try to do even
more clever things with the tuple's natts value, I guess I'm not very
impressed.  The reason why we have to rewrite the table when someone
adds a column with a non-NULL default is because we need to store the
new value in every row.  Sure, we could defer that in this particular
case.  But users might be mighty dismayed to see CLUSTER or VACUUM
FULL -- or a dump-and-reload! -- cause the table to become much LARGER
than it was before.  Having some kind of column-oriented compression
would be darn nifty, but this particular path doesn't seem
particularly promising to me.

I think the larger and more fundamental problem with natts is that
there's just not very much bit space available there.  Even as it is,
someone who adds and drops columns with any regularity will eventually
hit the wall, and there aren't any good alternatives at that point.
Were there more bit space available here, we could even do things like
allow some special cases of ALTER TABLE .. SET DATA TYPE not to
rewrite the table; natts could become a sort of tuple version number,
and we'd remember how to convert to newer versions on the fly.  But
with only 2048 values available, it's not really feasible to start
consuming them for any more operations than we already do.  I'm
generally a big fan of the principle that no single patch should be
allowed to crowd out room for future extensibility in this particular
area, but in this case I think the bit space available is *already* so
limited that we're not likely to get much further with it without
changing the storage format.

So, Tom, how's that pluggable storage format coming?  :-)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap [Review]

2013-06-24 Thread Josh Berkus
Simon,

> I think its rather a shame that the proponents of this patch have
> tried so hard to push this through without working variations on the
> theme. Please guys, go away and get creative; rethink the approach so
> that you can have your lunch without anybody else losing theirs. I
> would add that I have seen the use case and want to support it, but
> we're looking to add to the codebase not just steal small bites of
> performance from existing use cases.

Do we really have ironclad numbers which show that the patch affects
performance negatively?  I didn't think the previous performance test
was comprehensive; I was planning to develop one myself this week.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] ALTER TABLE ... ALTER CONSTRAINT

2013-06-24 Thread Jeff Janes
On Sun, Jun 23, 2013 at 8:58 PM, Abhijit Menon-Sen wrote:

> At 2013-06-08 21:45:24 +0100, si...@2ndquadrant.com wrote:
> >
> > ALTER TABLE foo
> >ALTER CONSTRAINT fktable_fk_fkey DEFERRED INITIALLY IMMEDIATE;
>
> I read the patch (looks good), applied it on HEAD (fine), ran make check
> (fine), and played with it in a test database. It looks great, and from
> previous responses it's something a lot of people have wished for.
>
> I'm marking this ready for committer.
>

After the commit, I'm now getting the compiler warning:

tablecmds.c: In function 'ATPrepCmd':
tablecmds.c:2953: warning: 'pass' may be used uninitialized in this function


case AT_AlterConstraint (line 3130) is the only case branch that does not
set pass.

Cheers,

Jeff


Re: [HACKERS] patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap [Review]

2013-06-24 Thread Tom Lane
Robert Haas  writes:
> If there's an actual performance consequence of applying this patch,
> then I think that's a good reason for rejecting it.  But if the best
> argument we can come up with is that we might someday try to do even
> more clever things with the tuple's natts value, I guess I'm not very
> impressed.  The reason why we have to rewrite the table when someone
> adds a column with a non-NULL default is because we need to store the
> new value in every row.  Sure, we could defer that in this particular
> case.  But users might be mighty dismayed to see CLUSTER or VACUUM
> FULL -- or a dump-and-reload! -- cause the table to become much LARGER
> than it was before.  Having some kind of column-oriented compression
> would be darn nifty, but this particular path doesn't seem
> particularly promising to me.

The point of what I was suggesting isn't to conserve storage, but to
reduce downtime during a schema change.  Remember that a rewriting ALTER
TABLE locks everyone out of that table for a long time.
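
For instance, something like this (the table name is invented) is the
kind of statement in question: as things stand, the non-null default
forces a rewrite of every row while an exclusive lock is held:

ALTER TABLE big_table ADD COLUMN flags integer NOT NULL DEFAULT 0;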

> So, Tom, how's that pluggable storage format coming?  :-)

Well, actually, it's looking to me like heap_form_tuple will be
underneath the API cut, because alternate storage managers will probably
have other tuple storage formats --- column stores being the poster
child here, but in any case the tuple header format is very unlikely
to be universal.

Which means that whether this patch gets applied to mainline is going
to be moot for Salesforce's purposes; they will certainly want the
equivalent logic in their storage code, because they've got tables with
many hundreds of mostly-null columns, but whether heap_form_tuple acts
this way or not won't affect them.

So unless we consider that many-hundreds-of-columns is a design center
for general purpose use of Postgres, we should be evaluating this patch
strictly on its usefulness for more typical table widths.  And my take
on that is that (1) lots of columns isn't our design center (for the
reasons you mentioned among others), and (2) the case for the patch
looks pretty weak otherwise.

regards, tom lane




Re: [HACKERS] patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap [Review]

2013-06-24 Thread Simon Riggs
On 24 June 2013 21:50, Tom Lane  wrote:


> > So, Tom, how's that pluggable storage format coming?  :-)
>
> Well, actually, it's looking to me like heap_form_tuple will be
> underneath the API cut, because alternate storage managers will probably
> have other tuple storage formats --- column stores being the poster
> child here, but in any case the tuple header format is very unlikely
> to be universal.
>

Hopefully we would allow multiple storage managers to be active at once,
one per table?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap [Review]

2013-06-24 Thread Jameison Martin
I believe the last submission of the patch had no negative performance impact 
on any of the tested use cases, but you'd have to go back and see the last 
exchange on that.

I think it was the concern about potentially regressing the codeline in 
unforeseen ways, without a clear benefit beyond one use case (wide tables), 
that stalled things.



>
> From: Josh Berkus 
>To: Simon Riggs  
>Cc: Tom Lane ; Amit Kapila ; 
>Andres Freund ; Craig Ringer ; 
>Jameison Martin ; Noah Misch ; Kevin 
>Grittner ; robertmh...@gmail.com; 
>pgsql-hackers@postgresql.org 
>Sent: Monday, June 24, 2013 10:32 PM
>Subject: Re: [HACKERS] patch submission: truncate trailing nulls from heap 
>rows to reduce the size of the null bitmap [Review]
> 
>
>Simon,
>
>> I think its rather a shame that the proponents of this patch have
>> tried so hard to push this through without working variations on the
>> theme. Please guys, go away and get creative; rethink the approach so
>> that you can have your lunch without anybody else losing theirs. I
>> would add that I have seen the use case and want to support it, but
>> we're looking to add to the codebase not just steal small bites of
>> performance from existing use cases.
>
>Do we really have ironclad numbers which show that the patch affects
>performance negatively?  I didn't think the previous performance test
>was comprehensive; I was planning to develop one myself this week.
>
>-- 
>Josh Berkus
>PostgreSQL Experts Inc.
>http://pgexperts.com
>
>
>

Re: [HACKERS] patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap [Review]

2013-06-24 Thread Josh Berkus
On 06/24/2013 01:50 PM, Tom Lane wrote:
> The point of what I was suggesting isn't to conserve storage, but to
> reduce downtime during a schema change.  Remember that a rewriting ALTER
> TABLE locks everyone out of that table for a long time.

Right, but I'm worried about the "surprise!" factor.  That is, if we
make the first default "free" by using a magic value, then a SET DEFAULT
on that column is going to have some very surprising results as suddenly
the whole table needs to get written out for the old default.  In many
use cases, this would still be a net win, since 80% of the time users
don't change defaults after column creation.  But we'd have to make it
much less surprising somehow.  Also, for the reason Tom pointed out, the
optimization would only work with NOT NULL columns ... leading to
another potential unexpected surprise when the column is marked NULLable.

> So unless we consider that many-hundreds-of-columns is a design center
> for general purpose use of Postgres, we should be evaluating this patch
> strictly on its usefulness for more typical table widths.  And my take
> on that is that (1) lots of columns isn't our design center (for the
> reasons you mentioned among others), and (2) the case for the patch
> looks pretty weak otherwise.

Well, actually, hundreds of columns is reasonably common for a certain
user set (ERP, CRM, etc.).  If we could handle that use case very
efficiently, then it would win us some users, since other RDBMSes don't.
 However, there are multiple issues with having hundreds of columns, of
which storage optimization is only one ... and probably the smallest one
at that.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] is it bug? - printing boolean domains in sql/xml function

2013-06-24 Thread Szymon Guz
On 14 March 2013 03:45, Peter Eisentraut  wrote:

> On Mon, 2013-03-04 at 08:35 +0100, Pavel Stehule wrote:
> > in this use case I think some regression test is important - it
> > doesn't have to be mine, but the lack of a more explicit regression test is
> > the reason why this bug was not detected for some years.
>
> I've added the tests.
>
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Hi,
how should I apply the patch from fix-xmlmap.patch? I've run out of ideas.

When I run it normally, I get:

$ patch --verbose --dry-run -p1 < fix-xmlmap.patch
Hmm...(Fascinating -- this is really a new-style context diff but without
the telltale extra asterisks on the *** line that usually indicate
the new style...)
  Looks like a context diff to me...
The text leading up to this was:
--
|*** a/src/backend/utils/adt/xml.c
|--- b/src/backend/utils/adt/xml.c
--
Patching file src/backend/utils/adt/xml.c using Plan A...
(Fascinating -- this is really a new-style context diff but without
the telltale extra asterisks on the *** line that usually indicate
the new style...)
Hunk #1 succeeded at 2002 with fuzz 2 (offset 1 line).
Hmm...(Fascinating -- this is really a new-style context diff but without
the telltale extra asterisks on the *** line that usually indicate
the new style...)
  The next patch looks like a context diff to me...
The text leading up to this was:
--
|*** a/src/test/regress/expected/xmlmap.out
|--- b/src/test/regress/expected/xmlmap.out
--
Patching file src/test/regress/expected/xmlmap.out using Plan A...
(Fascinating -- this is really a new-style context diff but without
the telltale extra asterisks on the *** line that usually indicate
the new style...)
Hunk #1 succeeded at 1201 (offset 27 lines).
Hmm...(Fascinating -- this is really a new-style context diff but without
the telltale extra asterisks on the *** line that usually indicate
the new style...)
  The next patch looks like a context diff to me...
The text leading up to this was:
--
|*** a/src/test/regress/sql/xmlmap.sql
|--- b/src/test/regress/sql/xmlmap.sql
--
Patching file src/test/regress/sql/xmlmap.sql using Plan A...
(Fascinating -- this is really a new-style context diff but without
the telltale extra asterisks on the *** line that usually indicate
the new style...)
Hunk #1 FAILED at 39.
1 out of 1 hunk FAILED -- saving rejects to file
src/test/regress/sql/xmlmap.sql.rej
done


thanks,
Szymon


Re: [HACKERS] Request for Patch Feedback: Lag & Lead Window Functions Can Ignore Nulls

2013-06-24 Thread Nicholas White
OK - I've attached another iteration of the patch with Troels' grammar
changes. I think the only issue remaining is what the standard says about
lead semantics. Thanks -
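
For readers following the thread, the patch adds the SQL-standard IGNORE
NULLS (and RESPECT NULLS) clause to lag and lead.  A usage sketch against a
made-up table, using the syntax under review (not something the current
server accepts), would be:

    -- With IGNORE NULLS, lag/lead skip NULL argument values, so each row
    -- sees the most recent non-NULL reading rather than a NULL.
    SELECT ts,
           reading,
           lag(reading) OVER w              AS prev_any,
           lag(reading) IGNORE NULLS OVER w AS prev_non_null
      FROM sensor_data
    WINDOW w AS (ORDER BY ts);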


lead-lag-ignore-nulls.patch
Description: Binary data



Re: [HACKERS] FILTER for aggregates [was Re: Department of Redundancy Department: makeNode(FuncCall) division]

2013-06-24 Thread Kevin Grittner
Greg Stark  wrote:
> On Mon, Jun 24, 2013 at 3:50 AM, Tom Lane  wrote:
>> Or maybe they really don't give a damn about breaking
>> applications every time they invent a new reserved word?
>
> I think this is the obvious conclusion. In the standard the reserved
> words are pretty explicitly reserved and not legal column names, no?
>
> I think their model is that applications work with a certain version
> of SQL and they're not expected to work with a new version without
> extensive updating.

IMO it is insanity to write an application of any significant
complexity without a data abstraction layer sitting on a data
access layer, and it is silly to write such layers which are
intended to interface to SQL in a portable way without quoting
*all* identifiers sent to the server.  As a developer, new reserved
words never bothered me in the slightest.  At Wisconsin Courts the
most heavily used table has been called "Case" since 1989, and the
table to hold a row for each paper document printed to pay someone
is called "Check".  No need to change the names just because SQL
started using those words for new language features after we had
the tables.  And there is no reason to assume that any particular
word won't become reserved in the future.
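
To illustrate the quoting discipline described above, here is a minimal SQL
sketch (the table and column names are hypothetical, not Wisconsin Courts'
actual schema).  Double-quoted identifiers are always taken literally, so a
name keeps working even if a later SQL revision reserves that word:

    -- Reserved words are fine as table/column names when consistently quoted.
    CREATE TABLE "Case" (
        "CaseNo"  bigint PRIMARY KEY,
        "Filed"   date NOT NULL
    );

    CREATE TABLE "Check" (
        "CheckNo" bigint PRIMARY KEY,
        "CaseNo"  bigint REFERENCES "Case" ("CaseNo"),
        "Amount"  numeric(12,2) NOT NULL
    );

    -- A data access layer that quotes *all* identifiers emits queries like:
    SELECT c."CaseNo", k."Amount"
      FROM "Case" AS c
      JOIN "Check" AS k ON k."CaseNo" = c."CaseNo";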

I think the most likely explanation is not that they don't mind
breaking applications, but that they don't understand that there
are significant numbers of people who choose to write code in a
fashion which leaves them vulnerable to breakage when new reserved
words are added.

Being closer to the wide variety of users, we know that there are
many such people out there, and we try to look after them as best
we can, which is entirely appropriate.

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] ALTER TABLE ... ALTER CONSTRAINT

2013-06-24 Thread Simon Riggs
On 24 June 2013 21:42, Jeff Janes  wrote:

> On Sun, Jun 23, 2013 at 8:58 PM, Abhijit Menon-Sen 
> wrote:
>
>> At 2013-06-08 21:45:24 +0100, si...@2ndquadrant.com wrote:
>> >
>> > ALTER TABLE foo
>> >ALTER CONSTRAINT fktable_fk_fkey DEFERRED INITIALLY IMMEDIATE;
>>
>> I read the patch (looks good), applied it on HEAD (fine), ran make check
>> (fine), and played with it in a test database. It looks great, and from
>> previous responses it's something a lot of people have wished for.
>>
>> I'm marking this ready for committer.
>>
>
> After the commit, I'm now getting the compiler warning:
>
> tablecmds.c: In function 'ATPrepCmd':
> tablecmds.c:2953: warning: 'pass' may be used uninitialized in this
> function
>
>
> case AT_AlterConstraint (line 3130) is the only case branch that does not
> set pass.
>

What needs investigating is why my current compiler didn't report that.
Thanks though.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap [Review]

2013-06-24 Thread Tom Lane
Josh Berkus  writes:
> On 06/24/2013 01:50 PM, Tom Lane wrote:
>> The point of what I was suggesting isn't to conserve storage, but to
>> reduce downtime during a schema change.  Remember that a rewriting ALTER
>> TABLE locks everyone out of that table for a long time.

> Right, but I'm worried about the "surprise!" factor.  That is, if we
> make the first default "free" by using a magic value, then a SET DEFAULT
> on that column is going to have some very surprising results as suddenly
> the whole table needs to get written out for the old default.

No, that's why we'd store the magic default separately.  That will be
permanent and unaffected by later SET DEFAULT operations.  (This
requires that every subsequently created tuple store the column
explicitly so that the magic default doesn't affect it; which is exactly
why there's a conflict with the currently-proposed patch.)
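
To make the intended behaviour concrete, here is a hypothetical SQL-level
sketch of the scheme described above (this is a proposal under discussion,
not something the current patch or server implements; the table and column
names are made up):

    -- Adding a NOT NULL column with a constant default would record the
    -- default as a separately stored "magic" value instead of rewriting
    -- every existing row.
    ALTER TABLE big_table ADD COLUMN status text NOT NULL DEFAULT 'new';

    -- A later SET DEFAULT changes only the default applied to rows inserted
    -- from now on; the magic value for pre-existing rows is stored
    -- separately and is unaffected, so no rewrite is needed here either.
    ALTER TABLE big_table ALTER COLUMN status SET DEFAULT 'open';

    -- Tuples created after the ADD COLUMN store the column explicitly, so
    -- they are likewise unaffected by the magic value.
    INSERT INTO big_table (id) VALUES (42);   -- stores 'open' in the tuple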

> ...  Also for the reason Tom pointed out, the
> optimization would only work with NOT NULL columns ... leading to
> another potential surprise when the column is marked NULLable.

Huh?  We already have the case of null default handled.

> Well, actually, hundreds of columns is reasonably common for a certain
> user set (ERP, CRM, etc.).  If we could handle that use case very
> efficiently, then it would win us some users, since other RDBMSes don't.
>  However, there are multiple issues with having hundreds of columns, of
> which storage optimization is only one ... and probably the smallest one
> at that.

Agreed; there are a lot of things we'd have to address if we really
wanted to claim this is a domain we work well in.  (I suspect Salesforce
will be chipping away at some of those issues, but as I said,
heap_form_tuple is not in their critical path.)

regards, tom lane




Re: [HACKERS] is it bug? - printing boolean domains in sql/xml function

2013-06-24 Thread Pavel Stehule
Hello

you can try a fresh patch:

git format-patch -1 788bce13d3249ddbcdf3443ee078145f4888ab45

regards

Pavel

2013/6/24 Szymon Guz :
> On 14 March 2013 03:45, Peter Eisentraut  wrote:
>>
>> On Mon, 2013-03-04 at 08:35 +0100, Pavel Stehule wrote:
>> > in this use case I think some regression test is important - it does
>> > not have to be mine, but the lack of a more explicit regression test is
>> > the reason why this bug was not detected for some years.
>>
>> I've added the tests.
>>
>>
>>
>
>
>
> Hi,
> how should I apply the patch from fix-xmlmap.patch? I've run out of ideas.
>
> When I run it normally, I get:
>
> $ patch --verbose --dry-run -p1 < fix-xmlmap.patch
> Hmm...(Fascinating -- this is really a new-style context diff but without
> the telltale extra asterisks on the *** line that usually indicate
> the new style...)
>   Looks like a context diff to me...
> The text leading up to this was:
> --
> |*** a/src/backend/utils/adt/xml.c
> |--- b/src/backend/utils/adt/xml.c
> --
> Patching file src/backend/utils/adt/xml.c using Plan A...
> (Fascinating -- this is really a new-style context diff but without
> the telltale extra asterisks on the *** line that usually indicate
> the new style...)
> Hunk #1 succeeded at 2002 with fuzz 2 (offset 1 line).
> Hmm...(Fascinating -- this is really a new-style context diff but without
> the telltale extra asterisks on the *** line that usually indicate
> the new style...)
>   The next patch looks like a context diff to me...
> The text leading up to this was:
> --
> |*** a/src/test/regress/expected/xmlmap.out
> |--- b/src/test/regress/expected/xmlmap.out
> --
> Patching file src/test/regress/expected/xmlmap.out using Plan A...
> (Fascinating -- this is really a new-style context diff but without
> the telltale extra asterisks on the *** line that usually indicate
> the new style...)
> Hunk #1 succeeded at 1201 (offset 27 lines).
> Hmm...(Fascinating -- this is really a new-style context diff but without
> the telltale extra asterisks on the *** line that usually indicate
> the new style...)
>   The next patch looks like a context diff to me...
> The text leading up to this was:
> --
> |*** a/src/test/regress/sql/xmlmap.sql
> |--- b/src/test/regress/sql/xmlmap.sql
> --
> Patching file src/test/regress/sql/xmlmap.sql using Plan A...
> (Fascinating -- this is really a new-style context diff but without
> the telltale extra asterisks on the *** line that usually indicate
> the new style...)
> Hunk #1 FAILED at 39.
> 1 out of 1 hunk FAILED -- saving rejects to file
> src/test/regress/sql/xmlmap.sql.rej
> done
>
>
> thanks,
> Szymon




Re: [HACKERS] is it bug? - printing boolean domains in sql/xml function

2013-06-24 Thread Pavel Stehule
2013/6/24 Pavel Stehule :
> Hello
>
> you can try a fresh patch:
>
> git format-patch -1 788bce13d3249ddbcdf3443ee078145f4888ab45

and git format-patch -1 bc61878682051678ade5f59da7bfd90ab72ce13b

>
> regards
>
> Pavel
>
> 2013/6/24 Szymon Guz :
>> On 14 March 2013 03:45, Peter Eisentraut  wrote:
>>>
>>> On Mon, 2013-03-04 at 08:35 +0100, Pavel Stehule wrote:
>>> > in this use case I think some regression test is important - it does
>>> > not have to be mine, but the lack of a more explicit regression test is
>>> > the reason why this bug was not detected for some years.
>>>
>>> I've added the tests.
>>>
>>>
>>>
>>
>>
>>
>> Hi,
>> how should I apply the patch from fix-xmlmap.patch? I've run out of ideas.
>>
>> When I run it normally, I get:
>>
>> $ patch --verbose --dry-run -p1 < fix-xmlmap.patch
>> Hmm...(Fascinating -- this is really a new-style context diff but without
>> the telltale extra asterisks on the *** line that usually indicate
>> the new style...)
>>   Looks like a context diff to me...
>> The text leading up to this was:
>> --
>> |*** a/src/backend/utils/adt/xml.c
>> |--- b/src/backend/utils/adt/xml.c
>> --
>> Patching file src/backend/utils/adt/xml.c using Plan A...
>> (Fascinating -- this is really a new-style context diff but without
>> the telltale extra asterisks on the *** line that usually indicate
>> the new style...)
>> Hunk #1 succeeded at 2002 with fuzz 2 (offset 1 line).
>> Hmm...(Fascinating -- this is really a new-style context diff but without
>> the telltale extra asterisks on the *** line that usually indicate
>> the new style...)
>>   The next patch looks like a context diff to me...
>> The text leading up to this was:
>> --
>> |*** a/src/test/regress/expected/xmlmap.out
>> |--- b/src/test/regress/expected/xmlmap.out
>> --
>> Patching file src/test/regress/expected/xmlmap.out using Plan A...
>> (Fascinating -- this is really a new-style context diff but without
>> the telltale extra asterisks on the *** line that usually indicate
>> the new style...)
>> Hunk #1 succeeded at 1201 (offset 27 lines).
>> Hmm...(Fascinating -- this is really a new-style context diff but without
>> the telltale extra asterisks on the *** line that usually indicate
>> the new style...)
>>   The next patch looks like a context diff to me...
>> The text leading up to this was:
>> --
>> |*** a/src/test/regress/sql/xmlmap.sql
>> |--- b/src/test/regress/sql/xmlmap.sql
>> --
>> Patching file src/test/regress/sql/xmlmap.sql using Plan A...
>> (Fascinating -- this is really a new-style context diff but without
>> the telltale extra asterisks on the *** line that usually indicate
>> the new style...)
>> Hunk #1 FAILED at 39.
>> 1 out of 1 hunk FAILED -- saving rejects to file
>> src/test/regress/sql/xmlmap.sql.rej
>> done
>>
>>
>> thanks,
>> Szymon




Re: [HACKERS] dump difference between 9.3 and master after upgrade

2013-06-24 Thread Andrew Dunstan


On 06/24/2013 03:39 PM, Tom Lane wrote:
> Robert Haas  writes:
>> We probably do need to preserve attribute numbers across an upgrade,
>> even for foreign tables.  I think those could make their way into
>> other places.
>
> Hm ... this argument would also apply to composite types; do we get it
> right for those?

Yes we do. It's handled in pg_dump.c starting at about line 8936.

It looks like we need to add the case of foreign tables to the block that
starts around line 13117.


We should also have a test of this in the core regression tests so that 
doing the wrong thing might be caught by regular upgrade testing.
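
Such a test might look something like the sketch below (the server and table
names are made up, and an in-core dummy FDW would presumably be used rather
than postgres_fdw); the point is that dropping a column leaves a gap in the
attribute numbering which a dump/restore or pg_upgrade must preserve:

    CREATE EXTENSION IF NOT EXISTS postgres_fdw;
    CREATE SERVER dump_test_srv FOREIGN DATA WRAPPER postgres_fdw;

    CREATE FOREIGN TABLE dump_test_ft (
        a int,
        b int,
        c int
    ) SERVER dump_test_srv;

    -- Dropping a middle column must leave c at attnum 3 after dump/restore,
    -- not renumber it to 2.
    ALTER FOREIGN TABLE dump_test_ft DROP COLUMN b;

    SELECT attname, attnum
      FROM pg_attribute
     WHERE attrelid = 'dump_test_ft'::regclass
       AND attnum > 0 AND NOT attisdropped
     ORDER BY attnum;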


cheers

andrew




Re: [HACKERS] GIN improvements part 1: additional information

2013-06-24 Thread Alexander Korotkov
On Wed, Jun 19, 2013 at 12:44 AM, Alexander Korotkov
wrote:

> On Mon, Jun 17, 2013 at 4:54 PM, Alexander Korotkov 
> wrote:
>
>> On Fri, Jun 14, 2013 at 12:09 AM, Alexander Korotkov <
>> aekorot...@gmail.com> wrote:
>>
>>> Revised version of patch for additional information storage in GIN is
>>> attached. Changes are mostly bug fixes.
>>>
>>
>> New version of patch is attached with some more refactoring and bug fixes.
>>
>
> Another revision with more bug fixes.
>

New revision of patch is attached. Now it includes some docs.

--
With best regards,
Alexander Korotkov.


ginaddinfo.7.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] Bugfix and new feature for PGXS

2013-06-24 Thread Andrew Dunstan


On 06/24/2013 04:02 PM, Cédric Villemain wrote:
>>> With extensions, we do have to set VPATH explicitly if we want to use VPATH
>>> (note that contribs/extensions must not care whether postgresql has been
>>> built with or without VPATH set). My patches try to fix that.
>>
>> No, this is exactly what I'm objecting to. I want to be able to do:
>>
>>    invoke_vpath_magic
>>    standard_make_commands_as_for_non_vpath
>>
>> Your patches have been designed to overcome your particular issues, but
>> they don't meet the general case, IMNSHO. This is why I want to have
>> more discussion about how vpath builds could work for PGXS modules.
>
> So the point is to be able to do exactly that:
>
> $ mkdir /tmp/json_build && cd /tmp/json_build
> $ make USE_PGXS=1 -f /path/to/json_build/Makefile

It doesn't work:

   [andrew@emma vpath.json_build]$ PATH=../../inst.vpgxs.5706/bin:$PATH
   make -f `pwd`/../json_build/Makefile
   grep: json_build.control: No such file or directory
   grep: json_build.control: No such file or directory
   grep: json_build.control: No such file or directory
   grep: json_build.control: No such file or directory
   grep: json_build.control: No such file or directory
   grep: json_build.control: No such file or directory
   grep: json_build.control: No such file or directory
   grep: json_build.control: No such file or directory
   grep: json_build.control: No such file or directory
   grep: json_build.control: No such file or directory
   grep: json_build.control: No such file or directory
   gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
   -Wdeclaration-after-statement -Wendif-labels
   -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
   -fwrapv -fexcess-precision=standard -g -fpic -shared -o
   json_build.so  -L/home/pgl/npgl/inst.vpgxs.5706/lib -Wl,--as-needed
   -Wl,-rpath,'/home/pgl/npgl/inst.vpgxs.5706/lib',--enable-new-dtags
   cp
   /home/andrew/pgl/extns/vpath.json_build/../json_build/sql/json_build.sql
   sql/json_build--.sql
   cp: cannot create regular file `sql/json_build--.sql': No such file
   or directory
   make: *** [sql/json_build--.sql] Error 1
   [andrew@emma vpath.json_build]$


cheers

andrew




