[GSoC] Summary of pg performance farm

2018-08-19 Thread Hongyuan Ma
Hello monters and hackers,

This is a late summary of the pg performance farm GSoC project. Although it
is not yet perfect, it has begun to take shape.

1. Implemented the basic test-result upload interface, ensuring that the
upload operation is atomic (covering the result values for the different
client counts in the different modes, rw and ro).

2. Implemented the report list page. Each metric is compared with the
previous results; if a metric shows a 5% improvement (or regression), that
aspect is counted as improved (or regressed). This means one test result may
contain aspects of "improvement", "regression", and "status quo" at the same
time. Here is an example of the report list page:
http://140.211.168.111/#/machineInfo/pmJEjJjSk3WREM3Q
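
For illustration, the 5% classification rule described above could be
sketched like this (hypothetical names; the actual perf-farm logic lives in
the Django backend, not in C):

```c
#include <assert.h>

/* Possible trends for one metric compared with the previous result. */
typedef enum
{
	METRIC_REGRESSION = -1,
	METRIC_STATUS_QUO = 0,
	METRIC_IMPROVEMENT = 1
} MetricTrend;

#define TREND_THRESHOLD 0.05	/* 5% change required to count as a trend */

/*
 * Classify one metric (e.g. tps, where higher is better) against its
 * previous value.  Hypothetical helper, not the perf farm's actual code.
 */
MetricTrend
classify_metric(double prev, double curr)
{
	double		change;

	if (prev <= 0.0)
		return METRIC_STATUS_QUO;	/* no usable baseline */
	change = (curr - prev) / prev;
	if (change >= TREND_THRESHOLD)
		return METRIC_IMPROVEMENT;
	if (change <= -TREND_THRESHOLD)
		return METRIC_REGRESSION;
	return METRIC_STATUS_QUO;
}
```

A single test result then aggregates one such trend per (mode, client count)
pair, which is how one report can show "improvement", "regression", and
"status quo" simultaneously.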

3. Implemented the details page of the test report. It shows the detailed
results of this report in "read only" and "read & write" modes, as well as
a link to the previous report and other details.
Here is an example:
http://140.211.168.111/#/detailInfo/KScN58FCUWRD2fuC2R7VeY

4. Implemented a simple user center that allows users to browse their own
machines and apply for a new machine. (I originally planned to use the
community's user information interface for login verification, but this
seems to require some waiting... so this part is absent for now.)

5. Implemented the action of approving a machine and sending a notification
email in the Django admin.

We plan to add some useful features and write test cases for the pg
performance farm in the future. If anyone has good ideas or opinions,
please feel free to email my mentors and me. You are also welcome to leave
a message on the issues page: https://github.com/PGPerfFarm/pgperffarm/issues


Best Regards,
Hongyuan Ma


Re: [GSoC] Summary of pg performance farm

2018-08-19 Thread Alvaro Herrera
On 2018-Aug-19, Hongyuan Ma wrote:

> Hello monters and hackers,

Is that "monsters" or "mentors"?

> 2. Implementation of the data report related to the list page. Compare each
> metrics whith the previous results. If any of the metrics are a 5%
> improvement( or regression),  there is one aspect that is progressive (or
> regressive). This means there may be aspects of "improvement", "regression"
> and "status quo" in one test result.This is the report List page for an
> example: http://140.211.168.111/#/machineInfo/pmJEjJjSk3WREM3Q

Great stuff!

I think the visualization that many had in mind was that the numbers
would be displayed in charts where time is the horizontal axis, and the
numerical performance result is the other axis.  That way, we can see
how the results go up or down with commits.

Individual happy/sad faces for individual commits are not enough to see
the bigger picture.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Fix for REFRESH MATERIALIZED VIEW ownership error message

2018-08-19 Thread Jonathan S. Katz

> On Aug 18, 2018, at 11:59 PM, Alvaro Herrera  wrote:
> 
> On 2018-Aug-18, Jonathan S. Katz wrote:
> 
>> 
>>> On Aug 18, 2018, at 8:52 PM, Jonathan S. Katz  wrote:
>>> 
 
 On Aug 18, 2018, at 8:45 PM, Michael Paquier  wrote:
 
 I am not so sure about v11 as it is very close to release, surely we can
 do something for HEAD as that's cosmetic.  Anyway, if something is
 proposed, could a patch be posted?  The only patch I am seeing on this
 thread refers to improvements for error messages of procedures.
>>> 
>>> Oops, too much multitasking. I will attach the correct patch when I get 
>>> home.
>> 
>> Here is the correct patch, sorry about that. This includes aforementioned
>> tests.
> 
> I ran the test without the code change, and it passes.  I don't think
> it's testing what you think it is testing.

So I ran the tests against 10.5 unpatched and it failed as expected. I then
ran it against HEAD unpatched and it passed.

Digging into it, it appears the issue was resolved in this commit[1] for 11
and beyond. As the plan is not to backpatch, I’ll withdraw my patch.

Thanks for the help,

Jonathan

https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=8b9e9644dc6a9bd4b7a97950e6212f63880cf18b
 



signature.asc
Description: Message signed with OpenPGP


Re: Allowing printf("%m") only where it actually works

2018-08-19 Thread Nico Williams
On Sun, Aug 19, 2018 at 01:15:58AM -0400, Tom Lane wrote:
> Nico Williams  writes:
> > On Sat, Aug 18, 2018 at 04:34:50PM -0400, Tom Lane wrote:
> >> So now I'm about ready to propose that we just *always* use
> >> snprintf.c, and forget all of the related configure probing.
> 
> > You'd also get to ensure that all uses from *die() are
> > async-signal-safe.
> 
> [ raised eyebrow... ] That seems like more than I care to promise
> here.  But even if snprintf itself were unconditionally safe,
> there's plenty of other stuff in that code path that isn't.

One step at a time, no?  And there's the other benefits.



Re: Allowing printf("%m") only where it actually works

2018-08-19 Thread Tom Lane
I wrote:
>> Consider the following approach:
>> 1. Teach src/port/snprintf.c about %m.  While I've not written a patch
>> for this, it looks pretty trivial.
>> 2. Teach configure to test for %m and if it's not there, use the
>> replacement snprintf.  (Note: we're already forcing snprintf replacement
>> in cross-compiles, so the added run-time test isn't losing anything.)
>> 3. Get rid of elog.c's hand-made substitution of %m strings, and instead
>> just let it pass the correct errno value down.  (We'd likely need to do
>> some fooling in appendStringInfoVA and related functions to preserve
>> call-time errno, but that's not complicated, nor expensive.)
>> 4. (Optional) Get rid of strerror(errno) calls in favor of %m, even in
>> frontend code.

> So I started to hack on this, and soon noticed that actually, what elog.c
> replaces %m with is *not* the result of strerror(), it's the result of
> useful_strerror().

After further thought, I realized that the best way to handle that is to
turn useful_strerror() into a globally available wrapper pg_strerror()
that replaces strerror() everyplace.  That way we get its protections in
frontend code as well as backend, and we ensure that the results of
printing strerror(errno) match what %m would do (so that step 4 above is
just cosmetic code simplification and doesn't change behavior).  We'd
speculated about doing that back when 8e68816cc went in, but not actually
pulled the trigger.

So the first attached patch does that, and then the second one implements
%m in snprintf.c and causes it to be used all the time.  I've not touched
step 4 yet, that could be done later/piecemeal.
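
For illustration only, the %m substitution could be sketched as a
string-preprocessing helper like the one below.  (The real snprintf.c patch
handles %m inline during format processing rather than rewriting the format
string; the function name here is hypothetical.)

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/*
 * Hypothetical helper: expand every "%m" in fmt into the strerror() text
 * for the errno value captured at call time, writing the (possibly
 * truncated) result into buf.  "%%" is passed through unchanged so that
 * "%%m" is not expanded.
 */
void
expand_errno_m(char *buf, size_t buflen, const char *fmt, int saved_errno)
{
	size_t		used = 0;
	const char *p = fmt;

	if (buflen == 0)
		return;
	while (*p && used < buflen - 1)
	{
		if (p[0] == '%' && p[1] == '%')
		{
			/* copy a literal %% through unchanged */
			if (used + 2 < buflen)
			{
				buf[used++] = '%';
				buf[used++] = '%';
			}
			p += 2;
		}
		else if (p[0] == '%' && p[1] == 'm')
		{
			const char *err = strerror(saved_errno);
			size_t		len = strlen(err);

			if (len > buflen - 1 - used)
				len = buflen - 1 - used;	/* truncate if needed */
			memcpy(buf + used, err, len);
			used += len;
			p += 2;
		}
		else
			buf[used++] = *p++;
	}
	buf[used] = '\0';
}
```

The key property step 3 above relies on is that the errno used for %m is the
one saved at call time, not whatever errno happens to be when formatting runs.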

Although the attached causes strerror.c to be included in libpq, I think
it's actually dead code at the moment, because on any reasonably modern
platform (including *all* of the buildfarm) libpq does not depend on
strerror but strerror_r, cf pqStrerror().  It's tempting to expand
strerror.c to include a similar wrapper for strerror_r, so that the
extra functionality exists for libpq's usage too.  (Also, it'd likely
be better for snprintf.c to depend on strerror_r not strerror, to avoid
unnecessary thread-unsafety.)  But I've left that for later.

A couple of additional notes for review:

* The 0002 patch will conflict with my snprintf-speedup patch, but
resolving that is simple (just need to move one of the %m hunks around).

* src/port/strerror.c already exists, but as far as I can see it's been
dead code for decades; no ANSI-C-compliant platform lacks strerror()
altogether.  Moreover, ecpg never got taught to include it, so obviously
we've not built on a platform with that problem anytime recently.
So I just removed the former contents of that file.

* The most nervous-making aspect of this patch, IMO, is that there's an
addition to the API spec for appendStringInfoVA and pvsnprintf: callers
have to preserve errno when looping.  Fortunately there are very few
direct callers of those, but I'm slightly worried that extensions might
do so.  I don't see any way to avoid that change though.
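
The caller-side contract can be illustrated with a generic grow-and-retry
formatter (a sketch only: the names are made up, and appendStringInfoVA's
real loop uses palloc/enlargeStringInfo, not malloc/realloc):

```c
#include <assert.h>
#include <errno.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Sketch of the new API contract: because the format may contain %m, the
 * errno seen at call time must be restored before *every* formatting
 * attempt in the retry loop, since the allocator may clobber errno.
 */
char *
format_alloc(const char *fmt, ...)
{
	int			save_errno = errno;	/* capture before anything clobbers it */
	size_t		size = 32;
	char	   *buf = malloc(size);

	if (buf == NULL)
		return NULL;
	for (;;)
	{
		va_list		args;
		int			needed;
		char	   *nbuf;

		errno = save_errno;		/* restore so %m sees the caller's errno */
		va_start(args, fmt);
		needed = vsnprintf(buf, size, fmt, args);
		va_end(args);

		if (needed >= 0 && (size_t) needed < size)
			return buf;			/* it fit */

		/* enlarge and retry */
		size = (needed >= 0) ? (size_t) needed + 1 : size * 2;
		nbuf = realloc(buf, size);
		if (nbuf == NULL)
		{
			free(buf);
			return NULL;
		}
		buf = nbuf;
	}
}
```

An extension calling appendStringInfoVA in its own loop would need the
equivalent errno save/restore around each retry.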

* I dropped configure's checks for existence/declaration of snprintf
and vsnprintf, since (a) we no longer care, and (b) those are pretty
much useless anyway; no active buildfarm member fails those checks.

* The Windows aspects of this are untested.  It seems like importing
pgwin32_socket_strerror's behavior into the frontend ought to be a
bug fix, though: win32_port.h redefines socket error symbols whether
FRONTEND is set or not, so aren't we printing bogus info for socket
errors in frontend right now?

regards, tom lane

diff --git a/configure b/configure
index 836d68d..fadd06e 100755
*** a/configure
--- b/configure
*** esac
*** 15537,1 
  
  fi
  
- ac_fn_c_check_func "$LINENO" "strerror" "ac_cv_func_strerror"
- if test "x$ac_cv_func_strerror" = xyes; then :
-   $as_echo "#define HAVE_STRERROR 1" >>confdefs.h
- 
- else
-   case " $LIBOBJS " in
-   *" strerror.$ac_objext "* ) ;;
-   *) LIBOBJS="$LIBOBJS strerror.$ac_objext"
-  ;;
- esac
- 
- fi
- 
  ac_fn_c_check_func "$LINENO" "strlcat" "ac_cv_func_strlcat"
  if test "x$ac_cv_func_strlcat" = xyes; then :
$as_echo "#define HAVE_STRLCAT 1" >>confdefs.h
--- 15537,15542 
diff --git a/configure.in b/configure.in
index 6e14106..3adec10 100644
*** a/configure.in
--- b/configure.in
*** else
*** 1649,1655 
AC_CHECK_FUNCS([fpclass fp_class fp_class_d class], [break])
  fi
  
! AC_REPLACE_FUNCS([crypt fls getopt getrusage inet_aton mkdtemp random rint srandom strerror strlcat strlcpy strnlen])
  
  case $host_os in
  
--- 1649,1655 
AC_CHECK_FUNCS([fpclass fp_class fp_class_d class], [break])
  fi
  
! AC_REPLACE_FUNCS([crypt fls getopt getrusage inet_aton mkdtemp random rint srandom strlcat strlcpy strnlen])
  
  case $host_os in
  
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index f4356fe..af35cfb 100644
*** a/src/backend/port/win32/socket.c
--- b/src

Re: Performance improvements for src/port/snprintf.c

2018-08-19 Thread Tom Lane
I wrote:
> [ snprintf-speedups-1.patch ]

Here's a slightly improved version of that, with two changes:

* Given the current state of the what-about-%m thread, it's no longer
academic how well this performs relative to glibc's version.  I poked
at that and found that a lot of the discrepancy came from glibc using
strchrnul() to find the next format specifier --- apparently, that
function is a *lot* faster than the equivalent manual loop.  So this
version uses that if available.

* I thought of a couple of easy wins for fmtfloat.  We can pass the
precision spec down to the platform's sprintf using "*" notation instead
of converting it to text and back, and that also simplifies matters enough
that we can avoid using an sprintf call to build the simplified format
string.  This seems to get us down to the vicinity of a 10% speed penalty
on microbenchmarks of just float conversion, which is enough to satisfy
me given the other advantages of switching to our own snprintf.
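
For platforms lacking strchrnul(), a drop-in fallback with the same
semantics is trivial (sketch; glibc's real strchrnul is what gets used when
HAVE_STRCHRNUL is defined, and it is much faster than this byte loop):

```c
#include <assert.h>
#include <string.h>

/*
 * Fallback for strchrnul(): like strchr(), but return a pointer to the
 * terminating NUL instead of NULL when 'c' does not occur in 's'.  This
 * lets a format-scanning loop emit the literal text before the next '%'
 * in one chunk, without a separate end-of-string check.
 */
const char *
my_strchrnul(const char *s, int c)
{
	while (*s != '\0' && *s != (char) c)
		s++;
	return s;
}
```

(glibc's version returns a non-const char *; const is used here only to keep
the sketch tidy.)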

regards, tom lane

diff --git a/configure b/configure
index 836d68d..dff9f0c 100755
*** a/configure
--- b/configure
*** fi
*** 15032,15038 
  LIBS_including_readline="$LIBS"
  LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
  
! for ac_func in cbrt clock_gettime dlopen fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l
  do :
as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh`
  ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var"
--- 15032,15038 
  LIBS_including_readline="$LIBS"
  LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
  
! for ac_func in cbrt clock_gettime dlopen fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open strchrnul symlink sync_file_range utime utimes wcstombs_l
  do :
as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh`
  ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var"
diff --git a/configure.in b/configure.in
index 6e14106..c00bb8f 100644
*** a/configure.in
--- b/configure.in
*** PGAC_FUNC_WCSTOMBS_L
*** 1535,1541 
  LIBS_including_readline="$LIBS"
  LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
  
! AC_CHECK_FUNCS([cbrt clock_gettime dlopen fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l])
  
  AC_REPLACE_FUNCS(fseeko)
  case $host_os in
--- 1535,1541 
  LIBS_including_readline="$LIBS"
  LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
  
! AC_CHECK_FUNCS([cbrt clock_gettime dlopen fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open strchrnul symlink sync_file_range utime utimes wcstombs_l])
  
  AC_REPLACE_FUNCS(fseeko)
  case $host_os in
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 827574e..da9cfa7 100644
*** a/src/include/pg_config.h.in
--- b/src/include/pg_config.h.in
***
*** 519,524 
--- 519,527 
  /* Define to 1 if you have the  header file. */
  #undef HAVE_STDLIB_H
  
+ /* Define to 1 if you have the `strchrnul' function. */
+ #undef HAVE_STRCHRNUL
+ 
  /* Define to 1 if you have the `strerror' function. */
  #undef HAVE_STRERROR
  
diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32
index 46ce49d..73d7424 100644
*** a/src/include/pg_config.h.win32
--- b/src/include/pg_config.h.win32
***
*** 390,395 
--- 390,398 
  /* Define to 1 if you have the  header file. */
  #define HAVE_STDLIB_H 1
  
+ /* Define to 1 if you have the `strchrnul' function. */
+ /* #undef HAVE_STRCHRNUL */
+ 
  /* Define to 1 if you have the `strerror' function. */
  #ifndef HAVE_STRERROR
  #define HAVE_STRERROR 1
diff --git a/src/port/snprintf.c b/src/port/snprintf.c
index 851e2ae..66151c2 100644
*** a/src/port/snprintf.c
--- b/src/port/snprintf.c
*** flushbuffer(PrintfTarget *target)
*** 295,301 
  }
  
  
! static void fmtstr(char *value, int leftjust, int minlen, int maxwidth,
  	   int pointflag, PrintfTarget *target);
  static void fmtptr(void *value, PrintfTarget *target);
  static void fmtint(int64 value, char type, int forcesign,
--- 295,303 
  }
  
  
! static bool find_arguments(const char *format, va_list args,
! 			   PrintfArgValue *argvalues);
! static void fmtstr(const char *value, int leftjust, int minlen, int maxwidth,
  	   int pointflag, PrintfTarget *target);
  static void fmtptr(void *value, PrintfTarget *target);
  static void fmtint(int64 value, char type, int forcesign,
*** static void fmtfloa

Re: [sqlsmith] ERROR: partition missing from subplans

2018-08-19 Thread David Rowley
On 17 August 2018 at 04:01, Alvaro Herrera  wrote:
> Looks good, pushed.  I edited the comment a little bit.

Thanks for pushing.


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: How to estimate the shared memory size required for parallel scan?

2018-08-19 Thread Thomas Munro
On Sun, Aug 19, 2018 at 4:28 PM, Masayuki Takahashi
 wrote:
>> If you just supply an IsForeignScanParallelSafe function that returns
>> true, that would allow your FDW to be used inside parallel workers and
>> wouldn't need any extra shared memory, but it wouldn't be a "parallel
>> scan".  It would just be "parallel safe".  Each process that does a
>> scan of your FDW would expect a full normal scan (presumably returning
>> the same tuples in each process).
>
> I think that parallel scan mechanism uses this each worker's full
> normal scan to partitioned records, right?
> For example, I turned IsForeignScanParallelSafe to true in cstore_fdw
> and compared partitioned/non-partitioned scan.
>
> https://gist.github.com/masayuki038/daa63a21f8c16ffa8138b50db9129ced
>
> This shows that counted by each partition and 'Gather Merge' merge results.
> As a result, parallel scan and aggregation shows the correct count.

Ah, so here you have a Parallel Append node.  That is a way to get
coarse-grained parallelism when you have only parallel-safe (not
parallel-aware) scans, but you have partitions.  Technically (in our
jargon) there is no parallel scan happening here, but Parallel Append
is smart enough to scan each partition in a different worker.  That
means that the 'granularity' of parallelism is whole tables
(partitions), so if you have (say) 3 partitions of approximately the
same size and 2 processes, you'll probably see that one of the
processes scans 1 partition and the other process scans 2 partitions,
so the work can be quite unbalanced.  But if you have lots of
partitions, it's good, and in any case it's certainly better than no
parallelism.

> Then, in the case of cstore_fdw, it may not be necessary to reserve
> the shared memory in EstimateDSMForeignScan.

Correct.  If all you need is parallel-safe scans, then you probably
don't need any shared memory.

BTW to be truly pedantically parallel-safe, I think it should ideally
be the case that each process has the same "snapshot" when scanning,
or subtle inconsistencies could arise (a transaction could be visible
to one process, but not to another; this would be weirder if it
applied to concurrent scans of the *same* foreign table, but it could
still be strange when scanning different partitions in a Parallel
Append).  For file_fdw, we just didn't worry about that because plain
old text files are not transactional anyway, so we shrugged and
declared its scans to be parallel safe.  I suppose that any FDW that
is backed by a non-snapshot-based system (including other RDBMSs)
would probably have no way to do better than that, and you might make
the same decision we made for file_fdw.  When the foreign table is
PostgreSQL, or an extension that is tightly integrated into our
transaction system, I suppose you might want to think harder and maybe
even give the user some options?

>> So I guess this hasn't been done before and would require some more
>> research.
>
> I agree. I will try some query patterns.
> thanks.

Just to be clear, there I was talking about true Parallel Foreign
Scan, which is aiming a bit higher than mere parallel safety.  After
looking at this again, this time with the benefit of coffee, I *think*
it should be possible without modifying core, if you do this:

1.  As already mentioned, you need to figure out a way for cstore_fdw
to hand out a disjoint set of tuples to different processes.  That
seems quite doable, since cstore is apparently block-structured
(though I only skim-read it for about 7 seconds and could be wrong
about that).  You apparently have blocks and stripes: hopefully they
are of fixed size so you might be able to teach each process to
advance some kind of atomic variable in shared memory so that each
process eats different blocks?
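
The "advance an atomic variable in shared memory" idea can be sketched with
C11 atomics (in a real parallel-aware FDW this state would be a
pg_atomic_uint32 placed in the DSM segment sized by
EstimateDSMForeignScan; the struct and function names below are invented
for the sketch):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Shared state that would live in the DSM segment (sketch). */
typedef struct ParallelScanState
{
	atomic_uint next_block;		/* next block number nobody has claimed */
	unsigned	nblocks;		/* total blocks in the relation */
} ParallelScanState;

/*
 * Each worker calls this to claim the next unscanned block.  fetch_add
 * guarantees every block number is handed out exactly once no matter how
 * many workers race here.  Returns false once the scan is exhausted.
 */
bool
claim_next_block(ParallelScanState *state, unsigned *block)
{
	unsigned	b = atomic_fetch_add(&state->next_block, 1);

	if (b >= state->nblocks)
		return false;
	*block = b;
	return true;
}
```

This is what makes the scan genuinely parallel-aware: the processes
cooperate on one pass over the data instead of each doing a full scan.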

2.  Teach your GetForeignPath function to do something like this:

ForeignPath *partial_path;
double parallel_divisor;
int parallel_workers;

... existing code that adds regular non-partial path here ...

/* Should we add a partial path to enable a parallel scan? */
partial_path = create_foreignscan_path(root, baserel, NULL,
   baserel->rows,
   startup_cost,
   total_cost,
   NIL, NULL, NULL, coptions);
parallel_workers = compute_parallel_worker(baserel,
                                           expected_num_pages, -1,
                                           max_parallel_workers_per_gather);
partial_path->path.parallel_workers = parallel_workers;
partial_path->path.parallel_aware = true;
parallel_divisor = get_parallel_divisor(&partial_path->path);
partial_path->path.rows /= parallel_divisor;
partial_path->path.total_cost = startup_cost +
((total_cost - startup_cost) / parallel_divisor);
if (parallel_workers > 0)
  add_partial_path(baserel, (Path *) partial_path);

You don't really 

Re: Temporary tables prevent autovacuum, leading to XID wraparound

2018-08-19 Thread Michael Paquier
On Tue, Aug 14, 2018 at 06:53:32PM +0200, Michael Paquier wrote:
> I was thinking about adding "Even if it is not atomic" or such at the
> beginning of the paragraph, but at the end your phrasing sounds better
> to me.  So I have hacked up the attached, which also reworks the comment
> in InitTempTableNamespace in the same spirit.  Thoughts?

It has been close to one week since I sent this patch.  Anything?  I
don't like to keep folks unhappy about anything I committed.
--
Michael


signature.asc
Description: PGP signature


Re: [FEATURE PATCH] pg_stat_statements with plans (v02)

2018-08-19 Thread Thomas Munro
On Sat, Apr 7, 2018 at 1:03 AM, Julian Markwort
 wrote:
> I've created a draft patch that provides access to plans in a view called 
> pg_stat_statements_plans.
> There is no column that indicates whether the plan is "good" or "bad", 
> because that is evident from the execution time of both plans and because 
> that would require some kind of plan_type for all plans that might be stored 
> in future versions.
>
> Please take it for a spin and tell me, whether the layout and handling of the 
> view make sense to you.

I realise this is probably WIP, but for what it's worth here's some
feedback from my patch testing robot:

+ +INFINITY,

That ^ is from C99 math.h, and so we don't currently expect compilers
to have it (until such time as we decide to pull the trigger and
switch to C99, as discussed in a nearby thread):

  contrib/pg_stat_statements/pg_stat_statements.c(482): error C2065:
'INFINITY' : undeclared identifier
[C:\projects\postgresql\pg_stat_statements.vcxproj]
5826

Maybe get_float8_infinity()?

+ for(int j = 0; j < numPlans; j++)

Can't declare a new variable here in C89.

+ pgssPlan *planArray[numPlans];

Can't use variable length arrays in C89.
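
Both issues have straightforward C89-compliant rewrites: hoist the loop
variable declaration to the top of the block, and replace the VLA with a
heap allocation (palloc/pfree in backend code; this standalone sketch uses
malloc/free and a simplified stand-in for the real pgssPlan handling):

```c
#include <assert.h>
#include <stdlib.h>

/* Sum numPlans plan costs, written in C89-compatible style. */
long
sum_plan_costs(const long *costs, int numPlans)
{
	long		total = 0;
	long	   *scratch;		/* instead of: long scratch[numPlans]; (a VLA) */
	int			j;				/* instead of: for (int j = 0; ...)          */

	scratch = malloc(numPlans * sizeof(long));	/* palloc() in backend code */
	if (scratch == NULL)
		abort();
	for (j = 0; j < numPlans; j++)
	{
		scratch[j] = costs[j];
		total += scratch[j];
	}
	free(scratch);				/* pfree() in backend code */
	return total;
}
```

Heap allocation also sidesteps the stack-overflow risk with VLAs that comes
up later in this thread.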

-- 
Thomas Munro
http://www.enterprisedb.com



Re: [PATCH] Add regress test for pg_read_all_stats role

2018-08-19 Thread Michael Paquier
On Fri, Aug 03, 2018 at 02:08:27PM +0100, Alexandra Ryzhevich wrote:
> Thank you for pointing to some problems of the patch. I've attached a
> modified version below.

Could you avoid top-posting?  This is not the style of the Postgres
mailing lists.

I have been looking at your latest patch, and there are some gotchas:
- There is no need for the initial DROP ROLE commands as those already
get dropped at the end of the tests.
- Some installations may use non-default settings, causing installcheck
to fail with your patch once stats_temp_directory is using a different
path than pg_stat_tmp.
- The same applies for archive_timeout.
- There is already rolenames.sql which has a tiny coverage for default
roles, why not just using it?

+-- should fail because regress_role_nopriv has not CONNECT permission
on this db
+SELECT pg_database_size('regression') > 0 AS canread;
+ERROR:  permission denied for database regression
+-- should fail because regress_role_nopriv has not CREATE permission on
this tablespace
+SELECT pg_tablespace_size('pg_global') > 0 AS canread;
+ERROR:  permission denied for tablespace pg_global
Why is that part of a test suite for default roles?

There are three code paths which could trigger the restrictions we are
looking at for pg_read_all_settings:
- GetConfigOptionResetString, which is only used for datetyle now.
- GetConfigOptionByName, which can be triggered with any SHOW command.
- GetConfigOption, which is used at postmaster startup and when
reloading the configuration file.

2) is easy to trigger as a negative test (one which fails), but less so as
a positive test.  In order to make a positive test failure-proof with
installcheck you would need to have a parameter which can be changed by
a superuser at session level which gets updated to a certain value, and
would fail to show for another user, so you could use one which is
GUC_SUPERUSER_ONLY and of category PGC_SUSET, like
session_preload_libraries or dynamic_preload_libraries.  Still that's
pretty restrictive, and would only test one out of the three code paths
available.

1) and 3) have restrictions in place visibly mainly for module
developers.
--
Michael


signature.asc
Description: PGP signature


Re: ATTACH/DETACH PARTITION CONCURRENTLY

2018-08-19 Thread David Rowley
On 14 August 2018 at 04:00, Robert Haas  wrote:
> I've thought about similar things, but I think there's a pretty deep
> can of worms.  For instance, if you built a relcache entry using the
> transaction snapshot, you might end up building a seemingly-valid
> relcache entry for a relation that has been dropped or rewritten.
> When you try to access the relation data, you'll be attempt to access
> a relfilenode that's not there any more.  Similarly, if you use an
> older snapshot to build a partition descriptor, you might thing that
> relation OID 12345 is still a partition of that table when in fact
> it's been detached - and, maybe, altered in other ways, such as
> changing column types.

hmm, I guess for that to work correctly we'd need some way to allow
older snapshots to see the changes if they've not already taken a lock
on the table. If the lock had already been obtained, then the ALTER
TABLE to change the type of the column would get blocked by the
existing lock. That kind of blows a hole in applying the change only
to snapshots newer than the ATTACH/DETACH's.

> It seems to me that overall you're not really focusing on the right
> set of issues here.  I think the very first thing we need to worry
> about how how we're going to keep the executor from following a bad
> pointer and crashing.  Any time the partition descriptor changes, the
> next relcache rebuild is going to replace rd_partdesc and free the old
> one, but the executor may still have the old pointer cached in a
> structure or local variable; the next attempt to dereference it will
> be looking at freed memory, and kaboom.  Right now, we prevent this by
> not allowing the partition descriptor to be modified while there are
> any queries running against the partition, so while there may be a
> rebuild, the old pointer will remain valid (cf. keep_partdesc).  I
> think that whatever scheme you/we choose here should be tested with a
> combination of CLOBBER_CACHE_ALWAYS and multiple concurrent sessions
> -- one of them doing DDL on the table while the other runs a long
> query.

I did focus on that and did write a patch to solve the issue. After
writing that I discovered another problem where if the PartitionDesc
differed between planning and execution then run-time pruning did the
wrong thing (See find_matching_subplans_recurse). The
PartitionPruneInfo is built assuming the PartitionDesc matches between
planning and execution. I moved on from the dangling pointer issue
onto trying to figure out a way to ensure these are the same between
planning and execution.

> Once we keep it from blowing up, the second question is what the
> semantics are supposed to be.  It seems inconceivable to me that the
> set of partitions that an in-progress query will scan can really be
> changed on the fly.  I think we're going to have to rule that if you
> add or remove partitions while a query is running, we're going to scan
> exactly the set we had planned to scan at the beginning of the query;
> anything else would require on-the-fly plan surgery to a degree that
> seems unrealistic.

Trying to do that for in-progress queries would be pretty insane. I'm
not planning on doing anything there.

> That means that when a new partition is attached,
> already-running queries aren't going to scan it.  If they did, we'd
> have big problems, because the transaction snapshot might see rows in
> those tables from an earlier time period during which that table
> wasn't attached.  There's no guarantee that the data at that time
> conformed to the partition constraint, so it would be pretty
> problematic to let users see it.  Conversely, when a partition is
> detached, there may still be scans from existing queries hitting it
> for a fairly arbitrary length of time afterwards.  That may be
> surprising from a locking point of view or something, but it's correct
> as far as MVCC goes.  Any changes made after the DETACH operation
> can't be visible to the snapshot being used for the scan.
>
> Now, what we could try to change on the fly is the set of partitions
> that are used for tuple routing.  For instance, suppose we're
> inserting a long stream of COPY data.  At some point, we attach a new
> partition from another session.  If we then encounter a row that
> doesn't route to any of the partitions that existed at the time the
> query started, we could - instead of immediately failing - go and
> reload the set of partitions that are available for tuple routing and
> see if the new partition which was concurrently added happens to be
> appropriate to the tuple we've got.  If so, we could route the tuple
> to it.  But all of this looks optional.  If new partitions aren't
> available for insert/update tuple routing until the start of the next
> query, that's not a catastrophe.  The reverse direction might be more
> problematic: if a partition is detached, I'm not sure how sensible it
> is to keep routing tuples into it.  On the flip side, what would
> break, really?

Unsure a

Re: ALTER TABLE on system catalogs

2018-08-19 Thread Michael Paquier
On Fri, Jul 13, 2018 at 11:05:10AM +0200, Peter Eisentraut wrote:
> After reviewing that thread, I think my patch would still be relevant.
> Because the pending proposal is to not add TOAST tables to some catalogs
> such as pg_attribute, so my original use case of allowing ALTER TABLE /
> SET on system catalogs would still be broken for those tables.

Something has happened in this area in the shape of 96cdeae, and the
following catalogs do not have toast tables: pg_class, pg_index,
pg_attribute and pg_largeobject*.  The 0001 patch proposed upthread is
no longer really relevant, because there is already a test to list
which catalogs do not have a toast table.  The 0002 patch, however,
still seems relevant.  The comment block at the beginning of
create_toast_table should be updated.  Some tests would also be nice
to have.
--
Michael


signature.asc
Description: PGP signature


Re: [FEATURE PATCH] pg_stat_statements with plans (v02)

2018-08-19 Thread Tom Lane
[ off topic for this patch, but as long as you mentioned switching
to C99 ]

Thomas Munro  writes:
> + for(int j = 0; j < numPlans; j++)
> Can't declare a new variable here in C89.

As previously noted, that seems like a nice thing to allow ...

> + pgssPlan *planArray[numPlans];
> Can't use variable length arrays in C89.

... but I'm less excited about this one.  Seems like a great opportunity
for unexpected stack overflows, and thence at least the chance for
DOS-causing security attacks.  Can we prevent that from being allowed,
if we start using -std=c99?

regards, tom lane



Re: [HACKERS] GnuTLS support

2018-08-19 Thread Michael Paquier
On Thu, Mar 08, 2018 at 02:13:51PM -0500, Peter Eisentraut wrote:
> In the thread about Secure Transport we agreed to move the consideration
> of new SSL libraries to PG12.
> 
> Here is my current patch, after all the refactorings.
> 
> The status is that it works fine and could be used.
> 
> There are two failures in the SSL tests that I cannot explain.  The
> tests are for some rather obscure configurations, so the changed
> behaviors are not obviously wrong, perhaps legitimate implementation
> differences.  But someone wrote those tests with a purpose (probably),
> so we should have some kind of explanation for the regressions.

Patch v6 of this thread is failing to apply.  Could you rebase?
--
Michael


signature.asc
Description: PGP signature


Re: [FEATURE PATCH] pg_stat_statements with plans (v02)

2018-08-19 Thread Thomas Munro
On Mon, Aug 20, 2018 at 2:55 PM, Tom Lane  wrote:
> Thomas Munro  writes:
> As previously noted, that seems like a nice thing to allow ...
>
>> + pgssPlan *planArray[numPlans];
>> Can't use variable length arrays in C89.
>
> ... but I'm less excited about this one.  Seems like a great opportunity
> for unexpected stack overflows, and thence at least the chance for
> DOS-causing security attacks.  Can we prevent that from being allowed,
> if we start using -std=c99?

-Werror=vla in GCC, apparently.

Another problem with VLAs is that they aren't in C++ and last I heard
they aren't ever likely to be (at least not with that syntax).  I'd rather not
introduce obstacles on that path.

-- 
Thomas Munro
http://www.enterprisedb.com



Re: [FEATURE PATCH] pg_stat_statements with plans (v02)

2018-08-19 Thread Tom Lane
Thomas Munro  writes:
> On Mon, Aug 20, 2018 at 2:55 PM, Tom Lane  wrote:
>> Can we prevent that from being allowed,
>> if we start using -std=c99?

> -Werror=vla in GCC, apparently.

Ah, cool.

> Another problem with VLAs is that they aren't in C++ and last I heard
> they aren't ever likely to be (at least not with that syntax).  I'd rather not
> introduce obstacles on that path.

While I'm not that excited about the prospect of ever moving to C++,
I'm on board with the idea of sticking to something that's a subset
of both C99 and C++.  Are there any other language features that we'd
need to avoid?  Is there a direct way of testing for that, rather
than finding per-feature warning methods?

regards, tom lane



Re: Fix help option of contrib/oid2name

2018-08-19 Thread Tatsuro Yamada

On August 18, 2018 10:52:33 AM GMT+09:00, Tom Lane  wrote:

> I think it probably needs to stay documented, but we could mark it as
> deprecated ...

Okay, no issues with doing so.


I revised the patch like following:

vacuumlo:
   Document
 - Add long options
 - Add environment section

oid2name:
   Document
 - Add long options
 - Add environment section
 - Remove deprecated and unhandled option "-P password"
   Code
 - Revive handling "-H host" option silently

I didn't add "-H" to the help message because it's a deprecated option.
I guess that is what you meant by "silently" in your previous email.
Should I add it?  For example,

   Without "-H" (current patch):

   Connection options:
 -d, --dbname=DBNAME      database to connect to
 -h, --host=HOSTNAME      database server host or socket directory
 -p, --port=PORT          database server port number
 -U, --username=USERNAME  connect as specified database user
   

   With "-H" added to the help message after "-h":

   Connection options:
 -d, --dbname=DBNAME      database to connect to
 -h, -H, --host=HOSTNAME  database server host or socket directory
 -p, --port=PORT          database server port number
 -U, --username=USERNAME  connect as specified database user
   

Please find the attached patch.

Regards,
Tatsuro Yamada



diff --git a/contrib/oid2name/oid2name.c b/contrib/oid2name/oid2name.c
index 63e360c4c5..5f6813f91c 100644
--- a/contrib/oid2name/oid2name.c
+++ b/contrib/oid2name/oid2name.c
@@ -60,8 +60,26 @@ void		sql_exec_dumpalltbspc(PGconn *, struct options *);
 void
 get_opts(int argc, char **argv, struct options *my_opts)
 {
+	static struct option long_options[] = {
+		{"host", required_argument, NULL, 'h'},
+		{"host", required_argument, NULL, 'H'}, /* deprecated */
+		{"port", required_argument, NULL, 'p'},
+		{"username", required_argument, NULL, 'U'},
+		{"dbname", required_argument, NULL, 'd'},
+		{"tablename", required_argument, NULL, 't'},
+		{"oid", required_argument, NULL, 'o'},
+		{"filenode", required_argument, NULL, 'f'},
+		{"quiet", no_argument, NULL, 'q'},
+		{"system", no_argument, NULL, 'S'},
+		{"extend", no_argument, NULL, 'x'},
+		{"index", no_argument, NULL, 'i'},
+		{"tablespace", no_argument, NULL, 's'},
+		{NULL, 0, NULL, 0}
+	};
+
 	int			c;
 	const char *progname;
+	int			optindex;
 
 	progname = get_progname(argv[0]);
 
@@ -93,10 +111,30 @@ get_opts(int argc, char **argv, struct options *my_opts)
 	}
 
 	/* get opts */
-	while ((c = getopt(argc, argv, "H:p:U:d:t:o:f:qSxish")) != -1)
+	while ((c = getopt_long(argc, argv, "H:h:p:U:d:t:o:f:qSxis", long_options, &optindex)) != -1)
 	{
 		switch (c)
 		{
+				/* host to connect to */
+			case 'h':
+				my_opts->hostname = pg_strdup(optarg);
+				break;
+
+				/* (deprecated) host to connect to */
+			case 'H':
+				my_opts->hostname = pg_strdup(optarg);
+				break;
+
+				/* port to connect to on remote host */
+			case 'p':
+				my_opts->port = pg_strdup(optarg);
+				break;
+
+				/* username */
+			case 'U':
+				my_opts->username = pg_strdup(optarg);
+				break;
+
 				/* specify the database */
 			case 'd':
 				my_opts->dbname = pg_strdup(optarg);
@@ -122,46 +160,26 @@ get_opts(int argc, char **argv, struct options *my_opts)
 				my_opts->quiet = true;
 				break;
 
-				/* host to connect to */
-			case 'H':
-				my_opts->hostname = pg_strdup(optarg);
-				break;
-
-				/* port to connect to on remote host */
-			case 'p':
-				my_opts->port = pg_strdup(optarg);
-				break;
-
-				/* username */
-			case 'U':
-				my_opts->username = pg_strdup(optarg);
-				break;
-
 				/* display system tables */
 			case 'S':
 				my_opts->systables = true;
 				break;
 
-				/* also display indexes */
-			case 'i':
-				my_opts->indexes = true;
-				break;
-
 				/* display extra columns */
 			case 'x':
 				my_opts->extended = true;
 				break;
 
+				/* also display indexes */
+			case 'i':
+				my_opts->indexes = true;
+				break;
+
 				/* dump tablespaces only */
 			case 's':
 				my_opts->tablespaces = true;
 				break;
 
-			case 'h':
-				help(progname);
-				exit(0);
-				break;
-
 			default:
 				fprintf(stderr, _("Try \"%s --help\" for more information.\n"), progname);
 				exit(1);
@@ -176,20 +194,21 @@ help(const char *progname)
 		   "Usage:\n"
 		   "  %s [OPTION]...\n"
 		   "\nOptions:\n"
-		   "  -d DBNAME  database to connect to\n"
-		   "  -f FILENODEshow info for table with given file node\n"
-		   "  -H HOSTNAMEdatabase server host or socket directory\n"
-		   "  -i show indexes and sequences too\n"
-		   "  -o OID show info for table with given OID\n"
-		   "  -p PORTdatabase server port number\n"
-		   "  -q quiet (don't show headers)\n"
-		   "  -s show all tablespaces\n"
-		   "  -S show system objects too\n"
-		   "  -t TABLE   show info for named table\n"
-		 

Re: NLS handling fixes.

2018-08-19 Thread Michael Paquier
On Fri, Aug 10, 2018 at 04:50:55PM -0400, Tom Lane wrote:
> I would certainly *not* back-patch the GetConfigOptionByNum change,
> as that will be a user-visible behavioral change that people will
> not be expecting.  We might get away with back-patching the other changes,
> but SHOW ALL output seems like something that people might be expecting
> not to change drastically in a minor release.

Agreed, some folks may rely on that.

> * In the patch fixing view_query_is_auto_updatable misuse, nothing's
> been done to remove the underlying cause of the errors, which IMO
> is that the header comment for view_query_is_auto_updatable fails to
> specify the return convention.  I'd consider adding a comment along
> the lines of
> 
>  * view_query_is_auto_updatable - test whether the specified view definition
>  * represents an auto-updatable view. Returns NULL (if the view can be 
> updated)
>  * or a message string giving the reason that it cannot be.
> +*
> +* The returned string has not been translated; if it is shown as an error
> +* message, the caller should apply _() to translate it.

That sounds right.  A similar comment should be added for
view_cols_are_auto_updatable and view_col_is_auto_updatable.

> * Perhaps pg_GSS_error likewise could use a comment saying the error
> string must be translated already.  Or we could leave its callers alone
> and put the _() into it, but either way the API contract ought to be
> written down.

Both things are indeed possible.  As pg_SSPI_error applies translation
beforehand, I have taken the approach of letting the caller just apply
_().  With only two messages, it does not matter much either way.

> * The proposed patch for psql/common.c seems completely wrong, or at
> least it has a lot of side-effects other than enabling header translation,
> and I doubt they are desirable.

I had doubts about it, and at closer look I think that you are right, so
+1 for discarding this one.

Attached is a patch which should address all the issues reported and all
the remarks done.  What do you think?
--
Michael
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 7cedc28c6b..2a12d64aeb 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -10826,7 +10826,7 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 ereport(ERROR,
 		(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 		 errmsg("WITH CHECK OPTION is supported only on automatically updatable views"),
-		 errhint("%s", view_updatable_error)));
+		 errhint("%s", _(view_updatable_error))));
 		}
 	}
 
diff --git a/src/backend/commands/view.c b/src/backend/commands/view.c
index 7d4511c585..ffb71c0ea7 100644
--- a/src/backend/commands/view.c
+++ b/src/backend/commands/view.c
@@ -502,7 +502,7 @@ DefineView(ViewStmt *stmt, const char *queryString,
 			ereport(ERROR,
 	(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 	 errmsg("WITH CHECK OPTION is supported only on automatically updatable views"),
-	 errhint("%s", view_updatable_error)));
+	 errhint("%s", _(view_updatable_error))));
 	}
 
 	/*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index cecd104b4a..18c4921d61 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -1037,6 +1037,10 @@ static GSS_DLLIMP gss_OID GSS_C_NT_USER_NAME = &GSS_C_NT_USER_NAME_desc;
 #endif
 
 
+/*
+ * Generate an error for GSSAPI authentication.  The caller needs to apply
+ * _() to errmsg to make it translatable.
+ */
 static void
 pg_GSS_error(int severity, const char *errmsg, OM_uint32 maj_stat, OM_uint32 min_stat)
 {
@@ -1227,7 +1231,7 @@ pg_GSS_recvauth(Port *port)
 		{
 			gss_delete_sec_context(&lmin_s, &port->gss->ctx, GSS_C_NO_BUFFER);
 			pg_GSS_error(ERROR,
-		 gettext_noop("accepting GSS security context failed"),
+		 _("accepting GSS security context failed"),
 		 maj_stat, min_stat);
 		}
 
@@ -1253,7 +1257,7 @@ pg_GSS_recvauth(Port *port)
 	maj_stat = gss_display_name(&min_stat, port->gss->name, &gbuf, NULL);
 	if (maj_stat != GSS_S_COMPLETE)
 		pg_GSS_error(ERROR,
-	 gettext_noop("retrieving GSS user name failed"),
+	 _("retrieving GSS user name failed"),
 	 maj_stat, min_stat);
 
 	/*
@@ -1317,6 +1321,11 @@ pg_GSS_recvauth(Port *port)
  *
  */
 #ifdef ENABLE_SSPI
+
+/*
+ * Generate an error for SSPI authentication.  The caller needs to apply
+ * _() to errmsg to make it translatable.
+ */
 static void
 pg_SSPI_error(int severity, const char *errmsg, SECURITY_STATUS r)
 {
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index 3123ee274d..eed069d56c 100644
--- a/src/backend/rewrite/rewriteHandler.c
+++ b/src/backend/rewrite/rewriteHandler.c
@@ -2217,6 +2217,9 @@ view_has_instead_trigger(Relation view, CmdType event)
  * is auto-updatable. Returns NULL (if the column can be updated) or a message
  * string giving the reason

simplehash.h comment

2018-08-19 Thread Thomas Munro
Hi,

"For examples of usage look at simplehash.c ..."

There is no such file in the tree.  Maybe this should say tidbitmap.c?

-- 
Thomas Munro
http://www.enterprisedb.com



Re: Fix help option of contrib/oid2name

2018-08-19 Thread Michael Paquier
On Mon, Aug 20, 2018 at 12:30:29PM +0900, Tatsuro Yamada wrote:
> vacuumlo:
>Document
>  - Add long options
>  - Add environment section

Let's keep things simple by not adding long options where it is not
especially obvious, so I would suggest keeping the patch simple and just
adding long options for the connection options.

> oid2name:
>Document
>  - Add long options
>  - Add environment section
>  - Remove deprecated and unhandled option "-P password"

Is this a good idea?  I doubt that it is used but keeping it would not
cause any breakage, and removing it could.

oid2name also supports PGDATABASE.

>Code
>  - Revive handling "-H host" option silently
> 
> I didn't add "-H" to the help message because it's a deprecated
> option.

It seems to me that this should be still documented in both --help and
in the docs, and that we should just mark it as deprecated in both.

> I guess that is what you meant by "silently" in your previous email.
> Should I add it?  For example,
>
> [...]

Let's use a different line for that.  See for example how pg_standby -k
is treated.

I can see why you have reordered the options; this is more consistent
with things in src/bin/scripts.

We could also add some basic TAP tests for both utilities while we are
at it.  Something as simple as the first tests in 050_dropdb.pl would
make a fine first shot.
--
Michael




Re: simplehash.h comment

2018-08-19 Thread Michael Paquier
On Mon, Aug 20, 2018 at 04:38:20PM +1200, Thomas Munro wrote:
> "For examples of usage look at simplehash.c ..."
> 
> There is no such file in the tree.  Maybe this should say tidbitmap.c?

And execGrouping.c.
--
Michael




Re: Copy function for logical replication slots

2018-08-19 Thread Michael Paquier
On Tue, Aug 14, 2018 at 01:38:23PM +0900, Masahiko Sawada wrote:
> On Thu, Jul 12, 2018 at 9:28 PM, Masahiko Sawada  
> wrote:
>> Attached new version of patch incorporated the all comments I got from
>> Michael-san.
>>
>> To prevent the WAL segment file at restart_lsn of the origin slot from
>> being removed while creating the target slot, I've chosen to copy the
>> new one while holding the origin one. One problem with implementing it
>> this way is that the current replication slot code doesn't allow us to
>> do so straightforwardly; the code assumes that the process creating a
>> new slot is not holding another slot. So I've changed the copy
>> functions so that they temporarily save MyReplicationSlot and then
>> restore and release it after creating the target slot. To ensure that
>> both the origin and target slots are released properly in failure
>> cases I used PG_ENSURE_ERROR_CLEANUP(). That way, we can keep the code
>> changes to the logical decoding at a minimum. I've thought that we
>> could change the logical decoding code so that it can assume that the
>> process can hold more than one slot at once, but that seems overkill
>> to me.

Yeah, we may be able to live with this trick.  For other processes, the
process doing the copy would be seen as holding the slot so the
checkpointer would not advance the oldest LSN to retain.

> The previous patch conflicts with the current HEAD. Attached updated
> version patch.

+-- Now we have maximum 8 replication slots. Check slots are properly
+-- released even when raise error during creating the target slot.
+SELECT 'copy' FROM pg_copy_logical_replication_slot('orig_slot1',
'failed'); -- error
+ERROR:  all replication slots are in use

installcheck is going to fail on an instance which does not use exactly
max_replication_slots = 8.  That lacks flexibility, and you could have
the same coverage by copying, then immediately dropping each new slot.

+to a physical replicaiton slot named 
dst_slot_name.
s/replicaiton/replication/.

+The copied physical slot starts to reserve WAL from the same
LSN as the
+source slot if the source slot already reserves WAL.
Or should we restrict this case?  How is it useful to allow a copy from
a slot which has done nothing yet?  This would also simplify the acquire
and release logic of the source slot.

+   /* Check type of replication slot */
+   if (SlotIsLogical(MyReplicationSlot))
+   {
+   ReplicationSlotRelease();
+   ereport(ERROR,
+   (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+(errmsg("cannot copy a logical replication slot to a physical replication slot"))));
+   }
No need to call ReplicationSlotRelease for an ERROR code path.

Does it actually make sense to allow copy of temporary slots or change
their persistence?  Those don't live across sessions so they'd need to
be copied in the same session which created them.
--
Michael




Re: Conflict handling for COPY FROM

2018-08-19 Thread Surafel Temesgen
Thanks for looking at it 
> 1. It sounded like you added the copy_max_error_limit GUC as part of this patch
> to allow users to specify how many errors they want to swallow with this new
> functionality. The GUC didn't seem to be defined and we saw no mention of it in
> the code. My guess is this might be good to address one of the concerns
> mentioned in the initial email thread of this generating too many transaction
> IDs so it would probably be good to have it.

By mistake I wrote copy_max_error_limit here, but in the patch it is
copy_maximum_error_limit with a default value of 100. Sorry for the mismatch.

> 2. I was curious why you only have support for skipping errors on UNIQUE and
> EXCLUSION constraints and not other kinds of constraints? I'm not sure how
> difficult it would be to add support for those, but it seems they could also be
> useful.

I see now that most formatting errors can be handled safely. I will
attach the patch for it this week.

> 3. We think the wording "ON CONFLICT IGNORE" may not be the clearest
> description of what this is doing since it is writing the failed rows to a file
> for a user to process later, but they are not being ignored. We considered
> things like STASH or LOG as alternatives to IGNORE. Andrew may have some other
> suggestions for wording.

I agree. I will change it to ON CONFLICT LOG if we can't find a better
name.

> 4. We also noticed this has no tests and thought it would be good to add some
> to ensure this functionality works how you intend it and continues to work. We
> started running some SQL to validate this, but haven't gotten the chance to put
> it into a clean test yet. We can send you what we have so far, or we are also
> willing to put a little time in to turn it into tests ourselves that we could
> contribute to this patch.

Okay.

Re: Fix help option of contrib/oid2name

2018-08-19 Thread Tatsuro Yamada

On 2018/08/20 13:54, Michael Paquier wrote:

> On Mon, Aug 20, 2018 at 12:30:29PM +0900, Tatsuro Yamada wrote:
>> vacuumlo:
>>    Document
>>      - Add long options
>>      - Add environment section
>
> Let's keep things simple by not adding long options where it is not
> especially obvious, so I would suggest keeping the patch simple and just
> adding long options for the connection options.


Okay.


>> oid2name:
>>    Document
>>      - Add long options
>>      - Add environment section
>>      - Remove deprecated and unhandled option "-P password"
>
> Is this a good idea?  I doubt that it is used but keeping it would not
> cause any breakage, and removing it could.


Yeah, the reason I removed that code is that there is no code to handle
the "-P" option, and it caused an error when executing the command with
that option, like this:

  $ oid2name -P hoge
  oid2name: invalid option -- 'P'

Therefore, "-P" is a bug in the manual. I investigated further using the
git log command and understood the following:

  1. The -P option was removed in 4192f2d85
  2. The -P option was revived only in the manual in 2963d8228

So, we have two options: remove the -P option from the manual, or revive
the code for the -P option (i.e., something like a git revert of
4192f2d85).



> oid2name also supports PGDATABASE.


As far as I know, this environment variable is also unused, because I
could not get the results I expected, so I didn't add it to the manual.
For example, I expected the following command to show information about
"postgres", but it didn't:

  $ env |grep PG
  ...
  PGDATABASE=postgres
  ...

  $ oid2name
  All databases:
  Oid  Database Name  Tablespace
  --
16392 hogedb  pg_default
13310   postgres  pg_default
13309  template0  pg_default
1  template1  pg_default

  $ oid2name -d postgres
  From database "postgres":
Filenode  Table Name
  --
   16388  t1



>>    Code
>>      - Revive handling "-H host" option silently
>>
>> I didn't add "-H" to the help message because it's a deprecated
>> option.

> It seems to me that this should still be documented in both --help and
> in the docs, and that we should just mark it as deprecated in both.


>> I guess that is what you meant by "silently" in your previous email.
>> Should I add it?  For example,
>>
>> [...]
>
> Let's use a different line for that.  See for example how pg_standby -k
> is treated.


I will do that.



> I can see why you have reordered the options; this is more consistent
> with things in src/bin/scripts.
>
> We could also add some basic TAP tests for both utilities while we are
> at it.  Something as simple as the first tests in 050_dropdb.pl would
> make a fine first shot.


For now, I'm not sure about TAP tests, but I know both utilities have no
regression tests. It would be good to add some.


Regards,
Tatsuro Yamada