Re: review printing ecpg program version
On Wed, Sep 12, 2018 at 03:55:56PM +0900, Ioseph Kim wrote: > check please pg_config --version too. Well, one problem with that is that you would break a ton of stuff which parses this version string automatically. pg_config --version is used by many extensions to guess which version of Postgres is being worked on. -- Michael signature.asc Description: PGP signature
Re: Allowing printf("%m") only where it actually works
On Sun, Aug 19, 2018 at 03:12:00PM -0400, Tom Lane wrote: > * The Windows aspects of this are untested. It seems like importing > pgwin32_socket_strerror's behavior into the frontend ought to be a > bug fix, though: win32_port.h redefines socket error symbols whether > FRONTEND is set or not, so aren't we printing bogus info for socket > errors in frontend right now? I had a look at that this morning for some other Windows patch, and I think that HEAD is flat wrong to not expose pgwin32_socket_strerror to the frontend. I would have liked to look at this patch in details, but it failed to apply. Could you rebase? -- Michael signature.asc Description: PGP signature
Re: make installcheck-world in a clean environment
On Mon, May 07, 2018 at 01:07:15PM -0400, Tom Lane wrote: > Robert Haas writes: >> After thinking about this some more, I think the question here is >> definitional. A first attempt at defining 'make installcheck' is to >> say that it runs the tests from the build tree against the running >> server. Certainly, we intend to use the SQL files that are present in >> the build tree, not the ones that were present in the build tree where >> the running server was built. But what about client programs that we >> use to connect to the server? You're suggesting that we use the >> pre-installed ones, but that is actually pretty problematic because >> the ones we see as installed might correspond neither to the contents >> of the build tree nor to the running server. Conceivably we could end >> up having a mix of assets from three different places: (1) the running >> server, (2) the build tree, (3) whatever is in our path at the moment. >> That seems very confusing. So now I think it's probably right to >> define 'make installcheck' as using the assets from the build tree to >> test the running server. Under that definition, we're missing some >> dependencies, but USE_INSTALLED_ASSETS isn't a thing we need. > > Nah, I disagree with this. To me, the purpose of "make installcheck" > is to verify the correctness of an installation, which I take to include > the client programs as well as the server. I think that "make > installcheck" ought to use the installed version of any file that we > actually install, and go to the build tree only for things we don't > install (e.g. SQL test scripts). > > If the user has screwed up his PATH or other environmental aspects so that > what he's testing isn't a single installation, that's his error, not > something that "make installcheck" ought to work around. Indeed, maybe > such aspects of his setup are intentional, and second-guessing them would > completely defeat his purpose. 
In any case, if you want to test the > build-tree assets, that's what "make check" is for. I agree with Tom's position, and this is the behavior that Postgres has been using for ages for check and installcheck. If there are no objections, I'll mark the patch as rejected and move on to other things. -- Michael signature.asc Description: PGP signature
Re: [HACKERS] Cutting initdb's runtime (Perl question embedded)
On Fri, Aug 10, 2018 at 02:31:25PM -0400, Tom Lane wrote: > The cfbot points out that this has suffered bit-rot, so here's a rebased > version --- no substantive changes. + /* +* Try to read the symlink. If not there, not a symlink, etc etc, just +* quietly fail; the precise reason needn't concern us. +*/ + len = readlink(linkname, link_target, sizeof(link_target)); One thing that I can see changing with this patch is how timezone is set in postgresql.conf. For example, on HEAD I get 'Japan' while this patch gives back 'Asia/Tokyo'. Could it be an issue for countries with multiple timezones? I am not sure how Russian systems would react on that for example. The time cut for initdb is interesting for buildfarm members anyway, and the patch looks in nice shape to me. -- Michael signature.asc Description: PGP signature
Re: Collation versioning
Re: Thomas Munro 2018-09-07 > 2. We could remove datcollate and datctype and instead store a > collation OID. I'm not sure what problems would come up, but for > starters it seems a bit weird to have a shared catalog pointing to > rows in a non-shared catalog. Naive idea: make that catalog shared? Collations are system-wide after all. Christoph
Re: make installcheck-world in a clean environment
Hello Michael, 12.09.2018 10:20, Michael Paquier wrote: > On Mon, May 07, 2018 at 01:07:15PM -0400, Tom Lane wrote: >> Robert Haas writes: >>> After thinking about this some more, I think the question here is >>> definitional. A first attempt at defining 'make installcheck' is to >>> say that it runs the tests from the build tree against the running >>> server. Certainly, we intend to use the SQL files that are present in >>> the build tree, not the ones that were present in the build tree where >>> the running server was built. But what about client programs that we >>> use to connect to the server? You're suggesting that we use the >>> pre-installed ones, but that is actually pretty problematic because >>> the ones we see as installed might correspond neither to the contents >>> of the build tree nor to the running server. Conceivably we could end >>> up having a mix of assets from three different places: (1) the running >>> server, (2) the build tree, (3) whatever is in our path at the moment. >>> That seems very confusing. So now I think it's probably right to >>> define 'make installcheck' as using the assets from the build tree to >>> test the running server. Under that definition, we're missing some >>> dependencies, but USE_INSTALLED_ASSETS isn't a thing we need. >> Nah, I disagree with this. To me, the purpose of "make installcheck" >> is to verify the correctness of an installation, which I take to include >> the client programs as well as the server. I think that "make >> installcheck" ought to use the installed version of any file that we >> actually install, and go to the build tree only for things we don't >> install (e.g. SQL test scripts). >> >> If the user has screwed up his PATH or other environmental aspects so that >> what he's testing isn't a single installation, that's his error, not >> something that "make installcheck" ought to work around. 
Indeed, maybe >> such aspects of his setup are intentional, and second-guessing them would >> completely defeat his purpose. In any case, if you want to test the >> build-tree assets, that's what "make check" is for. > I agree with Tom's position, and this is the behavior that Postgres is > using for ages for check and installcheck. If there are no objections, > I'll mark the patch as rejected and move on to other things. > -- > Michael It seems that you missed a major part of the discussion (we discussed the issue from other positions later). And I don't think that the age of the behavior should prevail over its reasonableness. Best regards, -- Alexander Lakhin Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: review printing ecpg program version
OK, in the case of pg_config, ignore this, but it should still be reviewed in the case of ecpg. On 2018-09-12 16:03, Michael Paquier wrote: On Wed, Sep 12, 2018 at 03:55:56PM +0900, Ioseph Kim wrote: check please pg_config --version too. Well, one problem with that is that you would break a ton of stuff which parse this version string automatically. pg_config --version is used by many extensions to guess which version of Postgres is being worked on. -- Michael
Re: Indicate anti-wraparound autovacuum in log_autovacuum_min_duration
On Tue, Jul 24, 2018 at 8:25 PM, Michael Paquier wrote: > On Tue, Jul 24, 2018 at 06:02:00PM +0900, Masahiko Sawada wrote: >> Yeah, for translation I think it's better to make full lines. When we >> added "aggressive" to autovacuum logs (commit b55509) we've done the >> same thing. > > I am wondering if it would be easier to add an extra line in the output, > like "Options: aggressive, wraparound prevention" or such... It would be useful if we had a number of options autovacuum workers could use, but since there are only two I'm not sure we need the list style. Regards, -- Masahiko Sawada NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center
Re: [HACKERS] Secondary index access optimizations
On 12.09.2018 08:14, David Rowley wrote: On 12 September 2018 at 08:32, Konstantin Knizhnik wrote: Also the patch proposed by you is much simple and does mostly the same. Yes, it is not covering CHECK constraints, but as far as partitioning becomes now standard in Postgres, I do not think that much people will use old inheritance mechanism and CHECK constraints. In any case, there are now many optimizations which works only for partitions, but not for inherited tables. I've not had time to look at your updated patch yet, but one thing I thought about after my initial review, imagine you have a setup like: create table listp (a int, b int) partition by list(a); create table listp1 partition of listp for values in(1); create index listp_a_b_idx on listp (a,b); and a query: select * from listp where a = 1 order by b; if we remove the "a = 1" qual, then listp_a_b_idx can't be used. Looks like this qual is considered for choosing optimal path before it is removed from list of quals in set_append_rel_size. At least the presence of this patch is not breaking the plan in this case: create table listp (a int, b int) partition by list(a); create table listp1 partition of listp for values in(1); create table listp2 partition of listp for values in(2); create index listp_a_b_idx on listp (a,b); insert into listp values (1,generate_series(1,10)); insert into listp values (2,generate_series(11,20)); explain select * from listp where a = 1 order by b; QUERY PLAN Merge Append (cost=0.30..4630.43 rows=10 width=8) Sort Key: listp1.b -> Index Only Scan using listp1_a_b_idx on listp1 (cost=0.29..3630.42 rows=10 width=8) (3 rows) I didn't test this in your patch, but I guess since the additional quals are not applied to the children in set_append_rel_size() that by the time set_append_rel_pathlist() is called, then when we go generating the paths, the (a,b) index won't be any good. 
Perhaps there's some workaround like inventing some sort of "no-op" qual that exists in planning but never makes its way down to scans. Although I admit to not having fully thought that idea through. -- Konstantin Knizhnik Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: pgbench - add pseudo-random permutation function
Hello Hironobu-san, However, the implementation of the scatter operation in this patch overflows in many cases if the variable:size is 38 bit integer or greater. Because the variable:size and the item of the array:primes[] which stores 27-29 bit integers are multiplicated. If overflow occurs, the scatter operation does not satisfy bijective. Indeed. Again, thanks for the debug! As you contributed some code, I added you as a co-author in the CF entry. Attached a v3, based on your fix, plus some additional changes: - explicitly declare unsigned variables where appropriate, to avoid casts - use smaller 24 bits primes instead of 27-29 bits - add a shortcut for multiplier below 24 bits and y value below 40 bits, which should avoid the manually implemented multiplication in most practical cases (tables with over 2^40 rows are pretty rare...). - change the existing shortcut to look a the number of bits instead of using 32 limits. - add a test for minimal code coverage with over 40 bits sizes - attempt to improve the documentation - some comments were updates, hopefully for the better The main idea behind the smaller primes is to avoid the expensive modmul implementation on most realistic cases. -- Fabien.diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml index 88cf8b3933..9b8e90e26f 100644 --- a/doc/src/sgml/ref/pgbench.sgml +++ b/doc/src/sgml/ref/pgbench.sgml @@ -917,7 +917,7 @@ pgbench options d default_seed - seed used in hash functions by default + seed used in hash and pseudo-random permutation functions by default @@ -1370,6 +1370,13 @@ pgbench options d pow(2.0, 10), power(2.0, 10) 1024.0 + + pr_perm(i, size [, seed ] ) + integer + pseudo-random permutation in [0,size) + pr_perm(0, 4) + 0, 1, 2 or 3 + random(lb, ub) integer @@ -1531,6 +1538,24 @@ f(x) = PHI(2.0 * parameter * (x - mu) / (max - min + 1)) / + +Function pr_perm implements a pseudo-random permutation. 
+It allows to mix the output of non uniform random functions so that +values drawn more often are not trivially correlated. +It permutes integers in [0, size) using a seed by applying rounds of +simple invertible functions, similarly to an encryption function, +although beware that it is not at all cryptographically secure. +Compared to hash functions discussed above, the function +ensures that a perfect permutation is applied: there are no collisions +nor holes in the output values. +Values outside the interval are interpreted modulo the size. +The function errors if size is not positive. +If no seed is provided, :default_seed is used. +For a given size and seed, the function is fully deterministic: if two +permutations on the same size must not be correlated, use distinct seeds +as outlined in the previous example about hash functions. + + As an example, the full definition of the built-in TPC-B-like transaction is: diff --git a/src/bin/pgbench/exprparse.y b/src/bin/pgbench/exprparse.y index f7c56cc6a3..762a62959b 100644 --- a/src/bin/pgbench/exprparse.y +++ b/src/bin/pgbench/exprparse.y @@ -19,6 +19,7 @@ #define PGBENCH_NARGS_VARIABLE (-1) #define PGBENCH_NARGS_CASE (-2) #define PGBENCH_NARGS_HASH (-3) +#define PGBENCH_NARGS_PRPERM (-4) PgBenchExpr *expr_parse_result; @@ -366,6 +367,9 @@ static const struct { "hash_fnv1a", PGBENCH_NARGS_HASH, PGBENCH_HASH_FNV1A }, + { + "pr_perm", PGBENCH_NARGS_PRPERM, PGBENCH_PRPERM + }, /* keep as last array element */ { NULL, 0, 0 @@ -478,6 +482,19 @@ make_func(yyscan_t yyscanner, int fnumber, PgBenchExprList *args) } break; + /* pseudo-random permutation function with optional seed argument */ + case PGBENCH_NARGS_PRPERM: + if (len < 2 || len > 3) +expr_yyerror_more(yyscanner, "unexpected number of arguments", + PGBENCH_FUNCTIONS[fnumber].fname); + + if (len == 2) + { +PgBenchExpr *var = make_variable("default_seed"); +args = make_elist(var, args); + } + break; + /* common case: positive arguments number */ default: 
Assert(PGBENCH_FUNCTIONS[fnumber].nargs >= 0); diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c index 41b756c089..895e424452 100644 --- a/src/bin/pgbench/pgbench.c +++ b/src/bin/pgbench/pgbench.c @@ -986,6 +986,249 @@ getHashMurmur2(int64 val, uint64 seed) return (int64) result; } +/* pseudo-random permutation */ + +/* 16 so that % 16 can be optimized to & 0x0f */ +#define PRP_PRIMES 16 +/* + * 24 bits mega primes from https://primes.utm.edu/lists/small/millions/ + * the i-th prime, i \in [0, 15], is the first prime above 2^23 + i * 2^19 + */ +static uint64 primes[PRP_PRIMES] = { + UINT64CONST(8388617), + UINT64CONST(8912921), + UINT64CONST(9437189), + UINT64CONST(9961487), + UINT64CONST(10485767), + UINT64CONST(11010
Re: pg_dump test instability
On 28/08/2018 20:47, Tom Lane wrote: > Here's a proposed patch for this. It removes the hacking of the TOC list > order, solving Peter's original problem, and instead sorts-by-size > in the actual parallel dump or restore control code. I have reviewed this patch. I haven't done any major performance tests or the like, but the improvements are clear in principle. It does solve the issue that I had originally reported, when I apply it on top of my development branch. Some small comments on the code: Maybe add a ready_list_free() to go with ready_list_init(), instead of calling pg_free(ready_list.tes) directly. get_next_work_item() has been changed to remove the work item from the ready_list. Maybe rename to something like pop_next_work_item()? I'm confused by what ready_list_remove() is doing when it's not removing the first item. It looks like it's removing all leading items up to the i'th one. Is that what we want? In some cases, we are skipping over things that we are not interested at all, so this would work, but if we're just skipping over an item because of a lock conflict, then it's not right. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Getting ERROR: could not open file "base/13164/t3_16388" with partition table with ON COMMIT
Hi, I am getting the below error while creating a temp root partition table with ON COMMIT; the same error occurs from v10 onwards. [edb@localhost bin]$ ./psql postgres psql (10.5) Type "help" for help. postgres=# CREATE TEMP TABLE test ( c1 varchar, c2 int) PARTITION BY RANGE (c1) ON COMMIT DELETE ROWS; ERROR: could not open file "base/13164/t3_16388": No such file or directory Thanks & Regards, Rajkumar Raghuwanshi QMG, EnterpriseDB Corporation
Re: Getting ERROR: could not open file "base/13164/t3_16388" with partition table with ON COMMIT
On 2018/09/12 19:29, Rajkumar Raghuwanshi wrote: > Hi, > > I am getting below error while creating temp root partition table with on > commit. getting same error from v10 onwards. > > [edb@localhost bin]$ ./psql postgres > psql (10.5) > Type "help" for help. > > postgres=# CREATE TEMP TABLE test ( c1 varchar, c2 int) PARTITION BY RANGE > (c1) ON COMMIT DELETE ROWS; > ERROR: could not open file "base/13164/t3_16388": No such file or directory Oops, good catch. The infamous missing-relkind-check in heap_truncate() seems to be behind this. Perhaps, a patch like the attached will do? Thanks, Amit diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c index 9176f6280b..3f0be39940 100644 --- a/src/backend/catalog/heap.c +++ b/src/backend/catalog/heap.c @@ -3174,7 +3174,8 @@ heap_truncate(List *relids) Relationrel = lfirst(cell); /* Truncate the relation */ - heap_truncate_one_rel(rel); + if (rel->rd_rel->relkind == RELKIND_RELATION) + heap_truncate_one_rel(rel); /* Close the relation, but keep exclusive lock on it until commit */ heap_close(rel, NoLock);
Re: Collation versioning
On 12/09/2018 10:15, Christoph Berg wrote: > Re: Thomas Munro 2018-09-07 > >> 2. We could remove datcollate and datctype and instead store a >> collation OID. I'm not sure what problems would come up, but for >> starters it seems a bit weird to have a shared catalog pointing to >> rows in a non-shared catalog. > > Naive idea: make that catalog shared? Collations are system-wide after > all. By the same argument, extensions should be shared, but they are not. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Collation versioning
Re: Peter Eisentraut 2018-09-12 <0447ec7b-cdb6-7252-7943-88a4664e7...@2ndquadrant.com> > > Naive idea: make that catalog shared? Collations are system-wide after > > all. > > By the same argument, extensions should be shared, but they are not. But extensions put a lot of visible stuff into a database, whereas a collation is just a line in some table that doesn't get in the way. Christoph
Re: Loaded footgun open_datasync on Windows
On Wed, 2018-09-12 at 14:43 +0900, Michael Paquier wrote: > On Mon, Sep 10, 2018 at 04:46:40PM +0200, Laurenz Albe wrote: > > I didn't get pg_upgrade to work without the log file hacks; I suspect > > that there is more than just log file locking going on, but my Windows > > skills are limited. > > > > How shall I proceed? > > I do like this patch, and we have an occasion to clean a bunch of things > in pg_upgrade, so this argument is enough to me to put my hands in the > dirt and check by myself, so... Thanks for that and the rest of your review. > > I have attached a new version, the previous one was bit-rotted. > > I really thought that this was not ambitious enough, so I have hacked on > top of your patch, so as pg_upgrade concurrent issues are removed, and I > have found one barrier in pg_ctl which decides that it is smarter to > redirect the log file (here pg_upgrade_server.log) using CMD. The > problem is that the lock taken by the process which does the redirection > does not work nicely with what pg_upgrade does in parallel. So I think > that it is better to drop that part. As soon as I get to our Windows machine with a bit of time on my hands, I'll try to remove that hack in pg_ctl and see if that makes pg_upgrade work without the WIN32 workarounds. > +#ifdef WIN32 > + if ((infile = fopen(path, "rt")) == NULL) > +#else > if ((infile = fopen(path, "r")) == NULL) > +#endif > This should have a comment, saying roughly that as this uses > win32_fopen, text mode needs to be enforced to get proper CRLF. Agreed, will do. > One spot for open() is missed in file_utils.c, please see > pre_sync_fname(). I missed that since PG_FLUSH_DATA_WORKS is never defined on Windows, but I agree that it can't hurt to use the three-argument form of open(2) there too, just in case Windows becomes more POSIX-compliant... > The patch fails to apply for pg_verify_checksums, with a conflict easy > enough to fix. 
That's the bitrot I mentioned above; looks like I attached the wrong version of the patch. Will amend. > At the end I would be inclined to accept the patch proposed, knowing that > this would fix https://postgr.es/m/16922.1520722...@sss.pgh.pa.us > mentioned by Thomas upthread as get_pgpid would do things in a shared > manner, putting an end to some of the random failures we've seen on the > buildfarm. > > Laurenz, could you update your patch? I am switching that as waiting on > author for now. Thanks again! Yours, Laurenz Albe
Re: executor relation handling
Hi Amit, On 9/12/18 1:23 AM, Amit Langote wrote: Please find attached revised patches. After applying 0004 I'm getting a crash in 'eval-plan-qual' during check-world using export CFLAGS="-DCOPY_PARSE_PLAN_TREES -O0 -fno-omit-frame-pointer" && ./configure --enable-dtrace --with-openssl --with-gssapi --with-libxml --with-llvm --enable-debug --enable-depend --enable-tap-tests --enable-cassert Confirmed by CFBot in [1]. [1] https://travis-ci.org/postgresql-cfbot/postgresql/builds/427530296 Best regards, Jesper
Re: review printing ecpg program version
On 12/09/2018 10:32, Ioseph Kim wrote: > OK, in the case of pg_config, ignore this, but it should still be reviewed in the case of ecpg. fixed > On 2018-09-12 16:03, Michael Paquier wrote: >> On Wed, Sep 12, 2018 at 03:55:56PM +0900, Ioseph Kim wrote: >>> check please pg_config --version too. >> Well, one problem with that is that you would break a ton of stuff which parse this version string automatically. pg_config --version is used by many extensions to guess which version of Postgres is being worked on. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
[patch] Support LLVM 7
LLVM 7 landed in Debian unstable, this patch teaches ./configure to use it. (General patch, not specific to Debian.) Christoph >From 19afeadb2491b09e6856ef8010fecbe688cb6042 Mon Sep 17 00:00:00 2001 From: Christoph Berg Date: Wed, 12 Sep 2018 14:41:39 +0200 Subject: [PATCH] Support LLVM 7 Upstream moved to single-number versions for naming binaries, it's llvm-config-7 and clang-7 now. --- config/llvm.m4 | 4 ++-- configure | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/config/llvm.m4 b/config/llvm.m4 index 7d81ac0b99..926d684ed1 100644 --- a/config/llvm.m4 +++ b/config/llvm.m4 @@ -13,7 +13,7 @@ AC_DEFUN([PGAC_LLVM_SUPPORT], AC_REQUIRE([AC_PROG_AWK]) AC_ARG_VAR(LLVM_CONFIG, [path to llvm-config command]) - PGAC_PATH_PROGS(LLVM_CONFIG, llvm-config llvm-config-6.0 llvm-config-5.0 llvm-config-4.0 llvm-config-3.9) + PGAC_PATH_PROGS(LLVM_CONFIG, llvm-config llvm-config-7 llvm-config-6.0 llvm-config-5.0 llvm-config-4.0 llvm-config-3.9) # no point continuing if llvm wasn't found if test -z "$LLVM_CONFIG"; then @@ -31,7 +31,7 @@ AC_DEFUN([PGAC_LLVM_SUPPORT], # need clang to create some bitcode files AC_ARG_VAR(CLANG, [path to clang compiler to generate bitcode]) - PGAC_PATH_PROGS(CLANG, clang clang-6.0 clang-5.0 clang-4.0 clang-3.9) + PGAC_PATH_PROGS(CLANG, clang clang-7 clang-6.0 clang-5.0 clang-4.0 clang-3.9) if test -z "$CLANG"; then AC_MSG_ERROR([clang not found, but required when compiling --with-llvm, specify with CLANG=]) fi diff --git a/configure b/configure index dd77742c46..c6a44a9078 100755 --- a/configure +++ b/configure @@ -4995,7 +4995,7 @@ done if test -z "$LLVM_CONFIG"; then - for ac_prog in llvm-config llvm-config-6.0 llvm-config-5.0 llvm-config-4.0 llvm-config-3.9 + for ac_prog in llvm-config llvm-config-7 llvm-config-6.0 llvm-config-5.0 llvm-config-4.0 llvm-config-3.9 do # Extract the first word of "$ac_prog", so it can be a program name with args. 
set dummy $ac_prog; ac_word=$2 @@ -5066,7 +5066,7 @@ fi # need clang to create some bitcode files if test -z "$CLANG"; then - for ac_prog in clang clang-6.0 clang-5.0 clang-4.0 clang-3.9 + for ac_prog in clang clang-7 clang-6.0 clang-5.0 clang-4.0 clang-3.9 do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 -- 2.19.0
Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
On 11-09-2018 18:29, Fabien COELHO wrote: Hello Marina, Hmm, but we can say the same for serialization or deadlock errors that were not retried (the client test code itself could not run correctly or the SQL sent was somehow wrong, which is also the client's fault), can't we? I think not. If a client asks for something "legal", but some other client in parallel happens to make an incompatible change which result in a serialization or deadlock error, the clients are not responsible for the raised errors, it is just that they happen to ask for something incompatible at the same time. So there is no user error per se, but the server is reporting its (temporary) inability to process what was asked for. For these errors, retrying is fine. If the client was alone, there would be no such errors, you cannot deadlock with yourself. This is really an isolation issue linked to parallel execution. You can get other errors that cannot happen for only one client if you use shell commands in meta commands: starting vacuum...end. transaction type: pgbench_meta_concurrent_error.sql scaling factor: 1 query mode: simple number of clients: 2 number of threads: 1 number of transactions per client: 10 number of transactions actually processed: 20/20 maximum number of tries: 1 latency average = 6.953 ms tps = 287.630161 (including connections establishing) tps = 303.232242 (excluding connections establishing) statement latencies in milliseconds and failures: 1.636 0 BEGIN; 1.497 0 \setshell var mkdir my_directory && echo 1 0.007 0 \sleep 1 us 1.465 0 \setshell var rmdir my_directory && echo 1 1.622 0 END; starting vacuum...end. 
mkdir: cannot create directory ‘my_directory’: File exists mkdir: could not read result of shell command client 1 got an error in command 1 (setshell) of script 0; execution of meta-command failed transaction type: pgbench_meta_concurrent_error.sql scaling factor: 1 query mode: simple number of clients: 2 number of threads: 1 number of transactions per client: 10 number of transactions actually processed: 19/20 number of failures: 1 (5.000%) number of meta-command failures: 1 (5.000%) maximum number of tries: 1 latency average = 11.782 ms (including failures) tps = 161.269033 (including connections establishing) tps = 167.733278 (excluding connections establishing) statement latencies in milliseconds and failures: 2.731 0 BEGIN; 2.909 1 \setshell var mkdir my_directory && echo 1 0.231 0 \sleep 1 us 2.366 0 \setshell var rmdir my_directory && echo 1 2.664 0 END; Or if you use untrusted procedural languages in SQL expressions (see the used file in the attachments): starting vacuum...ERROR: relation "pgbench_branches" does not exist (ignoring this error and continuing anyway) ERROR: relation "pgbench_tellers" does not exist (ignoring this error and continuing anyway) ERROR: relation "pgbench_history" does not exist (ignoring this error and continuing anyway) end. client 1 got an error in command 0 (SQL) of script 0; ERROR: could not create the directory "my_directory": File exists at line 3. CONTEXT: PL/Perl anonymous code block client 1 got an error in command 0 (SQL) of script 0; ERROR: could not create the directory "my_directory": File exists at line 3. 
CONTEXT: PL/Perl anonymous code block transaction type: pgbench_concurrent_error.sql scaling factor: 1 query mode: simple number of clients: 2 number of threads: 1 number of transactions per client: 10 number of transactions actually processed: 18/20 number of failures: 2 (10.000%) number of serialization failures: 0 (0.000%) number of deadlock failures: 0 (0.000%) number of other SQL failures: 2 (10.000%) maximum number of tries: 1 latency average = 3.282 ms (including failures) tps = 548.437196 (including connections establishing) tps = 637.662753 (excluding connections establishing) statement latencies in milliseconds and failures: 1.566 2 DO $$ starting vacuum...ERROR: relation "pgbench_branches" does not exist (ignoring this error and continuing anyway) ERROR: relation "pgbench_tellers" does not exist (ignoring this error and continuing anyway) ERROR: relation "pgbench_history" does not exist (ignoring this error and continuing anyway) end. transaction type: pgbench_concurrent_error.sql scaling factor: 1 query mode: simple number of clients: 2 number of threads: 1 number of transactions per client: 10 number of transactions actually processed: 20/20 maximum number of tries: 1 latency average = 2.760 ms tps = 724.746078 (including connections establishing) tps = 853.131985 (excluding connections establishing) statement latencies in milliseconds and failures: 1.893 0 DO $$ Or if you try to create a function and perhaps replace an existing one: starting vacuum...end. client 0 go
Re: simplify index tuple descriptor initialization
On Mon, Aug 27, 2018 at 04:25:28PM +0200, Peter Eisentraut wrote: > Whenever some pg_attribute field is added or changed, a lot of > repetitive changes all over the code are necessary. Here is a small > change to remove one such place. It looks like a reasonable change to me. The code is good and regression tests passed. There is no need to update the documentation. Marked as Ready for Committer. -- Arthur Zakirov Postgres Professional: http://www.postgrespro.com Russian Postgres Company
Re: [HACKERS] Cutting initdb's runtime (Perl question embedded)
Michael Paquier writes: > One thing that I can see changing with this patch is how timezone is set > in postgresql.conf. For example, on HEAD I get 'Japan' while this patch > gives back 'Asia/Tokyo'. Could it be an issue for countries with > multiple timezones? I am not sure how Russian systems would react on > that for example. Interesting --- what platform were you testing on? I believe that this patch will never make for any functional change, it will only give you some other alias for the zone it would have selected anyway. This could only fail to be true if there are distinct timezones that score_timezone() is failing to tell apart, which would be a bug in score_timezone, not this patch. (Presumably, we could fix any such bug by increasing the number of dates that score_timezone tests.) Since the tzdb database is rather full of aliases, for instance the one you mentioned $ grep ^Li src/timezone/data/tzdata.zi ... Li Asia/Tokyo Japan ... there is certainly plenty of opportunity for this to change the apparent value of TimeZone. But I think it's for the better: instead of choosing an alias that happens to be first according to some unspecified search order, it will choose the alias that somebody actually configured the operating system with. regards, tom lane
Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Hello Marina, You can get other errors that cannot happen for only one client if you use shell commands in meta commands: Or if you use untrusted procedural languages in SQL expressions (see the used file in the attachments): Or if you try to create a function and perhaps replace an existing one: Sure. Indeed there can be shell errors, perl errors, create function conflicts... I do not understand what is your point wrt these. I'm mostly saying that your patch should focus on implementing the retry feature when appropriate, and avoid changing the behavior (error displayed, abort or not) on features unrelated to serialization & deadlock errors. Maybe there are inconsistencies, and "bug"/"feature" worth fixing, but if so that should be a separate patch, if possible, and if these are bugs they could be backpatched. For now I'm still convinced that pgbench should keep on aborting on "\set" or SQL syntax errors, and show clear error messages on these, and your examples have not changed my mind on that point. I'm fine with renaming the field if it makes things clearer. They are all counters, so naming them "cnt" or "total_cnt" does not help much. Maybe "succeeded" or "success" to show what is really counted? Perhaps renaming of StatsData.cnt is better than just adding a comment to this field. But IMO we have the same problem (They are all counters, so naming them "cnt" or "total_cnt" does not help much.) for CState.cnt which cannot be named in the same way because it also includes skipped and failed transactions. Hmmm. CState's cnt seems only used to implement -t anyway? I'm okay if it has a different name, esp if it has a different semantics. I think I was arguing only about cnt in StatsData. -- Fabien.
Consistent segfault in complex query
Hello, We encountered a query that has been able to frequently segfault one of our postgres instances under certain conditions which we have not fully been able to isolate for reproduction. We were able to get a core dump out of one of the crashes and have poked at it, but we believe the answer is beyond our knowledge of postgres internals. This is on a 9.3.19 server and we saw no mention of a fix in the release notes since this version and we do not know if it affects later major releases as well. What we know so far is that somehow the arraycontains function was given a datum of 0 as the second argument that dereferenced to a null pointer. Our current hypothesis from poking at the core dump is that some memory context is getting freed before it should. This assumption comes from the complexity in the query (CTE containing params being repeatedly evaluated by multiple case statements) and the unpredictability of the failure case. The issue is easily avoidable, and we have asked the developer to solve their problem differently. However, the existence of a segfault is always concerning and we are reporting this issue in an effort to be conscientious members of the community. Due to the potentially sensitive contents we cannot provide the core directly, but we are happy to run commands against the core file to extract debugging information. We have also replaced certain values (database name, table name, column name) with generic identifiers.
version
PostgreSQL 9.3.19 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18), 64-bit
(1 row)

Query:

    WITH tmp(foo2, bar2, baz2, minutes2) AS (
        SELECT ARRAY[$1 ::text], ARRAY[$2 ::text], ARRAY[$3 ::text], $4 ::double precision
    )
    UPDATE table_name
       SET occurrences = CASE
               WHEN time_last_noticed >= current_timestamp - (SELECT minutes2 FROM tmp) * interval '1 minutes'
               THEN occurrences + 1
               ELSE 1
           END,
           foo = CASE
               WHEN time_last_noticed < current_timestamp - (SELECT minutes2 FROM tmp) * interval '1 minutes'
               THEN (SELECT foo2 FROM tmp)
               WHEN foo @> (SELECT foo2 FROM tmp) THEN foo
               ELSE array_cat(foo, (SELECT foo2 FROM tmp))
           END,
           bar = CASE
               WHEN time_last_noticed < current_timestamp - (SELECT minutes2 FROM tmp) * interval '1 minutes'
               THEN (SELECT bar2 FROM tmp)
               WHEN bar @> (SELECT bar2 FROM tmp) THEN bar
               ELSE array_cat(bar, (SELECT bar2 FROM tmp))
           END,
           baz = CASE
               WHEN time_last_noticed < current_timestamp - (SELECT minutes2 FROM tmp) * interval '1 minutes'
               THEN (SELECT baz2 FROM tmp)
               WHEN baz @> (SELECT baz2 FROM tmp) THEN baz
               ELSE array_cat(baz, (SELECT baz2 FROM tmp))
           END,
           time_last_noticed = current_timestamp,
           total_occurrences = total_occurrences + 1
     WHERE id1 = $5 AND id2 = $6;

Table schema:

                              Table "public.table_name"
          Column       |            Type             | Modifiers | Storage  | Stats target | Description
    -------------------+-----------------------------+-----------+----------+--------------+-------------
     id1               | text                        | not null  | extended |              |
     id2               | text                        | not null  | extended |              |
     occurrences       | integer                     | not null  | plain    |              |
     time_last_noticed | timestamp without time zone | not null  | plain    |              |
     total_occurrences | integer                     | not null  | plain    |              |
     bar               | text[]                      |           | extended |              |
     baz               | text[]                      |           | extended |              |
     foo               | text[]                      |           | extended |              |
    Indexes:
        "table_name_pkey" PRIMARY KEY, btree (id1, id2)

Back trace from the dump:

    (gdb) bt
    #0  pg_detoast_datum (datum=0x0) at fmgr.c:2241
    #1  0x0067cd90 in arraycontains (fcinfo=0x2c56e30) at arrayfuncs.c:3841
    #2  0x0058ba25 in ExecMakeFunctionResultNoSets (fcache=0x2c56dc0, econtext=0x2c500c0, isNull=0x7ffc571d9a3f "", isDone=<optimized out>)
        at execQual.c:2027
    #3  0x00587375 in ExecEvalCase (caseExpr=0x2c54a30, econtext=0x2c500c0, isNull=0x2c60967 "", isDone=0x2c60c4c) at execQual.c:2985
    #4  0x005878c3 in ExecTargetList (p
Re: [Patch] Create a new session in postmaster by calling setsid()
Michael Paquier writes: > On Mon, Aug 06, 2018 at 12:11:26PM +0800, Paul Guo wrote: >> Yes, if considering the case of starting postmaster manually, we can not >> create >> a new session in postmaster, so pg_ctl seems to be a good place for setsid() >> call. Attached a newer patch. Thanks. > Hmm. This patch breaks a feature of pg_ctl that I am really fond of for > development. When starting a node which enters in recovery, I sometimes > use Ctrl-C to stop pg_ctl, which automatically makes the started > Postgres instance to stop, and this saves more strokes. With your > patch, you don't get that anymore: when issuing Ctrl-C on pg_ctl then > the started instance still runs in the background. I would be ready to > accept a patch which does not change the default behavior, and makes the > daemonization behavior activated only if an option switch is given by > the user, like -d/--daemon. So I am -1 for what is proposed in its > current shape. Hmm, that seems like a pretty niche usage. I don't object to having a switch to control this, but it seems to me that dissociating from the terminal is by far the more commonly wanted behavior and so ought to be the default. regards, tom lane
Re: Consistent segfault in complex query
> "Kyle" == Kyle Samson writes: Kyle> Hello, Kyle> We encountered a query that has been able to frequently segfault Kyle> one of our postgres instances under certain conditions which we Kyle> have not fully been able to isolate for reproduction. We were Kyle> able to get a core dump out of one of the crashes and have poked Kyle> at it, but we believe the answer is beyond our knowledge of Kyle> postgres internals. This is on a 9.3.19 server and we saw no Kyle> mention of a fix in the release notes since this version and we Kyle> do not know if it affects later major releases as well. There's a relevant commit from Feb this year (ea6d67cf8) specifically referring to the case of CTEs inside subplans inside EvalPlanQual, which is exactly the scenario you have in your query. So you need to try this in 9.3.22 or later (ideally 9.3.24, the latest) which contain this fix. This is the relevant release note: * Fix misbehavior of concurrent-update rechecks with CTE references appearing in subplans (Tom Lane) If a CTE (WITH clause reference) is used in an InitPlan or SubPlan, and the query requires a recheck due to trying to update or lock a concurrently-updated row, incorrect results could be obtained. If this is indeed the problem, you may be able to narrow down the required conditions more tightly: the problem will occur only if the row to be updated was concurrently updated by another transaction. This shouldn't be too hard to arrange - update a row in another transaction but don't commit it yet, run the failing update statement such that it will update that same row (it will block), then commit the first update. -- Andrew (irc:RhodiumToad)
Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
On 12-09-2018 17:04, Fabien COELHO wrote: Hello Marina, You can get other errors that cannot happen for only one client if you use shell commands in meta commands: Or if you use untrusted procedural languages in SQL expressions (see the used file in the attachments): Or if you try to create a function and perhaps replace an existing one: Sure. Indeed there can be shell errors, perl errors, create function conflicts... I do not understand what is your point wrt these. I'm mostly saying that your patch should focus on implementing the retry feature when appropriate, and avoid changing the behavior (error displayed, abort or not) on features unrelated to serialization & deadlock errors. Maybe there are inconsistencies, and "bug"/"feature" worth fixing, but if so that should be a separate patch, if possible, and if these are bugs they could be backpatched. For now I'm still convinced that pgbench should keep on aborting on "\set" or SQL syntax errors, and show clear error messages on these, and your examples have not changed my mind on that point. I'm fine with renaming the field if it makes things clearer. They are all counters, so naming them "cnt" or "total_cnt" does not help much. Maybe "succeeded" or "success" to show what is really counted? Perhaps renaming of StatsData.cnt is better than just adding a comment to this field. But IMO we have the same problem (They are all counters, so naming them "cnt" or "total_cnt" does not help much.) for CState.cnt which cannot be named in the same way because it also includes skipped and failed transactions. Hmmm. CState's cnt seems only used to implement -t anyway? I'm okay if it has a different name, esp if it has a different semantics. Ok! I think I was arguing only about cnt in StatsData. The discussion about this has become entangled from the beginning, because as I wrote in [1] at first I misread your original proposal...
[1] https://www.postgresql.org/message-id/d318cdee8f96de6b1caf2ce684ffe4db%40postgrespro.ru -- Marina Polyakova Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: executor relation handling
On Wed, Sep 12, 2018 at 9:23 PM, Jesper Pedersen wrote: > Hi Amit, > > On 9/12/18 1:23 AM, Amit Langote wrote: >> >> Please find attached revised patches. >> > > After applying 0004 I'm getting a crash in 'eval-plan-qual' during > check-world using > > export CFLAGS="-DCOPY_PARSE_PLAN_TREES -O0 -fno-omit-frame-pointer" && > ./configure --enable-dtrace --with-openssl --with-gssapi --with-libxml > --with-llvm --enable-debug --enable-depend --enable-tap-tests > --enable-cassert > > Confirmed by CFBot in [1]. > > [1] https://travis-ci.org/postgresql-cfbot/postgresql/builds/427530296 Thanks Jesper. Will look into it first thing tomorrow morning. Thanks, Amit
Re: Consistent segfault in complex query
Andrew Gierth writes: > "Kyle" == Kyle Samson writes: > Kyle> We encountered a query that has been able to frequently segfault > Kyle> one of our postgres instances under certain conditions which we > Kyle> have not fully been able to isolate for reproduction. We were > Kyle> able to get a core dump out of one of the crashes and have poked > Kyle> at it, but we believe the answer is beyond our knowledge of > Kyle> postgres internals. This is on a 9.3.19 server and we saw no > Kyle> mention of a fix in the release notes since this version and we > Kyle> do not know if it affects later major releases as well. > There's a relevant commit from Feb this year (ea6d67cf8) specifically > referring to the case of CTEs inside subplans inside EvalPlanQual, which > is exactly the scenario you have in your query. So you need to try this > in 9.3.22 or later (ideally 9.3.24, the latest) which contain this fix. I'm not entirely convinced that that fix will cure this, but certainly it seems related, and we should find out whether it has any effect. The reason this seems possibly different is that we're apparently returning wrong data out of the sub-select (a zero Datum value, but not marked isnull --- if it were, arraycontains wouldn't be reached). The previously fixed bug would have caused either multiple or missed returns of a valid CTE tuple. > If this is indeed the problem, you may be able to narrow down the > required conditions more tightly: the problem will occur only if the row > to be updated was concurrently updated by another transaction. Yeah, the presence of EvalPlanQual in the backtrace is sufficient to confirm that. It should be pretty easy to make a reproducible test case once you understand that prerequisite. regards, tom lane
Re: pg_dump test instability
Peter Eisentraut writes: > Some small comments on the code: > Maybe add a ready_list_free() to go with ready_list_init(), instead of > calling pg_free(ready_list.tes) directly. > get_next_work_item() has been changed to remove the work item from the > ready_list. Maybe rename to something like pop_next_work_item()? Both seem reasonable, will do. > I'm confused by what ready_list_remove() is doing when it's not removing > the first item. It looks like it's removing all leading items up to the > i'th one. Is that what we want? In some cases, we are skipping over > things that we are not interested at all, so this would work, but if > we're just skipping over an item because of a lock conflict, then it's > not right. No. In both code paths, the array slot at index first_te is being physically dropped from the set of valid entries (by incrementing first_te). In the first path, that slot holds the item we want to remove logically from the set, so that incrementing first_te is all we have to do: the remaining entries are still in the range first_te..last_te, and they're still sorted. In the second code path, the item that was in that slot is still wanted as part of the set, so we copy it into the valid range (overwriting the item in slot i, which is no longer wanted). Now the valid range is probably not sorted, so we have to flag that a re-sort is needed. I expect that most of the time the first code path will be taken, because usually we'll be able to dispatch the highest-priority ready entry. We'll only take the second path when we have to postpone the highest-priority entry because of a potential lock conflict against some already-running task. Any items between first_te and i are other tasks that also have lock conflicts and can't be dispatched yet; we certainly don't want to lose them, and this code doesn't. If you can suggest comments that would clarify this more, I'm all ears. regards, tom lane
Re: Consistent segfault in complex query
> "Tom" == Tom Lane writes: >> There's a relevant commit from Feb this year (ea6d67cf8) >> specifically referring to the case of CTEs inside subplans inside >> EvalPlanQual, which is exactly the scenario you have in your query. >> So you need to try this in 9.3.22 or later (ideally 9.3.24, the >> latest) which contain this fix. Tom> I'm not entirely convinced that that fix will cure this, but Tom> certainly it seems related, and we should find out whether it has Tom> any effect. I agree. Tom> The reason this seems possibly different is that we're apparently Tom> returning wrong data out of the sub-select (a zero Datum value, Tom> but not marked isnull --- if it were, arraycontains wouldn't be Tom> reached). The previously fixed bug would have caused either Tom> multiple or missed returns of a valid CTE tuple. I have some ideas as to why, and I'm poking at them in order to create a test case (no luck yet, but I'll keep at it). -- Andrew (irc:RhodiumToad)
Re: Consistent segfault in complex query
> "Andrew" == Andrew Gierth writes:

Tom> The reason this seems possibly different is that we're apparently
Tom> returning wrong data out of the sub-select (a zero Datum value,
Tom> but not marked isnull --- if it were, arraycontains wouldn't be
Tom> reached). The previously fixed bug would have caused either
Tom> multiple or missed returns of a valid CTE tuple.

Andrew> I have some ideas as to why, and I'm poking at them in order to
Andrew> create a test case (no luck yet, but I'll keep at it).

Bingo - I have a test case, which I'll post in a sec after testing it on other versions. The key in this case is that the EPQ is the _first_ time the InitPlan is executed - you need a construct like this:

    case when flag then foo
         when foo @> (select ... from cte) then foo
    end

such that flag is true on the initially visible row version (hence the initplan is not run yet), but false on the modified version (hence running the initplan during EPQ).

-- 
Andrew (irc:RhodiumToad)
Re: Getting ERROR: could not open file "base/13164/t3_16388" with partition table with ON COMMIT
Amit Langote writes:
> The infamous missing-relkind-check in heap_truncate() seems to be behind
> this. Perhaps, a patch like the attached will do?

That seems excessively restrictive. Anything that has storage (e.g. matviews) ought to be truncatable, no?

I thought we had a macro or utility function somewhere that knew which relkinds have storage, though I can't find it right now. I'd be inclined to instantiate that if it doesn't exist, and then the code here ought to read something like

    if (RelkindHasStorage(rel->rd_rel->relkind))
        heap_truncate_one_rel(rel);

Also, possibly the test ought to be inside heap_truncate_one_rel rather than its callers?

			regards, tom lane
Re: Consistent segfault in complex query
> "Andrew" == Andrew Gierth writes: > "Kyle" == Kyle Samson writes: Kyle> This is on a 9.3.19 server and we saw no Kyle> mention of a fix in the release notes since this version and we Kyle> do not know if it affects later major releases as well. Andrew> There's a relevant commit from Feb this year (ea6d67cf8) Andrew> specifically referring to the case of CTEs inside subplans Andrew> inside EvalPlanQual, which is exactly the scenario you have in Andrew> your query. So you need to try this in 9.3.22 or later (ideally Andrew> 9.3.24, the latest) which contain this fix. Kyle, you can disregard this suggestion because I've now confirmed the bug still exists in 9.3.24 (actually in REL9_3_STABLE head). -- Andrew (irc:RhodiumToad)
Re: Consistent segfault in complex query
> "Andrew" == Andrew Gierth writes:

Andrew> Bingo - I have a test case, which I'll post in a sec after
Andrew> testing it on other versions.

OK, not only does it break in latest 9.3 stable, it also breaks in current master. This is the testcase:

    create table mytable (id integer, foo text[] default '{}', flag boolean default false);
    insert into mytable select generate_series(1,10);

now in session B do:

    begin;
    update mytable set foo='{baz}', flag=true where id=6;
    -- leave transaction open

and in session A:

    with tmp(f2) as (select array['foo'])
    update mytable
       set foo = case when not flag then foo
                      when foo @> (select f2 from tmp) then foo
                      else foo || (select f2 from tmp)
                 end
     where id=6;  -- hangs on row lock

Then commit in session B, and watch A go down in flames. Going to see if this can be narrowed down further.

-- 
Andrew (irc:RhodiumToad)
Re: Consistent segfault in complex query
> "Andrew" == Andrew Gierth writes:

Andrew> Going to see if this can be narrowed down further.

Simpler testcase, removing the CTE, so this is clearly just about InitPlan:

    create table mytable (flag boolean default false, foo integer);
    insert into mytable default values;

session B:

    begin;
    update mytable set flag = true;

session A:

    update mytable set foo = case when not flag then foo
                                  else length((select 'foo'))
                             end;

commit in B and watch A die in:

    #1  0x00b001bd in text_length (str=0) at varlena.c:647

-- 
Andrew (irc:RhodiumToad)
Re: [HACKERS] Exclude schema during pg_restore
Hi,

On Tuesday, 20.09.2016 at 20:59 -0400, Peter Eisentraut wrote:
> On 9/19/16 3:23 PM, Michael Banck wrote:
> > Version 2 attached.
>
> Committed, thanks.
>
> I added the new option to the help output in pg_restore.

I noticed this part of the help text does not mention `-N' when I think it should:

|The options -I, -n, -P, -t, -T, and --section can be combined and specified
|multiple times to select multiple objects.

Patch attached.


Michael

-- 
Michael Banck
Projektleiter / Senior Berater
Tel.: +49 2166 9901-171
Fax:  +49 2166 9901-100
Email: michael.ba...@credativ.de

credativ GmbH, HRB Mönchengladbach 12080
USt-ID-Nummer: DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer

Unser Umgang mit personenbezogenen Daten unterliegt folgenden Bestimmungen: https://www.credativ.de/datenschutz

diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 501d7cea72..34d93ab472 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -506,7 +506,7 @@ usage(const char *progname)
 	printf(_("  --role=ROLENAME          do SET ROLE before restore\n"));
 
 	printf(_("\n"
-			 "The options -I, -n, -P, -t, -T, and --section can be combined and specified\n"
+			 "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
 			 "multiple times to select multiple objects.\n"));
 	printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
 	printf(_("Report bugs to .\n"));
Re: Consistent segfault in complex query
> "Andrew" == Andrew Gierth writes: Andrew> Simpler testcase, removing the CTE, so this is clearly just Andrew> about InitPlan: So I can see exactly where the problem is, but I'm not sure what the solution should be. EvalPlanQualStart copies the param_exec value list explicitly _not_ including the execPlan link, which obviously isn't going to work if the value has not been computed yet. Should it be forcing the evaluation of initplans that haven't been run yet, or should the EPQ scan evaluate them itself from a copy of the plan, or does there need to be some way to share state? (having the InitPlan be run more than once might be a problem?) -- Andrew (irc:RhodiumToad)
Re: Postgres 11 release notes
Hi, On Fri, May 11, 2018 at 11:08:52AM -0400, Bruce Momjian wrote: > I have committed the first draft of the Postgres 11 release notes. I > will add more markup soon. You can view the most current version here: > > http://momjian.us/pgsql_docs/release-11.html The first item of section 'E.1.3.10. Server Applications' is mine and has this TODO (though maybe that should be better marked up?) text: |IS IT CLEAR FROM THE DOCS THAT THE REPLICATION SLOT IS NOT TEMPORARY? So I guess we need to decide on that before release. The current doc is AFAIK: | This option causes the replication slot specified by the | option --slot to be created before starting the | backup. In this case, an error is raised if the slot already exists. I guess it does not hurt to have something like "to be (permanently) created before starting", but what do others think? Is it clear enough? Michael
Re: Allowing printf("%m") only where it actually works
Michael Paquier writes: > I would have liked to look at this patch in details, but it failed to > apply. Could you rebase? Ah, yeah, the dlopen patch touched a couple of the same places. Rebase attached --- no substantive changes. regards, tom lane diff --git a/configure b/configure index dd77742..1aefc57 100755 *** a/configure --- b/configure *** esac *** 15602,15620 fi - ac_fn_c_check_func "$LINENO" "strerror" "ac_cv_func_strerror" - if test "x$ac_cv_func_strerror" = xyes; then : - $as_echo "#define HAVE_STRERROR 1" >>confdefs.h - - else - case " $LIBOBJS " in - *" strerror.$ac_objext "* ) ;; - *) LIBOBJS="$LIBOBJS strerror.$ac_objext" - ;; - esac - - fi - ac_fn_c_check_func "$LINENO" "strlcat" "ac_cv_func_strlcat" if test "x$ac_cv_func_strlcat" = xyes; then : $as_echo "#define HAVE_STRLCAT 1" >>confdefs.h --- 15602,15607 diff --git a/configure.in b/configure.in index 3ada48b..3a23913 100644 *** a/configure.in --- b/configure.in *** else *** 1660,1666 AC_CHECK_FUNCS([fpclass fp_class fp_class_d class], [break]) fi ! AC_REPLACE_FUNCS([crypt dlopen fls getopt getrusage inet_aton mkdtemp random rint srandom strerror strlcat strlcpy strnlen]) case $host_os in --- 1660,1666 AC_CHECK_FUNCS([fpclass fp_class fp_class_d class], [break]) fi ! 
AC_REPLACE_FUNCS([crypt dlopen fls getopt getrusage inet_aton mkdtemp random rint srandom strlcat strlcpy strnlen]) case $host_os in diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c index f4356fe..af35cfb 100644 *** a/src/backend/port/win32/socket.c --- b/src/backend/port/win32/socket.c *** pgwin32_select(int nfds, fd_set *readfds *** 690,728 memcpy(writefds, &outwritefds, sizeof(fd_set)); return nummatches; } - - - /* - * Return win32 error string, since strerror can't - * handle winsock codes - */ - static char wserrbuf[256]; - const char * - pgwin32_socket_strerror(int err) - { - static HANDLE handleDLL = INVALID_HANDLE_VALUE; - - if (handleDLL == INVALID_HANDLE_VALUE) - { - handleDLL = LoadLibraryEx("netmsg.dll", NULL, DONT_RESOLVE_DLL_REFERENCES | LOAD_LIBRARY_AS_DATAFILE); - if (handleDLL == NULL) - ereport(FATAL, - (errmsg_internal("could not load netmsg.dll: error code %lu", GetLastError(; - } - - ZeroMemory(&wserrbuf, sizeof(wserrbuf)); - if (FormatMessage(FORMAT_MESSAGE_IGNORE_INSERTS | - FORMAT_MESSAGE_FROM_SYSTEM | - FORMAT_MESSAGE_FROM_HMODULE, - handleDLL, - err, - MAKELANGID(LANG_ENGLISH, SUBLANG_DEFAULT), - wserrbuf, - sizeof(wserrbuf) - 1, - NULL) == 0) - { - /* Failed to get id */ - sprintf(wserrbuf, "unrecognized winsock error %d", err); - } - return wserrbuf; - } --- 690,692 diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c index 16531f7..22e5d87 100644 *** a/src/backend/utils/error/elog.c --- b/src/backend/utils/error/elog.c *** static void send_message_to_server_log(E *** 178,185 static void write_pipe_chunks(char *data, int len, int dest); static void send_message_to_frontend(ErrorData *edata); static char *expand_fmt_string(const char *fmt, ErrorData *edata); - static const char *useful_strerror(int errnum); - static const char *get_errno_symbol(int errnum); static const char *error_severity(int elevel); static void append_with_tabs(StringInfo buf, const char *str); static bool 
is_log_level_output(int elevel, int log_min_level); --- 178,183 *** expand_fmt_string(const char *fmt, Error *** 3360,3366 */ const char *cp2; ! cp2 = useful_strerror(edata->saved_errno); for (; *cp2; cp2++) { if (*cp2 == '%') --- 3358,3364 */ const char *cp2; ! cp2 = strerror(edata->saved_errno); for (; *cp2; cp2++) { if (*cp2 == '%') *** expand_fmt_string(const char *fmt, Error *** 3384,3602 /* - * A slightly cleaned-up version of strerror() - */ - static const char * - useful_strerror(int errnum) - { - /* this buffer is only used if strerror() and get_errno_symbol() fail */ - static char errorstr_buf[48]; - const char *str; - - #ifdef WIN32 - /* Winsock error code range, per WinError.h */ - if (errnum >= 1 && errnum <= 11999) - return pgwin32_socket_strerror(errnum); - #endif - str = strerror(errnum); - - /* - * Some strerror()s return an empty string for out-of-range errno. This - * is ANSI C spec compliant, but not exactly useful. Also, we may get - * back strings of question marks if libc cannot transcode the message to - * the codeset specified by LC_CTYPE. If we get nothing useful, first try - * get_errno_symbol(), and if that fails, print the numeric errno. - */ - if (str == NULL || *str == '\0' || *str == '?') - str = get_errno_symbol(errnum); - - if (str == NULL) -
Re: Performance improvements for src/port/snprintf.c
Alexander Kuzmenkov writes: > I benchmarked this, using your testbed and comparing to libc sprintf > (Ubuntu GLIBC 2.27-0ubuntu3) and another implementation I know [1], all > compiled with gcc 5. Thanks for reviewing! The cfbot noticed that the recent dlopen patch conflicted with this in configure.in, so here's a rebased version. The code itself didn't change. regards, tom lane diff --git a/configure b/configure index dd77742..5fa9396 100755 *** a/configure --- b/configure *** fi *** 15060,15066 LIBS_including_readline="$LIBS" LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'` ! for ac_func in cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l do : as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh` ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var" --- 15060,15066 LIBS_including_readline="$LIBS" LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'` ! for ac_func in cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open strchrnul symlink sync_file_range utime utimes wcstombs_l do : as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh` ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var" diff --git a/configure.in b/configure.in index 3ada48b..93e8556 100644 *** a/configure.in --- b/configure.in *** PGAC_FUNC_WCSTOMBS_L *** 1544,1550 LIBS_including_readline="$LIBS" LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'` ! 
AC_CHECK_FUNCS([cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l]) AC_REPLACE_FUNCS(fseeko) case $host_os in --- 1544,1550 LIBS_including_readline="$LIBS" LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'` ! AC_CHECK_FUNCS([cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open strchrnul symlink sync_file_range utime utimes wcstombs_l]) AC_REPLACE_FUNCS(fseeko) case $host_os in diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in index 4094e22..752a547 100644 *** a/src/include/pg_config.h.in --- b/src/include/pg_config.h.in *** *** 531,536 --- 531,539 /* Define to 1 if you have the header file. */ #undef HAVE_STDLIB_H + /* Define to 1 if you have the `strchrnul' function. */ + #undef HAVE_STRCHRNUL + /* Define to 1 if you have the `strerror' function. */ #undef HAVE_STRERROR diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32 index 6618b43..ea72c44 100644 *** a/src/include/pg_config.h.win32 --- b/src/include/pg_config.h.win32 *** *** 402,407 --- 402,410 /* Define to 1 if you have the header file. */ #define HAVE_STDLIB_H 1 + /* Define to 1 if you have the `strchrnul' function. */ + /* #undef HAVE_STRCHRNUL */ + /* Define to 1 if you have the `strerror' function. */ #ifndef HAVE_STRERROR #define HAVE_STRERROR 1 diff --git a/src/port/snprintf.c b/src/port/snprintf.c index 851e2ae..66151c2 100644 *** a/src/port/snprintf.c --- b/src/port/snprintf.c *** flushbuffer(PrintfTarget *target) *** 295,301 } ! 
static void fmtstr(char *value, int leftjust, int minlen, int maxwidth, int pointflag, PrintfTarget *target); static void fmtptr(void *value, PrintfTarget *target); static void fmtint(int64 value, char type, int forcesign, --- 295,303 } ! static bool find_arguments(const char *format, va_list args, ! PrintfArgValue *argvalues); ! static void fmtstr(const char *value, int leftjust, int minlen, int maxwidth, int pointflag, PrintfTarget *target); static void fmtptr(void *value, PrintfTarget *target); static void fmtint(int64 value, char type, int forcesign, *** static void fmtfloat(double value, char *** 307,317 PrintfTarget *target); static void dostr(const char *str, int slen, PrintfTarget *target); static void dopr_outch(int c, PrintfTarget *target); static int adjust_sign(int is_negative, int forcesign, int *signvalue); ! static void adjust_padlen(int minlen, int vallen, int leftjust, int *padlen); ! static void leading_pad(int zpad, int *signvalue, int *padlen, PrintfTarget *target); ! static void trailing_pad(int *padlen, PrintfTarget *target); /* --- 309,320 PrintfTarget *target); static void dostr(const char *str, int slen, PrintfTarget *target); static voi
Re: [Patch] Create a new session in postmaster by calling setsid()
I wrote:
> Michael Paquier writes:
>> Hmm. This patch breaks a feature of pg_ctl that I am really fond of for
>> development. When starting a node which enters recovery, I sometimes
>> use Ctrl-C to stop pg_ctl, which automatically makes the started
>> Postgres instance stop, and this saves a few keystrokes. With your
>> patch, you don't get that anymore: when issuing Ctrl-C on pg_ctl, the
>> started instance still runs in the background. I would be ready to
>> accept a patch which does not change the default behavior, and makes the
>> daemonization behavior activated only if an option switch is given by
>> the user, like -d/--daemon. So I am -1 for what is proposed in its
>> current shape.

> Hmm, that seems like a pretty niche usage. I don't object to having
> a switch to control this, but it seems to me that dissociating from
> the terminal is by far the more commonly wanted behavior and so
> ought to be the default.

BTW, just thinking outside the box a bit --- perhaps the ideal behavior to address Michael's use-case would be to have the postmaster itself do setsid(), but not until it reaches the state of being ready to accept client connections.

We'd likely need a switch to control that. If memory serves, there used to be such a switch, but we got rid of the postmaster's setsid call and the switch too. We probably should dig in the archives and review the reasoning about that.

I'm still of the opinion that dissociating from the terminal ought to be the default. On at least some platforms, that happens automatically because the postmaster's stdin, stdout, and stderr have been redirected away from the terminal. If we don't do it on platforms where setsid() is necessary, then we have a cross-platform behavioral difference, which generally doesn't seem like a good thing. regards, tom lane
Re: [PATCH][PROPOSAL] Add enum releation option type
In a message of 10 September 2018 18:02:10, Aleksandr Parfenov wrote:
> I took a quick look at your patch and have some questions/points to
> discuss. I like the idea of the patch and think that enum reloptions
> can be useful, especially for some frequently checked values, as was
> mentioned before.

Thanks.

> There are a few typos in comments, like 'validateing'.

Yeah. That's my fault. I've rechecked them with a spellchecker and found two typos. If there are more, please point them out to me.

> I have two questions about naming of variables/structures:
>
> 1) The variable opt_enum in parse_one_reloption is named differently
> from other similar variables (without an underscore).

I've renamed it. If it confuses you, it may confuse others; no reason to confuse anybody.

> 2) enum
> gist_option_buffering_numeric_values/gist_option_buffering_value_numbers.
> Firstly, it has two names.

My mistake. Fixed it.

> Secondly, can we name it gist_option_buffering, why not?

From my point of view, it is not the "Gist Buffering Option" itself. It is only the part of the C code that actually creates the "Gist Buffering Option". This enum defines the numeric values for the Gist buffering enum option. So the logic of the name is the following: (((Gist options)->Buffering)->Numeric Values). Maybe "Numeric Values" is not the best name, but this type should be named gist_option_buffering_[something]. If you have a better idea of what this "something" could be, I will follow your recommendations...

> As you mentioned in a previous mail, you prefer to keep the enum and the
> relopt_enum_element_definition array in the same .h file. I'm not sure,
> but I think it is done to keep everything related to the enum in one
> place to avoid inconsistency in case of changes in some part (e.g. a
> change of the enum without a change of the array). On the other hand,
> the array content is created without the array itself in the .h file.
> Can we move the actual array creation into the same .h file?
> What is the point of separating the array content definition from the
> array definition?

Hm... This would be good. Can we really do it? ;-) I am not a C expert; some aspects of C development are still mysterious to me. So if it is really OK to create the array in a .h file, I would be happy to move it there. (This patch does not include this change; I still want to be sure we can do it.)

-- Do code for fun.

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index db84da0..1801ebf 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -422,7 +422,11 @@ static relopt_real realRelOpts[]
 	{{NULL}}
 };
 
-static relopt_string stringRelOpts[]
+static relopt_enum_element_definition gist_buffering_enum_def[]
+	GIST_OPTION_BUFFERING_ENUM_DEF;
+static relopt_enum_element_definition view_check_option_enum_def[]
+	VIEW_OPTION_CHECK_OPTION_ENUM_DEF;
+static relopt_enum enumRelOpts[]
 {
 	{
 		{
@@ -431,10 +435,8 @@ static relopt_string stringRelOpts[]
 			RELOPT_KIND_GIST,
 			AccessExclusiveLock
 		},
-		4,
-		false,
-		gistValidateBufferingOption,
-		"auto"
+		gist_buffering_enum_def,
+		GIST_OPTION_BUFFERING_AUTO
 	},
 	{
 		{
@@ -443,11 +445,14 @@ static relopt_string stringRelOpts[]
 			RELOPT_KIND_VIEW,
 			AccessExclusiveLock
 		},
-		0,
-		true,
-		validateWithCheckOption,
-		NULL
+		view_check_option_enum_def,
+		VIEW_OPTION_CHECK_OPTION_NOT_SET
 	},
+	{{NULL}}
+};
+
+static relopt_string stringRelOpts[]
+{
 	/* list terminator */
 	{{NULL}}
 };
@@ -494,6 +499,12 @@ initialize_reloptions(void)
 										   realRelOpts[i].gen.lockmode));
 		j++;
 	}
+	for (i = 0; enumRelOpts[i].gen.name; i++)
+	{
+		Assert(DoLockModesConflict(enumRelOpts[i].gen.lockmode,
+								   enumRelOpts[i].gen.lockmode));
+		j++;
+	}
 	for (i = 0; stringRelOpts[i].gen.name; i++)
 	{
 		Assert(DoLockModesConflict(stringRelOpts[i].gen.lockmode,
@@ -532,6 +543,14 @@ initialize_reloptions(void)
 		j++;
 	}
 
+	for (i = 0; enumRelOpts[i].gen.name; i++)
+	{
+		relOpts[j] = &enumRelOpts[i].gen;
+		relOpts[j]->type = RELOPT_TYPE_ENUM;
+		relOpts[j]->namelen = strlen(relOpts[j]->name);
+		j++;
+	}
+
 	for (i = 0; stringRelOpts[i].gen.name; i++)
 	{
 		relOpts[j] = &stringRelOpts[i].gen;
@@ -629,6 +648,9 @@ allocate_reloption(bits32 kinds, int type, const char *name, const char *desc)
 		case RELOPT_TYPE_REAL:
 			size = sizeof(relopt_real);
 			break;
+		case RELOPT_TYPE_ENUM:
+			size = sizeof(relopt_enum);
+			break;
 		case RELOPT_TYPE_STRING:
 			size = sizeof(relopt_string);
 			break;
@@ -708,6 +730,24 @@ add_real_reloption(bits32 kinds, const char *name, const char *desc, double defa
 }
 
 /*
+ * add_enum_reloption
+ *		Add a new enum reloption
+ */
+void
+add_enum_reloption(bits32 kinds, const char *name, const char *desc,
+				   relopt_enum_element_definition *enum_def, int default_val)
+{
+	relopt_enum *newoption;
+
+	newoption = (relopt_enum *) allocate_reloption(kinds, RELOPT_TYPE_ENUM,
+												   name, desc);
+	newoption->enum_def = enum_def;
+
Re: [Patch] Create a new session in postmaster by calling setsid()
> "Tom" == Tom Lane writes: Tom> BTW, just thinking outside the box a bit --- perhaps the ideal Tom> behavior to address Michael's use-case would be to have the Tom> postmaster itself do setsid(), but not until it reaches the state Tom> of being ready to accept client connections. Tom> We'd likely need a switch to control that. If memory serves, there Tom> used to be such a switch, but we got rid of the postmaster's Tom> setsid call and the switch too. We probably should dig in the Tom> archives and review the reasoning about that. The old silent_mode switch? The tricky part about doing setsid() is this: you're not allowed to do it if you're already a process group leader. silent_mode worked by having postmaster do another fork, exit in the parent, and do setsid() in the child. If postmaster is launched from pg_ctl, then it won't be a group leader unless pg_ctl made it one. But when it's run from the shell, it will be if the shell does job control (the initial command of each shell job is the group leader); if it's run from a service management process, then it'll depend on what that process does. -- Andrew (irc:RhodiumToad)
Re: adding tab completions
Arthur Zakirov writes:
> On Sun, Jul 29, 2018 at 07:42:43PM -0500, Justin Pryzby wrote:
>>> Actually..another thought: since toast tables may be VACUUM-ed, should I
>>> introduce Query_for_list_of_tpmt ?
>> I didn't include this one yet though.
> I think it could be done by a separate patch.

I don't actually think that's a good idea. It's more likely to clutter peoples' completion lists than offer anything they want. Even if someone actually does want to vacuum a toast table, they are not likely to be entering its name via tab completion; they're going to have identified which table they want via some query, and then they'll be doing something like copy-and-paste out of a query result.

I pushed the first three hunks of the current patch, since those seem like pretty uncontroversial bug fixes for v11 issues. Attached is a rebased patch for the remainder, with some very minor adjustments.

The main thing that is bothering me about the remainder is its desire to offer single-punctuation-character completions such as "(". I do not see the point of that. You can't select a completion without typing at least one character, so what does it accomplish to offer those options, except clutter?

BTW, the cfbot has been claiming that this CF item fails patch application, but that seems to be because it's not actually testing the most recent patch. I speculate that that's because you did not name the attachment "something.patch" or "something.diff". Please use a more conventional filename for future attachments. regards, tom lane

diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 7549b40..69e6632 100644
*** a/src/bin/psql/tab-complete.c
--- b/src/bin/psql/tab-complete.c
*************** psql_completion(const char *text, int st
*** 2802,2815 ****
  	else if (Matches1("EXECUTE"))
  		COMPLETE_WITH_QUERY(Query_for_list_of_prepared_statements);
  
! 	/* EXPLAIN */
! 
! 	/*
! 	 * Complete EXPLAIN [ANALYZE] [VERBOSE] with list of EXPLAIN-able commands
! 	 */
  	else if (Matches1("EXPLAIN"))
! 		COMPLETE_WITH_LIST7("SELECT", "INSERT", "DELETE", "UPDATE", "DECLARE",
! 							"ANALYZE", "VERBOSE");
  	else if (Matches2("EXPLAIN", "ANALYZE"))
  		COMPLETE_WITH_LIST6("SELECT", "INSERT", "DELETE", "UPDATE", "DECLARE",
  							"VERBOSE");
--- 2802,2834 ----
  	else if (Matches1("EXECUTE"))
  		COMPLETE_WITH_QUERY(Query_for_list_of_prepared_statements);
  
! 	/*
! 	 * EXPLAIN [ ( option [, ...] ) ] statement
! 	 * EXPLAIN [ ANALYZE ] [ VERBOSE ] statement
! 	 */
  	else if (Matches1("EXPLAIN"))
! 		COMPLETE_WITH_LIST8("SELECT", "INSERT", "DELETE", "UPDATE", "DECLARE",
! 							"ANALYZE", "VERBOSE", "(");
! 	else if (HeadMatches2("EXPLAIN", "("))
! 	{
! 		if (ends_with(prev_wd, '(') || ends_with(prev_wd, ','))
! 			COMPLETE_WITH_LIST7("ANALYZE", "VERBOSE", "COSTS", "BUFFERS",
! 								"TIMING", "SUMMARY", "FORMAT");
! 		else if (TailMatches1("FORMAT"))
! 			COMPLETE_WITH_LIST4("TEXT", "XML", "JSON", "YAML");
! 		else if (TailMatches1("ANALYZE|VERBOSE|COSTS|BUFFERS|TIMING|SUMMARY"))
! 			COMPLETE_WITH_LIST4(",", ")", "ON", "OFF");
! 		else
! 			COMPLETE_WITH_LIST2(",", ")");
! 	}
! 	else if (Matches2("EXPLAIN", MatchAny) && ends_with(prev_wd, ')'))
! 	{
! 		/*
! 		 * get_previous_words treats a parenthesized option list as one word,
! 		 * so the above test is correct.
! 		 */
! 		COMPLETE_WITH_LIST5("SELECT", "INSERT", "DELETE", "UPDATE", "DECLARE");
! 	}
  	else if (Matches2("EXPLAIN", "ANALYZE"))
  		COMPLETE_WITH_LIST6("SELECT", "INSERT", "DELETE", "UPDATE", "DECLARE",
  							"VERBOSE");
*************** psql_completion(const char *text, int st
*** 3369,3401 ****
  		COMPLETE_WITH_CONST("OPTIONS");
  
  	/*
! 	 * VACUUM [ FULL | FREEZE ] [ VERBOSE ] [ table ]
! 	 * VACUUM [ FULL | FREEZE ] [ VERBOSE ] ANALYZE [ table [ (column [, ...] ) ] ]
  	 */
  	else if (Matches1("VACUUM"))
! 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tm,
  								   " UNION SELECT 'FULL'"
  								   " UNION SELECT 'FREEZE'"
  								   " UNION SELECT 'ANALYZE'"
  								   " UNION SELECT 'VERBOSE'");
! 	else if (Matches2("VACUUM", "FULL|FREEZE"))
! 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tm,
  								   " UNION SELECT 'ANALYZE'"
  								   " UNION SELECT 'VERBOSE'");
! 	else if (Matches3("VACUUM", "FULL|FREEZE", "ANALYZE"))
! 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tm,
! 								   " UNION SELECT 'VERBOSE'");
! 	else if (Matches3("VACUUM", "FULL|FREEZE", "VERBOSE"))
! 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tm,
  								   " UNION SELECT 'ANALYZE'");
! 	else if (Matches2("VACUUM", "VERBOSE"))
! 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tm,
  								   " UNION SELECT 'ANALYZE'");
! 	else if (Matches2("VACUUM", "ANALYZE"))
! 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tm,
! 								   " UNION SELECT 'VERBOSE'");
  	else if (HeadMatches1("VACUUM"))
! 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tm, NULL);
  
  	/* WITH [RECURSIVE] */
--- 3388,3435 ----
  COM
Re: [Patch] Create a new session in postmaster by calling setsid()
Andrew Gierth writes: > "Tom" == Tom Lane writes: > Tom> We'd likely need a switch to control that. If memory serves, there > Tom> used to be such a switch, but we got rid of the postmaster's > Tom> setsid call and the switch too. We probably should dig in the > Tom> archives and review the reasoning about that. > The tricky part about doing setsid() is this: you're not allowed to do > it if you're already a process group leader. silent_mode worked by > having postmaster do another fork, exit in the parent, and do setsid() > in the child. Hmph. Can't we just ignore that error? regards, tom lane
Re: [Patch] Create a new session in postmaster by calling setsid()
> "Tom" == Tom Lane writes: >> The tricky part about doing setsid() is this: you're not allowed to >> do it if you're already a process group leader. silent_mode worked >> by having postmaster do another fork, exit in the parent, and do >> setsid() in the child. Tom> Hmph. Can't we just ignore that error? If you ignore the error from setsid(), then you're still a process group leader (as you would be after running setsid()), but you're still attached to whatever controlling terminal (if any) you were previously attached to. -- Andrew (irc:RhodiumToad)
Re: [Patch] Create a new session in postmaster by calling setsid()
Andrew Gierth writes: > "Tom" == Tom Lane writes: > Tom> Hmph. Can't we just ignore that error? > If you ignore the error from setsid(), then you're still a process group > leader (as you would be after running setsid()), but you're still > attached to whatever controlling terminal (if any) you were previously > attached to. Oh, got it. So actually, the setsid has to be done by what is/will be the postmaster process. Although pg_ctl could sneak it in between forking and execing, it seems like it'd be cleaner to have the postmaster proper do it in response to a switch that pg_ctl passes it. That avoids depending on the fork/exec separation, and makes the functionality available for other postmaster startup mechanisms, and opens the possibility of delaying the detach to the end of startup. regards, tom lane
Re: PATCH: Update snowball stemmers
I see that the cfbot is having difficulty building this: make[2]: *** No rule to make target `stem_ISO_8859_2_hungarian.o', needed by `dict_snowball.so'. Stop. Did you miss including Makefile changes in the submitted patch? regards, tom lane
Re: Avoid extra Sort nodes between WindowAggs when sorting can be reused
So I'm looking to commit this, and here's my comments so far: WindowClauseSortNode - I don't like this name, because it's not actually a Node of any kind. How about WindowSortData? list_concat_unique(list_copy(x),y) is exactly list_union(x,y), which looks a bit nicer to me. re. this: for (; nActive > 0; nActive--) result = lcons(actives[nActive - 1].wc, result); Now that we're allowed to use C99, I think it looks better like this: for (int i = 0; i < nActive; i++) result = lappend(result, actives[i].wc); (Building lists in forward order by using a reversed construction and iterating backwards seems like an unnecessary double-negative.) I can add a comment about not needing to compare eqop (which is derived directly from sortop, so it can't differ unless sortop also does - provided sortop is actually present; if window partitions could be hashed, this would be a problem, but that doesn't strike me as very likely to happen). Any comments? (no need to post further patches unless there's some major change needed) -- Andrew (irc:RhodiumToad)
Re: Avoid extra Sort nodes between WindowAggs when sorting can be reused
> On 12 Sep 2018, at 22:15, Andrew Gierth wrote: > WindowClauseSortNode - I don't like this name, because it's not actually > a Node of any kind. How about WindowSortData? That’s a good point. I probably would’ve named it WindowClauseSortData since it acts on WindowClauses, but that might just be overly verbose. > Any comments? (no need to post further patches unless there's some major > change needed) I have no objections to the comments made in this review, only the above nitpick. Thanks for picking this up! cheers ./daniel
Re: Code of Conduct plan
I wrote: > Stephen Frost writes: >> We seem to be a bit past that timeline... Do we have any update on when >> this will be moving forward? >> Or did I miss something? > Nope, you didn't. Folks have been on holiday which made it hard to keep > forward progress going, particularly with respect to selecting the initial > committee members. Now that Magnus is back on shore, I hope we can > wrap it up quickly --- say by the end of August. I apologize for the glacial slowness with which this has all been moving. The core team has now agreed to some revisions to the draft CoC based on the comments in this thread; see https://wiki.postgresql.org/wiki/Code_of_Conduct (That's the updated text, but you can use the diff tool on the page history tab to see the changes from the previous draft.) I think we are about ready to announce the initial membership of the CoC committee, as well, but that should be a separate post. regards, tom lane
Re: [patch] Support LLVM 7
Hi, On 2018-09-12 14:45:17 +0200, Christoph Berg wrote: > LLVM 7 landed in Debian unstable, this patch teaches ./configure to use > it. (General patch, not specific to Debian.) Thanks. Yes, I think we should do that, especially because my patches to add proper debugging and profiling support only landed in LLVM 7. Therefore I'm planning to add this to both v11 and master. Unless somebody protests? Greetings, Andres Freund
Re: [patch] Support LLVM 7
Re: Andres Freund 2018-09-12 <20180912210338.h3vsss5lkuu26...@alap3.anarazel.de> > Hi, > > On 2018-09-12 14:45:17 +0200, Christoph Berg wrote: > > LLVM 7 landed in Debian unstable, this patch teaches ./configure to use > > it. (General patch, not specific to Debian.) > > Thanks. Yes, I think we should do that, especially because my patches > to add proper debugging and profiling support only landed in LLVM > 7. Therefore I'm planning to add this to both v11 and master. Unless > somebody protests? I plan to switch postgresql-11.deb to LLVM 7 over the next days because of the support for non-x86 architectures, so this should definitely land in 11. Christoph
Re: Avoid extra Sort nodes between WindowAggs when sorting can be reused
Andrew Gierth writes: > So I'm looking to commit this, and here's my comments so far: I took a quick look over this. I agree with your nitpicks, and have a couple more: * Please run it through pgindent. That will, at a minimum, remove some gratuitous whitespace changes in this patch. I think it'll also expose some places where you need to change the code layout to prevent pgindent from making it look ugly. Notably, this: actives[nActive].uniqueOrder = list_concat_unique( list_copy(wc->partitionClause), wc->orderClause); is not per project style for function call layout. Given the other comment about using list_union, I'd probably lay it out like this: actives[nActive].uniqueOrder = list_union(wc->partitionClause, wc->orderClause); * The initial comment in select_active_windows, /* First, make a list of the active windows */ is now seriously inadequate as a description of what the subsequent loop does; it needs to be expanded. I'd also say that it's not building a list anymore, but an array. Further, there needs to be an explanation of why what it's doing is correct at all --- list_union doesn't make many promises about the order of the resulting list (nor did the phraseology with list_concat_unique), but what we're doing below certainly requires that order to have something to do with the window semantics. * I'm almost thinking that changing to list_union is a bad idea, because that obscures the fact that we're relying on the relative order of elements in partitionClause and orderClause to not change; any future reimplementation of list_union would utterly break this code. I'm also a bit suspicious as to whether the code is even correct; does it *really* match what will happen later when we create sort plan nodes? (Maybe it's fine; I haven't looked.) * The original comments also made explicit that we were not considering framing options, and I'm not too happy that that disappeared. * It'd be better if common_prefix_cmp didn't cast away const. regards, tom lane
Re: PATCH: Update snowball stemmers
On Wed, Sep 12, 2018 at 04:03:57PM -0400, Tom Lane wrote:
> I see that the cfbot is having difficulty building this:
>
> make[2]: *** No rule to make target `stem_ISO_8859_2_hungarian.o', needed by
> `dict_snowball.so'. Stop.
>
> Did you miss including Makefile changes in the submitted patch?

Snowball's Makefile has changes related to the Hungarian stemmer:

- stem_ISO_8859_1_hungarian.o \
...
+ stem_ISO_8859_2_hungarian.o \

I noticed the error some time ago and tried to understand why cfbot can't build it. Somehow cfbot didn't rename stem_ISO_8859_1_hungarian; you can see it from the commit [1]. And there is no new file stem_ISO_8859_2_hungarian.c [2].

Another problem is with the header file [3]: cfbot created the new file stem_ISO_8859_2_hungarian.h, but it didn't delete the old stem_ISO_8859_1_hungarian.h.

On my laptop there is no such problem. I tried both "git apply" and "patch -p1", and I can build the patch. Maybe cfbot's "patch" doesn't understand the patch file. Maybe I have too recent a git (2.19.0)...

1 - https://github.com/postgresql-cfbot/postgresql/commit/efc280b89b181657afe5412f398681b2c393a35c#diff-efde70a147d16a83b9b132b7f396ab6d
2 - https://github.com/postgresql-cfbot/postgresql/tree/commitfest/19/1697/src/backend/snowball/libstemmer
3 - https://github.com/postgresql-cfbot/postgresql/tree/commitfest/19/1697/src/include/snowball/libstemmer

-- Arthur Zakirov Postgres Professional: http://www.postgrespro.com Russian Postgres Company
Re[3]: doc - improve description of default privileges
on 2018-08-30, Fabien Coelho wrote ... > ... Find v3 attached. ... Hi Fabien, As we're coming up on the end of this commitfest ... Is the reviewer supposed to move this to "ready for committer" or is the author supposed to do that? Is the reviewer supposed to explicitly state "I've looked at your v3 patch and have no further suggestions" (which is true) or is a lack of additional comments normally taken as acceptance? Cheers.
Re: PATCH: Update snowball stemmers
Arthur Zakirov writes:
> On Wed, Sep 12, 2018 at 04:03:57PM -0400, Tom Lane wrote:
>> Did you miss including Makefile changes in the submitted patch?
> I noticed the error some time ago and tried to understand why cfbot
> can't build it. Somehow cfbot didn't rename stem_ISO_8859_1_hungarian.
> You can see it from the commit [1]. And there is no new file
> stem_ISO_8859_2_hungarian.c [2].

Oh, patch(1) doesn't understand git's idea of "renaming" files, cf https://www.postgresql.org/message-id/CAEepm=2+CF3PshNRAs-r8jtPLKj0K6UEACeRSqBi5Cf74L=w...@mail.gmail.com

I'd suggest using "git diff --no-renames", since some of us will want to apply the patch using patch(1).

> On my laptop there is no such problem. I tried both "git apply" and
> "patch -p1", and I can build the patch.

Really? What version of patch was that? regards, tom lane
Re: Re[3]: doc - improve description of default privileges
"Bradley DeJong" writes: > Is the reviewer supposed to move this to "ready for committer" or is the > author supposed to do that? The reviewer does that, indicating signoff. regards, tom lane
Re: [HACKERS] Cutting initdb's runtime (Perl question embedded)
On Wed, Sep 12, 2018 at 10:00:21AM -0400, Tom Lane wrote:
> Michael Paquier writes:
>> One thing that I can see changing with this patch is how timezone is set
>> in postgresql.conf. For example, on HEAD I get 'Japan' while this patch
>> gives back 'Asia/Tokyo'. Could it be an issue for countries with
>> multiple timezones? I am not sure how Russian systems would react to
>> that, for example.
>
> Interesting --- what platform were you testing on?

Debian SID, with Asia/Tokyo used for /etc/localtime.

> I believe that this patch will never make for any functional change,
> it will only give you some other alias for the zone it would have
> selected anyway. This could only fail to be true if there are
> distinct timezones that score_timezone() is failing to tell apart,
> which would be a bug in score_timezone, not this patch. (Presumably,
> we could fix any such bug by increasing the number of dates that
> score_timezone tests.)

Looking at the list of aliases, I am not seeing listed countries running across multiple timezones, so that may be fine.. Anyway, your change, and particularly the reduction in initdb's runtime, which matters for TAP tests, is definitely a good thing in my opinion. So I am switching this as ready for committer.
-- Michael
Re: [HACKERS] Cutting initdb's runtime (Perl question embedded)
On 2018-Sep-13, Michael Paquier wrote:
> Looking at the list of aliases, I am not seeing listed countries running
> across multiple timezones, so that may be fine..

Well, mine is there, and it's correct:

Li America/Santiago	Chile/Continental
Li Pacific/Easter	Chile/EasterIsland

Laugh all you want about Chile of all countries having multiple timezones ...
-- Álvaro Herrera https://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] Cutting initdb's runtime (Perl question embedded)
Michael Paquier writes: > On Wed, Sep 12, 2018 at 10:00:21AM -0400, Tom Lane wrote: >> I believe that this patch will never make for any functional change, >> it will only give you some other alias for the zone it would have >> selected anyway. > Looking at the list of aliases, I am not seeing listed countries running > across multiple timezones, so that may be fine. Not sure what you're worried about. "Linked" time zones are the *same data*. In an installed tzdb tree, the Japan file is either a hardlink or symlink to the Asia/Tokyo one, so they can't differ. What you seem to be speculating about is actual errors in the tzdb data, ie not describing the facts on the ground in particular places. That's possible I suppose but it's hardly our problem if it happens; it'd be theirs to fix. regards, tom lane
Re: stat() on Windows might cause error if target file is larger than 4GB
On Wed, Sep 12, 2018 at 12:47:31PM +0900, Michael Paquier wrote:
> That's exactly what I would like to do and what I meant two paragraphs
> above, as that's the only solution I think would be clean, and it would
> take care of st_size nicely. I have not tested (yet), but some tweaks
> in win32_port.h could be enough before mapping lstat to stat?

FWIW, I have been digging into this one, and as "struct stat" is already an existing structure when it comes to MSVC, enforcing a mapping of __stat64 to it is proving to be tedious in one of the port headers.

Another solution would be to modify pgwin32_safestat so that it directly uses _stat64, and then copies the results from __stat64 into _stat field by field. One thing which would be bad is that _stat.st_size is only 4 bytes, which would cause the four high bytes of __stat64.st_size to be lost; hence, when working on a file larger than 4GB we would send an incorrect size back to the caller, which is worse than the overflow error we have now. We had better be careful with MinGW as well, and cygwin does not take this path.

Perhaps somebody has a smart idea?
-- Michael
Re: stat() on Windows might cause error if target file is larger than 4GB
Michael Paquier writes: > On Wed, Sep 12, 2018 at 12:47:31PM +0900, Michael Paquier wrote: >> That's exactly what I would like to do and what I meant two paragraphs >> above as that's the only solution I think would be clean, and this would >> take care of st_size nicely. I have not tested (yet), but some tweaks >> in win32_port.h could be enough before mapping lstat to stat? > FWIW, I have been digging into this one and as "struct stat" is already > an existing structure when it comes to MSVC, enforcing a mapping of > __stat64 to that is proving to be tedious in one of the port headers. Yeah, I was afraid of that. We could invent a typedef "pg_struct_stat" that maps to the right thing, but using that everywhere would be a mighty invasive change :-(. And we can't just "#define stat __stat64" because that would break the calls to stat(). Or wait, maybe we could do that and also have a one-liner function named __stat64 that would catch the calls and redirect to _stat64? regards, tom lane
Re: [Patch] Create a new session in postmaster by calling setsid()
On Wed, Sep 12, 2018 at 03:55:00PM -0400, Tom Lane wrote: > Although pg_ctl could sneak it in between forking and execing, it seems > like it'd be cleaner to have the postmaster proper do it in response to > a switch that pg_ctl passes it. That avoids depending on the fork/exec > separation, and makes the functionality available for other postmaster > startup mechanisms, and opens the possibility of delaying the detach > to the end of startup. I would be fine with something happening only once the postmaster thinks that consistency has been reached and is open for business, though I'd still think that this should be controlled by a switch, where the default does what we do now. Feel free to ignore me if I am outvoted ;) -- Michael
Re: Avoid extra Sort nodes between WindowAggs when sorting can be reused
> "Tom" == Tom Lane writes: Tom> * I'm almost thinking that changing to list_union is a bad idea, A fair point. Though it looks like list_union is used in only about 3 distinct places, and two of those are list_union(NIL, blah) to simply remove dups from a single list. The third place is the cartesian-product expansion of grouping sets, which uses list_union_int to remove duplicates - changing the order there will give slightly user-surprising but not actually incorrect results. Presumably list_concat_unique should be considered to guarantee that it preserves the relative order of the two lists and of the non-duplicate items in the second list? -- Andrew (irc:RhodiumToad)
Re: Avoid extra Sort nodes between WindowAggs when sorting can be reused
Andrew Gierth writes: > "Tom" == Tom Lane writes: > Tom> * I'm almost thinking that changing to list_union is a bad idea, > A fair point. Though it looks like list_union is used in only about 3 > distinct places, and two of those are list_union(NIL, blah) to simply > remove dups from a single list. The third place is the cartesian-product > expansion of grouping sets, which uses list_union_int to remove > duplicates - changing the order there will give slightly user-surprising > but not actually incorrect results. > Presumably list_concat_unique should be considered to guarantee that it > preserves the relative order of the two lists and of the non-duplicate > items in the second list? I'm thinking that whichever coding we use, the patch should include comment additions in list.c documenting that some callers have assumptions thus-and-so about list order preservation. Then at least anybody who got the idea to try to improve performance of those functions would be on notice about the risks. I see that list_union is currently documented like this: * Generate the union of two lists. This is calculated by copying * list1 via list_copy(), then adding to it all the members of list2 * that aren't already in list1. so as long as it stays like that, it's not unreasonable to use it in this patch. I just want the potential landmine to be obvious at that end. regards, tom lane
Changing the setting of wal_sender_timeout per standby
Hello, What do you think about changing wal_sender_timeout from PGC_HUP to PGC_BACKEND or PGC_USERSET? Some customer wants to change the setting per standby, i.e., a shorter timeout for a standby in the same region to enable faster failure detection and failover, and a longer timeout for a standby in the remote region (for disaster recovery) to avoid mis-judging its health. The current PGC_HUP allows the setting to be changed by editing postgresql.conf or ALTER SYSTEM and then sending SIGHUP to a specific walsender. But that's not easy to use. The user has to do it upon every switchover and failover. With PGC_BACKEND, the user would be able to tune the timeout as follows: [recovery.conf] primary_conninfo = '... options=''-c wal_sender_timeout=6'' ...' With PGC_USERSET, the user would be able to use different user accounts for each standby, and tune the setting as follows: ALTER USER repluser_remote SET wal_sender_timeout = 6; FYI In Oracle Data Guard, the user configures the timeout for each standby in the primary server's configuration file like this: LOG_ARCHIVE_DEST_1 = "SERVICE=local_conn_info SYNC NET_TIMEOUT=5" LOG_ARCHIVE_DEST_2 = "SERVICE=remote_conn_info ASYNC NET_TIMEOUT=60" Regards Takayuki Tsunakawa
Re: [HACKERS] Cutting initdb's runtime (Perl question embedded)
On Thu, Sep 13, 2018 at 11:20 AM Tom Lane wrote: > Michael Paquier writes: > > On Wed, Sep 12, 2018 at 10:00:21AM -0400, Tom Lane wrote: > >> I believe that this patch will never make for any functional change, > >> it will only give you some other alias for the zone it would have > >> selected anyway. > > > Looking at the list of aliases, I am not seeing listed countries running > > across multiple timezones, so that may be fine. > > Not sure what you're worried about. "Linked" time zones are the *same > data*. In an installed tzdb tree, the Japan file is either a hardlink or > symlink to the Asia/Tokyo one, so they can't differ. What you seem to be > speculating about is actual errors in the tzdb data, ie not describing > the facts on the ground in particular places. That's possible I suppose > but it's hardly our problem if it happens; it'd be theirs to fix. I tested this on a system where /etc/localtime is not a symlink (FreeBSD) and it worked fine, falling back to the old behaviour and finding my timezone. (Apparently the argument against a symlink /etc/localtime -> /usr/share/tzinfo/... is that new processes would effectively switch timezone after /usr is mounted so your boot logs would be mixed up. Or something. I bet that actually happens on Linux too, but maybe no one does /usr as a mount point anymore...?) I noticed that the patch does a bunch of s/Olson/IANA/. That leaves only one place in the tree that still refers to the "Olson" database: dt_common.c. Might want to change that too? -- Thomas Munro http://www.enterprisedb.com
Re: Changing the setting of wal_sender_timeout per standby
On Thu, Sep 13, 2018 at 01:14:12AM +, Tsunakawa, Takayuki wrote: > Some customer wants to change the setting per standby, i.e., a shorter > timeout for a standby in the same region to enable faster failure > detection and failover, and a longer timeout for a standby in the remote > region (for disaster recovery) to avoid mis-judging its health. This argument is sensible. > The current PGC_HUP allows the setting to be changed by editing > postgresql.conf or ALTER SYSTEM and then sending SIGHUP to a specific > walsender. But that's not easy to use. The user has to do it upon > every switchover and failover. > > With PGC_BACKEND, the user would be able to tune the timeout as follows: > > [recovery.conf] > primary_conninfo = '... options=''-c wal_sender_timeout=6'' ...' > > With PGC_USERSET, the user would be able to use different user > accounts for each standby, and tune the setting as follows: > > ALTER USER repluser_remote SET wal_sender_timeout = 6; It seems to me that switching to PGC_BACKEND would already cover all the use-cases you are mentioning, as at the end one would just want to adjust the WAL sender timeout on a connection basis depending on the geographical location of the receiver and the latency between primary and standby. -- Michael
Re: Getting ERROR: could not open file "base/13164/t3_16388" with partition table with ON COMMIT
On Wed, Sep 12, 2018 at 12:14:00PM -0400, Tom Lane wrote: > I thought we had a macro or utility function somewhere that knew which > relkinds have storage, though I can't find it right now. I'd be > inclined to instantiate that if it doesn't exist, and then the code > here ought to read something like Perhaps you are thinking about heap_create(), which decides if a relkind can have storage created by setting create_storage. If you introduce a new macro, I would think that refactoring heap.c as well so it makes use of it could make sense. -- Michael
Re: review printing ecpg program version
On Wed, Sep 12, 2018 at 02:41:11PM +0200, Peter Eisentraut wrote: > On 12/09/2018 10:32, Ioseph Kim wrote: >> ok, in case pg_config, ignore this, but It should be review in case ecpg. > > fixed Thanks, Peter, for jumping in. What you did looks correct to me. -- Michael
Re: Indicate anti-wraparound autovacuum in log_autovacuum_min_duration
On Wed, Sep 12, 2018 at 05:36:31PM +0900, Masahiko Sawada wrote: > It would be useful if we have a number of the options autovacuum > workers could use but since there are only 2 I'm not sure we need the > list-style. Looking again at what Sergei has proposed upthread, using a comma-separated list of options may be more painful for translators, as such lists really depend on the language, so I would be fine with committing what has been proposed. One last point though: we already use anti-wraparound in five places in the docs, still I have sympathy for "to prevent wraparound" as well. The brackets look rather useless to me; wouldn't it be better to remove them? By doing so the longest message becomes: "automatic aggressive vacuum to prevent wraparound of table blah.blah" -- Michael
Re: [HACKERS] Cutting initdb's runtime (Perl question embedded)
On Wed, Sep 12, 2018 at 08:04:53PM -0300, Alvaro Herrera wrote: > Well, mine is there, and it's correct: > > Li America/Santiago Chile/Continental > Li Pacific/Easter Chile/EasterIsland > > Laugh all you want about Chile of all countries having multiple > timezones ... Impressed is a better word. Really impressed seeing the shape of the country ;) -- Michael
Re: stat() on Windows might cause error if target file is larger than 4GB
On Wed, Sep 12, 2018 at 08:17:05PM -0400, Tom Lane wrote: > Yeah, I was afraid of that. We could invent a typedef "pg_struct_stat" > that maps to the right thing, but using that everywhere would be a mighty > invasive change :-(. There are 130 places where "struct stat " is used, so I'd rather avoid that. > And we can't just "#define stat __stat64" because > that would break the calls to stat(). Or wait, maybe we could do that > and also have a one-liner function named __stat64 that would catch the > calls and redirect to _stat64? I don't think I grasp your point here. "#define stat __stat64" cannot exist from the start, so how would you do that? -- Michael
Re: Getting ERROR: could not open file "base/13164/t3_16388" with partition table with ON COMMIT
On 2018/09/13 1:14, Tom Lane wrote: > Amit Langote writes: >> The infamous missing-relkind-check in heap_truncate() seems to be behind >> this. Perhaps, a patch like the attached will do? > > That seems excessively restrictive. Anything that has storage (e.g. > matviews) ought to be truncatable, no? Not by heap_truncate it seems. The header comment of heap_truncate says that it concerns itself only with ON COMMIT truncation of temporary tables: /* * heap_truncate * * This routine deletes all data within all the specified relations. * * This is not transaction-safe! There is another, transaction-safe * implementation in commands/tablecmds.c. We now use this only for * ON COMMIT truncation of temporary tables, where it doesn't matter. */ ON COMMIT clause can only be used with temporary tables, so the only two possible relkind values that can be encountered here are RELKIND_RELATION and RELKIND_PARTITIONED_TABLE. Of the two, only the RELKIND_RELATION can have storage. > I thought we had a macro or utility function somewhere that knew which > relkinds have storage, though I can't find it right now. I'd be > inclined to instantiate that if it doesn't exist, and then the code > here ought to read something like > > if (RelkindHasStorage(rel->rd_rel->relkind)) > heap_truncate_one_rel(rel); There have been discussions (such as [1]), but none that led to some patch being committed. Might be a good idea to revive that discussion again, or perhaps there is already some solution being discussed on the pluggable storage thread. > Also, possibly the test ought to be inside heap_truncate_one_rel > rather than its callers? Hmm, perhaps. ExecuteTruncateGuts, the only other caller of heap_truncate_one_rel, also checks the relkind to skip partitioned tables. There seem to be reasons to do it in the caller in that case though, other than heap_truncate_one_rel being incapable of handling them. 
Thanks, Amit [1] Macros bundling RELKIND_* conditions https://www.postgresql.org/message-id/CAFjFpRcfzs%2Byst6YBCseD_orEcDNuAr9GUTraZ5GC%3DAvCYh55Q%40mail.gmail.com
Re: Getting ERROR: could not open file "base/13164/t3_16388" with partition table with ON COMMIT
On 2018/09/13 12:37, Michael Paquier wrote: > On Wed, Sep 12, 2018 at 12:14:00PM -0400, Tom Lane wrote: >> I thought we had a macro or utility function somewhere that knew which >> relkinds have storage, though I can't find it right now. I'd be >> inclined to instantiate that if it doesn't exist, and then the code >> here ought to read something like > > Perhaps you are thinking about heap_create() which decides if a relkind > can have storage created by setting create_storage. If you introduce a > new macro, I would think that refactoring as well heap.c so as it makes > use of it could make sense. Ashutosh Bapat had proposed patches in this area last year [1], but it seems the discussion didn't lead to any development. Thanks, Amit [1] Macros bundling RELKIND_* conditions https://www.postgresql.org/message-id/CAFjFpRcfzs%2Byst6YBCseD_orEcDNuAr9GUTraZ5GC%3DAvCYh55Q%40mail.gmail.com
Re: simplify index tuple descriptor initialization
On 12/09/2018 15:18, Arthur Zakirov wrote: > On Mon, Aug 27, 2018 at 04:25:28PM +0200, Peter Eisentraut wrote: >> Whenever some pg_attribute field is added or changed, a lot of >> repetitive changes all over the code are necessary. Here is a small >> change to remove one such place. > > It looks like a reasonable change to me. > > The code is good and regression tests passed. There is no need to update > the documentation. > > Marked as Ready for Commiter. Committed, thanks. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: A strange GiST error message or fillfactor of GiST build
Hello. At Fri, 07 Sep 2018 16:12:29 -0400, Tom Lane wrote in <19695.1536351...@sss.pgh.pa.us> > Bruce Momjian writes: > > So, are we going to output a notice if a non-100% fill factor is > > specified? > > I would not think that's helpful. I agree. My understanding is that it hasn't worked as expected anyway, so we ignore it instead of removing it: we keep accepting it so as not to break existing setups, and we do so silently so as not to surprise someone who has been using it believing it works. regards. -- Kyotaro Horiguchi NTT Open Source Software Center
Re: Unused argument from execute_sql_string()
> Hi, > > I found that an argument "filename" is not used in execute_sql_string() > although the comment says "filename is used only to report errors.", > so I think we can remove this argument as done in the patch I attached. It seems the "filename" argument has been there since the first version of extension.c was committed. I don't know why it has been left unused. Maybe someone thought it could someday generate fancier error reports using it? Anyway, considering it's a static function, the chance of breaking backward compatibility is minimal, I think. So +1 to remove the unused argument. Best regards, -- Tatsuo Ishii SRA OSS, Inc. Japan English: http://www.sraoss.co.jp/index_en.php Japanese:http://www.sraoss.co.jp