Fwd: PostgreSQL: WolfSSL support
Hi, Is anyone here interested in helping to evaluate an experimental patch for wolfSSL support? Attached please find a WIP patch for wolfSSL support in postgresql-12. As a shortcut, you may find this merge request helpful: https://salsa.debian.org/postgresql/postgresql/-/merge_requests/4 I used Debian stable (buster) with backports enabled and preferred. The wolfssl.patch in d/patches builds and completes all tests, as long as libwolfssl-dev version 4.4.0+dfsg-2~bpo10+1 is installed and patched with the included libwolfssl-dev-rename-types.patch. You can do so as root with: cd /usr/include/wolfssl patch -p1 < libwolfssl-dev-rename-types.patch Patching the library was easier than resolving type conflicts for twenty-five files. An attempt was made but resulted in failing tests. The offending types are called 'ValidateDate' and 'Hash'. They do not seem to be part of the wolfSSL ABI. The patch operates with the following caveats: 1. DH parameters are not currently loaded from a database-internal PEM certificate. The function OBJ_find_sigid_algs is not available. The security implications should be discussed with a cryptographer. 2. The contrib module pgcrypto was not compiled with OpenSSL support and currently offers only native algorithms. wolfSSL's compatibility support for OpenSSL's EVP interface is incomplete and offers only a few algorithms. The module should work directly with wolfCrypt. 3. The error reporting in wolfSSL_set_fd seems to be different from OpenSSL. I could not locate SSLerr and decided to return BAD_FUNC_ARG. That is what the routine being mimicked does in wolfSSL. If you see an SSL connection error, it may be wise to simply remove these two statements in src/interfaces/libpq/fe-secure-openssl.c: ret = BAD_FUNC_ARG; Unsupported functions or features can probably be replaced with wolfSSL's or wolfCrypt's native interfaces. The company may be happy to assist. The patch includes modifications toward missing goals. Some parts modify code, for example in util/pgpcrypto, that is not actually called. Please note that the wolfSSL team prefers the styling of their brand to be capitalized as recorded in this sentence. Thank you! Kind regards Felix Lechner unchanged: --- a/configure.in +++ b/configure.in @@ -1211,7 +1211,7 @@ fi if test "$with_gssapi" = yes ; then if test "$PORTNAME" != "win32"; then -AC_SEARCH_LIBS(gss_init_sec_context, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [], +AC_SEARCH_LIBS(gss_init_sec_context, [gssapi_krb5 gss 'gssapi -lkrb5 -lwolfssl'], [], [AC_MSG_ERROR([could not find function 'gss_init_sec_context' required for GSSAPI])]) else LIBS="$LIBS -lgssapi32" @@ -1221,29 +1221,35 @@ fi if test "$with_openssl" = yes ; then dnl Order matters! 
if test "$PORTNAME" != "win32"; then - AC_CHECK_LIB(crypto, CRYPTO_new_ex_data, [], [AC_MSG_ERROR([library 'crypto' is required for OpenSSL])]) - AC_CHECK_LIB(ssl,SSL_new, [], [AC_MSG_ERROR([library 'ssl' is required for OpenSSL])]) + AC_CHECK_LIB(wolfssl, wolfSSL_new, [], [AC_MSG_ERROR([library 'wolfssl' is required for OpenSSL])]) else - AC_SEARCH_LIBS(CRYPTO_new_ex_data, [eay32 crypto], [], [AC_MSG_ERROR([library 'eay32' or 'crypto' is required for OpenSSL])]) - AC_SEARCH_LIBS(SSL_new, [ssleay32 ssl], [], [AC_MSG_ERROR([library 'ssleay32' or 'ssl' is required for OpenSSL])]) + AC_SEARCH_LIBS(wolfSSL_new, [wolfssl], [], [AC_MSG_ERROR([library 'wolfssl' is required for OpenSSL])]) fi - AC_CHECK_FUNCS([SSL_get_current_compression X509_get_signature_nid]) + # support for NIDs is incomplete; lack OBJ_find_sigid_algs + #AC_DEFINE(HAVE_X509_GET_SIGNATURE_NID, 1, [Define to 1 if you have X509_get_signature_nid()]) + # Functions introduced in OpenSSL 1.1.0. We used to check for # OPENSSL_VERSION_NUMBER, but that didn't work with 1.1.0, because LibreSSL # defines OPENSSL_VERSION_NUMBER to claim version 2.0.0, even though it # doesn't have these OpenSSL 1.1.0 functions. So check for individual # functions. - AC_CHECK_FUNCS([OPENSSL_init_ssl BIO_get_data BIO_meth_new ASN1_STRING_get0_data]) + AC_DEFINE(HAVE_BIO_GET_DATA, 1, [Define to 1 if you have BIO_get_data()]) + + # support for BIO_meth_new incomplete; lack BIO_meth_get_* + #AC_DEFINE(HAVE_BIO_METH_NEW, 1, [Define to 1 if you have BIO_meth_new()]) + AC_DEFINE(HAVE_ASN1_STRING_GET0_DATA, 1, [Define to 1 if you have ASN1_STRING_get0_data()]) # OpenSSL versions before 1.1.0 required setting callback functions, for # thread-safety. In 1.1.0, it's no longer required, and CRYPTO_lock() # function was removed. - AC_CHECK_FUNCS([CRYPTO_lock]) + # wolfSSL has CRYPTO_lock but does not need it; lacks CRYPTO_get_locking_callback + # AC_DEFINE(HAVE_CRYPTO_LOCK, 1, [Define to 1 if you have CRYPTO_lock()]) # SSL_clear_options is a macro in OpenSSL from 0.9.8 to 1.0.2, and # a function from 1.1.0 onwards so we cannot use AC_CHECK_FUNCS. AC_CACHE_CHECK(
Re: update substring pattern matching syntax
On 2020-06-20 09:08, Fabien COELHO wrote: I cannot say I'm a fan of this kind of keywords added for some arguments. I guess that it allows distinguishing between variants. I do not have the standard at hand: I wanted to check whether these keywords could be reordered, i.e. whether SUBSTRING(text ESCAPE ec SIMILAR part) was legal. I guess not. It is not. Maybe the doc could advertise more systematically whether a features conforms fully or partially to some SQL standards, or is pg specific. I think that would be useful, but it's probably a broader topic than just for this specific function. The added documentation refers both to SQL:1999 and SQL99. I'd suggest to chose one, possibly the former, and use it everywhere consistently. fixed It seems that two instances where not updated to the new syntax, see in ./src/backend/catalog/information_schema.sql and ./contrib/citext/sql/citext.sql. done -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services From 38a4f5bb3bc74ef033dd141031cbc89450e3bd06 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 19 Jun 2020 11:14:10 +0200 Subject: [PATCH v2 1/2] Clean up grammar a bit Simplify the grammar specification of substring() and overlay() a bit, simplify and update some comments. Discussion: https://www.postgresql.org/message-id/flat/a15db31c-d0f8-8ce0-9039-578a31758adb%402ndquadrant.com --- src/backend/parser/gram.y | 73 --- 1 file changed, 23 insertions(+), 50 deletions(-) diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y index e669d75a5a..1a843049f0 100644 --- a/src/backend/parser/gram.y +++ b/src/backend/parser/gram.y @@ -452,7 +452,6 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); %typeextract_list overlay_list position_list %typesubstr_list trim_list %typeopt_interval interval_second -%typeoverlay_placing substr_from substr_for %type unicode_normal_form %type opt_instead @@ -13797,11 +13796,6 @@ func_expr_common_subexpr: } | OVERLAY '(' overlay_list ')' { - /* overlay(A PLACING B FROM C FOR D) is converted to -* overlay(A, B, C, D) -* overlay(A PLACING B FROM C) is converted to -* overlay(A, B, C) -*/ $$ = (Node *) makeFuncCall(SystemFuncName("overlay"), $3, @1); } | POSITION '(' position_list ')' @@ -14437,63 +14431,45 @@ unicode_normal_form: | NFKD { $$ = "nfkd"; } ; -/* OVERLAY() arguments - * SQL99 defines the OVERLAY() function: - * o overlay(text placing text from int for int) - * o overlay(text placing text from int) - * and similarly for binary strings - */ +/* OVERLAY() arguments */ overlay_list: - a_expr overlay_placing substr_from substr_for + a_expr PLACING a_expr FROM a_expr FOR a_expr { - $$ = list_make4($1, $2, $3, $4); + /* overlay(A PLACING B FROM C FOR D) is converted to overlay(A, B, C, D) */ + $$ = list_make4($1, $3, $5, $7); } - | a_expr overlay_placing substr_from + | a_expr PLACING a_expr FROM a_expr { - $$ = list_make3($1, $2, $3); + /* overlay(A PLACING B FROM C) is converted to overlay(A, B, C) */ + $$ = list_make3($1, $3, $5); } ; -overlay_placing: - PLACING a_expr - { $$ = $2; } - ; - /* position_list uses b_expr not a_expr to avoid conflict with general IN */ - position_list: b_expr IN_P b_expr { $$ = list_make2($3, $1); } | /*EMPTY*/ { $$ = NIL; } ; -/* SUBSTRING() arguments - * SQL9x defines a specific syntax for arguments to SUBSTRING(): - * o substring(text from int for int) - * o substring(text from int) get entire string from starting point "int" - * o substring(text for int) get first "int" 
characters of string - * o substring(text from pattern) get entire string matching pattern - * o substring(text from pattern for esca
Re: pgsql: Enable Unix-domain sockets support on Windows
On 2020-06-26 14:21, Amit Kapila wrote: On Sat, Mar 28, 2020 at 7:37 PM Peter Eisentraut wrote: Enable Unix-domain sockets support on Windows + +/* + * Windows headers don't define this structure, but you can define it yourself + * to use the functionality. + */ +struct sockaddr_un +{ + unsigned short sun_family; + char sun_path[108]; +}; I was going through this feature and reading about Windows support for it. I came across a few links which suggest that this structure is defined in . Is there a reason for not using this via afunix.h? [1] - https://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/ [2] - https://gist.github.com/NZSmartie/079d8f894ee94f3035306cb23d49addc If we did it that way we'd have to write some kind of configuration-time check for the MSVC build, since not all Windows versions have that header. Also, not all versions of MinGW have that header (possibly none). So the current implementation is probably the most practical compromise. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
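To make the trade-off concrete, the afunix.h route would look roughly like the sketch below; HAVE_AFUNIX_H is a hypothetical symbol that both configure and the MSVC build scripts would have to learn to probe for, which is exactly the extra machinery the hand-rolled definition avoids.

    #ifdef HAVE_AFUNIX_H
    #include <afunix.h>            /* shipped with newer Windows SDKs */
    #else
    /* otherwise fall back to defining the structure ourselves, as the commit does */
    struct sockaddr_un
    {
        unsigned short sun_family;
        char        sun_path[108];
    };
    #endif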
Re: Default setting for enable_hashagg_disk
On Thu, Jun 25, 2020 at 12:59 AM Robert Haas wrote: > > So, I don't think we can wire in a constant like 10x. That's really > unprincipled and I think it's a bad idea. What we could do, though, is > replace the existing Boolean-valued GUC with a new GUC that controls > the size at which the aggregate spills. The default could be -1, > meaning work_mem, but a user could configure a larger value if desired > (presumably, we would just treat a value smaller than work_mem as > work_mem, and document the same). > > I think that's actually pretty appealing. Separating the memory we > plan to use from the memory we're willing to use before spilling seems > like a good idea in general, and I think we should probably also do it > in other places - like sorts. > +1. I also think GUC on these lines could help not only the problem being discussed here but in other cases as well. However, I think the real question is do we want to design/implement it for PG13? It seems to me at this stage we don't have a clear understanding of what percentage of real-world cases will get impacted due to the new behavior of hash aggregates. We want to provide some mechanism as a safety net to avoid problems that users might face which is not a bad idea but what if we wait and see the real impact of this? Is it too bad to provide a GUC later in back-branch if we see users face such problems quite often? I think the advantage of delaying it is that we might see some real problems (like where hash aggregate is not a good choice) which can be fixed via the costing model. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Re: Default setting for enable_hashagg_disk
On Fri, Jun 26, 2020 at 05:24:36PM -0700, Peter Geoghegan wrote: On Fri, Jun 26, 2020 at 4:59 PM Tomas Vondra wrote: I agree larger work_mem for hashagg (and thus less spilling) may mean lower work_mem for so some other nodes that are less sensitive to this. But I think this needs to be formulated as a cost-based decision, although I don't know how to do that for the reasons I explained before (bottom-up plan construction vs. distributing the memory budget). Why do you think that it needs to be formulated as a cost-based decision? That's probably true of a scheme that allocates memory very intelligently, but what about an approach that's slightly better than work_mem? Well, there are multiple ideas discussed in this (sub)thread, one of them being a per-query memory limit. That requires decisions how much memory should different nodes get, which I think would need to be cost-based. What problems do you foresee (if any) with adding a hash_mem GUC that gets used for both planning and execution for hash aggregate and hash join nodes, in about the same way as work_mem is now? Of course, a simpler scheme like this would not require that. And maybe introducing hash_mem is a good idea - I'm not particularly opposed to that, actually. But I think we should not introduce separate memory limits for each node type, which was also mentioned earlier. The problem of course is that hash_mem does not really solve the issue discussed at the beginning of this thread, i.e. regressions due to underestimates and unexpected spilling at execution time. The thread is getting a rather confusing mix of proposals how to fix that for v13 and proposals how to improve our configuration of memory limits :-( FWIW some databases already do something like this - SQL Server has something called "memory grant" which I think mostly does what you described here. Same is true of Oracle. But Oracle also has simple work_mem-like settings for sorting and hashing. People don't really use them anymore, but apparently it was once common for the DBA to explicitly give over more memory to hashing -- much like the hash_mem setting I asked about. IIRC the same is true of DB2. Interesting. What is not entirely clear to me how do these databases decide how much should each node get during planning. With the separate work_mem-like settings it's fairly obvious, but how do they do that with the global limit (either per-instance or per-query)? The difference between sort and hashagg spills is that for sorts there is no behavior change. Plans that did (not) spill in v12 will behave the same way on v13, modulo some random perturbation. For hashagg that's not the case - some queries that did not spill before will spill now. So even if the hashagg spills are roughly equal to sort spills, both are significantly more expensive than not spilling. Just to make sure we're on the same page: both are significantly more expensive than a hash aggregate not spilling *specifically*. OTOH, a group aggregate may not be much slower when it spills compared to an in-memory sort group aggregate. It may even be noticeably faster, due to caching effects, as you mentioned at one point upthread. This is the property that makes hash aggregate special, and justifies giving it more memory than other nodes on a system-wide basis (the same thing applies to hash join). This could even work as a multiplier of work_mem. Yes, I agree. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
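To make the hash_mem idea concrete, the multiplier variant mentioned just above might amount to little more than the sketch below; the GUC and helper names are invented here, and nothing of the sort exists in the tree at this point. Planner and executor code for hash aggregate and hash join would consult the helper where they currently consult work_mem.

    /* Hypothetical GUC: scale factor applied to work_mem for hash-based nodes. */
    static double hash_mem_multiplier = 2.0;

    /* Memory budget, in kilobytes, for hash aggregate and hash join. */
    static int
    get_hash_mem(void)
    {
        /* work_mem is expressed in kilobytes */
        return (int) (work_mem * hash_mem_multiplier);
    }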
Re: pgsql: Enable Unix-domain sockets support on Windows
On Sat, Jun 27, 2020 at 3:06 PM Peter Eisentraut wrote: > > On 2020-06-26 14:21, Amit Kapila wrote: > > On Sat, Mar 28, 2020 at 7:37 PM Peter Eisentraut > > wrote: > >> > >> Enable Unix-domain sockets support on Windows > >> > > > > + > > +/* > > + * Windows headers don't define this structure, but you can define it > > yourself > > + * to use the functionality. > > + */ > > +struct sockaddr_un > > +{ > > + unsigned short sun_family; > > + char sun_path[108]; > > +}; > > > > I was going through this feature and reading about Windows support for > > it. I came across a few links which suggest that this structure is > > defined in . Is there a reason for not using this via > > afunix.h? > > > > [1] - https://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/ > > [2] - https://gist.github.com/NZSmartie/079d8f894ee94f3035306cb23d49addc > > If we did it that way we'd have to write some kind of configuration-time > check for the MSVC build, since not all Windows versions have that > header. Also, not all versions of MinGW have that header (possibly > none). So the current implementation is probably the most practical > compromise. > Fair enough, but what should be the behavior in the Windows versions (<10) where Unix-domain sockets are not supported? BTW, in which format the path needs to be specified for unix_socket_directories? I tried with '/c/tmp', 'c:/tmp', 'tmp' but nothing seems to be working, it gives me errors like: "could not create lock file "/c/tmp/.s.PGSQL.5432.lock": No such file or directory" on server start. I am trying this on Win7 just to check what is the behavior of this feature on it. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Re: Fwd: PostgreSQL: WolfSSL support
On 2020-06-27 00:33, Felix Lechner wrote: Is anyone here interested in helping to evaluate an experimental patch for wolfSSL support? What would be the advantage of using wolfSSL over OpenSSL? -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Fwd: PostgreSQL: WolfSSL support
Re: Peter Eisentraut > What would be the advantage of using wolfSSL over OpenSSL? Avoiding the OpenSSL-vs-GPL linkage problem with readline. Christoph
Re: proposal: possibility to read dumped table's name from file
On Thu, Jun 11, 2020 at 1:07 PM Pavel Stehule wrote: > > Thank you for comments, attached updated patch > Few comments: +invalid_filter_format(char *message, char *filename, char *line, int lineno) +{ + char *displayname; + + displayname = *filename == '-' ? "stdin" : filename; + + pg_log_error("invalid format of filter file \"%s\": %s", +displayname, +message); + + fprintf(stderr, "%d: %s\n", lineno, line); + exit_nicely(1); +} I think fclose is missing here. + if (line[chars - 1] == '\n') + line[chars - 1] = '\0'; Should we also check for '\r' to avoid failures on some platforms? + + --filter=filename + + +Read filters from file. Format "(+|-)(tnfd) objectname: + + + I felt some documentation is missing here. We could mention that the options t, n, f and d control table, schema, foreign server data and table exclude patterns. Instead of tnfd, it would be less confusing to use the same option names as the existing pg_dump options. Regards, Vignesh EnterpriseDB: http://www.enterprisedb.com
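For reference, a filter file under the proposed "(+|-)(tnfd) objectname" syntax might look like the lines below. The object names are invented, and the letter meanings are only inferred from the review comment above (t for table, n for schema); the file would be read with the proposed pg_dump --filter=filename option.

    +t public.orders
    +t public.customers
    -n audit

The remaining letters would select their object kinds in the same one-pattern-per-line way.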
Re: Fwd: PostgreSQL: WolfSSL support
On Sat, Jun 27, 2020 at 02:50:27PM +0200, Christoph Berg wrote: > Re: Peter Eisentraut > > What would be the advantage of using wolfSSL over OpenSSL? > > Avoiding the OpenSSL-vs-GPL linkage problem with readline. Uh, wolfSSL is GPL2: https://www.wolfssl.com/license/ Not sure why we would want to lock Postgres into a GPL-style requirement. As I understand it, we don't normally ship readline or openssl. -- Bruce Momjian https://momjian.us EnterpriseDB https://enterprisedb.com The usefulness of a cup is in its emptiness, Bruce Lee
Re: [PATCH] Remove Extra palloc Of raw_buf For Binary Format In COPY FROM
On Sat, Jun 27, 2020 at 9:23 AM Bharath Rupireddy wrote: > > Thanks Rushabh and Vignesh for the comments. > > > > > One comment: > > We could change below code: > > + */ > > + if (!cstate->binary) > > + cstate->raw_buf = (char *) palloc(RAW_BUF_SIZE + 1); > > + else > > + cstate->raw_buf = NULL; > > to: > > cstate->raw_buf = (cstate->binary) ? NULL : (char *) palloc(RAW_BUF_SIZE + > > 1); > > > > Attached the patch with the above changes. Changes look fine to me. Regards, Vignesh EnterpriseDB: http://www.enterprisedb.com
Re: Fwd: PostgreSQL: WolfSSL support
Bruce Momjian writes: > On Sat, Jun 27, 2020 at 02:50:27PM +0200, Christoph Berg wrote: >> Re: Peter Eisentraut >>> What would be the advantage of using wolfSSL over OpenSSL? >> Avoiding the OpenSSL-vs-GPL linkage problem with readline. > Uh, wolfSSL is GPL2: > https://www.wolfssl.com/license/ Readline is GPLv3+ (according to Red Hat's labeling of that package anyway, didn't check the source). So they'd be compatible, while openssl's license is nominally incompatible with GPL. As I recall, Debian jumps through some silly hoops to pretend that they're not using openssl and readline at the same time with Postgres, so I can definitely understand Christoph's interest in an alternative. However, judging from the caveats mentioned in the initial message, my inclination would be to wait awhile for wolfSSL to mature. In any case, the patch as written seems to *remove* the option to compile PG with OpenSSL. The chance of it being accepted that way is indistinguishable from zero. We've made some efforts towards separating out the openssl-specific bits, so the shape I'd expect from a patch like this is to add some parallel wolfssl-specific bits. There probably are more such bits to separate, but this isn't the way to proceed. regards, tom lane
Re: Fwd: PostgreSQL: WolfSSL support
On Sat, Jun 27, 2020 at 10:56:46AM -0400, Tom Lane wrote: > Bruce Momjian writes: > > On Sat, Jun 27, 2020 at 02:50:27PM +0200, Christoph Berg wrote: > >> Re: Peter Eisentraut > >>> What would be the advantage of using wolfSSL over OpenSSL? > > >> Avoiding the OpenSSL-vs-GPL linkage problem with readline. > > > Uh, wolfSSL is GPL2: > > https://www.wolfssl.com/license/ > > Readline is GPLv3+ (according to Red Hat's labeling of that package > anyway, didn't check the source). So they'd be compatible, while > openssl's license is nominally incompatible with GPL. As I recall, > Debian jumps through some silly hoops to pretend that they're not > using openssl and readline at the same time with Postgres, so I > can definitely understand Christoph's interest in an alternative. > > However, judging from the caveats mentioned in the initial message, > my inclination would be to wait awhile for wolfSSL to mature. Also, wolfSSL is developed by a company under dual GPL/commercial licensing, so it seems like a mismatch to me. -- Bruce Momjian https://momjian.us EnterpriseDB https://enterprisedb.com The usefulness of a cup is in its emptiness, Bruce Lee
Re: Fwd: PostgreSQL: WolfSSL support
Bruce Momjian writes: > Also, wolfSSL is developed by a company and dual GPL/commerical > licenses, so it seems like a mismatch to me. Yeah, that's definitely a factor behind my disinterest in making wolfSSL be the only alternative. However, as long as it's available on GPL terms, I don't see a problem with it being one alternative. regards, tom lane
Re: Fwd: PostgreSQL: WolfSSL support
On Sat, Jun 27, 2020 at 11:16:26AM -0400, Tom Lane wrote: > Bruce Momjian writes: > > Also, wolfSSL is developed by a company and dual GPL/commerical > > licenses, so it seems like a mismatch to me. > > Yeah, that's definitely a factor behind my disinterest in > making wolfSSL be the only alternative. However, as long as > it's available on GPL terms, I don't see a problem with it > being one alternative. Yeah, I guess it depends on how much Postgres code it takes to support it. Company-developed open source stuff usually goes into pay mode once it gets popular, so I am not super-excited to be going in this direction. -- Bruce Momjian https://momjian.us EnterpriseDB https://enterprisedb.com The usefulness of a cup is in its emptiness, Bruce Lee
Re: should libpq also require TLSv1.2 by default?
I wrote: > Daniel Gustafsson writes: >> SSL_R_UNKNOWN_PROTOCOL seem to covers cases when someone manages to perform >> something which OpenSSL believes is a broken SSLv2 connection, but their own >> client-level code use it to refer to SSL as well as TLS. Maybe it's worth >> adding as a belts and suspenders type thing? > No objection on my part. >> If anything it might useful to document in the comment that we're only >> concerned with TLS versions, SSL2/3 are disabled in the library >> initialization. > Good point. Pushed with those corrections. I also rewrote the comment about which error codes we'd seen in practice, after realizing that one of my tests had been affected by the presence of "MinProtocol = TLSv1.2" in RHEL8's openssl.cnf (causing a max setting less than that to be a local configuration error, not something the server had rejected). regards, tom lane
compile error master SSL_R_VERSION_TOO_HIGH:
Hello, On "Debian GNU/Linux 9 (stretch)", compiling master just now, I get the following (interspersed with some output fom my build script): -- [2020.06.27 19:07:42 HEAD/1] ./configure --prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD --bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/bin.fast --l ibdir=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/lib.fast --with-pgport=6514 --quiet --enable-depend --with-openssl --with-perl --with-libxml --with-libxslt --with-zlib --enable-tap-tests --with-extra-version=_0627_b63d -- [2020.06.27 19:08:06 HEAD/1] make core: make --quiet -j 4 be-secure-openssl.c: In function ‘be_tls_open_server’: be-secure-openssl.c:477:11: error: ‘SSL_R_VERSION_TOO_HIGH’ undeclared (first use in this function) 477 | case SSL_R_VERSION_TOO_HIGH: | ^~ be-secure-openssl.c:477:11: note: each undeclared identifier is reported only once for each function it appears in be-secure-openssl.c:478:11: error: ‘SSL_R_VERSION_TOO_LOW’ undeclared (first use in this function); did you mean ‘SSL_R_MESSAGE_TOO_LONG’? 478 | case SSL_R_VERSION_TOO_LOW: | ^ | SSL_R_MESSAGE_TOO_LONG make[3]: *** [be-secure-openssl.o] Error 1 make[2]: *** [libpq-recursive] Error 2 make[2]: *** Waiting for unfinished jobs make[1]: *** [all-backend-recurse] Error 2 make: *** [all-src-recurse] Error 2 ../../../src/Makefile.global:919: recipe for target 'be-secure-openssl.o' failed common.mk:39: recipe for target 'libpq-recursive' failed Makefile:42: recipe for target 'all-backend-recurse' failed GNUmakefile:11: recipe for target 'all-src-recurse' failed To be honest I have no idea what needs to be fixed... Thanks, Erik Rijkers
Re: compile error master SSL_R_VERSION_TOO_HIGH:
Erik Rijkers writes: > On "Debian GNU/Linux 9 (stretch)", compiling master just now, I get the > following (interspersed with some output fom my build script): Yeah, just saw that in the buildfarm. Should be OK now. regards, tom lane
Re: compile error master SSL_R_VERSION_TOO_HIGH:
On Sat, Jun 27, 2020 at 01:28:15PM -0400, Tom Lane wrote: > Erik Rijkers writes: > > On "Debian GNU/Linux 9 (stretch)", compiling master just now, I get the > > following (interspersed with some output fom my build script): > > Yeah, just saw that in the buildfarm. Should be OK now. I can confirm a successful "Debian 10/Buster" compile here with master. -- Bruce Momjian https://momjian.us EnterpriseDB https://enterprisedb.com The usefulness of a cup is in its emptiness, Bruce Lee
Re: compile error master SSL_R_VERSION_TOO_HIGH:
On 2020-06-27 19:28, Tom Lane wrote: Erik Rijkers writes: On "Debian GNU/Linux 9 (stretch)", compiling master just now, I get the following (interspersed with some output fom my build script): Yeah, just saw that in the buildfarm. Should be OK now. It is. I should've checked the farm before complaining... Thanks!
Re: xid wraparound danger due to INDEX_CLEANUP false
On Fri, Jun 26, 2020 at 10:15 PM Masahiko Sawada wrote: > Regarding to the extent of the impact, this bug will affect the user > who turned vacuum_index_cleanup off or executed manually vacuum with > INDEX_CLEANUP off for a long time, after some vacuums. On the other > hand, the user who uses INDEX_CLEANUP off on the spot or turns > vacuum_index_cleanup off of the table from the start would not be > affected or less affected. I don't think that it's likely to cause too much trouble. It's already possible to leak deleted pages, if only because the FSM isn't crash safe. Actually, the nbtree README says this, and has since 2003: """ (Note: if we find a deleted page with an extremely old transaction number, it'd be worthwhile to re-mark it with FrozenTransactionId so that a later xid wraparound can't cause us to think the page is unreclaimable. But in more normal situations this would be a waste of a disk write.) """ But, uh, isn't the btvacuumcleanup() call supposed to avoid wraparound? Who knows?! It doesn't seem like the recycling aspect of page deletion was rigorously designed, possibly because it's harder to test than page deletion itself. This is a problem that we should fix. > I apologize for writing this patch without enough consideration. I > should have been more careful as I learned the nbtree page recycle > strategy when discussing vacuum_cleanup_index_scale_factor patch. While it's unfortunate that this was missed, let's not lose perspective. Anybody using the INDEX_CLEANUP feature (whether it's through a direct VACUUM, or by using the reloption) is already asking for an extreme behavior: skipping regular index vacuuming. I imagine that the vast majority of users that are in that position just don't care about the possibility of leaking deleted pages. They care about avoiding a real disaster from XID wraparound. -- Peter Geoghegan
Re: Fwd: PostgreSQL: WolfSSL support
Re: Tom Lane > In any case, the patch as written seems to *remove* the option > to compile PG with OpenSSL. It's a WIP patch, meant to see if it works at all. Of course OpenSSL would stay as the default option. Christoph
Re: Fwd: PostgreSQL: WolfSSL support
Christoph Berg writes: > It's a WIP patch, meant to see if it works at all. Of course OpenSSL > would stay as the default option. Fair enough. One thing that struck me as I looked at it was that most of the #include hackery seemed unnecessary. The configure script could add -I/usr/include/wolfssl (or wherever those files are) to CPPFLAGS instead of touching all those #includes. regards, tom lane
pg_read_file() with virtual files returns empty string
Since pg11 pg_read_file() and friends can be used with absolute paths as long as the user is superuser or explicitly granted the role pg_read_server_files. I noticed that when trying to read a virtual file, e.g.: SELECT pg_read_file('/proc/self/status'); the returned result is a zero length string. However this works fine: SELECT pg_read_file('/proc/self/status', 127, 128); The reason for that is pg_read_file_v2() sets bytes_to_read=-1 if no offset and length are supplied as arguments when it is called. It passes bytes_to_read down to read_binary_file(). When the latter function sees bytes_to_read < 0 it tries to read the entire file by getting the file size via stat, which returns 0 for a virtual file size. The attached patch fixes this for me. I think it ought to be backpatched through pg11. Comments? Joe -- Crunchy Data - http://crunchydata.com PostgreSQL Support for Secure Enterprises Consulting, Training, & Open Source Development diff --git a/src/backend/utils/adt/genfile.c b/src/backend/utils/adt/genfile.c index ceaa618..101df3a 100644 *** a/src/backend/utils/adt/genfile.c --- b/src/backend/utils/adt/genfile.c *** *** 36,41 --- 36,42 #include "utils/syscache.h" #include "utils/timestamp.h" + #define READBUF_SIZE 4096 /* * Convert a "text" filename argument to C string, and check it's allowable. *** read_binary_file(const char *filename, i *** 106,112 bool missing_ok) { bytea *buf; ! size_t nbytes; FILE *file; if (bytes_to_read < 0) --- 107,113 bool missing_ok) { bytea *buf; ! size_t nbytes = 0; FILE *file; if (bytes_to_read < 0) *** read_binary_file(const char *filename, i *** 154,162 (errcode_for_file_access(), errmsg("could not seek in file \"%s\": %m", filename))); ! buf = (bytea *) palloc((Size) bytes_to_read + VARHDRSZ); ! nbytes = fread(VARDATA(buf), 1, (size_t) bytes_to_read, file); if (ferror(file)) ereport(ERROR, --- 155,187 (errcode_for_file_access(), errmsg("could not seek in file \"%s\": %m", filename))); ! if (bytes_to_read > 0) ! { ! buf = (bytea *) palloc((Size) bytes_to_read + VARHDRSZ); ! nbytes = fread(VARDATA(buf), 1, (size_t) bytes_to_read, file); ! } ! else ! { ! /* bytes_to_read can be <= zero if the file is a virtual file */ ! StringInfoData sbuf; ! size_t rbytes = 0; ! char rbuf[READBUF_SIZE]; ! ! initStringInfo(&sbuf); ! while (!feof(file)) ! { ! memset(rbuf, '\0', READBUF_SIZE); ! rbytes = fread(rbuf, 1, (size_t) READBUF_SIZE, file); ! nbytes += rbytes; ! ! appendStringInfoString(&sbuf, rbuf); ! } ! ! Assert(nbytes == sbuf.len); ! buf = (bytea *) palloc((Size) nbytes + VARHDRSZ); ! memcpy(VARDATA(buf), sbuf.data, nbytes); ! } if (ferror(file)) ereport(ERROR, signature.asc Description: OpenPGP digital signature
Re: Fwd: PostgreSQL: WolfSSL support
On Sat, Jun 27, 2020 at 09:50, Christoph Berg wrote: > Re: Peter Eisentraut > > What would be the advantage of using wolfSSL over OpenSSL? > > Avoiding the OpenSSL-vs-GPL linkage problem with readline. > I'm curious: how do you intend to solve the OpenSSL-vs-GPL-readline linkage problem with another GPL product? Will wolfSSL provide a commercial license for PostgreSQL? Isn't LibreSSL a better alternative? regards, Ranier Vilela
Re: Fwd: PostgreSQL: WolfSSL support
On Sat, Jun 27, 2020 at 3:25 PM Ranier Vilela wrote: > Em sáb., 27 de jun. de 2020 às 09:50, Christoph Berg > escreveu: > >> Re: Peter Eisentraut >> > What would be the advantage of using wolfSSL over OpenSSL? >> >> Avoiding the OpenSSL-vs-GPL linkage problem with readline. >> > I'm curious, how do you intend to solve a linking problem with > OpenSSL-vs-GPL-readline, with another GPL product? > WolfSSL, will provide a commercial license for PostgreSQL? > Isn't LIbreSSL a better alternative? > Somewhere, I recall seeing an open-source OpenSSL compatibility wrapper for WolfSSL. Assuming that still exists, this patch seems entirely unnecessary. -- Jonah H. Harris
Re: Fwd: PostgreSQL: WolfSSL support
Re: Jonah H. Harris > Somewhere, I recall seeing an open-source OpenSSL compatibility wrapper for > WolfSSL. Assuming that still exists, this patch seems entirely unnecessary. Unless you actually tried. Christoph
Re: Fwd: PostgreSQL: WolfSSL support
Re: Ranier Vilela > I'm curious, how do you intend to solve a linking problem with > OpenSSL-vs-GPL-readline, with another GPL product? > WolfSSL, will provide a commercial license for PostgreSQL? It's replacing OpenSSL+GPL with GPL+GPL. > Isn't LIbreSSL a better alternative? I don't know. Christoph
Re: Fwd: PostgreSQL: WolfSSL support
On Sat, Jun 27, 2020 at 04:22:51PM -0300, Ranier Vilela wrote: > Em sáb., 27 de jun. de 2020 às 09:50, Christoph Berg > escreveu: > > Re: Peter Eisentraut > > What would be the advantage of using wolfSSL over OpenSSL? > > Avoiding the OpenSSL-vs-GPL linkage problem with readline. > > I'm curious, how do you intend to solve a linking problem with > OpenSSL-vs-GPL-readline, with another GPL product? I assume you can use wolfSSL as long as the result is GPL, which is the same requirement libreadline causes for Postgres, particularly if Postgres is statically linked to libreadline. > WolfSSL, will provide a commercial license for PostgreSQL? > Isn't LIbreSSL a better alternative? Seems it might be. -- Bruce Momjian https://momjian.us EnterpriseDB https://enterprisedb.com The usefulness of a cup is in its emptiness, Bruce Lee
Re: pg_read_file() with virtual files returns empty string
Joe Conway writes: > The attached patch fixes this for me. I think it ought to be backpatched > through > pg11. > Comments? 1. Doesn't seem to be accounting for the possibility of an error in fread(). 2. Don't we want to remove the stat() call altogether, if we're not going to believe its length? 3. This bit might need to cast the RHS to int64: if (bytes_to_read > (MaxAllocSize - VARHDRSZ)) otherwise it might be treated as an unsigned comparison. Or you could check for bytes_to_read < 0 separately. 4. appendStringInfoString seems like quite the wrong thing to use when the input is binary data. 5. Don't like the comment. Whether the file is virtual or not isn't very relevant here. 6. If the file size exceeds 1GB, I fear we'll get some rather opaque failure from the stringinfo infrastructure. It'd be better to check for that here and give a file-too-large error. regards, tom lane
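For points 1, 4, and 6 in particular, a binary-safe version of the read loop in the posted patch might look roughly like the sketch below (illustrative only, not a finished fix; it reuses the file and filename variables of the function in the patch and leaves the other review points alone):

    StringInfoData sbuf;
    char        rbuf[4096];

    initStringInfo(&sbuf);
    for (;;)
    {
        size_t      rbytes = fread(rbuf, 1, sizeof(rbuf), file);

        /* use appendBinaryStringInfo so embedded NUL bytes survive (point 4) */
        if (rbytes > 0)
            appendBinaryStringInfo(&sbuf, rbuf, (int) rbytes);

        /* report oversize files cleanly instead of an opaque stringinfo failure (point 6) */
        if (sbuf.len >= (int) (MaxAllocSize - VARHDRSZ))
            ereport(ERROR,
                    (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                     errmsg("file \"%s\" is too large", filename)));

        if (rbytes < sizeof(rbuf))
        {
            /* check for a read error inside the loop (point 1) */
            if (ferror(file))
                ereport(ERROR,
                        (errcode_for_file_access(),
                         errmsg("could not read file \"%s\": %m", filename)));
            break;              /* EOF */
        }
    }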
Re: pg_bsd_indent compiles bytecode
On Tue, Jun 23, 2020 at 05:56:10PM -0400, Alvaro Herrera wrote: > I just noticed that when you compile pg_bsd_indent with a PG tree that > has --enable-jit (or something around that), then it compiles the source > files into bytecode. > > Obviously this is not harmful since these files don't get installed, but > I wonder if our compiles aren't being excessively generous. Are you saying pg_bsd_indent indents the JIT output files? I assumed people only ran pg_bsd_indent on dist-clean trees. -- Bruce Momjian https://momjian.us EnterpriseDB https://enterprisedb.com The usefulness of a cup is in its emptiness, Bruce Lee
Re: pg_bsd_indent compiles bytecode
Bruce Momjian writes: > On Tue, Jun 23, 2020 at 05:56:10PM -0400, Alvaro Herrera wrote: >> I just noticed that when you compile pg_bsd_indent with a PG tree that >> has --enable-jit (or something around that), then it compiles the source >> files into bytecode. >> Obviously this is not harmful since these files don't get installed, but >> I wonder if our compiles aren't being excessively generous. > Are you saying pg_bsd_indent indents the JIT output files? I assumed > people only ran pg_bsd_indent on dist-clean trees. I think what he means is that when pg_bsd_indent absorbs the CFLAGS settings that PG uses (because it uses the pgxs build infrastructure), it ends up also building .bc files. I wouldn't care about this particularly for pg_bsd_indent itself, but it suggests that we're probably building .bc files for client-side files, which seems like a substantial waste of time. Maybe we need different CFLAGS for client and server? regards, tom lane
Re: pg_bsd_indent compiles bytecode
On Sat, Jun 27, 2020 at 05:12:57PM -0400, Tom Lane wrote: > Bruce Momjian writes: > > On Tue, Jun 23, 2020 at 05:56:10PM -0400, Alvaro Herrera wrote: > >> I just noticed that when you compile pg_bsd_indent with a PG tree that > >> has --enable-jit (or something around that), then it compiles the source > >> files into bytecode. > >> Obviously this is not harmful since these files don't get installed, but > >> I wonder if our compiles aren't being excessively generous. > > > Are you saying pg_bsd_indent indents the JIT output files? I assumed > > people only ran pg_bsd_indent on dist-clean trees. > > I think what he means is that when pg_bsd_indent absorbs the CFLAGS > settings that PG uses (because it uses the pgxs build infrastructure), > it ends up also building .bc files. Wow, OK, I was confused then. > I wouldn't care about this particularly for pg_bsd_indent itself, > but it suggests that we're probably building .bc files for client-side > files, which seems like a substantial waste of time. Maybe we need > different CFLAGS for client and server? Understood. -- Bruce Momjian https://momjian.us EnterpriseDB https://enterprisedb.com The usefulness of a cup is in its emptiness, Bruce Lee
Re: Fwd: PostgreSQL: WolfSSL support
On Sat, Jun 27, 2020 at 16:40, Bruce Momjian wrote: > On Sat, Jun 27, 2020 at 04:22:51PM -0300, Ranier Vilela wrote: > > Em sáb., 27 de jun. de 2020 às 09:50, Christoph Berg > > escreveu: > > > > Re: Peter Eisentraut > > > What would be the advantage of using wolfSSL over OpenSSL? > > > > Avoiding the OpenSSL-vs-GPL linkage problem with readline. > > > > I'm curious, how do you intend to solve a linking problem with > > OpenSSL-vs-GPL-readline, with another GPL product? > > I assume you can use wolfSSL as long as the result is GPL, which is the > same requirement libreadline causes for Postgres, particularly if > Postgres is statically linked to libreadline. > I don't want to divert the focus of the thread, but in my opinion this subject has controversial potential. I took part in a discussion on another list where I contribute (the IUP library: https://www.tecgraf.puc-rio.br/iup/), in which a user, upon discovering that two sub-libraries were GPL-licensed, caused an uproar that eventually reached Mr. Stallman himself. In short, the best thing for that project will be to remove the two GPL sub-libraries. regards, Ranier Vilela
Re: Fwd: PostgreSQL: WolfSSL support
On Sat, Jun 27, 2020 at 06:14:21PM -0300, Ranier Vilela wrote: > Em sáb., 27 de jun. de 2020 às 16:40, Bruce Momjian > escreveu: > > On Sat, Jun 27, 2020 at 04:22:51PM -0300, Ranier Vilela wrote: > > Em sáb., 27 de jun. de 2020 às 09:50, Christoph Berg > > escreveu: > > > > Re: Peter Eisentraut > > > What would be the advantage of using wolfSSL over OpenSSL? > > > > Avoiding the OpenSSL-vs-GPL linkage problem with readline. > > > > I'm curious, how do you intend to solve a linking problem with > > OpenSSL-vs-GPL-readline, with another GPL product? > > I assume you can use wolfSSL as long as the result is GPL, which is the > same requirement libreadline causes for Postgres, particularly if > Postgres is statically linked to libreadline. > > I don't want to divert the focus from the theread, but this subject has a > controversial potential, in my opinion. > I participated in a speech on another list, where I make contributions (IUP > library: https://www.tecgraf.puc-rio.br/iup/). > Where a user, upon discovering that two sub-libraries, were GPL licenses, > caused an uproar, bringing the speech to Mr.Stallman himself. > In short, the best thing for the project will be to remove the two GPL > sub-libraries. We aleady try to do that by trying to use BSD-licensed libedit if installed: https://github.com/freebsd/freebsd/tree/master/lib/libedit https://certif.com/spec_print/readline.html I would love to see libedit fully functional so we don't need to rely on libreadline anymore, but I seem to remember there are a few libreadline features that libedit doesn't implement, so we use libreadline if it is already installed. (I am still not clear if dynamic linking is a GPL violation.) -- Bruce Momjian https://momjian.us EnterpriseDB https://enterprisedb.com The usefulness of a cup is in its emptiness, Bruce Lee
Re: Fwd: PostgreSQL: WolfSSL support
On Sat, Jun 27, 2020 at 3:37 PM Christoph Berg wrote: > Re: Jonah H. Harris > > Somewhere, I recall seeing an open-source OpenSSL compatibility wrapper > for > > WolfSSL. Assuming that still exists, this patch seems entirely > unnecessary. > > Unless you actually tried. Did you? It worked for me in the past on a similarly large system... -- Jonah H. Harris
Re: Fwd: PostgreSQL: WolfSSL support
Em sáb., 27 de jun. de 2020 às 18:23, Bruce Momjian escreveu: > On Sat, Jun 27, 2020 at 06:14:21PM -0300, Ranier Vilela wrote: > > Em sáb., 27 de jun. de 2020 às 16:40, Bruce Momjian > > escreveu: > > > > On Sat, Jun 27, 2020 at 04:22:51PM -0300, Ranier Vilela wrote: > > > Em sáb., 27 de jun. de 2020 às 09:50, Christoph Berg < > m...@debian.org> > > > escreveu: > > > > > > Re: Peter Eisentraut > > > > What would be the advantage of using wolfSSL over OpenSSL? > > > > > > Avoiding the OpenSSL-vs-GPL linkage problem with readline. > > > > > > I'm curious, how do you intend to solve a linking problem with > > > OpenSSL-vs-GPL-readline, with another GPL product? > > > > I assume you can use wolfSSL as long as the result is GPL, which is > the > > same requirement libreadline causes for Postgres, particularly if > > Postgres is statically linked to libreadline. > > > > I don't want to divert the focus from the theread, but this subject has a > > controversial potential, in my opinion. > > I participated in a speech on another list, where I make contributions > (IUP > > library: https://www.tecgraf.puc-rio.br/iup/). > > Where a user, upon discovering that two sub-libraries, were GPL licenses, > > caused an uproar, bringing the speech to Mr.Stallman himself. > > In short, the best thing for the project will be to remove the two GPL > > sub-libraries. > > We aleady try to do that by trying to use BSD-licensed libedit if > installed: > > https://github.com/freebsd/freebsd/tree/master/lib/libedit > https://certif.com/spec_print/readline.html > > I would love to see libedit fully functional so we don't need to rely on > libreadline anymore, but I seem to remember there are a few libreadline > features that libedit doesn't implement, so we use libreadline if it is > already installed. (I am still not clear if dynamic linking is a GPL > violation.) > Personally, the dynamic link does not hurt the GPL. But some people, do not think so, it was also unclear what Mr Stallman thinks of the subject (dynamic link). regards, Ranier Vilela
Re: Fwd: PostgreSQL: WolfSSL support
On Sat, Jun 27, 2020 at 06:25:21PM -0300, Ranier Vilela wrote: > Personally, the dynamic link does not hurt the GPL. > But some people, do not think so, it was also unclear what Mr Stallman thinks > of the subject (dynamic link). I think Stallman says the courts have to decide, which kind of makes sense. -- Bruce Momjian https://momjian.us EnterpriseDB https://enterprisedb.com The usefulness of a cup is in its emptiness, Bruce Lee
Re: Fwd: PostgreSQL: WolfSSL support
Christoph Berg writes: > Re: Ranier Vilela >> Isn't LIbreSSL a better alternative? > I don't know. It should work all right --- it's the default ssl library on OpenBSD and some other platforms, so we have some buildfarm coverage for it. (AFAICT, none of the OpenBSD machines are running the ssl test, but I tried that just now on OpenBSD 6.4 and it passed.) However, I'm not exactly convinced that using LibreSSL gets you out of the license compatibility bind. LibreSSL is a fork of OpenSSL, and IIUC a fairly hostile fork at that, so how did they get permission to remove OpenSSL's problematic license clauses? Did they remove them at all? A quick look at the header files on my OpenBSD installation shows a whole lot of ancient copyright text. regards, tom lane
Re: Fwd: PostgreSQL: WolfSSL support
Bruce Momjian writes: > On Sat, Jun 27, 2020 at 06:25:21PM -0300, Ranier Vilela wrote: >> Personally, the dynamic link does not hurt the GPL. >> But some people, do not think so, it was also unclear what Mr Stallman thinks >> of the subject (dynamic link). > I think Stallman says the courts have to decide, which kind of makes > sense. This subject (openssl vs readline) has been discussed to death in the past, with varying opinions --- for example, Red Hat's well-qualified lawyers think building PG with openssl + readline poses no problem, Debian's lawyers apparently think otherwise. Please see the archives before re-opening the topic. And, if you're not a lawyer, it's quite unlikely you'll bring any new insights. regards, tom lane
Re: Fwd: PostgreSQL: WolfSSL support
On Sat, Jun 27, 2020 at 05:46:17PM -0400, Tom Lane wrote: > Bruce Momjian writes: > > On Sat, Jun 27, 2020 at 06:25:21PM -0300, Ranier Vilela wrote: > >> Personally, the dynamic link does not hurt the GPL. > >> But some people, do not think so, it was also unclear what Mr Stallman > >> thinks > >> of the subject (dynamic link). > > > I think Stallman says the courts have to decide, which kind of makes > > sense. > > This subject (openssl vs readline) has been discussed to death in the > past, with varying opinions --- for example, Red Hat's well-qualified > lawyers think building PG with openssl + readline poses no problem, > Debian's lawyers apparently think otherwise. Please see the archives > before re-opening the topic. And, if you're not a lawyer, it's quite > unlikely you'll bring any new insights. I think the larger problem is that different jurisdictions, e.g., USA, EU, could rule differently. Also, the FSF is not the only organization that can bring violation suits, e.g. Oracle with MySQL, so there probably isn't one answer to this until it is thoroughly litigated. -- Bruce Momjian https://momjian.us EnterpriseDB https://enterprisedb.com The usefulness of a cup is in its emptiness, Bruce Lee
Re: pg_bsd_indent compiles bytecode
Hi, On 2020-06-27 17:12:57 -0400, Tom Lane wrote: > Bruce Momjian writes: > > On Tue, Jun 23, 2020 at 05:56:10PM -0400, Alvaro Herrera wrote: > >> I just noticed that when you compile pg_bsd_indent with a PG tree that > >> has --enable-jit (or something around that), then it compiles the source > >> files into bytecode. > >> Obviously this is not harmful since these files don't get installed, but > >> I wonder if our compiles aren't being excessively generous. > > > Are you saying pg_bsd_indent indents the JIT output files? I assumed > > people only ran pg_bsd_indent on dist-clean trees. > > I think what he means is that when pg_bsd_indent absorbs the CFLAGS > settings that PG uses (because it uses the pgxs build infrastructure), > it ends up also building .bc files. Hm. Yea, I think I see the problem. OBJS should only be expanded if MODULE_big is set. > I wouldn't care about this particularly for pg_bsd_indent itself, > but it suggests that we're probably building .bc files for client-side > files, which seems like a substantial waste of time. Maybe we need > different CFLAGS for client and server? I don't think it'll apply to most in-tree client side programs, so it shouldn't be too bad currently. Still should fix it, of course. I can test that with another program, but for some reason pg_bsd_indent fails to build against 13/HEAD, but builds fine against 12. Not sure yet what's up: /usr/bin/ld.gold: error: indent.o: multiple definition of 'input' /usr/bin/ld.gold: args.o: previous definition here /usr/bin/ld.gold: error: indent.o: multiple definition of 'output' /usr/bin/ld.gold: args.o: previous definition here /usr/bin/ld.gold: error: indent.o: multiple definition of 'labbuf' /usr/bin/ld.gold: args.o: previous definition here ... Greetings, Andres Freund
Re: pg_bsd_indent compiles bytecode
Andres Freund writes: > I can test that with another program, but for some reason pg_bsd_indent > fails to build against 13/HEAD, but builds fine against 12. Not sure yet > what's up: Huh. Works here on RHEL8 ... what platform are you using? regards, tom lane
Re: pg_bsd_indent compiles bytecode
Andres Freund writes: > On 2020-06-27 17:12:57 -0400, Tom Lane wrote: >> I wouldn't care about this particularly for pg_bsd_indent itself, >> but it suggests that we're probably building .bc files for client-side >> files, which seems like a substantial waste of time. Maybe we need >> different CFLAGS for client and server? > I don't think it'll apply to most in-tree client side programs, so it > shouldn't be too bad currently. Still should fix it, of course. Having now checked, there isn't any such problem. No .bc files are getting built except in src/backend and in other modules that feed into the backend, such as src/timezone and most of contrib. I do see .bc files getting built for pg_bsd_indent, as Alvaro reported. Seems like it must be a bug in the pgxs make logic, not anything more generic. regards, tom lane
Re: PostgreSQL: WolfSSL support
On Saturday, June 27, 2020, Tom Lane wrote: > Christoph Berg writes: > > Re: Ranier Vilela > >> Isn't LIbreSSL a better alternative? > > > I don't know. > > It should work all right --- it's the default ssl library on OpenBSD > and some other platforms, so we have some buildfarm coverage for it. > (AFAICT, none of the OpenBSD machines are running the ssl test, but > I tried that just now on OpenBSD 6.4 and it passed.) > > However, I'm not exactly convinced that using LibreSSL gets you out > of the license compatibility bind. LibreSSL is a fork of OpenSSL, > and IIUC a fairly hostile fork at that, so how did they get permission > to remove OpenSSL's problematic license clauses? Did they remove them > at all? A quick look at the header files on my OpenBSD installation > shows a whole lot of ancient copyright text. As I understand it, LibreSSL's objective is not to change the license of existing code but to deprecate the features they don't want in it. They also include in LibreSSL a new libtls, which is ISC-licensed, but that's another story. > regards, tom lane
Re: update substring pattern matching syntax
Hello Peter, v2 patches apply cleanly, compile, global check ok, citext check ok, doc gen ok. No further comments. As I did not find an entry in the CF, I did nothing about tagging it "ready". -- Fabien.
Re: Raising stop and warn limits
On Sun, Jun 21, 2020 at 01:35:13AM -0700, Noah Misch wrote: > In brief, I'm proposing to raise xidWrapLimit-xidStopLimit to 3M and > xidWrapLimit-xidWarnLimit to 40M. Likewise for mxact counterparts. Here's the patch for it. > 1. VACUUM, to advance a limit, may assign IDs subject to one of the limits. >VACUUM formerly consumed XIDs, not mxacts. It now consumes mxacts, not >XIDs. Correction: a lazy_truncate_heap() at wal_level!=minimal does assign an XID, so XID consumption is impossible with "VACUUM (TRUNCATE false)" but possible otherwise. "VACUUM (ANALYZE)", which a DBA might do by reflex, also assigns XIDs. (These corrections do not affect $SUBJECT.) Author: Noah Misch Commit: Noah Misch Change XID and mxact limits to warn at 40M and stop at 3M. We have edge-case bugs when assigning values in the last few dozen pages before the wrap limit. We may introduce similar bugs in the future. At default BLCKSZ, this makes such bugs unreachable outside of single-user mode. Also, when VACUUM began to consume mxacts, multiStopLimit did not change to compensate. pg_upgrade may fail on a cluster that was already printing "must be vacuumed" warnings. Follow the warning's instructions to clear the warning, then run pg_upgrade again. One can still, peacefully consume 98% of XIDs or mxacts, so DBAs need not change routine VACUUM settings. Reviewed by FIXME. Discussion: https://postgr.es/m/20200621083513.ga3074...@rfd.leadboat.com diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml index 612e4cb..a28ea56 100644 --- a/doc/src/sgml/maintenance.sgml +++ b/doc/src/sgml/maintenance.sgml @@ -608,10 +608,10 @@ SELECT datname, age(datfrozenxid) FROM pg_database; If for some reason autovacuum fails to clear old XIDs from a table, the system will begin to emit warning messages like this when the database's -oldest XIDs reach eleven million transactions from the wraparound point: +oldest XIDs reach forty million transactions from the wraparound point: -WARNING: database "mydb" must be vacuumed within 10985967 transactions +WARNING: database "mydb" must be vacuumed within 39985967 transactions HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database. @@ -621,7 +621,7 @@ HINT: To avoid a database shutdown, execute a database-wide VACUUM in that data be able to advance the database's datfrozenxid.) If these warnings are ignored, the system will shut down and refuse to start any new -transactions once there are fewer than 1 million transactions left +transactions once there are fewer than three million transactions left until wraparound: @@ -629,7 +629,7 @@ ERROR: database is not accepting commands to avoid wraparound data loss in data HINT: Stop the postmaster and vacuum that database in single-user mode. -The 1-million-transaction safety margin exists to let the +The three-million-transaction safety margin exists to let the administrator recover without data loss, by manually executing the required VACUUM commands. However, since the system will not execute commands once it has gone into the safety shutdown mode, diff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c index ce84dac..475f5ed 100644 --- a/src/backend/access/transam/multixact.c +++ b/src/backend/access/transam/multixact.c @@ -2217,28 +2217,24 @@ SetMultiXactIdLimit(MultiXactId oldest_datminmxid, Oid oldest_datoid, multiWrapLimit += FirstMultiXactId; /* -* We'll refuse to continue assigning MultiXactIds once we get within 100 -* multi of data loss. 
-* -* Note: This differs from the magic number used in -* SetTransactionIdLimit() since vacuum itself will never generate new -* multis. XXX actually it does, if it needs to freeze old multis. +* We'll refuse to continue assigning MultiXactIds once we get within 3M +* multi of data loss. See SetTransactionIdLimit. */ - multiStopLimit = multiWrapLimit - 100; + multiStopLimit = multiWrapLimit - 300; if (multiStopLimit < FirstMultiXactId) multiStopLimit -= FirstMultiXactId; /* -* We'll start complaining loudly when we get within 10M multis of the -* stop point. This is kind of arbitrary, but if you let your gas gauge -* get down to 1% of full, would you be looking for the next gas station? -* We need to be fairly liberal about this number because there are lots -* of scenarios where most transactions are done by automatic clients that -* won't pay attention to warnings. (No, we're not gonna make this +* We'll start complaining loudly when we get within 40M multis of data +* loss. This is kind of arbitrary, but if you let your