Re: [RFC] building postgres with meson - v11
On 17.08.22 23:53, Andres Freund wrote:
> Any comment on the pg_regress_ecpg commit? I'd like to get that out of the
> way, and it seems considerably cleaner than the hackery we do right now to
> make VPATH builds work.

That one looks like a very good improvement.
Re: Proposal: CREATE/ALTER DOMAIN ... STORAGE/COMPRESSION = ...
On 17.08.22 11:43, Aleksander Alekseev wrote:
> Do you think it will be useful to specify STORAGE and/or COMPRESSION for
> domains?

Domains are supposed to be a logical construct that restricts the accepted
values for a data type (it's in the name "domain"). Expanding that into a
general "column definition macro" seems outside their scope. For example, what
would be the semantics of this when such a domain is a function argument or
return value?

> As an example, this will allow creating an alias for TEXT with EXTERNAL
> storage strategy. In other words, to do the same we do with ALTER TABLE,
> but for types. This feature is arguably not something most people are going
> to use, but it shouldn't be difficult to implement and/or maintain either.

Considering how difficult it has been to maintain domains in just their
current form, I don't believe that.
Re: static libpq (and other libraries) overwritten on aix
Hi,

On 2022-08-18 22:56:43 -0700, Noah Misch wrote:
> > On 2022-08-17 21:59:29 -0700, Noah Misch wrote:
> > > Along the lines of Robert's comment, it could be a nice code
> > > beautification to use a different suffix for the short-lived .a file.
> > > Perhaps _so_inputs.a.
> >
> > Agreed, it'd be an improvement.
> >
> > Afaict we could just stop building the intermediary static lib. Afaict
> > the MKLDEXPORT path isn't needed for libraries without an exports.txt
> > because the linker defaults to exporting "most" symbols
>
> If that works, great.

I looked at that. It's not too hard to make it work. But while doing so I
encountered some funny bits.

As far as I can tell, the way we build shared libraries on AIX with gcc isn't
correct: without -shared, gcc won't know that it's building a shared library,
which afaict will prevent gcc from generating correct unwind info, and we end
up with a statically linked copy of libgcc each time.

The naive thing of just adding -shared fails, but that's our fault:

  ldd pgoutput.so
  pgoutput.so needs:
           Cannot find libgcc_s.a(shr.o)
           /usr/lib/libc.a(shr_64.o)
           /unix
           /usr/lib/libcrypt.a(shr_64.o)

Makefile.aix has:

  # -blibpath must contain ALL directories where we should look for libraries
  libpath := $(shell echo $(subst -L,:,$(filter -L/%,$(LDFLAGS))) | sed -e's/ //g'):/usr/lib:/lib

but that's insufficient for gcc, because it won't find gcc's runtime lib. We
could force a build of the statically linked libgcc, but once gcc knows it's
generating a shared library, a static libgcc unfortunately blows up the size
of the output considerably.

So I think we need something like

  ifeq ($(GCC), yes)
  libpath := $(libpath):$(dir $(shell gcc -print-libgcc-file-name))
  endif

although deferring the computation of that would be nicer, but that would
require some cleanup first.

With that, libraries do shrink a bit. E.g. cube.so goes from 140k to 96k.

Afaict there's no reason to generate lib.a for extension .so's, right?
We have plenty of detritus that's vaguely AIX related. The common.mk rule to generate SUBSYS.o isn't used (mea culpa), and backend/Makefile's postgres.o rule hasn't been used for well over 20 years. I'll send in a patch series tomorrow, too tired for today. Greetings, Andres Freund
Re: shadow variables - pg15 edition
On Fri, 19 Aug 2022 at 16:28, Justin Pryzby wrote:
> Let me know what I can do when it's time for round two.

I pushed the modified 0001-0008 patches earlier today, and also the one I
wrote to fix up the 36 warnings about "expected" being shadowed.

I looked through a bunch of your remaining patches and was a bit unexcited to
see many more renamings such as:

- List	   *querytree_list;
+ List	   *this_querytree_list;

I don't think this sort of thing is an improvement. However, one category of
these changes that I do like are the ones where we can move the variable into
an inner scope. Out of your renaming 0009-0026 patches, these are: 0013,
0014, 0017, 0018.

I feel like having the variable in scope for the minimal amount of time makes
the code cleaner, and these seem like good next steps because: a) no variable
needs to be renamed; b) any backpatching issue is more likely to lead to a
compilation failure rather than to using the wrong variable.

Likely 0016 is a subcategory of the above, as if you modified that patch to
follow this rule then you'd have to declare the variable a few times. I think
that category is less interesting and we can maybe consider those after we're
done with the simpler ones.

Do you want to submit a series of patches that fixes all of the remaining
warnings that are in this category? Once those are done we can consider the
best ways to fix the remaining ones, and whether we want to fix them at all.
Feel free to gzip the patches up if the number is large.

David
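To illustrate the scope-narrowing category with a toy example (this is invented code, not from any of the patches; all names are hypothetical):

```c
#include <assert.h>

/*
 * Toy illustration (not from the patches) of the preferred fix for a
 * shadow warning: rather than renaming the inner variable, declare it
 * in the narrowest scope that needs it.  The variable is then live for
 * the minimal amount of time, and a backpatch that still expects an
 * outer declaration fails to compile instead of silently using the
 * wrong variable.
 */
static int
sum_of_squares(int n)
{
	int			total = 0;

	for (int i = 1; i <= n; i++)
	{
		/* declared here, in the inner scope, instead of at the top */
		int			square = i * i;

		total += square;
	}

	return total;
}
```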
Re: ICU for global collation
On 2022-08-17 19:53, Julien Rouhaud wrote:
> Good catch. There's unfortunately not a lot of regression tests for
> ICU-initialized clusters. I'm wondering if the build-farm client could be
> taught about the locale provider rather than assuming libc. Clearly that
> wouldn't have caught this issue, but it should still increase the coverage
> a bit (I'm thinking of the recent problem with the abbreviated keys).

Looking at installchecks with different locales (e.g. see [1] with
sv_SE.UTF-8) - why not?..

diff --git a/src/backend/commands/dbcommands.c b/src/backend/commands/dbcommands.c
index b31a30550b025d48ba3cc250dc4c15f41f9a80be..17a2942341e528c01182fb6d4878580f2706bec9 100644
--- a/src/backend/commands/dbcommands.c
+++ b/src/backend/commands/dbcommands.c
@@ -1048,6 +1048,8 @@ createdb(ParseState *pstate, const CreatedbStmt *stmt)
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
 					 errmsg("ICU locale must be specified")));
 	}
+	else
+		dbiculocale = NULL;
 
 	if (dblocprovider == COLLPROVIDER_ICU)
 		check_icu_locale(dbiculocale);

> I think it would be better to do that in the various variable
> initialization. Maybe switch the dblocprovider and dbiculocale
> initialization, and only initialize dbiculocale if dblocprovider is
> COLLPROVIDER_ICU.
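A minimal sketch of the ordering being suggested (an invented helper for illustration, not the actual dbcommands.c code; names merely mirror createdb()'s locals): resolve the provider first, then carry an ICU locale only when the resolved provider is ICU.

```c
#include <assert.h>
#include <stddef.h>

#define COLLPROVIDER_ICU	'i'

/*
 * Hypothetical distillation of the suggestion: determine the locale
 * provider (falling back to the template's provider) before touching
 * dbiculocale, and leave dbiculocale NULL whenever the provider is
 * not ICU.  Illustration only.
 */
static const char *
resolve_iculocale(const char *dbiculocale, char dblocprovider,
				  const char *src_iculocale, char src_locprovider)
{
	if (dblocprovider == '\0')
		dblocprovider = src_locprovider;

	if (dblocprovider != COLLPROVIDER_ICU)
		return NULL;			/* never carry an ICU locale for libc */

	return dbiculocale ? dbiculocale : src_iculocale;
}
```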
diff --git a/src/backend/commands/dbcommands.c b/src/backend/commands/dbcommands.c
index b31a30550b025d48ba3cc250dc4c15f41f9a80be..883f381f3453142790f728a3725586cebe2e2f20 100644
--- a/src/backend/commands/dbcommands.c
+++ b/src/backend/commands/dbcommands.c
@@ -1012,10 +1012,10 @@ createdb(ParseState *pstate, const CreatedbStmt *stmt)
 		dbcollate = src_collate;
 	if (dbctype == NULL)
 		dbctype = src_ctype;
-	if (dbiculocale == NULL)
-		dbiculocale = src_iculocale;
 	if (dblocprovider == '\0')
 		dblocprovider = src_locprovider;
+	if (dbiculocale == NULL && dblocprovider == COLLPROVIDER_ICU)
+		dbiculocale = src_iculocale;
 
 	/* Some encodings are client only */
 	if (!PG_VALID_BE_ENCODING(encoding))

Then it seemed to me that it was easier to first get all the parameters from
the template database as usual and then process them as needed. But with your
suggestion the failed assertion will check the code above more accurately...

> This discrepancy between createdb and CREATE DATABASE looks like an
> oversight, as createdb indeed interprets --locale as:
>
> 	if (locale)
> 	{
> 		if (lc_ctype)
> 			pg_fatal("only one of --locale and --lc-ctype can be specified");
> 		if (lc_collate)
> 			pg_fatal("only one of --locale and --lc-collate can be specified");
> 		lc_ctype = locale;
> 		lc_collate = locale;
> 	}
>
> AFAIK the fallback in the CREATE DATABASE case is expected as POSIX locale
> names should be accepted by icu, so this should work for createdb too.

Oh, great, thanks!

> > I spent some time looking at the ICU api trying to figure out if using a
> > posix locale name (e.g. en_US) was actually compatible with an ICU locale
> > name. It seems that ICU accept any of 'en-us', 'en-US', 'en_us' or
> > 'en_US' as the same locale, but I might be wrong. I also didn't find a
> > way to figure out how to ask ICU if the locale identifier passed is
> > complete garbage or not.
> > One sure thing is that the system collations we import are of the form
> > 'en-us', so it seems weird to have this form in pg_collation and by
> > default another form in pg_database.
>
> Yeah it seems to be inconsistent about that. The locale ID documentation
> appears to indicate that "en_US" is the canonical form, but when you ask it
> to list all the locales it knows about it returns "en-US".

Yeah, I saw that too when checking if POSIX locale names were valid, and
that's not great.

I'm sorry, but IIUC pg_import_system_collations uses uloc_getAvailable to get
the locale ID and then specifically calls uloc_toLanguageTag?..

I don't think that initdb --collation-provider icu should be allowed without
--icu-locale, same for --collation-provider libc *with* --icu-locale.

> initdb has some specific processing to transform the default libc locale to
> something more appropriate, but as far as I can see createdb / CREATE
> DATABASE aren't doing that. It seems inconsistent, and IMHO another reason
> why defaulting to the libc locale looks like a bad idea.

This has all been removed. The separate ICU locale option should now be
required everywhere (initdb, createdb, CREATE DATABASE).

If it's a feature and not a bug in CREATE DATABASE, why shouldn't it work in
initdb too? Here we define locale/lc_collate/lc_ctype for the first 3
databases in the cluster in much the same way...

P.S. FYI there seems to be a bug for very old ICU versions: in master (92fc
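The equivalence observed in the thread can be illustrated with a small comparison helper. This is not the ICU API (ICU's own canonicalization, e.g. uloc_canonicalize()/uloc_toLanguageTag(), is what actually decides); it is an invented function that merely mimics the behavior described, where case and the '-'/'_' separator are not significant:

```c
#include <assert.h>
#include <ctype.h>

/*
 * Illustration only: 'en-us', 'en-US', 'en_us' and 'en_US' name the
 * same locale if identifiers are compared case-insensitively with
 * '-' and '_' treated as the same separator.  Hypothetical helper,
 * not part of ICU or PostgreSQL.
 */
static int
locale_ids_equivalent(const char *a, const char *b)
{
	for (; *a && *b; a++, b++)
	{
		char		ca = (*a == '-') ? '_' : (char) tolower((unsigned char) *a);
		char		cb = (*b == '-') ? '_' : (char) tolower((unsigned char) *b);

		if (ca != cb)
			return 0;
	}
	return *a == *b;			/* both strings must end together */
}
```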
Re: including pid's for `There are XX other sessions using the database`
On Fri, Aug 19, 2022 at 9:31 PM Euler Taveira wrote:
> On Fri, Aug 19, 2022, at 2:10 PM, Zhihong Yu wrote:
> > I want to poll the community on whether including proc->pid's in the
> > error message would be useful for troubleshooting.
>
> Such a message is only useful as a parameter for a pg_stat_activity query.
> You don't need the PID list if you already have the most important
> information: the database name. I don't think revealing the current session
> PIDs from the database you want to drop will buy you anything. It could be
> a long list and it does not help you to solve the issue: why wasn't that
> database removed?
>
> Besides that, if you know that there is a possibility that a connection is
> open, you can always use the FORCE option. The old/other alternative is to
> use a query like
>
>     SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'foo';
>
> (possibly combined with a REVOKE CONNECT or pg_hba.conf modification)
> before executing DROP DATABASE.
>
> --
> Euler Taveira
> EDB https://www.enterprisedb.com/

Thanks for responding. Since pg_stat_activity shows fewer connections than
the number revealed in the error message, I am not sure the above query would
terminate all connections for the database to be dropped.
Re: sslinfo extension - add notbefore and notafter timestamps
> On 20 Aug 2022, at 01:00, Cary Huang wrote:
> I noticed that the sslinfo extension does not have functions to return the
> current client certificate's notbefore and notafter timestamps, which are
> also quite important attributes in an X509 certificate. The attached patch
> adds 2 functions to get the notbefore and notafter timestamps from the
> currently connected client certificate.

Off the cuff that doesn't seem like a bad idea, but I wonder if we should add
them to pg_stat_ssl (or both) instead, if we deem them valuable?

Re the patch, it would be nice to move the logic in ssl_client_get_notafter
and the _notbefore counterpart to a static function, since they are copies of
each other.

--
Daniel Gustafsson		https://vmware.com/
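A structural sketch of the suggested refactoring, assuming nothing about the actual patch: both exported functions delegate to one static worker and differ only in the accessor they pass. MockCert and the long timestamps are invented stand-ins; the real sslinfo code would operate on the client X509 certificate (presumably via OpenSSL's X509_get0_notBefore()/X509_get0_notAfter()) and return a timestamp type.

```c
#include <assert.h>

/* Invented stand-in for the certificate; illustration only. */
typedef struct MockCert
{
	long		not_before;
	long		not_after;
} MockCert;

/*
 * The shared static worker: any validation/conversion logic common to
 * the notbefore and notafter cases is written once here, and the only
 * varying part is injected as a function pointer.
 */
static long
cert_get_timestamp(const MockCert *cert, long (*extract) (const MockCert *))
{
	return extract(cert);
}

static long
extract_not_before(const MockCert *cert)
{
	return cert->not_before;
}

static long
extract_not_after(const MockCert *cert)
{
	return cert->not_after;
}
```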
Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher
Hi, I'm a little late to catch up with your comments, but here are my replies: > My answer for the above assumes that your question is regarding what > happens if you ANALYZE on a partitioned table. If your question is > something different, please let me know. > > > > I was talking about inheritance cases, something like: > create table tbl1 (a int); > create table tbl1_part1 (b int) inherits (tbl1); > create table tbl1_part2 (c int) inherits (tbl1); > > What we do in such cases is documented as: "if the table being > analyzed has inheritance children, ANALYZE gathers two sets of > statistics: one on the rows of the parent table only, and a second > including rows of both the parent table and all of its children. This > second set of statistics is needed when planning queries that process > the inheritance tree as a whole. The child tables themselves are not > individually analyzed in this case." Oh, I haven't considered inherited tables. That seems right, the statistics of the children are not updated when the parent is analyzed. > Now, the point I was worried about was what if the changes in child > tables (*_part1, *_part2) are much more than in tbl1? In such cases, > we may not invalidate child rel entries, so how will logical > replication behave for updates/deletes on child tables? There may not > be any problem here but it is better to do some analysis of such cases > to see how it behaves. > I also haven't observed any specific issues. In the end, when the user (or autovacuum) does ANALYZE on the child, it is when the statistics are updated for the child. Although I do not have much experience with inherited tables, this sounds like the expected behavior? I also pushed a test covering inherited tables. First, a basic test on the parent. Then, show that updates on the parent can also use indexes of the children. Also, after an ANALYZE on the child, we can re-calculate the index and use the index with a higher cardinality column. 
> > Also, for the majority of the use-cases, I think we'd probably expect an
> > index on a column with high cardinality -- hence use index scan. So,
> > bitmap index scans are probably not going to be that common.
>
> You are probably right here, but I don't think we can make such
> assumptions. I think the safest way to avoid any regression here is to
> choose an index when the planner selects an index scan. We can always
> extend it later to bitmap scans if required. We can add a comment
> indicating the same.

Alright, I got rid of the bitmap scans. Though, it caused a few of the new
tests to fail. I think because of the data size/distribution, the planner
picks bitmap scans. To make the tests consistent and small, I set
`enable_bitmapscan` to `off` for this new test file. Does that sound ok to
you? Or should we change the tests to make sure they genuinely use index
scans?

> + /*
> +  * For insert-only workloads, calculating the index is not necessary.
> +  * As the calculation is not expensive, we are fine to do here (instead
> +  * of during first update/delete processing).
> +  */
>
> I think here instead of talking about cost, we should mention that it is
> quite an infrequent operation, i.e., performed only when we first perform
> an operation on the relation or after invalidation. This is because I
> think the cost is relative.

Changed, does that look better?

> + /*
> +  * Although currently it is not possible for planner to pick a
> +  * partial index or indexes only on expressions,
>
> It may be better to expand this comment by describing a bit why it is not
> possible in our case. You might want to give the function reference where
> it is decided.

Makes sense, added some more information.

Thanks,
Onder

v7_0001_use_index_on_subs_when_pub_rep_ident_full.patch
Description: Binary data
Re: Issue in postgresql installation - Target version Postgresql 14.
Dear Bruce Momjian, Thanks for clarifying, got the issue resolved with installation of lz4 independently. On Fri, Aug 19, 2022 at 9:16 PM Bruce Momjian wrote: > > First, this is not an appropriate question for hackers. Second, this is > a question for the package manager where you got the pre-built software. > > --- > > On Fri, Aug 19, 2022 at 03:39:29AM +0530, kavya chandren wrote: > > Dear team, > > > > We are facing issues during installation of postgresql at our > environment. > > > > > > This command completed with no errors. > > > > dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/ > > EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm > > > > Then we ran this command > > > > dnf install postgresql14-server-14.5-1PGDG.rhel8.x86_64.rpm > > postgresql14-14.5-1PGDG.rhel8.x86_64.rpm > > postgresql14-libs-14.5-1PGDG.rhel8.x86_64.rpm > > > > and got the following messages > > > > Updating Subscription Management repositories. > > > > This system is registered to Red Hat Subscription Management, but is not > > receiving updates. You can use subscription-manager to assign > subscriptions. > > > > Last metadata expiration check: 0:02:07 ago on Thu 18 Aug 2022 03:12:32 > PM EDT. > > Error: > > Problem 1: cannot install the best candidate for the job > > - nothing provides lz4 needed by postgresql14-14.5-1PGDG.rhel8.x86_64 > > Problem 2: package postgresql14-server-14.5-1PGDG.rhel8.x86_64 requires > > postgresql14(x86-64) = 14.5-1PGDG.rhel8, but none of the providers can be > > installed > > - cannot install the best candidate for the job > > - nothing provides lz4 needed by postgresql14-14.5-1PGDG.rhel8.x86_64 > > (try to add '--skip-broken' to skip uninstallable packages or '--nobest' > to use > > not only best candidate packages) > > > > Again tried `dnf update` and `dnf install -y postgresql14-server` , but > still > > stuck with below error: > > > > > > # dnf update > > Updating Subscription Management repositories. 
> >
> > This system is registered to Red Hat Subscription Management, but is not
> > receiving updates. You can use subscription-manager to assign
> > subscriptions.
> >
> > Last metadata expiration check: 0:42:29 ago on Thu 18 Aug 2022 04:42:25 PM EDT.
> > Dependencies resolved.
> > ===========================================================================
> >  Package                    Architecture    Version
> > ===========================================================================
> > Upgrading:
> >  pg_qualstats_13            x86_64          2.0.4-1.rhel8
> >  postgresql13               x86_64          13.8-1PGDG.rhel8
> >  postgresql13-contrib       x86_64          13.8-1PGDG.rhel8
> >  postgresql13-libs          x86_64          13.8-1PGDG.rhel8
> >  postgresql13-server        x86_64          13.8-1PGDG.rhel8
> >
> > Transaction Summary
> > ===========================================================================
> > Upgrade  5 Packages
> >
> > Total download size: 8.1 M
> > Is this ok [y/N]: y
> > Downloading Packages:
> > (1/5): pg_qualstats_13-2.0.4-1.rhel8.x86_64.rpm
> > (2/5): postgresql13-contrib-13.8-1PGDG.rhel8.x86_64.rpm
> > (3/5): postgresql13-13.8-1PGDG.rhel8.x86_64.rpm
> > (4/5): postgresql13-libs-13.8-1PGDG.rhel8.x86_64.rpm
> > (5/5): postgresql13-server-13.8-1PGDG.rhel8.x86_64.rpm
> > ---------------------------------------------------------------------------
> > Total
> > Running transaction check
> > Transaction check succeeded.
> > Running transaction test
> > Transaction test succeeded.
> > Running transaction
> >   Preparing        :
> >   Running scriptlet: postgresql13-libs-13.8-1PGDG.rhel8.x86_64
> >   Upgrading        : postgresql13-libs-13.8-1PGDG.rhel8.x86_64
> >   Running scriptlet: postgresql13-libs-13.8-1PGDG.rhel8.x86_64
> >   Upgrading        : postgresql13-13.8-1PGDG.rhel8.x86_64
> >   Running scriptlet: postgresql13-13.8-1PGDG.rhel8.x86_64
> >   Running scriptlet: postgresql13-server-13.8-1PGDG.rhel8.x86_64
> >   Upgrading        : postgresql13-server-13.8-1PGDG.rhel8.x86_64
> >   Running scriptlet: postgresql13-server-13.8-1PGDG.rhel8.x86_64
> >   Upgrading        : pg_qualstats_13-2.0.4-1.rhel8.x86_64
> >   Running scriptlet: pg_qualstats_13-2.0.4-1.rhel8.x86_64
> >   Upgrading        : postgresql13-contrib-13.8-1PGDG.rhel8.x86_64
> >   Cleanup          : postgresql13-contrib-13.1-3PGDG.rhel8.x86_64
> >   Cleanup          : pg_qualstats_13-2.0.3-1.rhel8.x86_64
> >   Running scriptlet: pg_qualstats_13-2.0.3-1.rhel8.x86_64
> >   Running scriptlet: postgresql13-server-13.1-3PGDG.rhel8.x86_64
> >   Cleanup          : postgresql13-server-13.1-3PGDG.rhel8.x86_64
> >   Running scriptlet: postgresql13-server-13.1-3PGDG.rhel8.x86_64
> >   Cleanup          : postgresql13-13.1-3PGDG.rhel8.x
Re: Schema variables - new implementation for Postgres 15
On 19-08-2022 at 17:29, Pavel Stehule wrote:
> On Fri, Aug 19, 2022 at 15:57, Pavel Stehule wrote:
> > Hi
> >
> > I am sending a fresh update
> >
> > - enhanced work with composite types - now the used composite type can be
> >   enhanced or reduced, and a stored value is converted to the expected
> >   format
> > - enhancing find_composite_type_dependencies to support session
> >   variables, so the type of any field of a used composite type cannot be
> >   changed
>
> update - fix cpp check
>
> v20220819-2-0001-Catalogue-support-for-session-variables.patch
> v20220819-2-0002-session-variables.patch
> v20220819-2-0003-typecheck-check-of-consistency-of-format-of-stored-v.patch
> v20220819-2-0004-LET-command.patch
> v20220819-2-0005-Support-of-LET-command-in-PLpgSQL.patch
> v20220819-2-0006-DISCARD-VARIABLES-command.patch
> v20220819-2-0007-Enhancing-psql-for-session-variables.patch
> v20220819-2-0008-Possibility-to-dump-session-variables-by-pg_dump.patch
> v20220819-2-0009-typedefs.patch
> v20220819-2-0010-Regress-tests-for-session-variables.patch
> v20220819-2-0011-fix.patch
> v20220819-2-0012-This-patch-changes-error-message-column-doesn-t-exis.patch
> v20220819-2-0013-documentation.patch

make check fails as a result of the errors in the attached
session_variables.out.
Erik

Regards

Pavel

CREATE SCHEMA svartest;
SET search_path = svartest;
CREATE VARIABLE var1 AS integer;
CREATE TEMP VARIABLE var2 AS text;
DROP VARIABLE var1, var2;
-- functional interface
CREATE VARIABLE var1 AS numeric;
CREATE ROLE var_test_role;
GRANT USAGE ON SCHEMA svartest TO var_test_role;
SET ROLE TO var_test_role;
-- should fail
SELECT var1;
ERROR:  permission denied for session variable var1
SET ROLE TO DEFAULT;
GRANT READ ON VARIABLE var1 TO var_test_role;
SET ROLE TO var_test_role;
-- should fail
LET var1 = 10;
ERROR:  permission denied for session variable var1
-- should work
SELECT var1;
 var1 
------
 
(1 row)

SET ROLE TO DEFAULT;
GRANT WRITE ON VARIABLE var1 TO var_test_role;
SET ROLE TO var_test_role;
-- should work
LET var1 = 333;
SET ROLE TO DEFAULT;
REVOKE ALL ON VARIABLE var1 FROM var_test_role;
CREATE OR REPLACE FUNCTION secure_var() RETURNS int AS $$ SELECT svartest.var1::int; $$ LANGUAGE sql SECURITY DEFINER;
SELECT secure_var();
 secure_var 
------------
        333
(1 row)

SET ROLE TO var_test_role;
-- should fail
SELECT svartest.var1;
ERROR:  permission denied for session variable var1
-- should work;
SELECT secure_var();
 secure_var 
------------
        333
(1 row)

SET ROLE TO DEFAULT;
EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM generate_series(1,100) g(v) WHERE v = var1;
                  QUERY PLAN                   
-----------------------------------------------
 Function Scan on pg_catalog.generate_series g
   Output: v
   Function Call: generate_series(1, 100)
   Filter: ((g.v)::numeric = var1)
(4 rows)

CREATE VIEW schema_var_view AS SELECT var1;
SELECT * FROM schema_var_view;
 var1 
------
  333
(1 row)

\c -
SET search_path = svartest;
-- should work still, but var will be empty
SELECT * FROM schema_var_view;
 var1 
------
 
(1 row)

LET var1 = pi();
SELECT var1;
       var1       
------------------
 3.14159265358979
(1 row)

-- we can see execution plan of LET statement
EXPLAIN (VERBOSE, COSTS OFF) LET var1 = pi();
         QUERY PLAN         
----------------------------
 SET SESSION VARIABLE
 Result
   Output: 3.14159265358979
(3 rows)

SELECT var1;
       var1       
------------------
 3.14159265358979
(1 row)

CREATE VARIABLE var3 AS int;
CREATE OR REPLACE FUNCTION inc(int) RETURNS int AS $$
BEGIN
  LET svartest.var3 = COALESCE(svartest.var3 + $1, $1);
  RETURN var3;
END;
$$ LANGUAGE plpgsql;
SELECT inc(1);
 inc 
-----
   1
(1 row)

SELECT inc(1);
 inc 
-----
   2
(1 row)

SELECT inc(1);
 inc 
-----
   3
(1 row)

SELECT inc(1) FROM generate_series(1,10);
 inc 
-----
   4
   5
   6
   7
   8
   9
  10
  11
  12
  13
(10 rows)

SET ROLE TO var_test_role;
-- should fail
LET var3 = 0;
ERROR:  permission denied for session variable var3
SET ROLE TO DEFAULT;
DROP VIEW schema_var_view;
DROP VARIABLE var1 CASCADE;
DROP VARIABLE var3 CASCADE;
-- composite variables
CREATE TYPE sv_xyz AS (x int, y int, z numeric(10,2));
CREATE VARIABLE v1 AS sv_xyz;
CREATE VARIABLE v2 AS sv_xyz;
\d v1
\d v2
LET v1 = (1,2,3.14);
LET v2 = (10,20,3.14*10);
-- should work too - there are prepared casts
LET v1 = (1,2,3.14);
SELECT v1;
     v1     
------------
 (1,2,3.14)
(1 row)

SELECT v2;
      v2       
---------------
 (10,20,31.40)
(1 row)

SELECT (v1).*;
 x | y |  z   
---+---+------
 1 | 2 | 3.14
(1 row)

SELECT (v2).*;
 x  | y  |   z   
----+----+-------
 10 | 20 | 31.40
(1 row)

SELECT v1.x + v1.z;
 ?column? 
----------
     4.14
(1 row)

SELECT v2.x + v2.z;
 ?column? 
----------
    41.40
(1 row)

-- access to composite fields should be safe too
-- should fail
SET ROLE TO var_test_role;
SELECT v2.x;
ERROR:  permission denied for session variable v2
SET ROLE TO DEFAULT;
DROP VARIABLE v1;
DROP VARIABLE v2;
REVOKE USAGE ON SCHEMA svartest FROM var_test_role;
DROP ROLE va
Re: Schema variables - new implementation for Postgres 15
On 20-08-2022 at 15:32, Erik Rijkers wrote:
> On 19-08-2022 at 17:29, Pavel Stehule wrote:
>
> make check fails as a result of the errors in the attached
> session_variables.out.

Sorry, that should have been this diffs file, of course (attached).

Erik

diff -U3 /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/expected/session_variables.out /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/results/session_variables.out
--- /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/expected/session_variables.out	2022-08-20 15:03:24.767103554 +0200
+++ /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/results/session_variables.out	2022-08-20 15:10:47.679172066 +0200
@@ -990,9 +990,9 @@
 ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;
 -- should to fail too (different type, different generation number);
 SELECT public.svar;
-   svar   
-----------
- (10,20,)
+   svar    
+-----------
+ (10,20,0)
 (1 row)
 
 LET public.svar = ROW(10,20,30);
@@ -,9 +,9 @@
 LET public.svar = (10, 20);
 ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;
 SELECT public.svar;
-   svar   
-----------
- (10,20,)
+    svar    
+------------
+ (10,20,16)
 (1 row)
 
 LET public.svar2 = (10, 20, 30);
Re: Schema variables - new implementation for Postgres 15
On Sat, Aug 20, 2022 at 15:36, Erik Rijkers wrote:
> On 20-08-2022 at 15:32, Erik Rijkers wrote:
> > On 19-08-2022 at 17:29, Pavel Stehule wrote:
> >
> > make check fails as a result of the errors in the attached
> > session_variables.out.
>
> Sorry, that should have been this diffs file, of course (attached).

It looks like some problem with not well initialized memory, but I have no
idea how it is possible. What are your configure options?

> Erik
Re: Schema variables - new implementation for Postgres 15
On 20-08-2022 at 15:41, Pavel Stehule wrote:
> On Sat, Aug 20, 2022 at 15:36, Erik Rijkers wrote:
> > On 20-08-2022 at 15:32, Erik Rijkers wrote:
> > > On 19-08-2022 at 17:29, Pavel Stehule wrote:
> > >
> > > make check fails as a result of the errors in the attached
> > > session_variables.out.
> >
> > Sorry, that should have been this diffs file, of course (attached).
>
> It looks like some problem with not well initialized memory, but I have no
> idea how it is possible. What are your configure options?

I compiled both assert-enabled and 'normal', and I only just noticed that the
assert-enabled one did pass the tests normally.

Below is the config that produced the failing tests:

./configure \
  --prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables \
  --bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables/bin.fast \
  --libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables/lib.fast \
  --with-pgport=6986 --quiet --enable-depend --with-openssl --with-perl \
  --with-libxml --with-libxslt --with-zlib --enable-tap-tests \
  --with-extra-version=_0820_schema_variables_1509 --with-lz4 --with-icu

(debian 9, gcc 12.2.0)

Erik
Re: Fix typo with logical connector (src/backend/commands/vacuumparallel.c)
On Sat, Aug 20, 2022 at 01:03, Amit Kapila wrote:
> On Fri, Aug 19, 2022 at 7:45 PM Ranier Vilela wrote:
> >
> > On Fri, Aug 19, 2022 at 10:28, Tom Lane wrote:
> >>
> >> Ranier Vilela writes:
> >> > At function parallel_vacuum_process_all_indexes there is a typo with
> >> > a logical connector. I think that correct is &&, because both of the
> >> > operands are bool types [1]. As a result, parallel vacuum workers can
> >> > be incorrectly enabled.
> >>
> >> Since they're bools, the C spec requires them to promote to integer
> >> 0 or 1, therefore the & operator will yield the desired result.
> >> So there's not going to be any incorrect behavior.
> >
> > So, my assumption is incorrect.
>
> Right, but as Tom pointed out it is still better to change this.

Sorry, I expressed myself badly. As Tom pointed out, it's not a bug, as I
stated in the first post. But it is still correct style to use the logical
connectors (&& and ||) with boolean operands, and && can even be a small
performance improvement, since it short-circuits and avoids evaluating the
second operand.

> However, I am not sure if we should backpatch this to PG15 as this won't
> lead to any incorrect behavior.

+1 for backpatching to PG15, too. It's certainly a safe change.

regards,
Ranier Vilela
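Tom's point can be demonstrated directly: for bool operands, bitwise `&` and logical `&&` compute the same truth table, because C guarantees that bools promote to 0 or 1 (the only behavioral difference is that `&&` short-circuits).

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Why the '&' was not a bug: bool operands promote to 0 or 1, so
 * bitwise '&' and logical '&&' agree on every input.  '&' always
 * evaluates both operands, while '&&' stops after a false left-hand
 * side, which is why '&&' remains the idiomatic choice.
 */
static bool
and_bitwise(bool a, bool b)
{
	return a & b;
}

static bool
and_logical(bool a, bool b)
{
	return a && b;
}
```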
Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)
On Fri, Aug 19, 2022 at 19:27, David Zhang wrote:
> Hi Ranier,

Hi David,

> Following the comment in commit 9fd45870c1436b477264c0c82eb195df52bc0919,
>
>     (The same could be done with appropriate memset() calls, but this
>     patch is part of an effort to phase out MemSet(), so it doesn't touch
>     memset() calls.)
>
> Should these obviously possible replacements of the standard library
> function "memset" be considered as well?

Yes, sure. In modern C compilers like clang 13 and above, gcc and msvc,
initialization with {0} is not a problem, because all bits are correctly
initialized to zero. However, some old compilers do not strictly follow that
behavior, so with structs it is not safe to use. But especially for arrays,
whose use doesn't depend on filling the holes, it's certainly safe and cheap
to use, which is the case here.

> For example, something like the attached one, which is focusing on the
> pageinspect extension only.

Surely you did, but it has to be said: it should be compiled and tested with
at least a make check. Looks like it's ok, LGTM.

regards,
Ranier Vilela
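For the array case discussed above, the two forms are interchangeable, which a small check makes concrete (the struct-padding caveat does not apply, since arrays of scalars have no holes):

```c
#include <assert.h>
#include <string.h>

/*
 * Sketch of the replacement being discussed: for a plain array, an
 * empty-brace initializer zeroes every element exactly as memset()
 * would.  (The old-compiler caveat concerns padding bytes in structs;
 * arrays of scalars have no padding, which is why the array case is
 * considered safe.)
 */
static int
zeroed_arrays_match(void)
{
	int			by_initializer[8] = {0};
	int			by_memset[8];

	memset(by_memset, 0, sizeof(by_memset));

	return memcmp(by_initializer, by_memset, sizeof(by_initializer)) == 0;
}
```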
Re: Schema variables - new implementation for Postgres 15
On Sat, Aug 20, 2022 at 03:55:07PM +0200, Erik Rijkers wrote:
> On 20-08-2022 at 15:41, Pavel Stehule wrote:
> > On Sat, Aug 20, 2022 at 15:36, Erik Rijkers wrote:
> > > On 20-08-2022 at 15:32, Erik Rijkers wrote:
> > > > On 19-08-2022 at 17:29, Pavel Stehule wrote:
> > > >
> > > > make check fails as a result of the errors in the attached
> > > > session_variables.out.
> > >
> > > Sorry, that should have been this diffs file, of course (attached).
> >
> > It looks like some problem with not well initialized memory, but I have
> > no idea how it is possible. What are your configure options?
>
> I compiled both assert-enabled and 'normal', and I only just noticed that
> the assert-enabled one did pass tests normally.
>
> Below is the config that produced the failing tests:
>
> ./configure
> --prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables
> --bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables/bin.fast
> --libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables/lib.fast
> --with-pgport=6986 --quiet --enable-depend --with-openssl --with-perl
> --with-libxml --with-libxslt --with-zlib --enable-tap-tests
> --with-extra-version=_0820_schema_variables_1509 --with-lz4 --with-icu

I also tried locally (didn't look at the patch yet), with debug/assert
enabled, and had a similar error:

diff -dU10 /Users/rjuju/git/postgresql/src/test/regress/expected/session_variables.out /Users/rjuju/git/pg/pgmaster_debug/src/test/regress/results/session_variables.out
--- /Users/rjuju/git/postgresql/src/test/regress/expected/session_variables.out	2022-08-20 22:25:17.0 +0800
+++ /Users/rjuju/git/pg/pgmaster_debug/src/test/regress/results/session_variables.out	2022-08-20 22:30:50.0 +0800
@@ -983,23 +983,23 @@
 -- should to fail
 SELECT public.svar;
   svar   
 ---------
  (10,20)
 (1 row)
 
 ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;
 -- should to fail too (different type, different generation number);
 SELECT
public.svar; - svar --- - (10,20,) +svar + + (10,20,2139062142) (1 row) LET public.svar = ROW(10,20,30); -- should be ok again for new value SELECT public.svar; svar (10,20,30) (1 row) @@ -1104,31 +1104,31 @@ (1 row) DROP VARIABLE public.svar; DROP TYPE public.svar_test_type; CREATE TYPE public.svar_test_type AS (a int, b int); CREATE VARIABLE public.svar AS public.svar_test_type; CREATE VARIABLE public.svar2 AS public.svar_test_type; LET public.svar = (10, 20); ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int; SELECT public.svar; - svar --- - (10,20,) +svar + + (10,20,16) (1 row) LET public.svar2 = (10, 20, 30); ALTER TYPE public.svar_test_type DROP ATTRIBUTE b; SELECT public.svar; - svar - (10,) + svar +- + (10,16) (1 row) SELECT public.svar2; svar2 - (10,30) (1 row)
Re: Goodbye Windows XP
On 2022-08-13 Sa 16:49, Andrew Dunstan wrote: > For some time I have been nursing along my old Windows XP instance, > which nowadays only builds release 10, which is due to go to EOL in a > few months. The machine has suddenly started having issues with git, and > I'm not really inclined to spend lots of time fixing it. XP itself is > now a very long time past EOL, so I think I'm just going to turn it off. > That will be the end for frogmouth, currawong and brolga. Right after I posted this it mysteriously started working again, so I guess we'll limp on till around December. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
Re: static libpq (and other libraries) overwritten on aix
On Sat, Aug 20, 2022 at 01:35:22AM -0700, Andres Freund wrote: > Afaict there's no reason to generate lib.a for extension .so's, right? Right. We install cube.so, not any *cube.a file.
Re: static libpq (and other libraries) overwritten on aix
Hi, On 2022-08-20 01:35:22 -0700, Andres Freund wrote: > I'll send in a patch series tomorrow, too tired for today. Here it goes. 0001 aix: Fix SHLIB_EXPORTS reference in VPATH builds That's mostly so I could even build. It's not quite right in the sense that we don't depend on the file, but that's a preexisting issue. Could be folded in with 0005, which fixes that aspect. Or it could be backpatched as the minimal fix. 0002 Remove SUBSYS.o rule in common.mk, hasn't been used in a long time 0003 Remove rule to generate postgres.o, not needed for 20+ years Both obvious, I think. 0004 aix: when building with gcc, tell gcc we're building a shared library That's the gcc -shared issue I explained in the email I'm replying to. We should probably consider building executables with -shared-libgcc too, that shrinks them a decent amount (e.g. 1371684 -> 1126765 for psql). But I've not done that here. 0005 aix: No need to use mkldexport when we want to export all symbols This makes the building of shared libraries a lot more similar to other platforms. Export files are only used when an exports.txt is present, and there are no more intermediary static libraries. 0006 configure: Expand -fvisibility checks to more compilers, add -qvisibility This isn't strictly speaking part of the same "thread" of work, but I don't want to touch aix more often than I have to... I'll post it in the other thread too. I did just test that this passes at least some tests on aix with xlc and solaris with sunpro. Greetings, Andres >From 0569092ec6f33ddf002ad73fb2630bbb80ffbe75 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 17 Aug 2022 13:04:33 -0700 Subject: [PATCH v1 1/6] aix: Fix SHLIB_EXPORTS reference in VPATH builds The dependencies here aren't quite right independent of vpath builds or not, but this at least makes vpath builds succeed. And it's pretty rare to change the exports.txt file anyway... 
--- src/Makefile.shlib | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/Makefile.shlib b/src/Makefile.shlib index 2af6192f0f3..6624fa79615 100644 --- a/src/Makefile.shlib +++ b/src/Makefile.shlib @@ -302,7 +302,7 @@ $(shlib): $(OBJS) | $(SHLIB_PREREQS) ifeq (,$(SHLIB_EXPORTS)) $(MKLDEXPORT) $(stlib) $(shlib) >$(exports_file) else - ( echo '#! $(shlib)'; $(AWK) '/^[^#]/ {printf "%s\n",$$1}' $(SHLIB_EXPORTS) ) >$(exports_file) + ( echo '#! $(shlib)'; $(AWK) '/^[^#]/ {printf "%s\n",$$1}' ${srcdir}/$(SHLIB_EXPORTS) ) >$(exports_file) endif $(COMPILER) -o $(shlib) $(stlib) -Wl,-bE:$(exports_file) $(LDFLAGS) $(LDFLAGS_SL) $(SHLIB_LINK) rm -f $(stlib) -- 2.37.0.3.g30cc8d0f14 >From 3fb77733bd862f7f7a0929353d9aa5b94a685d91 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Fri, 19 Aug 2022 17:32:12 -0700 Subject: [PATCH v1 2/6] Remove SUBSYS.o rule in common.mk, hasn't been used in a long time Apparently I missed that this SUBSYS.o rule isn't needed anymore in a4ebbd27527, likely because there still is a reference to it due to AIX - but that's self contained in src/backend/Makefile --- src/backend/common.mk | 3 --- 1 file changed, 3 deletions(-) diff --git a/src/backend/common.mk b/src/backend/common.mk index 663e9f886ce..fa96a82b1a0 100644 --- a/src/backend/common.mk +++ b/src/backend/common.mk @@ -17,9 +17,6 @@ ifneq ($(subdir), src/backend) all: $(subsysfilename) endif -SUBSYS.o: $(SUBDIROBJS) $(OBJS) - $(LD) $(LDREL) $(LDOUT) $@ $^ - objfiles.txt: Makefile $(SUBDIROBJS) $(OBJS) # Don't rebuild the list if only the OBJS have changed. 
$(if $(filter-out $(OBJS),$?),( $(if $(SUBDIROBJS),cat $(SUBDIROBJS); )echo $(addprefix $(subdir)/,$(OBJS)) ) >$@,touch $@) -- 2.37.0.3.g30cc8d0f14 >From d02c3479e9c0255a60cd0b61969a2973278f122a Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Fri, 19 Aug 2022 19:06:44 -0700 Subject: [PATCH v1 3/6] Remove rule to generate postgres.o, not needed for 20+ years --- src/backend/Makefile | 6 -- 1 file changed, 6 deletions(-) diff --git a/src/backend/Makefile b/src/backend/Makefile index 3f01c655927..f498cfd5930 100644 --- a/src/backend/Makefile +++ b/src/backend/Makefile @@ -110,12 +110,6 @@ endif # aix $(top_builddir)/src/port/libpgport_srv.a: | submake-libpgport -# The postgres.o target is needed by the rule in Makefile.global that -# creates the exports file when MAKE_EXPORTS = true. -postgres.o: $(OBJS) - $(CC) $(LDREL) $(call expand_subsys,$^) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@ - - # The following targets are specified in make commands that appear in # the make files in our subdirectories. Note that it's important we # match the dependencies shown in the subdirectory makefiles! -- 2.37.0.3.g30cc8d0f14 >From e08492cdaa9cdac45302d6d25943bd527b687ff0 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Sat, 20 Aug 2022 08:30:22 -0700 Subject: [PATCH v1 4/6] aix: when building with gcc, tell gcc we're building a shared library Not passing -shared to gcc when building a shared library t
Re: [RFC] building postgres with meson - v11
Hi, On 2022-08-20 09:38:48 +0200, Peter Eisentraut wrote: > On 17.08.22 23:53, Andres Freund wrote: > > Any comment on the pg_regress_ecpg commit? I'd like to get that out of the > > way, and it seems considerably cleaner than the hackery we do right now to > > make VPATH builds work. > > That one looks like a very good improvement. Thanks for checking! Pushed. Greetings, Andres Freund
Re: Schema variables - new implementation for Postgres 15
pá 19. 8. 2022 v 22:54 odesílatel Alvaro Herrera napsal: > > diff --git a/src/backend/parser/parse_relation.c > b/src/backend/parser/parse_relation.c > > index f6b740df0a..b3bee39457 100644 > > --- a/src/backend/parser/parse_relation.c > > +++ b/src/backend/parser/parse_relation.c > > @@ -3667,8 +3667,8 @@ errorMissingColumn(ParseState *pstate, > > ereport(ERROR, > > (errcode(ERRCODE_UNDEFINED_COLUMN), > >relname ? > > - errmsg("column %s.%s does not exist", > relname, colname) : > > - errmsg("column \"%s\" does not exist", > colname), > > + errmsg("column or variable %s.%s does not > exist", relname, colname) : > > + errmsg("column or variable \"%s\" does > not exist", colname), > >state->rfirst ? closestfirst ? > >errhint("Perhaps you meant to reference > the column \"%s.%s\".", > > > state->rfirst->eref->aliasname, closestfirst) : > > This is in your patch 12. I wonder -- if relname is not null, then > surely this is a column and not a variable, right? So only the second > errmsg() here should be changed, and the first one should remain as in > the original. > Yes, it should work. I changed it in today's patch. Thank you for the tip. Regards Pavel > > -- > Álvaro Herrera PostgreSQL Developer — > https://www.EnterpriseDB.com/ >
Re: Schema variables - new implementation for Postgres 15
Op 20-08-2022 om 20:09 schreef Pavel Stehule: Hi LET public.svar2 = (10, 20, 30); ALTER TYPE public.svar_test_type DROP ATTRIBUTE b; SELECT public.svar; - svar - (10,) + svar +- + (10,16) (1 row) SELECT public.svar2; svar2 - (10,30) (1 row) I hope I found this error. It should be fixed > [patches v20220820-1-0001 -> 0012] I'm afraid it still gives the same errors during 'make check', and again only errors when compiling without --enable-cassert Thanks, Erik Regards Pavel diff -U3 /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/expected/session_variables.out /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/results/session_variables.out --- /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/expected/session_variables.out 2022-08-20 20:12:34.558069463 +0200 +++ /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/results/session_variables.out 2022-08-20 20:21:38.028404785 +0200 @@ -990,9 +990,9 @@ ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int; -- should to fail too (different type, different generation number); SELECT public.svar; - svar --- - (10,20,) + svar +--- + (10,20,0) (1 row) LET public.svar = ROW(10,20,30); @@ -,17 +,17 @@ LET public.svar = (10, 20); ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int; SELECT public.svar; - svar --- - (10,20,) +svar + + (10,20,16) (1 row) LET public.svar2 = (10, 20, 30); ALTER TYPE public.svar_test_type DROP ATTRIBUTE b; SELECT public.svar; - svar - (10,) + svar +- + (10,16) (1 row) SELECT public.svar2;
Inconsistencies around defining FRONTEND
Hi, This started at https://postgr.es/m/20220817215317.poeofidf7o7dy6hy%40awork3.anarazel.de Peter made a good point about -DFRONTEND not being defined symmetrically between meson and autoconf builds, which made me look at where we define it. And I think we ought to clean this up independent of the meson patch. On 2022-08-17 14:53:17 -0700, Andres Freund wrote: > On 2022-08-17 15:50:23 +0200, Peter Eisentraut wrote: > > - -DFRONTEND is used somewhat differently from the makefiles. For > >example, meson sets -DFRONTEND for pg_controldata, but the > >makefiles don't. Conversely, the makefiles set -DFRONTEND for > >ecpglib, but meson does not. This should be checked again to make > >sure it all matches up. > > Yes, should sync that up. > > FWIW, meson does add -DFRONTEND for ecpglib. There were a few places that did > add it twice, I'll push a cleanup of that in a bit. Yikes, the situation in HEAD is quite the mess. Several .c files set FRONTEND themselves, so they can include postgres.h, instead of postgres_fe.h: $ git grep '#define.*FRONTEND' upstream/master ':^src/include/postgres_fe.h' upstream/master:src/bin/pg_controldata/pg_controldata.c:#define FRONTEND 1 upstream/master:src/bin/pg_resetwal/pg_resetwal.c:#define FRONTEND 1 upstream/master:src/bin/pg_waldump/compat.c:#define FRONTEND 1 upstream/master:src/bin/pg_waldump/pg_waldump.c:#define FRONTEND 1 upstream/master:src/bin/pg_waldump/rmgrdesc.c:#define FRONTEND 1 Yet, most of them also define FRONTEND in both the make and msvc buildsystems. 
$ git grep -E "(D|AddDefine\(')FRONTEND" upstream/master src/bin/ src/tools/msvc/ upstream/master:src/bin/initdb/Makefile:override CPPFLAGS := -DFRONTEND -I$(libpq_srcdir) -I$(top_srcdir)/src/timezone $(CPPFLAGS) upstream/master:src/bin/pg_rewind/Makefile:override CPPFLAGS := -I$(libpq_srcdir) -DFRONTEND $(CPPFLAGS) upstream/master:src/bin/pg_waldump/Makefile:override CPPFLAGS := -DFRONTEND $(CPPFLAGS) upstream/master:src/tools/msvc/Mkvcbuild.pm: $libpgport->AddDefine('FRONTEND'); upstream/master:src/tools/msvc/Mkvcbuild.pm: $libpgcommon->AddDefine('FRONTEND'); upstream/master:src/tools/msvc/Mkvcbuild.pm: $libpgfeutils->AddDefine('FRONTEND'); upstream/master:src/tools/msvc/Mkvcbuild.pm: $libpq->AddDefine('FRONTEND'); upstream/master:src/tools/msvc/Mkvcbuild.pm: $pgtypes->AddDefine('FRONTEND'); upstream/master:src/tools/msvc/Mkvcbuild.pm: $libecpg->AddDefine('FRONTEND'); upstream/master:src/tools/msvc/Mkvcbuild.pm: $libecpgcompat->AddDefine('FRONTEND'); upstream/master:src/tools/msvc/Mkvcbuild.pm: $pgrewind->AddDefine('FRONTEND'); upstream/master:src/tools/msvc/Mkvcbuild.pm: $pg_waldump->AddDefine('FRONTEND') That's largely because they also build files from src/backend, which obviously contain no #define FRONTEND. The -DFRONTENDs for the various ecpg libraries don't seem necessary anymore. That looks to be a leftover from 7143b3e8213, before that ecpg had copies of various pgport libraries. Same with libpq, also looks to be obsoleted by 7143b3e8213. I don't think we need FRONTEND in initdb - looks like that stopped being required with af1a949109d. Unfortunately, the remaining uses of FRONTEND are required. That's: - pg_controldata, via #define - pg_resetwal, via #define - pg_rewind, via -DFRONTEND, due to xlogreader.c - pg_waldump, via #define and -DFRONTEND, due to xlogreader.c, xlogstats.c, rmgrdesc/*desc.c I'm kind of wondering if we should add xlogreader.c, xlogstat.c, *desc to fe_utils, instead of building them in various places. 
That'd leave us only with #define FRONTENDs for cases that do need to include postgres.h themselves, which seems a lot more palatable. Greetings, Andres Freund >From 96b9580229956c95279b4546f52d781408a3b15b Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Sat, 20 Aug 2022 12:13:42 -0700 Subject: [PATCH v1 1/3] Don't define FRONTEND for initdb No headers requiring FRONTEND to be defined are included as of af1a949109d. Since this is the last user of (frontend|contrib)_defines in Mkvcbuild.pm, remove that. For commit the file should be perltidy'ed, but as that causes more changes I thought it'd be easier like this for review. --- src/bin/initdb/Makefile | 2 +- src/tools/msvc/Mkvcbuild.pm | 14 ++ 2 files changed, 3 insertions(+), 13 deletions(-) diff --git a/src/bin/initdb/Makefile b/src/bin/initdb/Makefile index b0dd13dfbdf..6737938c3f8 100644 --- a/src/bin/initdb/Makefile +++ b/src/bin/initdb/Makefile @@ -16,7 +16,7 @@ subdir = src/bin/initdb top_builddir = ../../.. include $(top_builddir)/src/Makefile.global -override CPPFLAGS := -DFRONTEND -I$(libpq_srcdir) -I$(top_srcdir)/src/timezone $(CPPFLAGS) +override CPPFLAGS := -I$(libpq_srcdir) -I$(top_srcdir)/src/timezone $(CPPFLAGS) # Note: it's important that we link to encnames.o from libpgcommon, not # from libpq, else we have risks of version skew if we run with a libpq diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm index ee963d85f30..ac8634
Re: Strip -mmacosx-version-min options from plperl build
Hi, On 2022-08-19 10:00:35 -0400, Tom Lane wrote: > Peter Eisentraut writes: > > After analyzing the source code of ExtUtils::Embed's ldopts, I think we > > can also do this by subtracting $Config{ldflags}, since > > my $linkage = "$ccdlflags $ldflags @archives $ld_or_bs"; > > and we really just want the $ld_or_bs part. (@archives should be empty > > for our uses.) > > +1, this looks like a nice clean solution. I see that it gets rid > of stuff we don't really want on RHEL8 as well as various generations > of macOS. Looks like it'd also get rid of the bogus -bE:/usr/opt/perl5/lib64/5.28.1/aix-thread-multi-64all/CORE/perl.exp we're picking up on AIX (we had a thread about filtering that out, but I've only done so inside the meson patch, round tuits). So +1 from that front. Maybe a daft question: Why do we want any of the -l flags other than -lperl? With the patch configure spits out the following on my debian system: checking for CFLAGS to compile embedded Perl... -DDEBIAN checking for flags to link embedded Perl... -L/usr/lib/x86_64-linux-gnu/perl/5.34/CORE -lperl -ldl -lm -lpthread -lc -lcrypt those libraries were likely relevant to build libperl, but don't look relevant for linking to it dynamically. Statically would be a different story, but we already insist on a shared build. Greetings, Andres Freund
Re: Strip -mmacosx-version-min options from plperl build
Andres Freund writes: > Maybe a daft question: Why do we want any of the -l flags other than -lperl? With > the patch configure spits out the following on my debian system: > checking for CFLAGS to compile embedded Perl... -DDEBIAN > checking for flags to link embedded Perl... > -L/usr/lib/x86_64-linux-gnu/perl/5.34/CORE -lperl -ldl -lm -lpthread -lc > -lcrypt > those libraries were likely relevant to build libperl, but don't look relevant > for linking to it dynamically. I'm certain that there are/were platforms that insist on those libraries being mentioned anyway. Maybe they are all obsolete now? regards, tom lane
Re: Strip -mmacosx-version-min options from plperl build
Hi, FWIW, looks like Peter's patch unbreaks building plperl on AIX using gcc and system perl. Before, we picked up a bunch of xlc-specific flags that prevented that. before: checking for flags to link embedded Perl... -brtl -bdynamic -b64 -L/usr/opt/perl5/lib64/5.28.1/aix-thread-multi-64all/CORE -lperl -lpthread -lbind -lnsl -ldl -lld -lm -lcrypt -lpthreads -lc now: checking for flags to link embedded Perl... -L/usr/opt/perl5/lib64/5.28.1/aix-thread-multi-64all/CORE -lperl -lpthread -lbind -lnsl -ldl -lld -lm -lcrypt -lpthreads -lc On 2022-08-20 16:53:31 -0400, Tom Lane wrote: > Andres Freund writes: > > Maybe a daft question: Why do we want any of the -l flags other than -lperl? > > With > > the patch configure spits out the following on my debian system: > > > checking for CFLAGS to compile embedded Perl... -DDEBIAN > > checking for flags to link embedded Perl... > > -L/usr/lib/x86_64-linux-gnu/perl/5.34/CORE -lperl -ldl -lm -lpthread -lc > > -lcrypt > > > those libraries were likely relevant to build libperl, but don't look > > relevant > > for linking to it dynamically. > > I'm certain that there are/were platforms that insist on those libraries > being mentioned anyway. Maybe they are all obsolete now? I don't think any of the supported platforms require it for stuff used inside the shared library (and we'd be in trouble if so, check e.g. libpq.pc). But of course that's different if there are inline functions / macros getting pulled in. Which turns out to be an issue on AIX. All the -l flags added by perl can be removed for xlc, but for gcc, -lpthreads (or -pthread) is required. Tried it on Solaris (32 bit, not sure if there's a 64bit perl available), works. Greetings, Andres Freund
Re: Making Vars outer-join aware
Progress report on this ... I've been trying to remove some of the cruftier aspects of EquivalenceClasses (which really is the main point of this whole effort), and gotten repeatedly blocked by the fact that the semantics are still a bit crufty thanks to the hacking associated with outer join identity 3. I think I see a path forward though. To recap, the thing about identity 3 is that it says you can get equivalent results from

	(A leftjoin B on (Pab)) leftjoin C on (Pb*c)
	A leftjoin (B leftjoin C on (Pbc)) on (Pab)

if Pbc is strict for B. Unlike what it says in optimizer/README, I've written the first form as "Pb*c" to indicate that any B Vars appearing in that clause will be marked as possibly nulled by the A/B join. This makes the problem apparent: we cannot use the same representation of Pbc for both join orders, because in the second variant B's Vars are not nulled by anything. We've been trying to get away with writing Pbc just one way, and that leads directly to the semantic squishiness I've been fighting. What I'm thinking we should do about this, once we detect that this identity is applicable, is to generate *both* forms of Pbc, either adding or removing the varnullingrels bits depending on which form we got from the parser. Then, when we come to forming join paths, use the appropriate variant depending on which join order we're considering. build_join_rel() already has the concept that the join restriction list varies depending on which input relations we're trying to join, so this doesn't require any fundamental restructuring -- only a way to identify which RestrictInfos to use or ignore for a particular join. That will probably require some new field in RestrictInfo, but I'm not fussed about that because there are other fields we'll be able to remove as this work progresses. Similarly, generate_join_implied_equalities() will need to generate EquivalenceClass-derived join clauses with or without varnullingrels marks, as appropriate. 
I'm not quite sure how to do that, but it feels like just a small matter of programming, not a fundamental problem with the model which is where things are right now. We'll only need this for ECs that include source clauses coming from a movable outer join clause (i.e., Pbc in identity 3). An interesting point is that I think we want to force movable outer joins into the second format for the purpose of generating ECs, that is we want to use Pbc not Pb*c as the EC source form. The reason for that is to allow generation of relation-scan-level clauses from an EC, particularly an EC that includes a constant. As an example, given

	A leftjoin (B leftjoin C on (B.b = C.c)) on (A.a = B.b)
	where A.a = constant

we can decide unconditionally that A.a, B.b, C.c, and the constant all belong to the same equivalence class, and thereby generate relation scan restrictions A.a = constant, B.b = constant, and C.c = constant. If we start with the other join order, which will include "B.b* = C.c" (ie Pb*c) then we'd have two separate ECs: {A.a, B.b, constant} and {B.b*, C.c}. So we'll fail to produce any scan restriction for C, or at least we can't do so in any principled way. Furthermore, if the joins are done in the second order then we don't need any additional join clauses -- both joins can act like "LEFT JOIN ON TRUE". (Right now, we'll emit redundant B.b = C.c and A.a = B.b join clauses in addition to the scan-level clauses, which is inefficient.) However, if we make use of identity 3 to do the joins in the other order, then we do need an extra join clause, like

	(A leftjoin B on (true)) leftjoin C on (B.b* = C.c)

(or maybe we could just emit "B.b* IS NOT NULL" for Pb*c?) Without any Pb*c join constraint we get wrong answers because nulling of B fails to propagate to C. 
So while there are lots of details to work out, it feels like this line of thought can lead to something where setrefs.c doesn't have to ignore varnullingrels mismatches (yay) and there is no squishiness in EquivalenceClass semantics. regards, tom lane
Instrumented pages/tuples frozen in autovacuum's server log output (and VACUUM VERBOSE)
Attached is another autovacuum (and VACUUM VERBOSE) instrumentation patch. This one adds instrumentation about freezing to the report autovacuum makes to the server log. Specifically, it makes the output look like this:

regression=# vacuum (freeze,verbose) foo;
INFO:  aggressively vacuuming "regression.public.foo"
INFO:  finished vacuuming "regression.public.foo": index scans: 0
pages: 0 removed, 45 remain, 45 scanned (100.00% of total)
tuples: 0 removed, 1 remain, 0 are dead but not yet removable
removable cutoff: 751, which was 0 XIDs old when operation ended
new relfrozenxid: 751, which is 2 XIDs ahead of previous value
XIDs processed: 45 pages from table (100.00% of total) had 1 tuples frozen
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
I/O timings: read: 0.023 ms, write: 0.000 ms
avg read rate: 2.829 MB/s, avg write rate: 5.658 MB/s
buffer usage: 95 hits, 2 misses, 4 dirtied
WAL usage: 91 records, 1 full page images, 133380 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
VACUUM

Notice the new line about freezing, which we always output -- it's the line that begins with "XIDs processed", that appears about half way down. The new line is deliberately placed after the existing "new relfrozenxid" line and before the existing line about dead item identifiers. This placement of the new instrumentation seems logical to me; freezing is related to relfrozenxid (obviously), but doesn't need to be shoehorned into the prominent early line that reports on tuples removed/remain[ing]. Like its neighboring "dead item identifier" line, this new line shows the number of items/tuples affected, and the number of heap pages affected -- with heap pages shown both as an absolute number and as a percentage of rel_pages (in parentheses). The main cost associated with freezing is the WAL overhead, so emphasizing pages here seems like the way to go -- pages are more interesting than tuples. 
This format also makes it relatively easy to get a sense of the *relative* costs of the overhead of each distinct class/category of maintenance performed. -- Peter Geoghegan v1-0001-Add-freezing-instrumentation-to-VACUUM-VERBOSE.patch Description: Binary data
configure --with-uuid=bsd fails on NetBSD
Our documentation claims that --with-uuid=bsd works on both FreeBSD and NetBSD: installation.sgml says bsd to use the UUID functions found in FreeBSD, NetBSD, and some other BSD-derived systems and there is comparable wording in uuid-ossp.sgml. In the course of setting up a NetBSD buildfarm animal, I discovered that this is a lie. NetBSD indeed has the same uuid_create() function as FreeBSD, but it produces version-4 UUIDs not version-1, which causes the contrib/uuid-ossp regression tests to fail. You have to dig down a level to the respective uuidgen(2) man pages to find documentation about this, but each system appears to be conforming to its docs, and the old DCE standard they both refer to conveniently omits saying anything about what kind of UUID you get. So this isn't a bug as far as either BSD is concerned. I'm not personally inclined to do anything about this; I'm certainly not excited enough about it to write our own v1-UUID creation code. Perhaps we should just document that on NetBSD, uuid_generate_v1() and uuid_generate_v1mc() don't conform to spec. regards, tom lane
Re: [RFC] building postgres with meson
Hi, On 2022-08-09 08:37:16 -0400, Andrew Dunstan wrote: > On 2022-08-09 Tu 03:10, Andres Freund wrote: > > Hi, > > > > I was looking at re-unifying gendef2.pl that the meson patchset had > > introduced > > for temporary ease during hacking with gendef.pl. Testing that I noticed > > that > > either I and my machine are very confused, or gendef.pl's check whether it > > can > > skip work is bogus. > > > > I noticed that, despite having code to avoid rerunning when the input files > > are older than the .def file, it always runs. > > > > # if the def file exists and is newer than all input object files, skip > > # its creation > > if (-f $deffile > > && (-M $deffile > max(map { -M } <$ARGV[0]/*.obj>))) > > { > > print "Not re-generating $defname.DEF, file already exists.\n"; > > exit(0); > > } > > > > My understanding of -M is that it returns the time delta between the file > > modification and the start of the script. Which makes the use of max() > > bogus, > > since it'll return the oldest time any input has been modified, not the > > newest. And the condition needs to be inverted, because we want to skip the > > work if $deffile is *newer*, right? > > > > Am I missing something here? > > > No, you're right, this is bogus. Reversing the test and using min > instead of max is the obvious fix. > > > > I'm tempted to just remove the not-regenerating logic - gendef.pl shouldn't > > run if there's nothing to do, and it'll e.g. not notice if there's an > > additional input that wasn't there during the last invocation of gendef.pl. > > > > Maybe, need to think about that more. Any thoughts? I'd like to commit 0003 in https://postgr.es/m/20220811002012.ju3rrz47i2e5tdha%40awork3.anarazel.de fairly soon. I did fix the bogus "die" message I added during some debugging since posting that... Greetings, Andres Freund
Re: configure --with-uuid=bsd fails on NetBSD
Hi, On 2022-08-20 19:39:32 -0400, Tom Lane wrote: > Our documentation claims that --with-uuid=bsd works on both > FreeBSD and NetBSD: installation.sgml says > >bsd to use the UUID functions found in FreeBSD, > NetBSD, >and some other BSD-derived systems > > and there is comparable wording in uuid-ossp.sgml. > > In the course of setting up a NetBSD buildfarm animal, I discovered > that this is a lie. Also recently reported as a bug: https://postgr.es/m/17358-89806e7420797025%40postgresql.org with a bunch of discussion. > I'm not personally inclined to do anything about this; I'm certainly > not excited enough about it to write our own v1-UUID creation code. > Perhaps we should just document that on NetBSD, uuid_generate_v1() > and uuid_generate_v1mc() don't conform to spec. Perhaps we should make them error out instead? It doesn't seem helpful to just return something wrong... Certainly would be good to get the regression tests to pass somehow, given that we don't expect this to work. Don't want to add netbsd's results as an alternative, because that'd maybe hide bugs. But if we errored out we could probably have an alternative with the errors, without a large risk of hiding bugs. Greetings, Andres Freund
Re: configure --with-uuid=bsd fails on NetBSD
Andres Freund writes: > On 2022-08-20 19:39:32 -0400, Tom Lane wrote: >> In the course of setting up a NetBSD buildfarm animal, I discovered >> that this is a lie. > Also recently reported as a bug: > https://postgr.es/m/17358-89806e7420797025%40postgresql.org > with a bunch of discussion. Ah, I'd totally forgotten that thread :-(. After Peter pointed to the existence of new UUID format proposals, I kind of lost interest in doing a lot of work to implement our own not-quite- per-spec V1 generator. > Perhaps we should make them error out instead? It doesn't seem helpful to > just return something wrong... Yeah, might be appropriate. regards, tom lane
Re: Schema variables - new implementation for Postgres 15
Re: Schema variables - new implementation for Postgres 15
On Sat, Aug 20, 2022 at 08:44:49PM +0200, Erik Rijkers wrote:
> On 20-08-2022 at 20:09, Pavel Stehule wrote:
> > Hi
> >
> > > LET public.svar2 = (10, 20, 30);
> > > ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;
> > > SELECT public.svar;
> > > -  svar
> > > --------
> > > - (10,)
> > > +   svar
> > > +---------
> > > + (10,16)
> > > (1 row)
> > >
> > > SELECT public.svar2;
> > >   svar2
> > > ---------
> > >  (10,30)
> > > (1 row)
> >
> > I hope I found this error. It should be fixed.
> > [patches v20220820-1-0001 -> 0012]
>
> I'm afraid it still gives the same errors during 'make check', and again
> only errors when compiling without --enable-cassert

It still fails for me for both --enable-cassert and --disable-cassert, with a
different number of errors though.

The cfbot is green, but it's unclear to me which version was applied on the
last run. AFAICS there's no log available for the branch creation if it
succeeds.

--enable-cassert:

LET public.svar = (10, 20);
ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;
SELECT public.svar;
-   svar
----------
- (10,20,)
+    svar
+-----------
+ (10,20,16)
(1 row)

LET public.svar2 = (10, 20, 30);
ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;
SELECT public.svar;
-  svar
--------
- (10,)
+   svar
+---------
+ (10,16)
(1 row)

--disable-cassert:

ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;
-- should fail too (different type, different generation number);
SELECT public.svar;
-   svar
----------
- (10,20,)
+    svar
+-----------
+ (10,20,32)
(1 row)

LET public.svar = ROW(10,20,30);
-- should be ok again for new value
SELECT public.svar;
    svar
------------
 (10,20,30)
(1 row)

@@ -1104,31 +1104,31 @@
 (1 row)

 DROP VARIABLE public.svar;
 DROP TYPE public.svar_test_type;
 CREATE TYPE public.svar_test_type AS (a int, b int);
 CREATE VARIABLE public.svar AS public.svar_test_type;
 CREATE VARIABLE public.svar2 AS public.svar_test_type;
 LET public.svar = (10, 20);
 ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;
 SELECT public.svar;
-   svar
----------
- (10,20,)
+    svar
+-----------
+ (10,20,16)
 (1 row)

 LET public.svar2 = (10, 20, 30);
 ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;
 SELECT public.svar;
-  svar
--------
- (10,)
+   svar
+---------
+ (10,16)
 (1 row)
Re: [PATCH] Optimize json_lex_string by batching character copying
On Fri, Aug 19, 2022 at 01:42:15PM -0700, Nathan Bossart wrote:
> On Fri, Aug 19, 2022 at 03:11:36PM +0700, John Naylor wrote:
>> This is done. Also:
>> - a complete overhaul of the pg_lfind8* tests
>> - using a typedef for the vector type
>> - some refactoring, name changes and other cleanups (a few of these
>> could also be applied to the 32-byte element path, but that is left
>> for future work)
>>
>> TODO: json-specific tests of the new path
>
> This looks pretty good to me. Should we rename vector_broadcast() and
> vector_has_zero() to indicate that they are working with bytes (e.g.,
> vector_broadcast_byte())? We might be able to use vector_broadcast_int()
> in the 32-bit functions, and your other vector functions already have a
> _byte suffix.
>
> In general, the approach you've taken seems like a decent readability
> improvement. I'd be happy to try my hand at adjusting the 32-bit path and
> adding ARM versions of all this stuff.

I spent some more time looking at this one, and I had a few ideas that I
thought I'd share. 0001 is your v6 patch with a few additional changes,
including simplifying the assertions for readability, splitting out the
Vector type into Vector8 and Vector32 (needed for ARM), and adjusting
pg_lfind32() to use the new tools in simd.h. 0002 adds ARM versions of
everything, which obsoletes the other thread I started [0].

This is still a little rough around the edges (e.g., this should probably be
more than 2 patches), but I think it helps demonstrate a more comprehensive
design than what I've proposed in the pg_lfind32-for-ARM thread [0].
Apologies if I'm stepping on your toes a bit here.
[0] https://postgr.es/m/20220819200829.GA395728%40nathanxps13

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

From 7dd35c8ffe8e42885586fb16a77b6c3e792c6a6d Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Sat, 20 Aug 2022 21:14:01 -0700
Subject: [PATCH 1/2] json_lex_string() SIMD

---
 src/common/jsonapi.c                            |  11 +-
 src/include/port/pg_lfind.h                     | 132 ++
 src/include/port/simd.h                         | 227 ++
 .../test_lfind/expected/test_lfind.out          |  18 +-
 .../modules/test_lfind/sql/test_lfind.sql       |   4 +-
 .../modules/test_lfind/test_lfind--1.0.sql      |  10 +-
 src/test/modules/test_lfind/test_lfind.c        |  91 ++-
 7 files changed, 443 insertions(+), 50 deletions(-)

diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index fefd1d24d9..87e1d0b192 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -19,6 +19,7 @@

 #include "common/jsonapi.h"
 #include "mb/pg_wchar.h"
+#include "port/pg_lfind.h"

 #ifndef FRONTEND
 #include "miscadmin.h"
@@ -844,7 +845,7 @@ json_lex_string(JsonLexContext *lex)
 		}
 		else
 		{
-			char	   *p;
+			char	   *p = s;

 			if (hi_surrogate != -1)
 				return JSON_UNICODE_LOW_SURROGATE;
@@ -853,7 +854,13 @@ json_lex_string(JsonLexContext *lex)
 			 * Skip to the first byte that requires special handling, so we
 			 * can batch calls to appendBinaryStringInfo.
 			 */
-			for (p = s; p < end; p++)
+			while (p < end - sizeof(Vector8) &&
+				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
+				   !pg_lfind8('"', (uint8 *) p, sizeof(Vector8)) &&
+				   !pg_lfind8_le(0x1F, (uint8 *) p, sizeof(Vector8)))
+				p += sizeof(Vector8);
+
+			for (; p < end; p++)
 			{
 				if (*p == '\\' || *p == '"')
 					break;
diff --git a/src/include/port/pg_lfind.h b/src/include/port/pg_lfind.h
index fb125977b2..def858cbe1 100644
--- a/src/include/port/pg_lfind.h
+++ b/src/include/port/pg_lfind.h
@@ -1,7 +1,7 @@
 /*-------------------------------------------------------------------------
  *
  * pg_lfind.h
- *	  Optimized linear search routines.
+ *	  Optimized linear search routines using SIMD intrinsics where available
  *
  * Copyright (c) 2022, PostgreSQL Global Development Group
  *
@@ -15,6 +15,68 @@

 #include "port/simd.h"

+/*
+ * pg_lfind8
+ *
+ * Return true if there is an element in 'base' that equals 'key', otherwise
+ * return false.
+ */
+static inline bool
+pg_lfind8(uint8 key, uint8 *base, uint32 nelem)
+{
+	uint32		i;
+	/* round down to multiple of vector length */
+	uint32		tail_idx = nelem & ~(sizeof(Vector8) - 1);
+	Vector8		chunk;
+
+	for (i = 0; i < tail_idx; i += sizeof(Vector8))
+	{
+		vector8_load(&chunk, &base[i]);
+		if (vector8_eq(chunk, key))
+			return true;
+	}
+
+	/* Process the remaining elements one at a time. */
+	for (; i < nelem; i++)
+	{
+		if (key == base[i])
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * pg_lfind8_le
+ *
+ * Return true if there is an element in 'base' that is less than or equal to
+ * 'key', otherwise return false.
+ */
+static inline bool
+pg_lfind8_le(uint8 key, uint8 *base, uint32 nelem)
+{
+	uint32		i;
+	/* round down to multiple of vector length */
+	uint32		tail_idx = nelem & ~(sizeof(Vector8) - 1);
+	Vector8		chunk;
+
+	for (i = 0; i < tail_idx; i += sizeof(Vector8))
+	{
+		vec
Re: Schema variables - new implementation for Postgres 15
On Sun, Aug 21, 2022 at 6:36, Julien Rouhaud wrote:
> On Sat, Aug 20, 2022 at 08:44:49PM +0200, Erik Rijkers wrote:
> > On 20-08-2022 at 20:09, Pavel Stehule wrote:
> > > Hi
> > >
> > > > LET public.svar2 = (10, 20, 30);
> > > > ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;
> > > > SELECT public.svar;
> > > > -  svar
> > > > --------
> > > > - (10,)
> > > > +   svar
> > > > +---------
> > > > + (10,16)
> > > > (1 row)
> > > >
> > > > SELECT public.svar2;
> > > >   svar2
> > > > ---------
> > > >  (10,30)
> > > > (1 row)
> > >
> > > I hope I found this error. It should be fixed.
> > > [patches v20220820-1-0001 -> 0012]
> >
> > I'm afraid it still gives the same errors during 'make check', and again
> > only errors when compiling without --enable-cassert
>
> It still fails for me for both --enable-cassert and --disable-cassert, with a
> different number of errors though.
>
> The cfbot is green, but it's unclear to me which version was applied on the
> last run. AFAICS there's no log available for the branch creation if it
> succeeds.
>
> --enable-cassert:
>
> LET public.svar = (10, 20);
> ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;
> SELECT public.svar;
> -   svar
> ----------
> - (10,20,)
> +    svar
> +-----------
> + (10,20,16)
> (1 row)
>
> LET public.svar2 = (10, 20, 30);
> ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;
> SELECT public.svar;
> -  svar
> --------
> - (10,)
> +   svar
> +---------
> + (10,16)
> (1 row)
>
> --disable-cassert:
>
> ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;
> -- should fail too (different type, different generation number);
> SELECT public.svar;
> -   svar
> ----------
> - (10,20,)
> +    svar
> +-----------
> + (10,20,32)
> (1 row)
>
> LET public.svar = ROW(10,20,30);
> -- should be ok again for new value
> SELECT public.svar;
>     svar
> ------------
>  (10,20,30)
> (1 row)
>
> @@ -1104,31 +1104,31 @@
>  (1 row)
>
>  DROP VARIABLE public.svar;
>  DROP TYPE public.svar_test_type;
>  CREATE TYPE public.svar_test_type AS (a int, b int);
>  CREATE VARIABLE public.svar AS public.svar_test_type;
>  CREATE VARIABLE public.svar2 AS public.svar_test_type;
>  LET public.svar = (10, 20);
>  ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;
>  SELECT public.svar;
> -   svar
> ---------
> - (10,20,)
> +    svar
> +-----------
> + (10,20,16)
>  (1 row)
>
>  LET public.svar2 = (10, 20, 30);
>  ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;
>  SELECT public.svar;
> -  svar
> -------
> - (10,)
> +   svar
> +---------
> + (10,16)
>  (1 row)

I got the same result when I built without assertions, so I can debug it now.