plpython function - return each list value as a row ?
Hi All,

I am trying to create my first plpython function (retrieving earthquake data from an API). Overall I was able to get one single row returned, but am struggling with how to return the entire list I have as multiple rows - see below. Currently I get the following error when running the GetEartquakeAll function:

    select GetEartquakeAll('2020-01-01', '2020-03-01', -120, 40, 200, 1.7)

gives me

    ERROR: length of returned sequence did not match number of columns in row

How can I 'simply' return each list value as a row ?

Thanks
Karsten Vennemann

CREATE OR REPLACE FUNCTION GetEartquakeAll(start date, stop date, lon float, lat float, radius int, minmagnitude float)
RETURNS equake_values AS $$
import urllib2
import json as json
data = urllib2.urlopen('https://earthquake.usgs.gov/fdsnws/event/1/query?format=geojson&starttime=%s&endtime=%s&latitude=%s&longitude=%s&maxradiuskm=%s&minmagnitude=%s&orderby=magnitude' % (start, stop, lat, lon, radius, minmagnitude))
js_data = json.load(data)
equake = js_data
equakearray = []
a = 0
for i in equake['features']:
    equakeplace = i['properties']['place']  # title for earthquake location
    magnitude = i['properties']['mag']
    qlong = i['geometry']['coordinates'][0]
    qlat = i['geometry']['coordinates'][1]
    equakevalue = {"place": equakeplace, "magnitude": magnitude, "qlong": qlong, "qlat": qlat}
    equakearray.append(equakevalue)
    a = a + 1
return equakearray
$$ LANGUAGE plpythonu;

-- create custom data type that is returned from equake data API query
CREATE TYPE equake_values AS (
    place text,
    magnitude float,
    qlong float,
    qlat float
);
RE: plpython function - return each list value as a row ?
Answering my own question: I got it to work with a tiny change - adding SETOF to the return definition:

    ...
    RETURNS SETOF equake_values AS $$
    ...
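For the archives, a minimal self-contained sketch of the pattern (hard-coded values standing in for the API call; hypothetical, untested):

    CREATE TYPE equake_values AS (
        place text,
        magnitude float,
        qlong float,
        qlat float
    );

    CREATE OR REPLACE FUNCTION get_equakes_demo()
    RETURNS SETOF equake_values AS $$
        # with SETOF, a list of mappings keyed by the composite
        # type's column names comes back as one row per list element
        return [
            {"place": "somewhere", "magnitude": 2.1, "qlong": -120.0, "qlat": 40.0},
            {"place": "elsewhere", "magnitude": 1.8, "qlong": -119.5, "qlat": 40.2},
        ]
    $$ LANGUAGE plpythonu;

    SELECT * FROM get_equakes_demo();

Cheers
Karsten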
Re: Wanted: postgresql-9.6-dbg_9.6.5-1.pgdg80+1_amd64.deb
On Thu, May 03, 2018 at 04:23:14PM +0800, Craig Ringer wrote: > I'm trying to debug a PostgreSQL install with a very hard to reproduce > bug. The user did not install debug info, and apt.postgresql.org has > purged the packages. 2ndQuadrant doesn't yet have a mirror of all > historical packages up and running (though we're working on it). > > So I need postgresql-9.6-dbg_9.6.5-1.pgdg80+1_amd64.deb . > > If anyone has it in a mirror repo, their /var/cache/apt/archives/, or > even has it installed and can supply the files it contains, I'd be > extremely grateful. This is the closest I found with a quick search: http://snapshot.debian.org/archive/debian/20170831T163230Z/pool/main/p/postgresql-9.6/postgresql-9.6-dbg_9.6.5-1_amd64.deb Not sure this is close enough though. Karsten --
Re: Domain based on TIMEZONE WITH TIME ZONE
On Thu, May 10, 2018 at 11:19:36AM +0100, Ben Hood wrote: > The question should not be “how does Postgres store the timestamp internally”. > > Rather it should read “is enforcing the submission of UTC denominated > timestamps in the server by using a domain a sensible way to enforce a policy > that will blow up when apps attempt to use non-UTC timestamps (when they > shouldn’t be)”. > > So the question is not how does the timestamp get stored, rather, is it an > anti-pattern to use Postgres as a linter for apps that forget to use UTC > exclusively? I dare say it is one of PG's strengths to be usable as a "linter". However, maybe rephrase to: Is it an anti-pattern to use Postgres as a linter for apps that forget to use ... timezones *appropriately* ... ? As long as you can force apps to submit proper timestamp-with-timezone data, is there _really_ a need to care whether apps do submit in UTC ? After all, it is always converted to UTC server-side anyway ? In case you want to enforce only ever _handing out_ UTC data you could wrap the table in a view which forces the output timezone to UTC and only offers timestamp-withOUT-timezone to the outside. Then force read access via the view.
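A minimal sketch of that last idea (hypothetical table and column names):

    CREATE TABLE events (
        pk integer PRIMARY KEY,
        happened timestamptz
    );

    -- AT TIME ZONE 'UTC' turns a timestamptz into a
    -- timestamp WITHOUT time zone, expressed in UTC
    CREATE VIEW events_utc AS
        SELECT pk, happened AT TIME ZONE 'UTC' AS happened_utc
        FROM events;

Karsten
--
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346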
Re: Domain based on TIMEZONE WITH TIME ZONE
On Thu, May 10, 2018 at 06:41:04AM -0700, Adrian Klaver wrote: >> OK, so by using TIMESTAMP WITH TIME ZONE, you force all apps to submit >> timezone qualified timestamps in what language they are written in. > > Not really: > > https://www.postgresql.org/docs/10/static/datatype-datetime.html > > "For timestamp with time zone, the internally stored value is always in UTC > (Universal Coordinated Time, traditionally known as Greenwich Mean Time, > GMT). An input value that has an explicit time zone specified is converted > to UTC using the appropriate offset for that time zone. If no time zone is > stated in the input string, then it is assumed to be in the time zone > indicated by the system's TimeZone parameter, and is converted to UTC using > the offset for the timezone zone." True enough, I didn't remember that behaviour. And since a BEFORE UPDATE/INSERT trigger will see the data to be inserted PG will have already done that while parsing from on-the-wire data into in-memory ts-w-tz representation so we can't simply use a trigger to enforce explicit specification of a timezone. Therefore, a domain could work but will require client language support for easy integration. > > And because of the internal UTC representation, there is no room for > > ambiguous timezones. > > Define ambiguous timezone? OP likely means underspecified for his use case (= not assuming "unspecified" to mean "TimeZone value"). But, then, OP could always force TimeZone to UTC on his servers :-) Karsten --
Re: Domain based on TIMEZONE WITH TIME ZONE
I think for the use case you describe, and given the fact that it does work in production, your solution simply shows The Power Of PostgreSQL. Java tries to be cross-platform, JDBC tries to be cross-database -- these goals don't seem to lend themselves to advocating or understanding what one particular database may have to offer over and above basic SQL for solving a particular problem. Karsten --
Re: Code of Conduct plan
On Fri, Sep 14, 2018 at 10:38:56AM +0200, Chris Travers wrote: > > I really have to object to this addition: > > "This Code is meant to cover all interaction between community members, > > whether or not it takes place within postgresql.org infrastructure, so > > long as there is not another Code of Conduct that takes precedence (such as > > a conference's Code of Conduct)." > > > > That covers things like public twitter messages over live political > > controversies which might not be personally directed. At least if one is > > going to go that route, one ought to *also* include a safe harbor for > > non-personally-directed discussions of philosophy, social issues, and > > politics. Otherwise, I think this is asking for trouble. See, for > > example, what happened with Opalgate and how this could be seen to > > encourage use of this to silence political controversies unrelated to > > PostgreSQL. > > > > Suggestion instead: > > "Personally directed behavior is not automatically excluded from this code > of conduct merely because it does not happen on the postgresql.org > infrastructure. In the case where a dispute of such a nature occurs > outside said infrastructure, if other parties are unable to act, this code > of conduct may be considered where it is, on the balance, in the interest > of the global community to do so." > > This preserves the ability to act, without basically providing the same > invitation for problems. Sounds pretty balanced to me. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Code of Conduct plan
On Sat, Sep 15, 2018 at 12:11:37PM -0400, Melvin Davidson wrote: > How about we just simplify the code of conduct to the following: > Any member in the various PostgreSQL lists is expected to maintain > respect to others and not use foul language. A variation from > the previous sentence shall be considered a violation of the CoC. That is, unfortunately, not possible, because "foul language" is, to a large extent, a matter of definition. Functioning communities can usually intrinsically develop, informally agree upon, and pragmatically enforce a workable definition for themselves. And often it will be extremely hard to *codify* such working definitions to even remotely the same degree of success. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: PostgreSQL 11 and security
On Fri, Oct 19, 2018 at 10:22:05AM +, ROS Didier wrote: >I would like to know what's new in security with PostgreSQL 11 https://www.postgresql.org/docs/current/static/release-11.html#id-1.11.6.5.5 sections "Permissions" and "Authentication" And then, bugs have been fixed, the security implications of which are not necessarily fully known. Other than that, your question may need to become more specific. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: timestamp out of range while casting return value to function's return type
On Sat, Aug 24, 2019 at 12:57:07AM +, Thiemo Kellner wrote:

> Call: select utils.get_max_timestamptz();
> --
>
> Function
> --
> create or replace function GET_MAX_TIMESTAMPTZ()
> returns timestamptz
> language plpgsql
> immutable
> -- Include the hosting schema into search_path so that dblink
> -- can find the pglogger objects. There is no need to access
> -- objects in other schematas not covered with public.
> as
> $body$
> begin
>     -- highest timestamps on 64bit lubuntu vanilla PostgreSQL 11.3
>     return '294277-01-01 00:59:59.99'::timestamptz;
> end;
> $body$;

Also, but that's a nitpick perhaps not relevant to your use case: This

    $> psql -d gnumed_v22 -U
    psql (11.5 (Debian 11.5-1+deb10u1))
    gnumed_v22=> select 'infinity'::timestamptz;
    -[ RECORD 1 ]-
    timestamptz | infinity
    gnumed_v22=>

is the highest timestamp. (You *can* count the horses in *your* corral but there's always more of them elsewhere ;-) Just so you are aware.

Best,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: database "cdf_100_1313" does not exist
On Mon, Sep 09, 2019 at 05:16:20PM +0530, nikhil raj wrote:

> The DB is already there and getting the error of the DB does not exist where I execute it via shell.
>
> DBLIST=`/usr/lib/postgresql/11/bin/psql -p 5432 -U postgres -d postgres -q -t -c "SELECT datname from pg_database where datname in ('CDF_100_1313')"`
> /usr/lib/postgresql/11/bin/psql -p 5432 -U postgres -d $DBLIST -c "GRANT CONNECT ON DATABASE "$DBLIST" TO cpupdate"
>
> ERROR: database "cdf_100_1313" does not exist

Likely a quoting issue.
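The unquoted identifier in the GRANT gets case-folded to lower case before it reaches the server, while the database was created with an upper-case name. What needs to arrive at the server is (assuming the database really is named in upper case):

    GRANT CONNECT ON DATABASE "CDF_100_1313" TO cpupdate;

which means the inner double quotes in the shell command need escaping (\"$DBLIST\") so the shell does not eat them.

Karsten Hilbert
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B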
Re: Has there been any discussion of custom dictionaries being defined in the database?
On Thu, Oct 17, 2019 at 11:52:39AM +0200, Tom Lane wrote: > Morris de Oryx writes: > > Given that Amazon is bragging this week about turning off Oracle, it seems > > like they could kick some resources towards contributing something to the > > Postgres project. With that in mind, is the idea of defining dictionaries > > within a table somehow meritless, or unexpectedly difficult? > > Well, it'd just be totally different. I don't think anybody cares to > provide two separate definitions of common dictionaries (which'd have to > somehow be kept in sync). Might crafty use of server-side COPY TO ... PROGRAM ... enable OP to drop in dictionary data files as needed ?
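Something along these lines (a sketch only - hypothetical paths and a synonym-style dictionary assumed; COPY ... TO PROGRAM runs as the server's OS user and requires superuser or pg_execute_server_program):

    -- synonym pairs maintained inside the database
    CREATE TABLE my_synonyms (word text, synonym text);

    -- write them out as a .syn file into the text search data directory
    COPY (
        SELECT word || ' ' || synonym
        FROM my_synonyms
        ORDER BY word
    ) TO PROGRAM 'cat > /usr/share/postgresql/12/tsearch_data/my_dict.syn';

(A plain COPY ... TO '/path/to/file' would do as well when no post-processing is needed.)

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B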
Re: Encoding/collation question
On Thu, Dec 12, 2019 at 05:03:59AM +, Andrew Gierth wrote: > Rich> I doubt that my use will notice meaningful differences. Since > Rich> there are only two or three databases in UTF8 and its collation > Rich> perhaps I'll convert those to LATIN1 and C. > > Note that it's perfectly fine to use UTF8 encoding and C collation (this > has the effect of sorting strings in Unicode codepoint order); this is > as fast for comparisons as LATIN1/C is. > > For those cases where you need data to be sorted in a > culturally-meaningful order rather than in codepoint order, you can set > collations on specific columns or in individual queries. Nice, thanks for pointing that out. One addition: while this may seem like "the" magic bullet it should be noted that one will need additional indexes for culturally-meaningful ORDER BY sorts to be fast (whereas with a default non-C collation one gets, by default, a culturally-meaningful index for that one non-C locale). Question: is C collation expected to be future-proof / rock-solid / stable -- like UTF8 for encoding choice -- or could it end up like the SQL-ASCII encoding did: Yeah, we support it, it's been in use a long time, it should work, but, nah, one doesn't really want to choose it over UTF8 if at all possible, or at least know *exactly* what one is doing and BTW YMMV ?
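For the record, what that looks like (sketch; assuming the de_DE.utf8 locale exists on the system):

    -- in a UTF8 database with C default collation
    CREATE TABLE people (name text);

    -- a query-level, culturally meaningful sort
    SELECT name FROM people ORDER BY name COLLATE "de_DE.utf8";

    -- and the index needed to make that sort fast
    CREATE INDEX people_name_de_idx ON people (name COLLATE "de_DE.utf8");

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B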
Re: Encoding/collation question
On Thu, Dec 12, 2019 at 08:35:53AM -0500, Tom Lane wrote: > C collation basically devolves to strcmp/memcmp, which are as standard > and well-defined as can be. If you're happy with the way it sorts > things then there's no reason not to use it. So that's the collation to use when "technical" sorting is required (say, when uniqueness does not depend on the notion of culturally equivalent characters). > It's actually all the *other* collations where you should worry about > their behavior being a moving target :-(. But then that is to be expected. Thanks, Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536
Dear community,

we are seeing the below error on PG 9.6.16 on Debian (different issue from the psycopg2 one recently posted by me):

> /home/marc# pg_dump -p 5432 --username=gm-dbo --dbname=gnumed_v22 --compress=0 --no-sync --format=custom --file=/dev/null
> pg_dump: Dumping the contents of table "doc_obj" failed: PQgetResult() failed.
> pg_dump: Error message from server: ERROR: unexpected chunk number 2 (expected 0) for toast value 99027 in pg_toast_18536
> pg_dump: The command was: COPY blobs.doc_obj (pk, fk_doc, seq_idx, comment, fk_intended_reviewer, data, filename) TO stdout;

(to note: column "data" is of type BYTEA)

We have been able to identify the row (there may be more) in blobs.doc_obj which leads to the above error:

    blobs.doc_obj.pk -> 82224

We have ruled out (?) below-PG hardware problems by a successful run of:

    cp -rv --preserve=all /var/lib/postgresql/9.6 /tmp/

We then tried

    gnumed_v22=# REINDEX TABLE pg_toast.pg_toast_18536;
    REINDEX
    gnumed_v22=# REINDEX TABLE blobs.doc_obj ;
    REINDEX
    gnumed_v22=# VACUUM ANALYZE pg_toast.pg_toast_18536;
    VACUUM
    gnumed_v22=# VACUUM FULL pg_toast.pg_toast_18536;
    VACUUM
    gnumed_v22=# VACUUM ANALYZE blobs.doc_obj ;
    VACUUM
    gnumed_v22=# VACUUM FULL blobs.doc_obj ;
    ERROR: unexpected chunk number 2 (expected 0) for toast value 99027 in pg_toast_18536

We then tried to DELETE the offending row

    delete from blobs.doc_obj where pk = 82224;

but that, again, shows the "unexpected chunk" problem.

Now, what else can we try to address the problem short of doing the

    pg_dump --exclude-table-data=blobs.doc_obj
    judicious use of COPY-FROM-with-subselect from blobs.doc_obj
    restore

dance ?

Many thanks,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536
On Sun, Mar 15, 2020 at 07:23:49PM +0100, Karsten Hilbert wrote: > We then tried to DELETE the offending row > > delete from blobs.doc_obj where pk = 82224; > > but that, again, shows the "unexpected chunk" problem. According to http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html an UPDATE of the row is recommended -- should that work better than a DELETE ? I can't find documentation pointing to a fundamental implementation difference that suggests so. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536
On Sun, Mar 15, 2020 at 12:58:53PM -0700, Adrian Klaver wrote: > > > We then tried to DELETE the offending row > > > > > > delete from blobs.doc_obj where pk = 82224; > > > > > > but that, again, shows the "unexpected chunk" problem. > > > > According to > > > > http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html > > > > an UPDATE of the row is recommended -- should that work > > better than a DELETE ? > > > > I can't find documentation pointing to a fundamental > > implementation difference that suggests so. > > https://www.postgresql.org/docs/12/storage-toast.html#STORAGE-TOAST-ONDISK > > "During an UPDATE operation, values of unchanged fields are normally > preserved as-is; so an UPDATE of a row with out-of-line values incurs no > TOAST costs if none of the out-of-line values change." However, where is the fault in my thinking ? -> An UPDATE actually *would* change the TOASTed BYTEA field (which is corrupt). I had hoped that the DELETE would NOT have to touch the TOAST table at all (and thereby not check the chunks) as "all it needs to do" is mark the row in the *primary* table as not-needed-anymore. I must be misunderstanding something. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536
On Sun, Mar 15, 2020 at 05:04:06PM -0400, Jan Wieck wrote: > Have you tried to reindex the table? Toast internally forces an index scan, > so missing index tuples or an otherwise corrupted toast index would have > the same symptoms as toast chunks actually missing. We sure did, but thanks for reminding. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536
On Sun, Mar 15, 2020 at 02:35:39PM -0700, Adrian Klaver wrote: > On 3/15/20 1:21 PM, Karsten Hilbert wrote: > > On Sun, Mar 15, 2020 at 12:58:53PM -0700, Adrian Klaver wrote: > > > > > > > We then tried to DELETE the offending row > > > > > > > > > > delete from blobs.doc_obj where pk = 82224; > > > > > > > > > > but that, again, shows the "unexpected chunk" problem. > > > > > > > > According to > > > > > > > > > > > > http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html > > > > > > > > an UPDATE of the row is recommended -- should that work > > > > better than a DELETE ? > > > > > > > > I can't find documentation pointing to a fundamental > > > > implementation difference that suggests so. > > > > > > https://www.postgresql.org/docs/12/storage-toast.html#STORAGE-TOAST-ONDISK > > > > > > "During an UPDATE operation, values of unchanged fields are normally > > > preserved as-is; so an UPDATE of a row with out-of-line values incurs no > > > TOAST costs if none of the out-of-line values change." > > > > However, where is the fault in my thinking ? > > > > -> An UPDATE actually *would* change the TOASTed BYTEA field (which is > > corrupt). > > > > I had hoped that the DELETE would NOT have to touch the TOAST > > table at all (and thereby not check the chunks) as "all it > > needs to do" is mark the row in the *primary* table as > > not-needed-anymore. > > > > I must be misunderstanding something. > > Except it would also need to delete the toast entries as well. OK, got that. What I now don't understand is how the UPDATE won't have to touch the TOAST table when the TOASTed value *is* UPDATEd: update blobs.doc_obj set data = '' where pk = the_faulty_row; (data is the BYTEA column) Slightly confused :-) Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536
On Sun, Mar 15, 2020 at 08:11:18PM -0400, Tom Lane wrote: > Karsten Hilbert writes: > >>> According to > >>> http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html > >>> an UPDATE of the row is recommended -- should that work > >>> better than a DELETE ? > > > OK, got that. What I now don't understand is how the UPDATE > > won't have to touch the TOAST table when the TOASTed value > > *is* UPDATEd: > > update blobs.doc_obj set data = '' where pk = the_faulty_row; > > (data is the BYTEA column) > > It makes no sense to me either; I wonder if Josh's recipe ever > really worked? But it's clearly not working now, and that's > what I'd expect, because any mechanism for removing the busted > toast reference is going to cause the system to try to mark > the toast rows deleted. > > Since you reindexed the toast table and it still doesn't find > the missing chunks, The user has reported that gratuitous and repeated use of REINDEX/VACUUM has eventually led to a consistent database. That one row went missing but can be re-created. Unfortunately, I neither have the original data for testing (it is a medical record database and the client won't hand out copies for obvious reasons) nor can I ascertain the exact order of steps they eventually took. For the record. Thanks, Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536
On Mon, Mar 16, 2020 at 12:38:35PM -0700, Andres Freund wrote:

> > We have ruled out (?) below-PG hardware problems by a
> > successful run of:
> >
> >     cp -rv --preserve=all /var/lib/postgresql/9.6 /tmp/
>
> FWIW, I don't think that rules out hardware problems at all. In plenty
> cases of corruption you can just end up with corrupted on-disk data
> (swapped blocks, zeroed blocks, randomly different values ...).

... hence the (?) ...

> But obviously it is not at all guaranteed that is the case. Could you
> describe the "history" of the database? Replication set up, failovers,
> etc?

No replication, no failovers. There may have been hard shutdowns as in power failure but there's no history of that to relate.

> A plpgsql function that returns the rows one-by-one and catches the
> exception is probably your best bet.

We have done that (in Python) for good measure during recovery procedures.

> It could roughly look something like:
>
> CREATE OR REPLACE FUNCTION salvage(p_tblname regclass)
> RETURNS SETOF text
> LANGUAGE plpgsql AS
> $$
> DECLARE
>     v_row record;
> BEGIN
>     FOR v_row IN EXECUTE 'SELECT * FROM '||p_tblname::text LOOP
>         BEGIN
>             -- this forces detoasting
>             RETURN NEXT v_row::text;
>         EXCEPTION WHEN internal_error OR data_corrupted OR index_corrupted THEN
>             -- add pkey or something else
>             RAISE NOTICE 'failed to return data';
>         END;
>     END LOOP;
> END
> $$
>
> should work. You can call it like
>
> SELECT (salvaged_rec.rec).*
> FROM (SELECT salvaged_text::salvage_me FROM salvage('salvage_me') AS salvaged_text) AS salvaged_rec(rec)

Thanks for taking the time. Would something like this be a useful addition to the adminpack extension ?

Best,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Enforcing uniqueness on [real estate/postal] addresses
On Tue, May 12, 2020 at 09:55:56PM +0100, Peter Devoy wrote: > >Is is possible to have two entries which have the same > >address_identifier_general, street and postcode, but different > >descriptions? > > Unfortunately, yes. The data comes from gov't systems to > regulate the development/alteration of arbitrary pieces of property and > those pieces do not always have a postal address. E.g. a farmer may > one year apply to erect a wind turbine in "field north of Foo Cottage" > and the next year apply to demolish "barnhouse west of Foo Cottage". LAT/LON ? https://plus.codes/ ? Best, Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Strategy for materialisation and centralisation of data
On Thu, May 21, 2020 at 03:35:59PM +0100, Rory Campbell-Lange wrote: > We have quite a few databases of type a and many of type b in a cluster. > Both a and b types are fairly complex and are different solutions to a > similar problem domain. All the databases are very read-centric, and all > database interaction is currently through plpgsql with no materialised > data. > > Some organisations have several type a and many type b databases, and > need to query these in a homogeneous manner. We presently do this with > many middleware requests or pl/proxy. An a or b type database belongs to > 0 or 1 organisations. Might postgres_fdw help in any way ?
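In case it helps, the basic shape is roughly this (hypothetical names throughout):

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER org1_type_a
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'a-host', dbname 'type_a_db', port '5432');

    CREATE USER MAPPING FOR local_role
        SERVER org1_type_a
        OPTIONS (user 'remote_role', password 'secret');

    IMPORT FOREIGN SCHEMA public
        FROM SERVER org1_type_a
        INTO org1_a;

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B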
Re: Strategy for materialisation and centralisation of data
On Thu, May 21, 2020 at 09:52:02PM +0100, Rory Campbell-Lange wrote: > > Might postgres_fdw help in any way ? > > Thanks for the suggestion. As I noted we are already using pl/proxy and > it works well, although we are soaking up a lot of connections with it. > From my reading of the postgres_fdw docs it is much more featureful than > pl/proxy but it is likely to have the same connection characteristics. > > The main issues we're trying to solve is standardising data access > through (I think) materialisation and centralisation. While plausible I still would not be entirely surprised if testing were to reveal something different. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Suggestion to Monitoring Tool
On Wed, May 27, 2020 at 10:15:49PM +0530, postgann2020 s wrote: > Environment Details: > OS: RHEL 7.2 > Postgres: 9.5.15 > Master-Slave with Streaming replication > > We are planning to implement the monitoring tool for our environment. > > Could someone please suggest the Monitoring Tool based on your experience. I suggest you read up on the fine manual first. It covers a lot of ground already. And to stick to one major project at a time. Best, Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Linux Update Experience
On Thu, May 28, 2020 at 02:36:34PM +, Zwettler Markus (OIZ) wrote: > Hi Adrian, > > I'm not talking about this specific bug or its resolution. > > I want to talk about the Linux update problem in general. > > Anyone updating Linux might get such nerving dependency troubles. > > How do you handle this situation? Updating more frequently? Updating less > frequently? Not updating anymore? If we ask ourselves general questions there can't be, by the very nature of the question, any more specific answer beyond: It depends. Conventional wisdom holds that updating "more frequently" (not "accruing technical debt") helps -- the trick is to find the balance between early effort and lag. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Oracle vs. PostgreSQL - a comment
On Mon, Jun 01, 2020 at 12:36:14PM +0700, Stefan Knecht wrote: > The rubber duck barely tells you how and why it floats It sure doesn't spoonfeed but it certainly does tell us *exactly* how and why it floats. https://www.postgresql.org/docs/devel/install-getsource.html Best, Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: psql: FATAL: database "postgres" does not exist or ERROR: 23505: duplicate key value violates unique constraint "pg_namespace_nspname_index"
On Wed, Jul 15, 2020 at 10:03:06PM +0530, Naresh Kumar wrote: > Already I tried to connect with template0, template1, user database name, > postgres, nothing is working getting same error message as mentioned in > trail mail. > > For the second one, yes we don't have backups to restore, we don't have any > option apart from data directory(no wal files) At this point you are very likely in need of (highly) specialized professional help. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Extension vs Implementing Wire Protocol
On Mon, Jul 20, 2020 at 10:36:26PM +0200, Thomas Kellerer wrote: > > * 'SELECT * FROM table;' should return masked versions of the columns > > based on policy for that specific user. > > * 'SELECT * FROM table;' should return just the columns accessible to a > > specific user. > > > > Questions: > > > > 1. Are there easily extensible (ideally open-source) proxies that > > already implement the Postgres protocol that we could modify/extend > > to support this? > > You might want to look this extension: > > https://postgresql-anonymizer.readthedocs.io/en/latest/ or https://github.com/marcmunro/veil Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Aw: Implement a new data type
> If I want to create a new type Person (firstname varchar, lastname varchar, > address varchar ...) > what is the best way to procede in Postgresql The best way is to re-evaluate the "I want". Karsten
Re: Problem close curser after rollback
On Wed, Sep 30, 2020 at 09:06:13PM +0200, Matthias Apitz wrote: > Btw: In all of the other DBS (Informix, Sybase, Oracle) we could define that > point with START TRANSACTION. You can always use SAVEPOINT.
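E.g. (sketch):

    BEGIN;
    INSERT INTO t VALUES (1);
    SAVEPOINT sp1;
    INSERT INTO t VALUES (2);
    -- undo everything since the savepoint; the first INSERT survives
    ROLLBACK TO SAVEPOINT sp1;
    COMMIT;

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B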
Re: Set COLLATE on a session level
On Fri, Nov 20, 2020 at 03:32:48PM +0100, Pavel Stehule wrote: > On Fri, 20. 11. 2020 at 15:28, Dirk Mika > wrote: > > > Let's assume there is an app that accesses the same database from > > different countries. And in this app data should be displayed ordered. And > > the sort order is not identical in all countries. > > > > Does the app have to send different SQL commands depending on the country? > > Not nice. > > Do the data have to be sorted in the app? Not nice either. > > > > the query is the same - you just use a different COLLATE clause. For > Postgres there is not any other way. One might use a set-returning function that takes the locale as a parameter and builds the SELECT dynamically. Or views in schemas, one per locale: selecting the search_path per locale pulls in the right view.
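A sketch of the first idea (hypothetical table, untested):

    CREATE TABLE people (name text);

    CREATE FUNCTION sorted_names(_locale text)
    RETURNS SETOF text
    LANGUAGE plpgsql STABLE AS
    $$
    BEGIN
        -- %I quotes the collation name as an identifier
        RETURN QUERY EXECUTE format(
            'SELECT name FROM people ORDER BY name COLLATE %I',
            _locale
        );
    END;
    $$;

    SELECT * FROM sorted_names('de_DE.utf8');

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B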
Re: Set COLLATE on a session level
On Mon, Nov 30, 2020 at 10:11:38AM +, Dirk Mika wrote:

> > Or views in schemas per locale. Selecting the search path
> > per locale pulls in the right view.
>
> And one view per locale would mean that I would have to
> create a whole schema including all views for each locale I
> want to support. I would have to roll out a new version of
> the data model, just for an additional locale.

Or a "smart" view. Set a session variable before running the query and have the (one) view return the locale'd data based on the session variable ...

    set session "mika.current_locale" = 'locale@2_use';

and use

    select current_setting('mika.current_locale')

as needed inside the view definition.

> This all seems unnecessarily complicated to me.

No one said it is going to be particularly convenient... You asked for possible, I guess. :-)

Best,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Aw: Re: Set COLLATE on a session level
> > Or a "smart" view. Set a session variable before running the
> > query and have the (one) view return the locale'd data based
> > on the session variable ...
> >
> >     set session "mika.current_locale" = 'locale@2_use';
> >
> > and use
> >
> >     select current_setting('mika.current_locale')
> >
> > as needed inside the view definition
>
> But the argument that follows COLLATE is an identifier, not a parameter,

ah, true enough

> so it must be hardcoded in the view, just like column names or table names.
> I don't see how you could use the setting to control the COLLATE clause
> through a view.

The view might produce its rows by calling a function which in turn reads the setting and dynamically constructs and executes the query needed to produce the locale-dependent rows, no ? =8-)

Convoluted ? I guess so ...
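Say (untested sketch, re-using the hypothetical people table from my earlier mail):

    CREATE FUNCTION people_localized()
    RETURNS TABLE (name text)
    LANGUAGE plpgsql STABLE AS
    $$
    BEGIN
        RETURN QUERY EXECUTE format(
            'SELECT name FROM people ORDER BY name COLLATE %I',
            current_setting('mika.current_locale')
        );
    END;
    $$;

    CREATE VIEW v_people_localized AS
        SELECT * FROM people_localized();

    -- per session:
    SET SESSION "mika.current_locale" = 'de_DE.utf8';
    SELECT * FROM v_people_localized;

Karsten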
Aw: Re: Set COLLATE on a session level
> There's [...] but few differences across linguistic sorts. > These differences tend to be subtle and ignorable by end users. But _when_ they matter they matter a lot: lists of people's names in some not-quite expected order are a major pain to skim over, for example. OP is in the business of tracking people's working time, so I guess sorting might matter there. Karsten
Re: Getting "could not read block" error when creating an index on a function.
On Wed, Dec 30, 2020 at 02:37:59PM -0500, Demitri Muna wrote: > I want to index the results of these repeated, unchanging calculations to > speed up other queries. Which mechanism would be best to do this? Create > additional columns? Create another table? A materialized view ?
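I.e., something along these lines (hypothetical names; expensive_calculation stands in for your function):

    CREATE MATERIALIZED VIEW precalc AS
        SELECT id, expensive_calculation(raw_value) AS calc_value
        FROM measurements;

    CREATE INDEX ON precalc (calc_value);

    -- after the base data changes:
    REFRESH MATERIALIZED VIEW precalc;

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B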
Re: How to keep format of views source code as entered?
On Fri, Jan 08, 2021 at 08:38:29AM +, Markhof, Ingolf wrote: > I am really surprised that PostgreSQL is unable to keep the > source text of a view. Honestly, for me this looks like an > implementation gap. Consider software development. You are > writing code in C++ maybe on a UNIX host. And whenever you > feed your source code into the compiler, it will delete it, > keeping the resulting executable, only. You expect the compiler to keep your source code for you ? Most certainly, PostgreSQL does not delete your view source code, just as the compiler does not delete yours. I am not so sure that analogy holds up. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: count(*) vs count(id)
On Wed, Feb 03, 2021 at 01:43:14AM -0500, Cherio wrote: > I just ran a few practical tests on large (~14mil rows) tables that have > multiple indexes. > > SELECT COUNT(id) forces PostgreSQL to use the primary key index. > SELECT COUNT(*) allows PostgreSQL to chose an index to use and it seems to > be choosing one of smaller size which leads to less IO and hence returns > the result faster. Would you mind throwing in a test for select count(1) ... ?
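Along the lines of (sketch; big_table standing in for your ~14mil row tables):

    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*)  FROM big_table;
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(id) FROM big_table;
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(1)  FROM big_table;

for comparing the plans side by side.

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B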
Aw: Re: Localizing stored functions by replacing placeholders in their body
> I'm not sure this is a great approach to in-database translations: you
> have one function per string, which is cumbersome, bloated and probably
> slow. I would suggest having a function that takes a string and returns
> its translation, which is obtained from a couple of tables: one where
> the original strings are stored and another which stores the
> translations for each string into each language.

Perhaps like so:

    https://www.gnumed.de/documentation/schema/release/#i18n.table.curr-lang

SQL to be found here:

    https://github.com/ncqgm/gnumed/tree/master/gnumed/gnumed/server/sql
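The core of that pattern, boiled down (a sketch only; the real thing lives in the SQL linked above):

    CREATE TABLE i18n_curr_lang (lang text);

    CREATE TABLE i18n_translations (
        lang text,
        orig text,
        trans text,
        UNIQUE (lang, orig)
    );

    CREATE FUNCTION _(_orig text)
    RETURNS text
    LANGUAGE sql STABLE AS
    $$
        SELECT coalesce(
            (SELECT trans FROM i18n_translations
             WHERE orig = _orig
               AND lang = (SELECT lang FROM i18n_curr_lang LIMIT 1)),
            _orig  -- fall back to the untranslated string
        );
    $$;

    SELECT _('Hello');

Karsten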
Aw: Re: How long does iteration over 4-5 million rows usually take?
> > On Apr 1, 2022, at 10:18 PM wrote: > > > > On 4/1/22 20:34, Shaozhong SHI wrote: > >> > >> I have a script running to iterate over 4-5 million rows. It keeps > >> showing up in red in PgAdmin. It remains active. > >> > >> How long does iteration over 4-5 million rows usually take? > > 4-5 million times as long as it takes to do one iteration (if you're doing it correctly) It may not take quite that long because setup/teardown times might not be needed for each iteration. Best, Karsten
Re: Transaction and SQL errors
On Mon, Apr 04, 2022 at 11:33:14AM +, Sebastien Flaesch wrote: > Is there any plan to have an equivalent of psql's > > \set ON_ERROR_ROLLBACK on > > in the DB engine? That is already what happens. SQL fails, transaction rolls back. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
PG14: "is of" vs pg_typeof
PostgreSQL 14.2-1

Dear readers,

it used to be possible to say (inside plpgsql):

    if _value is of (text, char, varchar, name) then
        val_type := ''string'';
    elsif _value is of (smallint, integer, bigint, numeric, boolean) then
        val_type := ''numeric'';
    elsif _value is of (bytea) then
        val_type := ''data'';
    elsif _value is of (text[]) then
        val_type := ''str_array'';
    else
        raise exception ''cfg.set_option(text, any, text, text, text): invalid type of value'';
    end if;

where _value is some typed value. As of now (not sure when that started) PG 14 is giving me

    gnumed_v23=> select 'a'::text is of (text, integer);
    ERROR:  syntax error at or near "of"

(same thing inside plpgsql) where the log says

    2022-04-23 19:58:33 GMT ERROR: 42601: syntax error at or near "of" at character 21
    2022-04-23 19:58:33 GMT LOCATION: scanner_yyerror, scan.l:1176
    2022-04-23 19:58:33 GMT STATEMENT: select 'a'::text is of (text, integer);

I can't find anything in the changelog saying that "is of" was removed. For what it's worth, nothing in the docs ever said it existed either (though it did, as per real life).

Now, pg_typeof is an alternative but that only goes so far: it requires laboriously constructing an array on the right hand side for the above use case, along the lines of:

    select pg_typeof('a'::text) = any(ARRAY[pg_typeof(''::text), pg_typeof(''::name)]);

Is there anything obvious I am missing for easily resurrecting the above "is of" use ?

Thanks,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: PG14: "is of" vs pg_typeof
On Sat, Apr 23, 2022 at 10:14:03PM +0200, Karsten Hilbert wrote: > I can't find anything in the changelog saying that "is of" > was removed. For what it's worth, nothing in the docs ever > said it existed either (though it did, as per real life). Oh, wait, https://www.postgresql.org/message-id/1129826.1605805...@sss.pgh.pa.us is that it ? It is gone ? Alright, alright, https://www.postgresql.org/message-id/1052846.1605802...@sss.pgh.pa.us I am rewriting my code already. Interesting how one discovers the proper search strategy only eventually, *after* asking for help. Anyway, so there, "IS OF" (the old PG one, at any rate) is gone. Thanks, Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: PG14: "is of" vs pg_typeof
On Sat, Apr 23, 2022 at 01:43:52PM -0700, Adrian Klaver wrote: > > select pg_typeof('a'::text) = any(ARRAY[pg_typeof(''::text), > > pg_typeof(''::name)]); > > > >Is there anything obvious I am missing for easily > >resurrecting the above "is of" use ? > > Actually it can be done as: > > select pg_typeof('test'::text) in ('text'::regtype, 'varchar'::regtype); Found that, but thanks anyway. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Aw: How to set password in psql -h -d -U command line?
> I tried various ways to set password in psql command line, but got no luck. Have you tried all the methods that you showed in your mail or did you try others as well ? Best regards, Karsten
Re: lifetime of the old CTID
On Wed, Jul 06, 2022 at 08:18:42AM +0200, Matthias Apitz wrote: > > On first glance, it appears that you are using the ctid as a primary key > > for a row, and that's highly not-recommended. The ctid is never intended > > to be stable in the database, as you have discovered. There are really no > > particular guarantees about ctid values being retained. > > > > I'd suggest having a proper primary key column on the table, and using that > > instead. > > Ofc, each table has its own primary key(s), used for example for the > SELECT ctid, * FROM d01buch WHERE ... > > As I said, we came to PostgreSQL from Sybase (and Oracle) and Sybase has > for each table a so called SYB_IDENTITY_COLUMN which is static for the > table and its value does not change. When we would add now to some 400 tables an > additional INTEGER column (and triggers to fill this on INSERT) this > would be a big change in our DB layer and migration of databases in the > field. Your suggestion (thanks for it in any case) is not that easy to > implement, and no option at the moment. Christopher suggested to *use* the primary key, not to *add* one. You said that there *is* a primary key. So, more thought/explanation would need to go into why that cannot be used. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: lifetime of the old CTID
On Wed, Jul 06, 2022 at 08:41:58AM +0200, Laurenz Albe wrote: > Using the primary key is the proper solution. To be safe from concurrent > modifications, use a logic like in this pseudo-code: > > FOR b IN SELECT pk, other_columns FROM books WHERE condition >    UPDATE books SET ... WHERE pk = ... AND condition > > Checking the condition again on the inner UPDATE will detect concurrent > data modifications. If the UPDATE changes nothing, then a book has been > removed or updated by a concurrent transaction, and you ignore it. UPDATEing conditional on XMIN not having changed might be another solution for detecting concurrent transactions if one is bent on using system columns for that.
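Roughly (sketch, using the books table from the pseudo-code above):

    -- remember the row version when reading
    SELECT pk, xmin, title FROM books WHERE pk = 42;

    -- ... user edits ...

    -- only update if no one else touched the row in between
    UPDATE books
    SET title = 'New Title'
    WHERE pk = 42 AND xmin = '12345'::xid;

    -- zero rows updated => concurrent modification, re-read and retry

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B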
Re: lifetime of the old CTID
> Said this, we can end this thread. Re-think the data model is not an
> option.

Why not ?

- add a primary key to each table, say db01buch.pk
- rename tables, say db01buch -> db01buch__real_table
- add views, say db01buch over db01buch__real_table with "pk AS ctid"

(untested, however)

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Concurrent INSERT statements with RETURNING clause resetting SERIAL sequence
On Wed, Jul 20, 2022 at 09:15:29AM +, Sebastien Flaesch wrote: > Thomas, we already have a similar solution. > The idea is to use the native PostgreSQL SERIAL type. Which does not guarantee gaplessness. Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Aw: Re: Surprising results from current_role in a "security invoker" trigger function in a "cascade delete via FK" scenario
I'll be happy to make a smaller example. It will, however, need to create users, a database, schemas, and some number of triggers. Because the triggers, their functions, and everything else about them follow a pattern, I can use "format()" and dynamic SQL to generate them. I'll still need those three "security definer" procedures to make the table changes that I explained. And the code to call these procedures to implement the test. So the result won't be exactly small. But, while I'm generating the triggers, I may just as well generate all eight. After all, how would I know which of the eight to skip while I don't know the intended rules for the current_role? = You'd certainly start out with all eight but then whittle down to what still exhibits the problem and post that. = Karsten
Aw: Re: toast useless
On Tuesday, 13 September 2022 at 19:13, Ron wrote:

> On 9/13/22 08:18, Simon Riggs wrote:
> > On Tue, 13 Sept 2022 at 13:29, Marcos Pegoraro wrote:
> > > > What problem do they cause you?
> > > They don't cause any problem, I was just trying to get my database as clean as possible. I have some thousands of these toast tables with 8k bytes, so I was trying to eliminate them. But if there is no way, ok.
> > They might be optimized away one day, but for now, they are essential.
>
> Why are they essential?

Essential to proper operation of the database code as of now.

Best,
Karsten
Re: Is it possible to stop sessions killing eachother when they all authorize as the same role?
On Tue, Sep 13, 2022 at 05:10:58PM -0400, Tom Lane wrote: > (I recall that somewhere we have some code that warns about no-op > grants. I wonder if issuing a warning for no-op revokes would be > helpful.) Surely, in the light of security a no-op revoke is potentially more dangerous than a no-op grant. Thanks, Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Is it possible to stop sessions killing eachother when they all authorize as the same role?
On Wed, Sep 14, 2022 at 10:10:30AM +0200, Karsten Hilbert wrote: > On Tue, Sep 13, 2022 at 05:10:58PM -0400, Tom Lane wrote: > > > (I recall that somewhere we have some code that warns about no-op > > grants. I wonder if issuing a warning for no-op revokes would be > > helpful.) > > Surely, in the light of security a no-op revoke is > potentially more dangerous than a no-op grant. In the sense where no-op means "despite being revoked it is still granted by another grant" rather than "the revoke is a no-op because it is already revoked", that is. (although the latter can be used to inform on the first if the latter extends to all "levels" of revokage ... :-) Thanks, Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Aw: Re: I slipped up so that no existing role allows connection. Is rescue possible?
> Yes, after Tom's hint, a search for "single-user" took me to that page. But, beginner as I am, > I didn't know that single-user mode was the thing that I needed. I need a remedial class. > Something like "PostgreSQL for those whose mental model has been conditioned by decades of working with Oracle Database". I think it's normal to not know the Ins and Outs of a ... new piece of software. And that's why that class should be an easy one: Drop the idea that PG works like Oracle 101. Then, read the manual, cover to cover. Yes, one will forget most of what's written there. However, a coarse structure of a new mental model will form. Karsten
Aw: Re: ('dog$house' = quote_ident('dog$house')) is surprisingly FALSE
What we deal with in our ordinary professional work is SQL texts, program source texts, within these, SQL identifier texts, and then the conventional display of the results of SQL and program execution. To emphasize the point about results display, try "\d s.*" in "\t off" mode. You'll see this:

         Table "s.silly name"
     Column |  Type   | Collation | Nullable | Default
    --------+---------+-----------+----------+---------
     n      | integer |           |          |

But this SQL text:

    drop table "s.silly name";

tells me that there's no such table. And, indeed, there isn't. Perhaps there's a table s."silly name". It is accidental, if unfortunate, that the name is quoted with ""'s in the \d output...

Karsten
Aw: Re: How to store "blobs" efficiently for small and large sizes, with random access
> I don't know what exactly they are, but I suspect that they are just > files (segments?) in Oracle's "file system" (tablespaces/datafiles). > So pretty much what we recommend. Maybe so, but if those large segments are presented "seamlessly" in the form of a table integrated with PGs access/security infrastructure that would be really helpful for some scenarios. A view-on-top-of-file_fdw kind of thing ? LO seems to nearly be there by now, or am I misunderstanding ? Karsten
Re: "peer" authentication: cannot make "pg_ident.conf" work as I believe that the doc says that it should
On Sat, Oct 29, 2022 at 08:20:50PM -0700, Bryn Llewellyn wrote:

> For the purpose of the tests that follow, I set up the O/S users "bob" and "mary" so that "id bob mary postgres" shows this:
>
>  uid=1002(bob) gid=1001(postgres) groups=1001(postgres)
>  uid=1003(mary) gid=1001(postgres) groups=1001(postgres)
>  uid=1001(postgres) gid=1001(postgres) groups=1001(postgres),27(sudo),114(ssl-cert)
>
> And I set up the cluster-roles "bob" and "mary" so that "\du" shows this:
>
>  bob      |                                                             | {}
>  mary     |                                                             | {}
>  postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

Just a hint: you may want to use "mary_os" and "mary_db", respectively, such that error messages can become less ambiguous... (same for bob*).

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: "peer" authentication: cannot make "pg_ident.conf" work as I believe that the doc says that it should
On Sat, Oct 29, 2022 at 09:15:08PM -0700, Adrian Klaver wrote:

> > # MAPNAME       SYSTEM-USERNAME         PG-USERNAME
> > # ---------     ---------------         -----------
> >   bllewell      mary                    mary
>
> As has been said numerous times, it is utterly pointless to define a mapping like this - you get mary-is-mary for free just by saying peer.

It certainly is, but he probably did it just to check whether anything changes when another code path is run (the mapping) while the previously-working result (= successful login) should not change (mary is still mary), despite the additional code path. It seems to be a way of bisecting in order to verify/falsify assumptions in his mental model.

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Aw: Re: Putting the O/S user for "local" "peer" authentication in the "postgres" group vs chmod'ing the "pg*.conf" files to be readable by "all"
> The client user should *never* read the PostgreSQL configuration files, so if > changing > the permissions (which you should *never* do) has an effect, you must be > doing something > very strange, like trying to start the database server with the wrong user. It smells of trying to *embed* PostgreSQL ? But that would not go with the account of multi-tenancy that's been presented. Karsten
Aw: Information to CVE-2022-42889
> the german bureau for IT-Security "BSI" (Bundesamt für Sicherheit in der > Informationstechnik) has issued a warning for CVE CVE-2022-42889 with the > name commons-text. Insurance companies are obliged to analyse the installed > software for vulnerabilities of this type. As the Barmenia is using your product PostgreSQL Server it is necessary to obtain all information regarding any vulnerability against above CVE. We kindly ask you to provide information if the above product is affected by the CVE and if yes, when a fix will be available. > With the request for short-term feedback. It might be prudent for Barmenia, a large insurance company, to consider purchasing commercial support rather than requesting short-term feedback from volunteers. Other than that there's also excellent documentation and freely inspectable source code. Best regards, Karsten
Q: documentation improvement re collation version mismatch
Dear all,

regarding changed collation versions this

    https://www.postgresql.org/docs/devel/sql-altercollation.html

says:

    The following query can be used to identify all collations in the
    current database that need to be refreshed and the objects that
    depend on them:

    SELECT pg_describe_object(refclassid, refobjid, refobjsubid) AS "Collation",
           pg_describe_object(classid, objid, objsubid) AS "Object"
    FROM pg_depend d
    JOIN pg_collation c
        ON refclassid = 'pg_collation'::regclass AND refobjid = c.oid
    WHERE c.collversion <> pg_collation_actual_version(c.oid)
    ORDER BY 1, 2;

I feel the result of that query can be slightly surprising because it does not return (in my testing) any objects depending on the database default collation, nor the database itself (as per a collation version mismatch in pg_database).

Now, there is a line

    For the database default collation, there is an analogous command
    ALTER DATABASE ... REFRESH COLLATION VERSION.

right above that query but the query comment does not really make it clear that the database default collation is _not_ identified to be in mismatch, if so. IOW, the database default collation may still need to be refreshed even if the query does not return any rows.

Perhaps this query (taken from the net)

    SELECT
        -- get collation-change endangered indices
        indrelid::regclass::text,
        indexrelid::regclass::text,
        collname,
        pg_get_indexdef(indexrelid)
    FROM (
        SELECT indexrelid, indrelid, indcollation[i] coll
        FROM pg_index, generate_subscripts(indcollation, 1) g(i)
    ) s
    JOIN pg_collation c ON coll = c.oid
    WHERE collprovider IN ('d', 'c')
      AND collname NOT IN ('C', 'POSIX');

could be added to the paragraph (or it could be folded into the first query by a UNION or some such) ?

Or perhaps one could move the "ALTER DATABASE ... REFRESH ..." hint _below_ the query paragraph and add "Note: you may need to refresh the default collation even if the query above does not show any objects directly affected by a collation version change" ?

Thanks for considering.

Best,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Aw: Re: Q: documentation improvement re collation version mismatch
Thanks, Julien, for your explanation. > > regarding changed collation versions this > > > > https://www.postgresql.org/docs/devel/sql-altercollation.html > > > > says: > > > > The following query can be used to identify all > > collations in the current database that need to be > > refreshed and the objects that depend on them: > > > > SELECT pg_describe_object(refclassid, refobjid, refobjsubid) AS > > "Collation", > >    pg_describe_object(classid, objid, objsubid) AS "Object" > > FROM pg_depend d JOIN pg_collation c > >    ON refclassid = 'pg_collation'::regclass AND refobjid = c.oid > > WHERE c.collversion <> pg_collation_actual_version(c.oid) > > ORDER BY 1, 2; > > > > I feel the result of that query can be slightly surprising > > because it does not return (in my testing) any objects > > depending on the database default collation, nor the database > > itself (as per a collation version mismatch in pg_database). > > Indeed. The default collation is "pinned", so we don't record any dependency on it. Indirectly we do, don't we ? Or else > > WHERE > > collprovider IN ('d', 'c') would not make much sense, right ? The comment above the query in the official documentation is rather assertive (even if it may be true to the letter) and may warrant some more cautionary wording ? Added, perhaps, some variation of this: > For now, the only safe way to go is either reindex everything, or everything > except some safe cases (non-partial indexes on plain-non-collatable datatypes > only). Best, Karsten
Aw: Re: Q: documentation improvement re collation version mismatch
> > The comment above the query in the official documentation is rather > > assertive > > (even if may true to the letter) and may warrant some more cautionary > > wording ? Added, perhaps, some variation of this: > > > > > For now, the only safe way to go is either reindex everything, or > > > everything > > > except some safe cases (non-partial indexes on plain-non-collatable > > > datatypes > > > only). > > I think the comment is very poorly worded, as it leads readers to believe that > objects with a pg_depend dependency on a collation are the only one that would > get corrupted in case of glibc/ICU upgrade. > > I agree that there should be a big fat red warning saying something like > "reindex everything if there's any discrepancy between the recorded collation > version and the currently reported one unless you REALLY know what you're > doing." Given that it does not seem straightforward to mechanically detect objects in need of a collation-associated rebuild I would think that such a warning would change matters for the better, documentation-wise. Karsten
Q: fixing collation version mismatches
Dear all,

just to confirm my understanding: Is it correct to say that the following sequence will "fix" all current collation version issues in a given database ?

    REINDEX DATABASE db_in_question;
    ALTER DATABASE db_in_question REFRESH COLLATION VERSION;
    ALTER COLLATION every_collation_from_pg_collation REFRESH VERSION;

Note that I am currently _not_ concerned with minimizing work by running this on objects only that really need a reindex/refresh.

Thanks,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Q: fixing collation version mismatches
On Sun, Nov 13, 2022 at 12:46:53PM -0800, Christophe Pettus wrote: > > On Nov 13, 2022, at 12:45, Karsten Hilbert wrote: > > REINDEX DATABASE db_in_question; > > ALTER DATABASE db_in_question REFRESH COLLATION VERSION; > > ALTER COLLATION every_collation_from_pg_collation REFRESH VERSION; > > I may be totally off-base here, but shouldn't the REINDEX be the last step? To my understanding, the REFRESH statements "merely" update the version information stored in the related objects. They do not change anything else; and the REINDEX does not reference them in any way. I suppose the REINDEX goes first as it does the actual fixing of now-invalid objects by rebuilding them. After that one is back to a usable database state, even if left with pesky (albeit harmless) warnings on version mismatches -- which to get rid of one runs the REFRESH statements. Or so my understanding... Which is why my question still stands: does the above three-strikes operation safely take care of any collation issues that may currently exist in a database ? Karsten -- GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Q: fixing collation version mismatches
On Mon, Nov 14, 2022 at 05:42:16PM +0100, Daniel Verite wrote:

> > Which is why my question still stands: does the above
> > three-strikes operation safely take care of any collation
> > issues that may currently exist in a database ?
>
> For the indexes, yes, but theoretically, all constraints involving collatable
> types need a recheck.
>
> For foreign key constraints with non-deterministic collations, there
> might be equality tests that pass with an older Unicode version and fail
> with a newer Unicode version.

Which gives weight to the argument that using real-world data (instead
of surrogate keys) may lead to trouble.

> For check constraints as well, checks applied to strings with recent
> Unicode characters can give different results after an upgrade.

Thanks for pointing this out more clearly. My thinking had already
evolved towards also including VALIDATE CONSTRAINT. I shall, for the
record, update the sequence in question:

    -- indices
    REINDEX DATABASE db_in_question;

    -- constraints (check, foreign key)
    UPDATE pg_constraint SET convalidated = false WHERE all_check_and_FK_constraints;
    ALTER TABLE table_with_constraint VALIDATE CONSTRAINT constraint_on_that_table;

    -- other things, see below
    -- ...

    -- refresh collation versions if no errors above
    ALTER DATABASE db_in_question REFRESH COLLATION VERSION;
    ALTER COLLATION every_collation_from_pg_collation REFRESH VERSION;

What else needs to be taken care of, and how ?

partitions

    Rows need to be re-sorted into the proper partition as needed.
    Can this be achieved by

        UPDATE each_partitioned_table SET each_partitioned_key = each_partitioned_key;

    ?  Courtesy of (ddl-partitioning.html#DDL-PARTITIONING-CONSTRAINT-EXCLUSION):

        Updating the partition key of a row will cause it to be moved
        into a different partition if it no longer satisfies the
        partition bounds of its original partition.

range types

    Will this

        UPDATE table_with_range_type_column SET ranged_column = ranged_column;

    find all relevant issues ?

domains

    Will this

        UPDATE table_with_domain_type_column SET domained_column = domained_column;

    find all relevant issues ?

custom types

    ??

function immutability

    ??  It can be argued that functions marked IMMUTABLE really are
    not, in case they involve sorting of a collatable data type, and
    are thus wrongly marked as IMMUTABLE. IOW, pre-existing user error.

If all this has been discussed in detail, I'd be glad for a pointer
into the archive. (A sketch for the constraint re-validation step is
appended below.)

Thanks,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
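P.S. For whatever it is worth, a minimal sketch of that constraint
dance. It assumes superuser rights (it writes to pg_constraint
directly, which is exactly the catalog hack proposed above, not a
supported API); names and filters are illustrative only:

    -- force re-validation of all CHECK and FOREIGN KEY table constraints
    DO $$
    DECLARE
        _c record;
    BEGIN
        FOR _c IN
            SELECT c.oid AS con_oid, c.conrelid::regclass AS tbl, c.conname
            FROM pg_constraint c
            WHERE c.contype IN ('c', 'f')   -- CHECK and FK constraints
              AND c.conrelid <> 0           -- table constraints only
        LOOP
            -- mark as not validated so VALIDATE CONSTRAINT actually rechecks
            UPDATE pg_constraint SET convalidated = false
            WHERE pg_constraint.oid = _c.con_oid;
            EXECUTE format('ALTER TABLE %s VALIDATE CONSTRAINT %I', _c.tbl, _c.conname);
        END LOOP;
    END $$;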
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
On Tue, Nov 22, 2022 at 11:59:59PM -0500, Kirk Wolak wrote:

> > It's OK to post a work-in-progress patch to pgsql-hackers, even if it
> > doesn't work right yet. With any luck, people will show up to help
> > with problems. I am 100% sure that our Windows user community would
> > love this feature. It would be good if the tests in
> > src/bin/psql/t/010_tab_completion.pl pass on Windows, but if that's
> > hard, don't let that stop you sharing a patch.
>
> Thomas, thanks for that! So new to this, I didn't realize... That's a
> great idea. Honestly, I am not sure how to even run it?
>
> Thanks for the support, it's encouraging... especially when I know
> there's an 80% chance that this may fail to get accepted for any
> number of reasons.

I don't think that estimate needs to be that pessimistic.

Thanks for the effort to bring tab completion to psql on Windows.

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Aw: Re: Stored procedure code no longer stored in v14 and v15, changed behaviour
> You would need to wrap the function creation calls into some automation to
> generate and store those diffs, comparing it back, etc, but that may be
> doable. I would also generate new diffs right after major version updates of
> the database (a before and after of the output of pg_get_functiondef, applied
> to the stored diff?).

I wonder whether that would tie the sanity check to a particular PG
version. I mean, pg_get_functiondef output being a server runtime
artifact, it might well change between server versions, no ?
(A sketch of taking such a snapshot is below.)

Best,
Karsten
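P.S. For reference, a sketch of how such a snapshot of function
definitions might be taken. The table and column names are made up for
illustration; it is restricted to plain functions, since
pg_get_functiondef() does not work on aggregates:

    CREATE TABLE IF NOT EXISTS function_def_snapshot (
        taken_at timestamptz DEFAULT now(),
        signature text,
        definition text
    );

    INSERT INTO function_def_snapshot (signature, definition)
    SELECT p.oid::regprocedure::text,
           pg_get_functiondef(p.oid)
    FROM pg_proc p
        JOIN pg_namespace n ON n.oid = p.pronamespace
    WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
      AND p.prokind = 'f';    -- plain functions only

Taking one snapshot right before and one right after an upgrade would
then allow a simple textual comparison of the two.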
Q: error on updating collation version information
Dear all,

following an ICU upgrade, collations in a stock Debian PG 15.1 cluster
now have divergent version information in pg_collation. Now

    gnumed_v22=> ALTER COLLATION pg_catalog."br_FR@euro" REFRESH VERSION;
    ERROR:  collation "pg_catalog.br_FR@euro" for encoding "UTF8" does not exist

despite

    gnumed_v22=> select *, pg_encoding_to_char(collencoding) from pg_collation where collname = 'br_FR@euro';
    -[ RECORD 1 ]-------+-----------
    oid                 | 12413
    collname            | br_FR@euro
    collnamespace       | 11
    collowner           | 10
    collprovider        | c
    collisdeterministic | t
    collencoding        | 16
    collcollate         | br_FR@euro
    collctype           | br_FR@euro
    colliculocale       |
    collversion         | 2.35
    pg_encoding_to_char | LATIN9

However, note the UTF8 vs LATIN9. The manual sayeth:

    Some (less frequently used) encodings are not supported by ICU.
    When the database encoding is one of these, ICU collation entries
    in pg_collation are ignored. Attempting to use one will draw an
    error along the lines of "collation "de-x-icu" for encoding
    "WIN874" does not exist".

which sounds somewhat related. The database encoding is UTF8.

That br_FR@euro.LATIN9 had _not_ been added manually. It is also not
actively used in my database(s).

What is the canonical advice on the way forward here ?  Is the
_suggested_ solution to delete the collation, or am I failing to see
the "proper" approach to fixing it ?

Thanks,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
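P.S. A query along these lines might list all such entries at once --
a sketch; collencoding = -1 marks encoding-independent collations:

    -- collations restricted to an encoding other than the database's own
    SELECT collname,
           pg_encoding_to_char(collencoding) AS coll_encoding
    FROM pg_collation
    WHERE collencoding <> -1
      AND collencoding <> (SELECT encoding FROM pg_database
                           WHERE datname = current_database());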
Re: Q: error on updating collation version information
On Sun, Dec 04, 2022 at 01:22:02PM +0100, Karsten Hilbert wrote:

> gnumed_v22=> ALTER COLLATION pg_catalog."br_FR@euro" REFRESH VERSION;
> ERROR:  collation "pg_catalog.br_FR@euro" for encoding "UTF8" does not exist

The OS (libc) does seem to know that collation:

    @hermes:~$ locale -a | grep br_FR
    br_FR
    br_FR@euro
    br_FR.utf8

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Q: error on updating collation version information
On Sun, Dec 04, 2022 at 01:22:02PM +0100, Karsten Hilbert wrote:

> following an ICU upgrade, collations in a stock Debian PG 15.1
> cluster now have divergent version information in pg_collation.

Correction: this is following a libc upgrade 2.35 -> 2.36.

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Q: error on updating collation version information
On Sun, Dec 04, 2022 at 10:09:47AM -0800, Adrian Klaver wrote:

> >> following an ICU upgrade, collations in a stock Debian PG 15.1
> >> cluster now have divergent version information in pg_collation.
> >
> > Correction: this is following a libc upgrade 2.35 -> 2.36
>
> So to be clear this database is not using ICU, but collations from libc?

Sorry for the confusion. This database carries collations from _both_
libc and ICU in pg_collation. The collation in question (br_FR@euro)
is _not_ in use (as in: not being depended on by any in-database object).

> How was the database installed?

Stock Debian

    apt-get install postgresql-15

(which gives 15.1) followed by

    CREATE DATABASE "gnumed_v22" with
        owner = "redacted :-)"
        template = "template1"
        encoding = 'unicode';

as "postgres".

> In your first post you had:
>
>     gnumed_v22=> select *, pg_encoding_to_char(collencoding) from pg_collation where collname = 'br_FR@euro';
>     -[ RECORD 1 ]-------+-----------
>     oid                 | 12413
>     collname            | br_FR@euro
>     collnamespace       | 11
>     collowner           | 10
>     collprovider        | c
>     collisdeterministic | t
>     collencoding        | 16
>     collcollate         | br_FR@euro
>     collctype           | br_FR@euro
>     colliculocale       |
>     collversion         | 2.35
>     pg_encoding_to_char | LATIN9
>
> where collprovider c means libc and collversion 2.35.

Yeah, that's when I figured that I had misspoken about the ICU
upgrade. Yes, there was an ICU upgrade, and yes, it did affect
collations. Those I was able to fix up (the "reindex / revalidate
constraint / refresh collation version" dance).

There also was a libc upgrade which also affected locales. Most of
them were fixable by that dance, but some (such as br_FR@euro) popped
up as not being "correctable", showing the "does not exist for
encoding" error.

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Stored procedure code no longer stored in v14 and v15, changed behaviour
On Sun, Dec 04, 2022 at 06:05:30PM +0100, Alban Hertroys wrote:

> > I mean, pg_get_functiondef output being a server runtime artifact, it might
> > well change between server versions, no ?
>
> I meant to write: "I would also generate new diffs right
> _before and_ after...", precisely for that reason.

I see. That makes sense.

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Q: error on updating collation version information
Is this to be expected ?  PG 15.1 on Debian:

    gnumed_v22=# select *, pg_collation_actual_version(oid), pg_encoding_to_char(collencoding) from pg_collation where collname = 'zh_TW';
    -[ RECORD 1 ]---------------+------------
    oid                         | 12985
    collname                    | zh_TW
    collnamespace               | 11
    collowner                   | 10
    collprovider                | c
    collisdeterministic         | t
    collencoding                | 4
    collcollate                 | zh_TW.euctw
    collctype                   | zh_TW.euctw
    colliculocale               |
    collversion                 | 2.35
    pg_collation_actual_version | 2.36
    pg_encoding_to_char         | EUC_TW
    -[ RECORD 2 ]---------------+------------
    oid                         | 12986
    collname                    | zh_TW
    collnamespace               | 11
    collowner                   | 10
    collprovider                | c
    collisdeterministic         | t
    collencoding                | 6
    collcollate                 | zh_TW.utf8
    collctype                   | zh_TW.utf8
    colliculocale               |
    collversion                 | 2.36
    pg_collation_actual_version | 2.36
    pg_encoding_to_char         | UTF8

    gnumed_v22=# begin;
    BEGIN
    gnumed_v22=*# alter collation pg_catalog."zh_TW" refresh version;
    NOTICE:  version has not changed
    ALTER COLLATION
    gnumed_v22=*# alter collation pg_catalog."zh_TW.utf8" refresh version;
    NOTICE:  version has not changed
    ALTER COLLATION
    gnumed_v22=*# alter collation pg_catalog."zh_TW.euctw" refresh version;
    ERROR:  collation "pg_catalog.zh_TW.euctw" for encoding "UTF8" does not exist
    gnumed_v22=!#

As far as I can tell, the documentation asserts that, since the
database encoding is UTF8, the pg_catalog."zh_TW.euctw" collation will
be ignored by the server for all practical purposes.

Does this mean it is impossible to "correct" its version information ?
And, if so, is that expected to be non-harmful and not expected to
trigger nag messages ?

Thanks,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
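P.S. For the record, a query like this might enumerate all libc
collations whose recorded version lags the actual one, refreshable or
not (a sketch):

    SELECT collname,
           pg_encoding_to_char(collencoding) AS encoding,
           collversion,
           pg_collation_actual_version(oid) AS actual_version
    FROM pg_collation
    WHERE collprovider = 'c'
      AND collversion IS DISTINCT FROM pg_collation_actual_version(oid);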
Re: plpgsql_check_function issue after upgrade
On Mon, Dec 05, 2022 at 03:54:28PM +0530, shashidhar Reddy wrote:

> Is there any way to get the older version 1.1 of plpgsql_check to install
> it?

Is there any way to get you to respect the kind requests to follow
this list's posting customs, as expressed by the very people you want
to help you ?  Such as not to top-post.

Best regards,
Karsten

> On Mon, 5 Dec, 2022, 1:36 pm Pavel Stehule, wrote:
>
> > po 5. 12. 2022 v 8:42 odesílatel shashidhar Reddy <
> > shashidharreddy...@gmail.com> napsal:
> >
> >> Pavel,
> >>
> >> Below is the back trace result
> >>
> >> libffi6-dbg libgcc1-dbg libkrb5-dbg libstdc++6-8-dbg libxml2-dbg
> >> postgresql-13-dbgsym postgresql-13-pldebugger-dbgsym zlib1g-dbg
> >
> > I am sorry, I don't anything
> >
> >> On Mon, Dec 5, 2022 at 11:28 AM Pavel Stehule wrote:
> >>
> >>> Hi
> >>>
> >>> po 5. 12. 2022 v 6:49 odesílatel shashidhar Reddy <
> >>> shashidharreddy...@gmail.com> napsal:
> >>>
> >>>> Pavel,
> >>>>
> >>>> Where can I get the list-dbgsym-packages.sh script?
> >>>
> >>> https://manpages.debian.org/testing/debian-goodies/find-dbgsym-packages.1.en.html
> >>>
> >>> https://wiki.ubuntu.com/DebuggingProgramCrash
> >>>
> >>> please, don't use top-post style in this mailing list
> >>> https://en.wikipedia.org/wiki/Posting_style
> >>>
> >>> Regards
> >>>
> >>> Pavel
> >>>
> >>>> On Wed, Nov 30, 2022 at 10:04 AM Pavel Stehule wrote:
> >>>>
> >>>>> st 30. 11. 2022 v 5:28 odesílatel Pavel Stehule <
> >>>>> pavel.steh...@gmail.com> napsal:
> >>>>>
> >>>>>> st 30. 11. 2022 v 1:38 odesílatel shashidhar Reddy <
> >>>>>> shashidharreddy...@gmail.com> napsal:
> >>>>>>
> >>>>>>> I have tried updating after upgrade but that wasn't working, so I
> >>>>>>> have dropped and recreated the extension.
> >>>>>>> Now it is crashing every time when we call the function.
> >>>>>>
> >>>>>> what is the version of plpgsql_check on Postgres 12, what is the
> >>>>>> version of plpgsql_check on Postgres 13 (with version of minor release)?
> >>>>>>
> >>>>>> Can you send a backtrace?
> >>>>>> https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD
> >>>>>
> >>>>> Do you have some other extensions installed?
> >>>>>
> >>>>>>> On Tue, 29 Nov, 2022, 9:58 pm Pavel Stehule, <
> >>>>>>> pavel.steh...@gmail.com> wrote:
> >>>>>>>
> >>>>>>>> út 29. 11. 2022 v 16:37 odesílatel shashidhar Reddy <
> >>>>>>>> shashidharreddy...@gmail.com> napsal:
> >>>>>>>>
> >>>>>>>>> plpgsql_check version is 2.2, and one more finding: before
> >>>>>>>>> calling the function, if we drop and recreate the plpgsql_check
> >>>>>>>>> extension there is no issue, but we can't drop and create it each time.
> >>>>>>>>
> >>>>>>>> Maybe you need to run ALTER EXTENSION plpgsql_check UPDATE before
> >>>>>>>> first usage in pg 13
> >>>>>>>>
> >>>>>>>> If the extension works after re-installation, then the problem is
> >>>>>>>> not in the extension.
dropped default locale
Dear all,

I managed to drop the "special" collations default, C, and POSIX
(OIDs 100, 950, 951).

Is there a way to recreate them (short of restoring a backup) ?
Naive attempts with CREATE COLLATION do not seem to work out.

Thanks,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: dropped default locale
On Wed, Dec 21, 2022 at 03:41:24PM +0100, Laurenz Albe wrote:

> I would definitely go for the backup, but here is how you can
> create these three rows in PostgreSQL v15:
>
>     INSERT INTO pg_collation
>         (oid, collname, collnamespace, collowner, collprovider,
>          collisdeterministic, collencoding, collcollate, collctype)
>     VALUES
>         (100, 'default', 11, 10, 'd', TRUE, -1, NULL,    NULL),
>         (950, 'C',       11, 10, 'c', TRUE, -1, 'C',     'C'),
>         (951, 'POSIX',   11, 10, 'c', TRUE, -1, 'POSIX', 'POSIX');

Many thanks !

I wasn't so sure whether inserting appropriate rows would be
equivalent to CREATE COLLATION... (pg_collation might have been a view
projecting inner workings of the server engine).

Thanks,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
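P.S. A quick sanity check after the INSERT might look like this
(a sketch):

    SELECT oid, collname, collprovider, collencoding
    FROM pg_collation
    WHERE oid IN (100, 950, 951);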
Re: dropped default locale
On Wed, Dec 21, 2022 at 03:46:48PM +0100, Karsten Hilbert wrote:

> I wasn't so sure whether inserting appropriate
> rows would be equivalent to CREATE COLLATION...

For that matter, is

    DELETE FROM pg_collation ...

equivalent to

    DROP COLLATION ...

?

Thanks,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Aw: Re: REINDEX vs VACUUM
> From: "Peter J. Holzer"
> On 2023-01-04 09:34:42 -0600, Ron wrote:
>
> > I don't think VACUUM FULL (copy the table, create new indices and other
> > metadata all in one command) actually vacuums tables. It's a misleading
> > name.
>
> Move all the stuff from the living room to the bedroom and then jettison
> the living room.
>
> Isn't that how you normally vacuum your living room?

Well, yeah, I wouldn't expect the table to be *copied*. After all,
there's not that much use for more than one per living room.

Karsten
Aw: Re: Thanks! Re: Who adds the "start transaction" and "commit" to the intended SQL statement in "autocommit" mode?
> >> b...@yugabyte.com wrote:
> >>
> >> I've no idea how I might have found this without human help.
> >
> > x...@thebuild.com wrote:
> >
> > That sounds like an excellent documentation patch!
>
> Well, it's already documented clearly enough. The question is how to
> find it -- especially if you don't know whether the feature that
> you're looking for exists or not. The psql doc would print out at
> about thirty pages with a normal font size. So reading it from top
> to bottom would be quite a task.

But, then, documentation is there to be read. And it tends to be
longer the more details it is expected to cover, doesn't it ?

Searching for generic terms on typical search engines can be quite a
task, agreed.

Karsten
Re: [EXTERNAL]: Re: [EXTERNAL]: Re: UPSERT in Postgres
On Mon, Apr 10, 2023 at 09:41:15AM +0200, Francisco Olarte wrote:

> On Mon, 10 Apr 2023 at 04:16, Rob Sargent wrote:
>
> > > An UPSERT checks whether a row exists, if so, it does an update, if not
> > > it does an insert. This is the literal definition.
> >
> > This is the part that's always eluded me: How does the client, the
> > UPSERTer, come to hold an id and not know whether or not it's already in
> > the database.
>
> This is extremely easy to do if you have natural instead of surrogate keys.
>
> I work in telephony; upserting the last incoming call timestamp for a
> phone number will be exactly that.

Timezones ?  DST ?  Spoofing ?  ...

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: [EXTERNAL]: Re: [EXTERNAL]: Re: UPSERT in Postgres
On Mon, Apr 10, 2023 at 01:33:41PM +0200, Francisco Olarte wrote:

> > > > This is the part that's always eluded me: How does the client, the
> > > > UPSERTer, come to hold an id and not know whether or not it's already in
> > > > the database.
> > >
> > > This is extremely easy to do if you have natural instead of surrogate
> > > keys.
> > >
> > > I work in telephony; upserting the last incoming call timestamp for a
> > > phone number will be exactly that.
> >
> > timezones ?
> > DST ?
>
> A timestamp is a point in the time line, this is what I insert, just a
> real number marking a line; timezones and DST are presentation stuff.

Indeed, as is the assumption of which time line the numbers are
referring to. Hence the incoming call timestamp is usable as a
(natural) PK with respect to a given time line only, right ?

> > spoofing ?
>
> ¿ Of what ?

The time stamp. But then, I assume it is obtained on the logging
system.

All I really wanted to hint at is that "incoming call timestamp" may
work pretty well in given settings but does not _always_ make for a
"unique enough" key.

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Guidance on INSERT RETURNING order
On Fri, Apr 14, 2023 at 10:44:19PM +0100, John Howroyd wrote:

> The problem is that SQLAlchemy is an ORM [...]

...

> [...] as the majority of the python world will use this ORM for
> their database needs.

I wouldn't be so sure on this count ...

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
backend crash on DELETE, reproducible locally
2018-11-01 13:19:41 GMT ORT: do_start_bgworker, postmaster.c:5656
2018-11-01 13:19:41 GMT DEBUG: 0: InitPostgres
2018-11-01 13:19:41 GMT ORT: InitPostgres, postinit.c:573
2018-11-01 13:19:41 GMT DEBUG: 0: my backend ID is 1
2018-11-01 13:19:41 GMT ORT: SharedInvalBackendInit, sinvaladt.c:324
2018-11-01 13:19:41 GMT DEBUG: 0: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:41 GMT ORT: ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:41 GMT LOG: 0: Datenbanksystem ist bereit, um Verbindungen anzunehmen
2018-11-01 13:19:41 GMT ORT: reaper, postmaster.c:2888
2018-11-01 13:19:41 GMT DEBUG: 0: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:41 GMT ORT: ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:41 GMT DEBUG: 0: Logical-Replication-Launcher startet
2018-11-01 13:19:41 GMT ORT: ApplyLauncherMain, launcher.c:988
2018-11-01 13:19:41 GMT DEBUG: 0: InitPostgres
2018-11-01 13:19:41 GMT ORT: InitPostgres, postinit.c:573
2018-11-01 13:19:41 GMT DEBUG: 0: my backend ID is 2
2018-11-01 13:19:41 GMT ORT: SharedInvalBackendInit, sinvaladt.c:324
2018-11-01 13:19:41 GMT DEBUG: 0: received inquiry for database 0
2018-11-01 13:19:41 GMT ORT: pgstat_recv_inquiry, pgstat.c:5640
2018-11-01 13:19:41 GMT DEBUG: 0: writing stats file "/run/postgresql/global.stat"
2018-11-01 13:19:41 GMT ORT: pgstat_write_statsfiles, pgstat.c:4684
2018-11-01 13:19:41 GMT DEBUG: 0: checkpointer updated shared memory configuration values
2018-11-01 13:19:41 GMT ORT: UpdateSharedMemoryConfig, checkpointer.c:1362
2018-11-01 13:19:41 GMT DEBUG: 0: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:41 GMT ORT: ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:41 GMT DEBUG: 0: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:41 GMT ORT: ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:41 GMT DEBUG: 0: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:41 GMT ORT: ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:41 GMT DEBUG: 0: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:41 GMT ORT: ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:41 GMT DEBUG: 0: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:41 GMT ORT: ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:41 GMT DEBUG: 0: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:41 GMT ORT: ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:41 GMT DEBUG: 0: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:41 GMT ORT: ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:41 GMT DEBUG: 0: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:41 GMT ORT: ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:56 GMT DEBUG: 0: snapshot of 0+0 running transaction ids (lsn 0/4699D564 oldest xid 128764 latest complete 128763 next xid 128764)
2018-11-01 13:19:56 GMT ORT: LogCurrentRunningXacts, standby.c:1000
2018-11-01 13:20:19 GMT LOG: 0: SIGHUP empfangen, Konfigurationsdateien werden neu geladen
2018-11-01 13:20:19 GMT ORT: SIGHUP_handler, postmaster.c:2529
2018-11-01 13:20:19 GMT LOG: 0: Parameter »log_min_messages« auf »notice« gesetzt
2018-11-01 13:20:19 GMT ORT: ProcessConfigFileInternal, guc-file.l:456

What else can I provide/test/look into ?

(In case anyone remembers: GNUmed uses psycopg2 for upgrades and its
databases default to default_transaction_read_only=true, but I have
taken both aspects out of the loop manually for the above test.)

Thanks for any hints,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: backend crash on DELETE, reproducible locally
> I am running this delete statement via psql
>
>     DELETE FROM ref.auto_hint WHERE title = 'Kontraindikation: ACE/Sartan <-> Schwangerschaft';
>
> and it crashes the backend it is running on.

For what it is worth, I have identified the physical file

    gnumed_v21=# select pg_relation_filepath('ref.auto_hint');
     pg_relation_filepath
    ----------------------
     base/139580/111088
    (1 Zeile)

and physically copied that to another place (so there's no read error
on the disk -- but the DELETE would perhaps ask for a physical write
test ?).

Attached is the output of pg_filedump which, to me, doesn't seem to
show anything out of the ordinary either.

A corrupt index should not be the case because the database is
REINDEXed during the upgrade process before the failing statement.

Will attempt to get a stack trace ...

Thanks,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B

***
* PostgreSQL File/Block Formatted Dump Utility - Version 11.0
*
* File: 111088_fsm
* Options used: -v -f -i -k
*
* Dump created on: Thu Nov 1 15:37:59 2018
***

Block 0 - Block Offset: 0x
  Offsets: Lower 24 (0x0018)
  Block: Size 8192  Version 4  Upper 8192 (0x2000)
  LSN: logid 0 recoff 0x  Special 8192 (0x2000)
  Items: 0  Free Space: 8168
  Checksum: 0x6560  Prune XID: 0x  Flags: 0x ()
  Length (including item array): 24
      : 6065 1820  `e.
  0010: 00200420  . .
  -- Empty block - no items listed

Block 1 - Block Offset: 0x2000
  Offsets: Lower 24 (0x0018)
  Block: Size 8192  Version 4  Upper 8192 (0x2000)
  LSN: logid 0 recoff 0x  Special 8192 (0x2000)
  Items: 0  Free Space: 8168
  Checksum: 0x655f  Prune XID: 0x  Flags: 0x ()
  Length (including item array): 24
      : 5f65 1820  _e.
  0010: 00200420  . .
  -- Empty block - no items listed

Block 2 - Block Offset: 0x4000
  Offsets: Lower 24 (0x0018)
  Block: Size 8192  Version 4  Upper 8192 (0x2000)
  LSN: logid 0 recoff 0x465b8208  Special 8192 (0x2000)
  Items: 0  Free Space: 8168
  Checksum: 0xeb9b  Prune XID: 0x  Flags: 0x ()
  Length (including item array): 24
      : 08825b46 9beb 1820  ..[F...
  0010: 00200420  . .
  -- Empty block - no items listed

*** End of File Encountered. Last Block Read: 2 ***

***
* PostgreSQL File/Block Formatted Dump Utility - Version 11.0
*
* File: 111088
* Options used: -v -f -i -k
*
* Dump created on: Thu Nov 1 15:37:24 2018
***

Block 0 - Block Offset: 0x
  Offsets: Lower 68 (0x0044)
  Block: Size 8192  Version 4  Upper 2680 (0x0a78)
  LSN: logid 0 recoff 0x46a073a0  Special 8192 (0x2000)
  Items: 11  Free Space: 2612
  Checksum: 0x3086  Prune XID: 0x0001df49  Flags: 0x ()
  Length (including item array): 68
      : a073a046 8630 4400780a  .s.F.0..D.x.
  0010: 00200420 49df0100 009e0004 00800100  . . I...
  0020: 00800100 e89a2a06 48994003 00800100  ..*.H.@.
  0030: 14956208 b892b604 b890fa03 3c8df606  ..b.<...
  0040: 788a8405  x...

  -- Item 1 -- Length: 512  Offset: 7680 (0x1e00)  Flags: NORMAL
     XMIN: 119568  XMAX: 0  CID|XVAC: 0
     Block Id: 0  linp Index: 1  Attributes: 12  Size: 24
     infomask: 0x0902 (HASVARWIDTH|XMIN_COMMITTED|XMAX_INVALID)
  1e00: 10d30100
  1e10: 01000c00 02091800 a634  .4..
  1e20: 8f2bee84 971c0200 676d2d64 626f  .+..gm-dbo..
  1e30:
  1e40:
  1e50:
  1e60: 0100 fb53454c  .SEL
  1e70: 45435420 45584953 54532853 454c4543  ECT EXISTS(SELEC
  1e80: 54203120 46524f4d 20636c69 6e2e765f  T 1 FROM clin.v_
  1e90: 66616d69 6c795f68 6973746f 72792057  family_history W
  1ea0: 48455245 2063
Re: backend crash on DELETE, reproducible locally
On Thu, Nov 01, 2018 at 03:42:57PM +0100, Karsten Hilbert wrote:

> Will attempt to get a stack trace ...

Meanwhile, in case it helps, an strace of the crash:

    epoll_wait(7, [{EPOLLIN, {u32=47607120, u64=35184419695952}}], 1, -1) = 1
    recv(13, "Q\0\0\0`DELETE FROM ref.auto_hint W"..., 8192, 0) = 97
    gettimeofday({tv_sec=1541084179, tv_usec=197395}, NULL) = 0
    gettimeofday({tv_sec=1541084179, tv_usec=197442}, NULL) = 0
    write(2, "2018-11-01 14:56:19 GMT LOG: 00"..., 209) = 209
    _llseek(45, 0, [16384], SEEK_END) = 0
    _llseek(47, 0, [16384], SEEK_END) = 0
    _llseek(48, 0, [16384], SEEK_END) = 0
    _llseek(49, 0, [16384], SEEK_END) = 0
    _llseek(45, 0, [16384], SEEK_END) = 0
    kill(10377, SIGUSR1) = 0
    _llseek(45, 0, [0], SEEK_SET) = 0
    read(45, "\0\0\0\0\234\270\246F\10\373\0\0D\0x\n\0 \4 I\337\1\0\0\236\0\4\0\200\1\0"..., 8192) = 8192
    _llseek(43, 81920, [81920], SEEK_SET) = 0
    read(43, "\0\0\0\0x{pF8o\0\0\264\1\34\33\360\37\4 \0\0\0\0\344\237\30\0\330\237\30\0"..., 8192) = 8192
    _llseek(44, 925696, [925696], SEEK_SET) = 0
    read(44, "\0\0\0\0\310\324\231FE\231\1\0L\0l\1\0 \4 \0\0\0\0\24\234\326\7<\231\254\5"..., 8192) = 8192
    openat(AT_FDCWD, "base/139580/139695", O_RDWR|O_LARGEFILE) = 56
    read(56, "\0\0\0\0\350\347\220E`B\0\0<\0\360\37\360\37\4 \0\0\0\0b1\5\0\3\0\0\0"..., 8192) = 8192
    read(56, "\0\0\0\0h\347\220E\1,\0\0(\0\300\37\360\37\4 \0\0\0\0\344\237\30\0\330\237\30\0"..., 8192) = 8192
    openat(AT_FDCWD, "base/139580/2612", O_RDWR|O_LARGEFILE) = 57
    read(57, "\0\0\0\0\200\334s\1a\6\4\0(\0@\36\0 \4 \0\0\0\0\220\237\340\0 \237\340\0"..., 8192) = 8192
    _llseek(43, 65536, [65536], SEEK_SET) = 0
    read(43, "\0\0\0\0\214e^F\215w\0\0H\7`\n\360\37\4 \0\0\0\0x\212\30\0\344\237\30\0"..., 8192) = 8192
    _llseek(44, 606208, [606208], SEEK_SET) = 0
    read(44, "\0\0\0\0\20\206\237FVP\1\0p\0\230\3\0 \4 \361\360\1\\237\232\1T\236\266\1"..., 8192) = 8192
    stat64("/usr/lib/postgresql/11/lib/plpgsql", 0xbfbda4ec) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/postgresql/11/lib/plpgsql.so", {st_mode=S_IFREG|0644, st_size=197944, ...}) = 0
    stat64("/usr/lib/postgresql/11/lib/plpgsql.so", {st_mode=S_IFREG|0644, st_size=197944, ...}) = 0
    futex(0xb78b904c, FUTEX_WAKE_PRIVATE, 2147483647) = 0
    openat(AT_FDCWD, "/usr/lib/postgresql/11/lib/plpgsql.so", O_RDONLY|O_CLOEXEC) = 58
    read(58, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\300a\0\0004\0\0\0"..., 512) = 512
    fstat64(58, {st_mode=S_IFREG|0644, st_size=197944, ...}) = 0
    mmap2(NULL, 201108, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 58, 0) = 0xb293f000
    mmap2(0xb2944000, 102400, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 58, 0x5000) = 0xb2944000
    mmap2(0xb295d000, 69632, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 58, 0x1e000) = 0xb295d000
    mmap2(0xb296e000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 58, 0x2e000) = 0xb296e000
    close(58) = 0
    mprotect(0xb296e000, 8192, PROT_READ) = 0
    openat(AT_FDCWD, "global/139651", O_RDWR|O_LARGEFILE) = 58
    read(58, "\0\0\0\0\320\200zE\324\16\0\0<\0\360\37\360\37\4 \0\0\0\0b1\5\0\3\0\0\0"..., 8192) = 8192
    read(58, "\0\0\0\0P\200zE.+\0\0\320\0\310\35\360\37\4 \0\0\0\0\344\237\30\0\330\237\30\0"..., 8192) = 8192
    openat(AT_FDCWD, "base/139580/139719", O_RDWR|O_LARGEFILE) = 59
    read(59, "\0\0\0\0x\347\232E\226s\0\0<\0\360\37\360\37\4 \0\0\0\0b1\5\0\3\0\0\0"..., 8192) = 8192
    read(59, "\0\0\0\0l\250\16F\22\340\0\0h\0\0\37\360\37\4 \0\0\0\0\344\237\30\0\330\237\30\0"..., 8192) = 8192
    read(42, "\0\0\0\0`\235^\1\177\242\4\0\320\0\30\1\0 \4 \0\0\0\0T\237X\1\250\236X\1"..., 8192) = 8192
    openat(AT_FDCWD, "base/139580/139606", O_RDWR|O_LARGEFILE) = 60
    read(60, "\0\0\0\0\244E{F\206q\0\0<\0\360\37\360\37\4 \0\0\0\0b1\5\0\3\0\0\0"..., 8192) = 8192
    _llseek(60, 24576, [24576], SEEK_SET) = 0
    read(60, "\0\0\0\0\224\272wE\re\0\0008\0\20\37\360\37\4 \0\0\0\0\350\237\20\0\320\2370\0"..., 8192) = 8192
    read(60, "\0\0\0\0tD{F\34S\0\0\220\3\324\6\360\37\4 \0\0\0\0t\2078\0\300\237`\0"..., 8192) = 8192
    brk(0x2e07000) = 0x2e07000
    _llseek(41, 32768, [32768], SEEK_SET) = 0
    read(41, "\0\0\0\0\320B{F\23I\0\0\230\6p\f\360\37\4 \0\0\0\0p\214\30\0\344\237\30\0"..., 8192) = 8192
    _l
Re: backend crash on DELETE, reproducible locally
On Thu, Nov 01, 2018 at 04:11:33PM +0100, Karsten Hilbert wrote:

> > Will attempt to get a stack trace ...

Eventually, the stack trace (attached).

Thanks for insights,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B

Script started on 2018-11-01 16:16:02+01:00
root@hermes:~/tmp/pg# gdb --pid=22367 --readnow
GNU gdb (Debian 8.1-4+b1) 8.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GDB was configured as "i686-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 22367
Reading symbols from /usr/lib/postgresql/11/bin/postgres...Reading symbols from /usr/lib/debug/.build-id/c0/e1914ba590a236d6ee6e9ccdf7bee87a5c00da.debug...expanding to full symbols...done.
expanding to full symbols...done.
Reading symbols from /lib/i386-linux-gnu/libpthread.so.0...Reading symbols from /usr/lib/debug/.build-id/9b/6447528b2fffba536afc815a31304adde16f8b.debug...expanding to full symbols...done.
expanding to full symbols...done.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/i386-linux-gnu/libthread_db.so.1".
Reading symbols from /usr/lib/i386-linux-gnu/libxml2.so.2...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/libpam.so.0...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libssl.so.1.1...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libcrypto.so.1.1...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libgssapi_krb5.so.2...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/librt.so.1...Reading symbols from /usr/lib/debug/.build-id/d1/de1cfe9b8c85c6195a28a26206906acd9083ff.debug...expanding to full symbols...done.
expanding to full symbols...done.
Reading symbols from /lib/i386-linux-gnu/libdl.so.2...Reading symbols from /usr/lib/debug/.build-id/65/c7a119514fa310ef4bd22bbf97f82b30511805.debug...expanding to full symbols...done.
expanding to full symbols...done.
Reading symbols from /lib/i386-linux-gnu/libm.so.6...Reading symbols from /usr/lib/debug/.build-id/59/1fca8f9efd1964477652a51e7936f88ce15440.debug...expanding to full symbols...done.
expanding to full symbols...done.
Reading symbols from /usr/lib/i386-linux-gnu/libldap_r-2.4.so.2...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libicui18n.so.60...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libicuuc.so.60...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/libsystemd.so.0...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/libc.so.6...Reading symbols from /usr/lib/debug/.build-id/a2/03af6935d5ab00f79fe12b4098d27d3f70c19f.debug...expanding to full symbols...done.
expanding to full symbols...done.
Reading symbols from /lib/ld-linux.so.2...Reading symbols from /usr/lib/debug/.build-id/d3/4d111ccc131067fcbc126906e01374c678c06e.debug...expanding to full symbols...done.
expanding to full symbols...done.
Reading symbols from /usr/lib/i386-linux-gnu/libicudata.so.60...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/libz.so.1...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/liblzma.so.5...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/libaudit.so.1...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libkrb5.so.3...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libk5crypto.so.3...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/libcom_err.so.2...expanding to full symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libkrb5support.so.0...expanding to full symbols...(no debugging symbols found)...done.
Readin
Re: backend crash on DELETE, reproducible locally
On Thu, Nov 01, 2018 at 12:27:23PM -0300, Alvaro Herrera wrote:

> In general terms, this bug report would have been more actionable if you
> had shown the definition of the tables involved right from the start.

Sorry for that, will supply.

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: backend crash on DELETE, reproducible locally
On Thu, Nov 01, 2018 at 12:27:23PM -0300, Alvaro Herrera wrote:

> Ah, now this is interesting. Can you please supply the definition of
> the table?

Attached.

> I'm wondering if there is a partitioned table with an FK to
> this one.

There is. Both ref.auto_hint and clin.suppressed_hint are using
inheritance (from audit.audit_fields). However, GNUmed does not use
inheritance for explicit partitioning but rather similar to how
classes and subclasses are used in OO languages.

> I'm not quite seeing how come 'tup' is NULL there. Can you
> 'print trigdata' in frame 2?

Sure, how ?  :-)

(I can surely type "print trigdata", but does that already auto-select
from "frame 2" ?)

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B

Ausgabeformat ist »wrapped«.

Tabelle »audit.audit_fields«

    Spalte        | Typ                      | NULL erlaubt? | Vorgabewert                                           | Speicherung | Beschreibung
    --------------+--------------------------+---------------+-------------------------------------------------------+-------------+--------------------------------------------------------
    pk_audit      | integer                  | not null      | nextval('audit.audit_fields_pk_audit_seq'::regclass)  | plain       |
    row_version   | integer                  | not null      | 0                                                     | plain       | the version of the row; mainly just a count
    modified_when | timestamp with time zone | not null      | CURRENT_TIMESTAMP                                     | plain       | when has this row been committed (created/modified)
    modified_by   | name                     | not null      | CURRENT_USER                                          | plain       | by whom has this row been committed (created/modified)

    (Sortierfolge and Statistikziel are empty for all columns.)

Indexe:
    "audit_fields_pkey" PRIMARY KEY, btree (pk_audit)
Check-Constraints:
    "audit_audit_fields_sane_modified_when" CHECK ((modified_when <= clock_timestamp()) IS TRUE)
Regeln:
    audit_fields_no_del AS ON DELETE TO audit.audit_fields DO INSTEAD NOTHING
    audit_fields_no_ins AS ON INSERT TO audit.audit_fields DO INSTEAD NOTHING
    audit_fields_no_upd AS ON UPDATE TO audit.audit_fields DO INSTEAD NOTHING
Kindtabellen:
    bill.bill, bill.bill_item, blobs.doc_desc, blobs.doc_med,
    blobs.lnk_doc2hospital_stay, blobs.lnk_doc_med2episode,
    cfg.report_query, clin.allergy_state, clin.clin_diag,
    clin.clin_item_type, clin.clin_root_item, clin.encounter,
    clin.episode, clin.external_care, clin.fhx_relation_type,
    clin.form_data, clin.health_issue, clin.incoming_data_unmatchable,
    clin.incoming_data_unmatched, clin.lnk_code2item_root,
    clin.lnk_constraint2vacc_course, clin.lnk_pat2vaccination_course,
    clin.lnk_substance2episode, clin.lnk_tst2norm, clin.lnk_type2item,
    clin.lnk_vaccination_course2schedule, clin.lnk_vaccine2inds,
    clin.patient, clin.review_root, clin.suppressed_hint,
    clin.test_org, clin.test_panel, clin.test_type,
    clin.vaccination_course, clin.vaccination_course_constraint,
    clin.vaccination_definition, clin.vaccination_schedule,
    clin.vacc_indication, clin.vaccine, clin.vaccine_batches,
    clin.vacc_route, clin.waiting_list, de_de.beh_fall_typ,
    de_de.lab_test_gnr, de_de.prax_geb_paid, dem.address,
    dem.gender_label, dem.identity, dem.identity_tag,
    dem.inbox_item_category, dem.inbox_item_type,
    dem.lnk_identity2ext_id, dem.lnk_job2person, dem.lnk_org_unit2comm,
    dem.lnk_org_unit2ext_id, dem.lnk_person2relative,
    dem.message_inbox, dem.occupation, dem.org, dem.org_unit,
    dem.praxis_branch, dem.region, dem.relation_types, dem.staff,
    dem.street, dem.urb, gm.access_log, ref.auto_hint,
Re: backend crash on DELETE, reproducible locally
On Thu, Nov 01, 2018 at 12:27:23PM -0300, Alvaro Herrera wrote:

> > Program received signal SIGSEGV, Segmentation fault.
> > heap_attisnull (tup=0x0, attnum=5, tupleDesc=0xb2990ef4) at ./build/../src/backend/access/common/heaptuple.c:403
> > 403     ./build/../src/backend/access/common/heaptuple.c: Datei oder Verzeichnis nicht gefunden.
> > (gdb) thread apply all bt full
> >
> > Thread 1 (Thread 0xb4c429c0 (LWP 22367)):
> > #0  heap_attisnull (tup=0x0, attnum=5, tupleDesc=0xb2990ef4) at ./build/../src/backend/access/common/heaptuple.c:403
> >         __func__ = "heap_attisnull"
> > #1  0x0087690d in ri_NullCheck (tupDesc=0xb2990ef4, tup=0x0, riinfo=0x2e1d548, rel_is_pk=true) at ./build/../src/backend/utils/adt/ri_triggers.c:2894
> >         attnums = 0x2e1d5a4
> >         i = 0
> >         allnull = true
> >         nonenull = true
> > #2  0x00879bf7 in RI_FKey_cascade_del (fcinfo=0xbfbda9e4) at ./build/../src/backend/utils/adt/ri_triggers.c:917
>
> Can you 'print trigdata' in frame 2?

Does this help at all ?

    (gdb) frame 2
    #2  0x00879bf7 in RI_FKey_cascade_del (fcinfo=0xbfbda9e4) at ./build/../src/backend/utils/adt/ri_triggers.c:917
    917     ./build/../src/backend/utils/adt/ri_triggers.c: Datei oder Verzeichnis nicht gefunden.
    (gdb) print trigdata
    $1 = (TriggerData *) 0xbfbdaca4

It is from another run, however.

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: backend crash on DELETE, reproducible locally
On Thu, Nov 01, 2018 at 11:43:56AM -0400, Tom Lane wrote:

> Alvaro Herrera writes:
> > Ah, now this is interesting. Can you please supply the definition of
> > the table? I'm wondering if there is a partitioned table with an FK to
> > this one. I'm not quite seeing how come 'tup' is NULL there.
>
> Yeah, apparently we've passed a null OLD tuple to an RI_FKey_cascade_del
> trigger, which surely shouldn't happen. It'd be interesting to look at
> the set of triggers on this table. I don't entirely trust psql's \d
> to show us reality if there's something screwed up about the triggers,
> so in addition to \d output, could we see
>
>     select * from pg_trigger where tgrelid = 'ref.auto_hint'::regclass;

Ausgabeformat ist »wrapped«.

    tgrelid | tgname                        | tgfoid | tgtype | tgenabled | tgisinternal | tgconstrrelid | tgconstrindid | tgconstraint | tgdeferrable | tginitdeferred | tgnargs | tgattr | tgargs | tgqual | tgoldtable | tgnewtable
    --------+-------------------------------+--------+--------+-----------+--------------+---------------+---------------+--------------+--------------+----------------+---------+--------+--------+--------+------------+-----------
     111088 | RI_ConstraintTrigger_a_116970 |   1646 |      9 | O         | t            |        116789 |        111099 |       116969 | f            | f              |       0 |        | \x     |        |            |
     111088 | RI_ConstraintTrigger_a_116971 |   1649 |     17 | O         | t            |        116789 |        111099 |       116969 | f            | f              |       0 |        | \x     |        |            |
     111088 | zt_ins_auto_hint              | 141652 |      7 | O         | f            |             0 |             0 |            0 | f            | f              |       0 |        | \x     |        |            |
     111088 | zt_upd_auto_hint              | 141654 |     19 | O         | f            |             0 |             0 |            0 | f            | f              |       0 |        | \x     |        |            |
     111088 | zt_del_auto_hint              | 141656 |     11 | O         | f            |             0 |             0 |            0 | f            | f              |       0 |        | \x     |        |            |
    (5 Zeilen)

BTW, do you want me to attach text files or include them in the body
(potential wrapping issues on display) ?

Thanks,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: backend crash on DELETE, reproducible locally
On Fri, Nov 02, 2018 at 11:56:58PM -0400, Tom Lane wrote:

> > On Thu, Nov 01, 2018 at 11:43:56AM -0400, Tom Lane wrote:
> >> Yeah, apparently we've passed a null OLD tuple to an RI_FKey_cascade_del
> >> trigger, which surely shouldn't happen. It'd be interesting to look at
> >> the set of triggers on this table. I don't entirely trust psql's \d
> >> to show us reality if there's something screwed up about the triggers,
> >> so in addition to \d output, could we see
> >> select * from pg_trigger where tgrelid = 'ref.auto_hint'::regclass;
>
> > [ pretty normal-looking trigger entries ]
>
> I was feeling baffled about this, but it suddenly occurs to me that maybe
> the bug fixed in 040a1df61/372102b81 explains this. If the particular
> tuple you're trying to delete predates the last ALTER TABLE ADD COLUMN
> operation on the table, then this bug would result in t_self getting
> set to "invalid", which'd cause AfterTriggerSaveEvent to save "ate_ctid1"
> as "invalid", which'd lead to a null tuple getting passed when the trigger
> eventually gets invoked.

Now, that is a theory I can follow up on -- all the bootstrapping SQL
scripts are under version control, so I can pin down the exact
sequence of events.

... goes off to do some checks ...

And sure enough, there's an ALTER TABLE ADD COLUMN related to that table:

    alter table ref.auto_hint add column recommendation_query text;
    -- (audit log table needs to get the new column, too)
    alter table audit.log_auto_hint add column recommendation_query text;

before the DELETE of the pre-existing tuple.

When running a rigged upgrade that stops right before those ALTER
TABLEs and then doing the following:

    begin;
        -- instrument:
        insert into ref.auto_hint (title, query, hint, source, lang) values (
            'DELETE test',
            'select 1;',
            'insertion before ADD COLUMN',
            'testing',
            'en'
        );
        -- works:
        DELETE from ref.auto_hint where title = 'DELETE test';
        -- instrument, again:
        insert into ref.auto_hint (title, query, hint, source, lang) values (
            'DELETE test',
            'select 1;',
            'insertion before ADD COLUMN',
            'testing',
            'en'
        );
        alter table ref.auto_hint add column recommendation_query text;
        -- audit log table needs to get the message
        alter table audit.log_auto_hint add column recommendation_query text;
        -- fails:
        DELETE from ref.auto_hint where title = 'DELETE test';
    rollback;

the expected segfault does indeed occur.

Conversely, moving the offending

    DELETE FROM ref.auto_hint WHERE title = 'Kontraindikation: ACE/Sartan <-> Schwangerschaft';

to right before the ALTER TABLEs makes the full upgrade run through
without further problems.

Looking at 040a1df61/372102b81, it feels like it fits the bill.

So, I guess I can work around the issue by the above manoeuvre and
report back once 040a1df61/372102b81 is released.

Anything else you'd want me to look into ?

Many thanks,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: backend crash on DELETE, reproducible locally
On Sat, Nov 03, 2018 at 11:39:49AM -0400, Tom Lane wrote:

> Karsten Hilbert writes:
> > On Fri, Nov 02, 2018 at 11:56:58PM -0400, Tom Lane wrote:
> >> I was feeling baffled about this, but it suddenly occurs to me that maybe
> >> the bug fixed in 040a1df61/372102b81 explains this.
> >
> > So, I guess I can work around the issue by the above
> > manoeuvre and report back once 040a1df61/372102b81 is
> > released.
> > Anything else you'd want me to look into ?
>
> Don't suppose you'd want to build a server with 372102b81 applied
> and see if it works?

Yeah, I feared that answer ;-)

However, I lack the skills and the machine(s) to do so ...

I will, at any rate, report back when the existing fix is released.

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
FIXED: backend crash on DELETE, reproducible locally
For the record, regarding the backend crash when DELETEing tuples
older than a recent ALTER TABLE ADD COLUMN:

> > > On Fri, Nov 02, 2018 at 11:56:58PM -0400, Tom Lane wrote:
> > >> I was feeling baffled about this, but it suddenly occurs to me that maybe
> > >> the bug fixed in 040a1df61/372102b81 explains this.
> >
> > > So, I guess I can work around the issue by the above
> > > manoeuvre and report back once 040a1df61/372102b81 is
> > > released.
> >
> > I will, at any rate, report back when the existing fix is released.

Now that PG 11.1 is released on Debian

    postgresql-11:
      Installiert:           11.1-1
      Installationskandidat: 11.1-1
      Versionstabelle:
     *** 11.1-1 500
            500 http://httpredir.debian.org/debian unstable/main i386 Packages
            100 /var/lib/dpkg/status
         11.0-1+b1 990
            990 http://httpredir.debian.org/debian buster/main i386 Packages

I can report that my issue is fixed by that version.

Thanks to all,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Does PostgreSQL use atomic file creation of FS?
On Wed, Dec 12, 2018 at 02:48:12PM +0300, Dmitry Lazurkin wrote:

> Does PostgreSQL use atomic file creation on FS? How does PostgreSQL
> catch the situation when the system crashes between an open call and
> a write call? I am interested in this because I would like to use
> PostgreSQL on a network file system.

I doubt we can get more certainty than this:

    https://www.postgresql.org/docs/devel/creating-cluster.html#CREATING-CLUSTER-NFS

Best,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
multi-SQL command string aborts despite IF EXISTS
Hello all,

the Orthanc DICOM server tries to create a trigram index using this code:

    db->Execute(
        "CREATE EXTENSION IF NOT EXISTS pg_trgm; "
        "CREATE INDEX DicomIdentifiersIndexValues2 ON DicomIdentifiers USING gin(value gin_trgm_ops);");

which results in this sequence of events inside PG 11:

    2019-01-28 08:52:50 GMT ORT: exec_execute_message, postgres.c:2011
    2019-01-28 08:52:50 GMT LOG: 0: Anweisung: CREATE EXTENSION IF NOT EXISTS pg_trgm; CREATE INDEX DicomIdentifiersIndexValues2 ON DicomIdentifiers USING gin(value gin_
    2019-01-28 08:52:50 GMT ORT: exec_simple_query, postgres.c:975
    2019-01-28 08:52:50 GMT FEHLER: 42501: keine Berechtigung, um Erweiterung »pg_trgm« zu erzeugen
    2019-01-28 08:52:50 GMT TIPP: Nur Superuser können diese Erweiterung anlegen.
    2019-01-28 08:52:50 GMT ORT: execute_extension_script, extension.c:809
    2019-01-28 08:52:50 GMT ANWEISUNG: CREATE EXTENSION IF NOT EXISTS pg_trgm; CREATE INDEX DicomIdentifiersIndexValues2 ON DicomIdentifiers USING gin(value gin_trgm_ops);
    2019-01-28 08:52:50 GMT LOG: 0: Anweisung: ABORT

Apparently, the two SQL commands are being sent as one command string.

It is quite reasonable that the CREATE EXTENSION part fails, because
the connected user, indeed, does not have sufficient permissions, as
it should be. However, the pg_trgm extension is pre-installed by the
database superuser, such that index creation should succeed.

Now, I would have thought that the "IF NOT EXISTS" part of the CREATE
EXTENSION would have allowed the subsequent CREATE INDEX to succeed.

Am I wrong ?

Will an abort inside a multi-SQL command string prevent SQL commands
later in that same string from being executed ?

(Mind you, the code above does not abort the *transaction*, but it
does not execute the second SQL command.)

Many thanks for insights,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: multi-SQL command string aborts despite IF EXISTS
On Mon, Jan 28, 2019 at 03:17:47PM +0100, Laurenz Albe wrote:

> > Now, I would have thought that the "IF NOT EXISTS" part of
> > the CREATE EXTENSION would have allowed the subsequent CREATE
> > INDEX to succeed.
> >
> > Am I wrong ?
>
> No, you are right.
>
> The "pg_trgm" extension does *not* exist in the database, and that is your
> problem.

Wow, I didn't realize the difference between pg_extension and
pg_available_extensions ...

> Perhaps you preinstalled the extension in the wrong database (postgres?).

It wasn't, indeed, *installed* (despite being available ;)

Thanks,
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
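P.S. For anyone hitting the same confusion, the two catalogs can be
told apart like so (a sketch):

    -- available to the cluster (shared library and control file present):
    SELECT name, default_version, installed_version
    FROM pg_available_extensions
    WHERE name = 'pg_trgm';

    -- actually installed in the current database:
    SELECT extname, extversion
    FROM pg_extension
    WHERE extname = 'pg_trgm';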
Re: Methods to quickly spin up copies of an existing databases
On Fri, Mar 01, 2019 at 04:51:32PM -0500, Arjun Ranade wrote:

> Pre-copying is not really an option since we could potentially need 1-X
> instances, so it needs to be scalable.

Would it work to have a PG instance with a ready-to-use cluster/DB
inside a VM, and make a copy of that as needed ?  The copies then get
thrown away.

Here, too, pre-copying a bunch of VMs would help. Disk space is
(apparently) cheaper than time (for your use case).

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Where to store Blobs?
On Wed, Mar 13, 2019 at 11:50:37AM -0400, Christopher Browne wrote:

> > I guess most people do not store Blobs in PostgreSQL.

- BYTEA puts practical limits on size
- LO storage happens inside the system (!) table

Nowadays, there are Foreign Data Wrappers which might encapsulate
files as if they lived inside the database.

Also, a combination of

    COPY ... TO ... FORMAT binary
    pg_read_binary_file()

and suitable plpgsql SECURITY DEFINER functions might provide for a
Poor Man's integrated external storage of binary files (a sketch
follows below).

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
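P.S. A minimal sketch of the read side. The function name, role name,
and directory layout are made up for illustration;
pg_read_binary_file() is restricted to privileged roles, hence the
SECURITY DEFINER wrapper; path handling is left naive on purpose:

    -- hypothetical wrapper: lets unprivileged roles read blob files
    -- stored under a fixed directory below the data directory
    CREATE OR REPLACE FUNCTION fetch_blob(_name text)
        RETURNS bytea
        LANGUAGE plpgsql
        SECURITY DEFINER
        SET search_path = pg_catalog
    AS $$
    BEGIN
        -- relative paths are resolved against the data directory
        RETURN pg_read_binary_file('blob_store/' || _name);
    END;
    $$;

    REVOKE ALL ON FUNCTION fetch_blob(text) FROM public;
    GRANT EXECUTE ON FUNCTION fetch_blob(text) TO some_app_role;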
Re: s3 was: Where to store Blobs?
On Mon, Mar 18, 2019 at 11:42:00AM +0100, Thomas Güttler wrote:

> I am curious. Why did nobody say:
>
>     store blobs in a storage server (like s3) and only store the blob-id in
>     PostgreSQL?

That's been rehashed to the point of becoming a FAQ:

    https://wiki.postgresql.org/wiki/BinaryFilesInDB

Karsten Hilbert
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B