SV: SV: SV: SV: Problem with ssl and psql in Postgresql 13

2020-12-22 Thread Gustavsson Mikael
Hi,


Yes, it's odd. I think we'll begin with a download/reinstall and take it from there.

The server name is just letters and numbers so I think we can rule that out.


Christmas is coming up fast as usual so I think I will pick this up in January.

Thanks for all the help and Happy Christmas! Or God Jul as we say in Sweden.


KR

Mikael Gustavsson, SMHI


From: Tom Lane 
Sent: 18 December 2020 21:02:50
To: Gustavsson Mikael
Cc: Magnus Hagander; Kyotaro Horiguchi; pgsql-gene...@postgresql.org; 
Svensson Peter
Subject: Re: SV: SV: SV: Problem with ssl and psql in Postgresql 13

Gustavsson Mikael  writes:
> pgsql-13 with require:
> $ /usr/pgsql-13/bin/psql "dbname=postgres user=kalle host=server 
> sslmode=require"
> Password for user kalle:
> psql (13.1)
> Type "help" for help.

That is just bizarre.  libpq should not ignore the sslmode=require option
like that, unless it thinks it's making a Unix-socket connection, which
it should not think given the host specification.  (There's not a slash
in your server's real name, is there?  But if there was, v11 should
misbehave too.)

It seems like there must be some environment setting, or maybe a service
file, changing the behavior from what it should be on its face.  But
that theory has big flaws too: an explicit sslmode=require setting should
not be overridable from environment, and even if it was, why wouldn't v11
act the same?

The only other conclusion I can think of is that your copy of libpq.so
is broken.  Maybe you should try redownloading/reinstalling v13.

regards, tom lane
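For what it's worth, one way to confirm from inside the session whether SSL is actually in use, independent of the psql banner, is the pg_stat_ssl view (a sketch; the view is available since 9.5):

```sql
-- When a connection uses SSL, psql's banner also prints a line like
-- "SSL connection (protocol: TLSv1.2, cipher: ..., bits: 256)".
-- This query reports the same state for the current backend:
SELECT ssl, version, cipher
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();
```

If ssl comes back false despite sslmode=require, libpq's option handling really is broken, which would fit the broken-libpq.so theory.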




Re: pg_upgrade question

2020-12-22 Thread Paul Förster
Hi Dan,

> I am trying to find out if there is any step-by-step instruction to reconcile 
> the old data dir and the upgraded data dir after using the "--link" option to 
> do an upgrade.
>  
> I ran this to do an upgrade from 11.5 to 12.1: pg_upgrade -d 
> /hostname/pg/dev115/data -D /hostname/pg/dev121upg/data --link -b 
> /pgdbadevbal800/pg/PostgreSQL-11.5/bin -B 
> /pgdbadevbal800/pg/PostgreSQL-12.1/bin -p 1432 -P 2432 -v
>  
> postgresdbad:dev115:pgdbadevbal800:> pwd
> /hostname/pg
>  
> postgresdbad:dev115:pgdbadevbal800:> du -sh dev121upg
> 2.3G    dev121upg
>  
> postgresdbad:dev115:pgdbadevbal800:> du -sh dev115
> 22G     dev115
>  
> My goal is to be able to do an in-place upgrade from 11.5 to 12.1 using the 
> same data dir "/hostname/pg/dev115/data".  Without the "--link" option I need 
> to double the space usage for the instance.  What is the easiest way to 
> accomplish this task?
>  
> Thanks so much for your help.

after a successful upgrade, you may delete the dev115 directory and move the 
dev121upg directory in its place. That's how I usually do it. Something like 
this example:

(DB = cluster name)

/data/pg/DB/db <= PGDATA old
/data/pg/DB/dbnew  <= PGDATA new, do the initdb here!

initdb -k -D /data/pg/DB/dbnew ...
pg_upgrade -d /data/pg/DB/db -D /data/pg/DB/dbnew ...
pg_ctl -D /data/pg/DB/dbnew stop
rm -rf /data/pg/DB/db
mv /data/pg/DB/dbnew /data/pg/DB/db
pg_ctl -D /data/pg/DB/db start

Your mileage may vary. Use at your own risk. ;-)

If you shut down a PostgreSQL cluster properly, you can then easily move PGDATA 
to virtually any place you want and start it there because PostgreSQL doesn't 
keep references to absolute paths anywhere.
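One caveat to "no absolute paths anywhere": tablespaces are an exception, since pg_tblspc contains absolute symlinks. Before relocating PGDATA it may be worth checking for them (a sketch):

```sql
-- Any non-empty location here is an absolute path that will still be
-- referenced after PGDATA is moved; the built-in pg_default/pg_global
-- tablespaces report an empty string and are unaffected.
SELECT spcname, pg_tablespace_location(oid) AS location
FROM pg_tablespace;
```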

Cheers,
Paul



Re: Avoid excessive inlining?

2020-12-22 Thread Laurenz Albe
On Mon, 2020-12-21 at 11:45 -0500, Philip Semanchuk wrote:
> > On Dec 19, 2020, at 12:59 AM, Joel Jacobson  wrote:
> > Is there a way to avoid excessive inlining when writing pure SQL functions, 
> > without having to use PL/pgSQL?
> 
> The rules for inlining are here:
> https://wiki.postgresql.org/wiki/Inlining_of_SQL_functions
> 
> According to those rules, if you declared your SQL function as VOLATILE, then 
> Postgres wouldn’t
>  inline it. From your question, I’m not sure if you want to have the same 
> function inlined
>  sometimes and not others. I can’t think of a way to do that offhand.

Where do you see that?  As far as I know, VOLATILE is the best choice if you
want the function to be inlined.

I would say that the simplest way to prevent a function from being inlined
is to set a parameter on it:

  ALTER FUNCTION f() SET enable_seqscan = on;

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com
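A minimal illustration of that trick (a sketch; the function and the chosen setting are arbitrary):

```sql
-- A plain SQL function like this is normally inlined into the caller.
CREATE FUNCTION f() RETURNS int
LANGUAGE sql AS 'SELECT 1';

-- Attaching any per-function SET clause blocks inlining, because a
-- function with local settings cannot be folded into the calling query.
ALTER FUNCTION f() SET enable_seqscan = on;

-- EXPLAIN (VERBOSE) SELECT f();  -- the plan output now shows f()
-- instead of the inlined constant 1
```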





Re: Avoid excessive inlining?

2020-12-22 Thread Philip Semanchuk



> On Dec 22, 2020, at 8:40 AM, Laurenz Albe  wrote:
> 
> On Mon, 2020-12-21 at 11:45 -0500, Philip Semanchuk wrote:
>>> On Dec 19, 2020, at 12:59 AM, Joel Jacobson  wrote:
>>> Is there a way to avoid excessive inlining when writing pure SQL functions, 
>>> without having to use PL/pgSQL?
>> 
>> The rules for inlining are here:
>> https://wiki.postgresql.org/wiki/Inlining_of_SQL_functions
>> 
>> According to those rules, if you declared your SQL function as VOLATILE, 
>> then Postgres wouldn’t
>> inline it. From your question, I’m not sure if you want to have the same 
>> function inlined
>> sometimes and not others. I can’t think of a way to do that offhand.
> 
> Where do you see that?  As far as I know, VOLATILE is the best choice if you
> want the function to be inlined.

Ugh, you’re absolutely right, and I’m sorry for spreading misinformation. 
That’s what I get for quoting from memory rather than reading the link that I 
posted. 


> 
> I would say that the simplest way to prevent a function from being inlined
> is to set a parameter on it:
> 
>  ALTER FUNCTION f() SET enable_seqscan = on;

I appreciate the correction and education. 

Cheers
Philip



Re: Avoid excessive inlining?

2020-12-22 Thread Joel Jacobson
On Tue, Dec 22, 2020, at 14:40, Laurenz Albe wrote:
>I would say that the simplest way to prevent a function from being inlined
>is to set a parameter on it:
>  ALTER FUNCTION f() SET enable_seqscan = on;

Thanks, very useful, didn't know about that trick.

I think I was a bit unclear about my problem, and might have used the wrong 
terminology.

In my LATERAL query, there are calculations in a certain order.
For each step, "columns" are computed named e.g. "g", "c", "h", "i", etc.
However, when looking at the query plan, these steps are gone, and instead 
there is just one huge fully expanded expression, which doesn't look very 
efficient.

Ideas?

PREPARE q_lateral AS SELECT make_date($1, easter_month, easter_day)
FROM (VALUES ($1 % 19, $1 / 100)) AS Q1(g,c)
JOIN LATERAL (VALUES ((c - c/4 - (8*c + 13)/25 + 19*g + 15) % 30)) AS
Q2(h) ON TRUE
JOIN LATERAL (VALUES (h - (h/28)*(1 - (h/28)*(29/(h + 1))*((21 - g)/11)))) AS Q3(i) ON TRUE
JOIN LATERAL (VALUES (($1 + $1/4 + i + 2 - c + c/4) % 7)) AS Q4(j) ON TRUE
JOIN LATERAL (VALUES (i - j)) AS Q5(p) ON TRUE
JOIN LATERAL (VALUES (3 + (p + 26)/30, 1 + (p + 27 + (p + 6)/40) % 31))
AS Q6(easter_month, easter_day) ON TRUE;

SET plan_cache_mode = 'force_generic_plan';

EXPLAIN (ANALYZE, VERBOSE) EXECUTE q_lateral(2020);

Result  (cost=0.00..1.14 rows=1 width=4) (actual time=0.038..0.038 rows=1 
loops=1)
  Output: make_date($1, (3 + (($1 / 100) - (($1 / 100) / 4)) - (((8 * 
($1 / 100)) + 13) / 25)) + (19 * ($1 % 19))) + 15) % 30) - $1 / 100) - 
(($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) / 25)) + (19 * ($1 % 19))) + 15) % 
30) / 28) * (1 - ($1 / 100) - (($1 / 100) / 4)) - (((8 * ($1 / 100)) + 
13) / 25)) + (19 * ($1 % 19))) + 15) % 30) / 28) * (29 / ((($1 / 100) - 
(($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) / 25)) + (19 * ($1 % 19))) + 15) % 
30) + 1))) * ((21 - ($1 % 19)) / 11) - (($1 + ($1 / 4)) + ((($1 / 
100) - (($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) / 25)) + (19 * ($1 % 19))) 
+ 15) % 30) - $1 / 100) - (($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) 
/ 25)) + (19 * ($1 % 19))) + 15) % 30) / 28) * (1 - ($1 / 100) - (($1 / 
100) / 4)) - (((8 * ($1 / 100)) + 13) / 25)) + (19 * ($1 % 19))) + 15) % 30) / 
28) * (29 / ((($1 / 100) - (($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) / 
25)) + (19 * ($1 % 19))) + 15) % 30) + 1))) * ((21 - ($1 % 19)) / 11)) + 2) 
- ($1 / 100)) + (($1 / 100) / 4)) % 7)) + 26) / 30)), (1 + ((($1 / 100) 
- (($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) / 25)) + (19 * ($1 % 19))) + 15) 
% 30) - $1 / 100) - (($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) / 25)) 
+ (19 * ($1 % 19))) + 15) % 30) / 28) * (1 - ($1 / 100) - (($1 / 100) / 
4)) - (((8 * ($1 / 100)) + 13) / 25)) + (19 * ($1 % 19))) + 15) % 30) / 28) * 
(29 / ((($1 / 100) - (($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) / 25)) + 
(19 * ($1 % 19))) + 15) % 30) + 1))) * ((21 - ($1 % 19)) / 11) - (($1 + 
($1 / 4)) + ((($1 / 100) - (($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) / 
25)) + (19 * ($1 % 19))) + 15) % 30) - $1 / 100) - (($1 / 100) / 4)) - 
(((8 * ($1 / 100)) + 13) / 25)) + (19 * ($1 % 19))) + 15) % 30) / 28) * (1 - 
($1 / 100) - (($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) / 25)) + (19 
* ($1 % 19))) + 15) % 30) / 28) * (29 / ((($1 / 100) - (($1 / 100) / 4)) - 
(((8 * ($1 / 100)) + 13) / 25)) + (19 * ($1 % 19))) + 15) % 30) + 1))) * ((21 - 
($1 % 19)) / 11)) + 2) - ($1 / 100)) + (($1 / 100) / 4)) % 7)) + 27) + 
(($1 / 100) - (($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) / 25)) + (19 
* ($1 % 19))) + 15) % 30) - $1 / 100) - (($1 / 100) / 4)) - (((8 * ($1 
/ 100)) + 13) / 25)) + (19 * ($1 % 19))) + 15) % 30) / 28) * (1 - ($1 / 
100) - (($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) / 25)) + (19 * ($1 % 19))) 
+ 15) % 30) / 28) * (29 / ((($1 / 100) - (($1 / 100) / 4)) - (((8 * ($1 / 
100)) + 13) / 25)) + (19 * ($1 % 19))) + 15) % 30) + 1))) * ((21 - ($1 % 19)) / 
11) - (($1 + ($1 / 4)) + ((($1 / 100) - (($1 / 100) / 4)) - (((8 * 
($1 / 100)) + 13) / 25)) + (19 * ($1 % 19))) + 15) % 30) - $1 / 100) - 
(($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) / 25)) + (19 * ($1 % 19))) + 15) % 
30) / 28) * (1 - ($1 / 100) - (($1 / 100) / 4)) - (((8 * ($1 / 100)) + 
13) / 25)) + (19 * ($1 % 19))) + 15) % 30) / 28) * (29 / ((($1 / 100) - 
(($1 / 100) / 4)) - (((8 * ($1 / 100)) + 13) / 25)) + (19 * ($1 % 19))) + 15) % 
30) + 1))) * ((21 - ($1 % 19)) / 11)) + 2) - ($1 / 100)) + (($1 / 100) / 
4)) % 7)) + 6) / 40)) % 31)))
Planning Time: 4.144 ms
Execution Time: 0.220 ms



Re: Avoid excessive inlining?

2020-12-22 Thread Tom Lane
"Joel Jacobson"  writes:
> I think I was a bit unclear about my problem, and might have used the wrong 
> terminology.
> In my LATERAL query, there are calculations in a certain order.
> For each step, "columns" are computed named e.g. "g", "c", "h", "i", etc.
> However, when looking at the query plan, these steps are gone, and instead 
> there is just one huge fully expanded expression, which doesn't look very 
> efficient.

Yeah, this isn't really about function inlining, it's about subquery
flattening (which is similar in some ways, but not the same thing).

Unfortunately, subquery flattening happens early enough in the planner
that there's no chance of making any useful cost comparisons to decide
whether to do it or not.  So we just do it unconditionally.  I'm
not really sure that failing to do it would provide a better outcome
in this situation anyway --- sure, you'd save a few scalar calculations,
but the overhead of running additional plan nodes could outweigh that.

The long and the short of it is that SQL isn't terribly well suited to
execute a fundamentally stepwise, imperative algorithm like this one.
Rather than hacking up cute tricks with LATERAL, you should just use
a language that *is* well suited.  That's why we provide PLs.

FWIW, another trick for inserting optimization fences is WITH.
So you could do something like

WITH Q1(g,c) AS MATERIALIZED
  (SELECT year % 19, year / 100),
Q2(h) AS MATERIALIZED
  (SELECT (c - c/4 - (8*c + 13)/25 + 19*g + 15) % 30 FROM Q1),
...
SELECT make_date(year, easter_month, easter_day) FROM Q6;

But I'd bet lunch that that won't be faster for this example,
because there's a lot of overhead in CTEs.

regards, tom lane
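For reference, the stepwise computation from the LATERAL query translates almost mechanically into PL/pgSQL along the lines Tom suggests (a sketch; the function name is made up):

```sql
-- Each intermediate value becomes a variable assignment instead of a
-- lateral "column", so the plan is simply one function call.
CREATE FUNCTION easter_date(year int) RETURNS date
LANGUAGE plpgsql IMMUTABLE AS $$
DECLARE
  g int := year % 19;
  c int := year / 100;
  h int;
  i int;
  j int;
  p int;
BEGIN
  h := (c - c/4 - (8*c + 13)/25 + 19*g + 15) % 30;
  i := h - (h/28)*(1 - (h/28)*(29/(h + 1))*((21 - g)/11));
  j := (year + year/4 + i + 2 - c + c/4) % 7;
  p := i - j;
  RETURN make_date(year, 3 + (p + 26)/30, 1 + (p + 27 + (p + 6)/40) % 31);
END;
$$;

-- SELECT easter_date(2020);  -- 2020-04-12
```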




Re: Missing rows after migrating from postgres 11 to 12 with logical replication

2020-12-22 Thread Lars Vonk
> Did you have some other replication running on the 11 instance?

Yes the 11 instance also had another (11) replica running. (But these logs
are from the 12 instance)

The new 12 instance also had a replica running.

> In any case what was the command logged just before the ERROR.

There is nothing logged.

These are the only log statements just before the error message, one second
later the ERROR is logged:

2020-12-10 13:26:43 UTC::@:[5537]:LOG:  checkpoints are occurring too
frequently (20 seconds apart)
2020-12-10 13:26:43 UTC::@:[5537]:HINT:  Consider increasing the
configuration parameter "max_wal_size".
2020-12-10 13:26:43 UTC::@:[5537]:LOG:  checkpoint starting: wal

Lars

On Mon, Dec 21, 2020 at 11:51 PM Adrian Klaver 
wrote:

> On 12/21/20 2:42 PM, Lars Vonk wrote:
> > What was being run when the above ERROR was triggered?
> >
> >
> > The initial copy of a table. Other than that we ran select
> > pg_size_pretty(pg_relation_size('table_name')) to see the current size
> > of the table being copied to get a feeling on progress.
> >
> > And whenever we added a new table to the publication we ran ALTER
> > SUBSCRIPTION migration REFRESH PUBLICATION; to add any new table to the
> > subscription. But not around that timestamp, about 50 minutes before the
> > first occurrence of that ERROR. (no ERRORs after prior ALTER
> SUBSCRIPTIONs).
> >
> > But after the initial copies ended there are more ERRORs on different
> > WAL segments missing. Each missing WAL segment is logged as ERROR a
> > couple of times and then no more. After a couple of hours no errors are
> > logged.
>
> Something was looking for the WAL segment.
>
> Did you have some other replication running on the 11 instance?
>
> In any case what was the command logged just before the ERROR.
>
> >
> > Lars
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
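To identify what was still asking for those WAL segments, listing the replication slots and WAL senders on the source instance can help (a sketch):

```sql
-- A slot retains WAL from restart_lsn onward; a consumer connecting
-- *without* a slot (or after its slot was dropped) can find that the
-- segments it needs were already recycled, which produces the
-- "requested WAL segment ... has already been removed" ERROR.
SELECT slot_name, slot_type, active, restart_lsn
FROM pg_replication_slots;

SELECT pid, application_name, state, sent_lsn
FROM pg_stat_replication;
```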


Re: Avoid excessive inlining?

2020-12-22 Thread Joel Jacobson
Thanks Tom,

this was exactly what I needed to hear.

I guess I have become too fond of all the nice new advanced SQL features, 
such as LATERAL and MATERIALIZED CTEs (now at my disposal, since I only code 
on hobby projects after years stuck on an old PostgreSQL version at my 
previous job), to realise that such features are not always a good fit for 
the job.

I rewrote all the slow pure SQL code in PL/pgSQL and got a 568% speed-up in 
the CBOR to JSON converter I'm working on. Thanks also for giving me 
inspiration on the wording for my own commit message:

https://github.com/truthly/pg-cbor/commit/7ea7640f699cdf271ffa9cfb5d059f8141f541ed

Author: Joel Jakobsson 
Date:   Tue Dec 22 18:21:47 2020 +0100

Optimize by rewriting declarative pure SQL code into imperative PL/pgSQL

PostgreSQL's SQL language isn't terribly well suited to execute
a fundamentally stepwise, imperative algorithm like CBOR.

Rather than hacking up cute tricks with LATERAL, we should just use
a language that *is* well suited, a PL, like PL/pgSQL.

-- Pure SQL (before):
select * from pg_stat_xact_user_functions ;
 schemaname |  funcname  | calls | total_time | self_time
++---++---
 cbor   | next_item  |14 |   48.91024 | 38.964918
 cbor   | next_array | 1 |   7.297435 |  1.816102
 cbor   | next_map   | 2 |  40.844352 |7.8957
 cbor   | to_jsonb   | 1 |  50.222183 |  1.311943

-- PL/pgSQL (after):
select * from pg_stat_xact_user_functions ;
 schemaname |   funcname   | calls | total_time | self_time
+--+---++---
 cbor   | next_item|14 |   8.021371 |  3.358271
 cbor   | next_array   | 1 |   0.565398 |  0.353071
 cbor   | next_map | 2 |   5.607702 |  1.324057
 cbor   | to_jsonb | 1 |   8.823691 |   0.80232

 FUNCTIONS/major_type_0.sql      |  23 +++
 FUNCTIONS/major_type_1.sql      |  23 +++
 FUNCTIONS/major_type_2.sql      |  23 +++
 FUNCTIONS/major_type_3.sql      |  23 +++
 FUNCTIONS/major_type_4.sql      |  23 +++
 FUNCTIONS/major_type_5.sql      |  23 +++
 FUNCTIONS/major_type_6.sql      |  40 
 FUNCTIONS/major_type_7.sql      |  43 +++
 FUNCTIONS/next_item.sql         | 109 -
 Makefile                        |   8 
 expected/rfc7049_appendix_a.out |  52 +++-
 11 files changed, 304 insertions(+), 86 deletions(-)
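Per-function timings like the before/after tables above only appear when function tracking is enabled; roughly:

```sql
-- track_functions must be 'pl' (or 'all') for PL functions to show up
-- in pg_stat_xact_user_functions; note that SQL functions that get
-- inlined are never tracked, since they disappear into the caller.
SET track_functions = 'all';
-- ... run the functions under test, then in the same transaction:
SELECT funcname, calls, total_time, self_time
FROM pg_stat_xact_user_functions;
```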

Best regards,

Joel



ts_parse reports different between MacOS, FreeBSD/Linux

2020-12-22 Thread Mark Felder
Hello,

We have an application whose test suite fails on MacOS when running the search 
tests on unicode characters.

I've narrowed it down to the following:

macos=# select * from ts_parse('default','天');
 tokid | token
---+---
12 | 天
(1 row)

freebsd=# select * from ts_parse('default','天');
 tokid | token
---+---
 2 | 天
(1 row)


This has been bugging me for a while, but it's a test our devs using MacOS just 
ignore for now, as we know it passes our CI/CD pipeline on FreeBSD/Linux. It 
seems that anyone shipping an app on MacOS and bundling Postgres is going 
to have a bad time with searching.


Please let me know if there's anything I can do to help. Will gladly test 
patches.



Thanks,



-- 
  Mark Felder
  ports-secteam & portmgr alumni
  f...@freebsd.org




Re: Missing rows after migrating from postgres 11 to 12 with logical replication

2020-12-22 Thread Adrian Klaver

On 12/22/20 9:10 AM, Lars Vonk wrote:
>> Did you have some other replication running on the 11 instance?
>
> Yes the 11 instance also had another (11) replica running. (But these
> logs are from the 12 instance)

The 11 instance had the data that went missing in the 12 instance, so 
what shows up in the logs for the 11 instance during this period that is 
relevant?

> The new 12 instance also had a replica running.

So the setup was?:

1) 11 primary --> 11 standby, via what replication, logical or binary?
   | --> 12 new instance via logical

2) 12 (new) primary --> 12(?) standby, via what replication, logical or binary?

>> In any case what was the command logged just before the ERROR.
>
> There is nothing logged.
>
> These are the only log statements just before the error message, one
> second later the ERROR is logged:
>
> 2020-12-10 13:26:43 UTC::@:[5537]:LOG:  checkpoints are occurring too
> frequently (20 seconds apart)
> 2020-12-10 13:26:43 UTC::@:[5537]:HINT:  Consider increasing the
> configuration parameter "max_wal_size".
> 2020-12-10 13:26:43 UTC::@:[5537]:LOG:  checkpoint starting: wal
>
> Lars





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: ts_parse reports different between MacOS, FreeBSD/Linux

2020-12-22 Thread Tom Lane
"Mark Felder"  writes:
> We have an application whose test suite fails on MacOS when running the 
> search tests on unicode characters.

Yeah, known problem :-(.  The text search parser relies on the C library's
locale data to classify characters as being letters, digits, etc.
Unfortunately, the UTF8 locales on macOS are just horribly bad, and
report many results that are different from other platforms.

I suppose that Apple has got reasonable Unicode character knowledge
somewhere in their OS; they are just not very interested in making the
POSIX locale APIs work well.  Which leaves us with a bit of a problem
for getting consistent results cross-platform.

regards, tom lane
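The numeric tokids map to the default parser's token types, which makes the discrepancy easy to read (a sketch):

```sql
-- tokid 2 is "word" (Word, all letters) and tokid 12 is "blank"
-- (Space symbols): macOS's locale data evidently fails to classify
-- 天 as a letter, while FreeBSD/Linux do.
SELECT tokid, alias, description
FROM ts_token_type('default')
WHERE tokid IN (2, 12);
```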




Information schema sql_identifier

2020-12-22 Thread Adrian Klaver

Per version 12 release notes:

"Treat object-name columns in the information_schema views as being of 
type name, not varchar (Tom Lane)


Per the SQL standard, object-name columns in the information_schema 
views are declared as being of domain type sql_identifier. In 
PostgreSQL, the underlying catalog columns are really of type name. This 
change makes sql_identifier be a domain over name, rather than varchar 
as before. ..."


This came up in this SO question:

https://stackoverflow.com/questions/65416748/postgres-12-4-gives-function-does-not-exists-error

Where the query is:

SELECT (TABLE_SCHEMA || '"."' || TABLE_NAME) as table_name,
   pg_size_pretty(pg_table_size(table_name)) as table_size,
   pg_size_pretty(pg_indexes_size(table_name)) AS indexes_size,
   pg_size_pretty(pg_total_relation_size(table_name)) as total_size
from information_schema.TABLES nowait
where TABLE_SCHEMA='myschema'
order by pg_total_relation_size(table_name) desc;

And the error is:

"ERROR:  function pg_table_size(information_schema.sql_identifier) does 
not exist

LINE 1: ..."."' || TABLE_NAME) as table_name, pg_size_pretty(pg_table_s..."

My attempts:

SELECT pg_table_size(table_name)  from information_schema.tables;
ERROR:  function pg_table_size(information_schema.sql_identifier) does 
not exist

LINE 1: SELECT pg_table_size(table_name)  from information_schema.ta...

SELECT pg_table_size(table_name::text)  from information_schema.tables;
ERROR:  invalid name syntax

SELECT pg_table_size(table_name::regclass)  from information_schema.tables;
ERROR:  invalid name syntax

SELECT table_name::text::regclass  from information_schema.tables;
ERROR:  invalid name syntax

So how does one go about using a table name from 
information_schema.tables in pg_table_size()?



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Information schema sql_identifier

2020-12-22 Thread David G. Johnston
On Tue, Dec 22, 2020 at 5:08 PM Adrian Klaver 
wrote:

> SELECT pg_table_size(table_name::regclass)  from information_schema.tables;
> ERROR:  invalid name syntax
>


> So how does one go about using a table name from
> information_schema.tables in pg_table_size()?
>

Find that the function signature in the documentation requires an input of
"regclass" and ignore attempts to pass anything but that to the function.

pg_table_size ( regclass ) → bigint

I observe in v13 that the expected syntax works just fine:

select pg_table_size((table_schema || '.' || table_name)::regclass) from
information_schema.tables;

David J.


Re: Information schema sql_identifier

2020-12-22 Thread Tom Lane
Adrian Klaver  writes:
> So how does one go about using a table name from 
> information_schema.tables in pg_table_size()?

You want something like

select pg_table_size(quote_ident(table_schema)||'.'||quote_ident(table_name))
  from information_schema.tables;

I imagine that the failures you got are a consequence of having
some table names that aren't valid unless quoted (ie contain
spaces, funny characters, etc).  In a general-purpose query,
you can't ignore the schema name either.

I might be more excited about v12's failure to provide an implicit
cast to regclass if there were any prospect of queries like this
working in a bulletproof way without accounting for schema names
and funny characters.  But there isn't, so the query shown in SO
is a house of cards to start with.  When you do it right, with
quote_ident() or format(), no special casting is needed.

regards, tom lane
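The format() variant Tom mentions would look like this (a sketch):

```sql
-- %I quotes an identifier only when necessary, so schema-qualified
-- names containing spaces or mixed case survive the text-to-regclass
-- conversion.
SELECT pg_table_size(format('%I.%I', table_schema, table_name))
FROM information_schema.tables
WHERE table_schema = 'public';
```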




Re: Information schema sql_identifier

2020-12-22 Thread Adrian Klaver

On 12/22/20 4:33 PM, David G. Johnston wrote:
> On Tue, Dec 22, 2020 at 5:08 PM Adrian Klaver wrote:
>
>> SELECT pg_table_size(table_name::regclass)  from
>> information_schema.tables;
>> ERROR:  invalid name syntax
>>
>> So how does one go about using a table name from
>> information_schema.tables in pg_table_size()?
>
> Find that the function signature in the documentation requires an input
> of "regclass" and ignore attempts to pass anything but that to the function.
>
> pg_table_size ( regclass ) → bigint
>
> I observe in v13 that the expected syntax works just fine:
>
> select pg_table_size((table_schema || '.' || table_name)::regclass) from
> information_schema.tables;

Yeah I tried that, didn't include it in my previous post:

select pg_table_size((table_schema || '.' || table_name)::regclass) from 
information_schema.tables;

ERROR:  invalid name syntax



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Information schema sql_identifier

2020-12-22 Thread Adrian Klaver

On 12/22/20 4:39 PM, Tom Lane wrote:
> Adrian Klaver  writes:
>> So how does one go about using a table name from
>> information_schema.tables in pg_table_size()?
>
> You want something like
>
> select pg_table_size(quote_ident(table_schema)||'.'||quote_ident(table_name))
>   from information_schema.tables;
>
> I imagine that the failures you got are a consequence of having
> some table names that aren't valid unless quoted (ie contain
> spaces, funny characters, etc).  In a general-purpose query,
> you can't ignore the schema name either.
>
> I might be more excited about v12's failure to provide an implicit
> cast to regclass if there were any prospect of queries like this
> working in a bulletproof way without accounting for schema names
> and funny characters.  But there isn't, so the query shown in SO
> is a house of cards to start with.  When you do it right, with
> quote_ident() or format(), no special casting is needed.


Thanks, that pushed me in the right direction.

I see now that the previous query worked because the alias table_name and the 
column table_name were the same, and the column previously was a 
varchar. This meant pg_table_size() was actually working on the 
column value, not the concatenated value.


So the query can be simplified to:

SELECT
pg_size_pretty(pg_table_size(quote_ident(table_name))),
pg_size_pretty(pg_indexes_size(quote_ident(table_name))) AS 
indexes_size,
pg_size_pretty(pg_total_relation_size(quote_ident(table_name))) AS 
total_size

FROM
information_schema.tables
WHERE
table_schema = 'public'
;








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Information schema sql_identifier

2020-12-22 Thread Laurenz Albe
On Tue, 2020-12-22 at 16:07 -0800, Adrian Klaver wrote:
> This came up in this SO question:
> 
> https://stackoverflow.com/questions/65416748/postgres-12-4-gives-function-does-not-exists-error
> 
> Where the query is:
> 
> SELECT (TABLE_SCHEMA || '"."' || TABLE_NAME) as table_name,
> pg_size_pretty(pg_table_size(table_name)) as table_size,
> pg_size_pretty(pg_indexes_size(table_name)) AS indexes_size,
> pg_size_pretty(pg_total_relation_size(table_name)) as total_size
> from information_schema.TABLES nowait
> where TABLE_SCHEMA='myschema'
> order by pg_total_relation_size(table_name) desc;
> 
> And the error is:
> 
> "ERROR:  function pg_table_size(information_schema.sql_identifier) does not 
> exist
> LINE 1: ..."."' || TABLE_NAME) as table_name, pg_size_pretty(pg_table_s..."

I don't see the problem.

Cast "table_name" and "table_schema" to "text" wherever it occurs.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com
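Applied to the query from the question, that would look roughly like this (a sketch; the schema name is kept from the original):

```sql
-- Casting sql_identifier to text restores the implicit text-to-regclass
-- conversion that the pre-v12 varchar columns got for free.  As Tom
-- notes elsewhere in the thread, this still breaks on names that need
-- quoting, so quote_ident()/format() remain the robust option.
SELECT table_schema || '.' || table_name AS qualified_name,
       pg_size_pretty(pg_table_size((table_schema || '.' || table_name)::text)) AS table_size
FROM information_schema.tables
WHERE table_schema = 'myschema'
ORDER BY pg_total_relation_size((table_schema || '.' || table_name)::text) DESC;
```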