PostgreSQL 13.7 hangs

2022-07-04 Thread Bogdan Siara
Hi all,
I have a PostgreSQL 13.7 cluster with two nodes (master and slave),
compiled in Docker (Alpine 3.15) with the following flags:

./configure --prefix=${PG_DIR} --exec-prefix=${PG_DIR}
--enable-integer-datetimes --enable-thread-safety --disable-rpath
--with-uuid=e2fs --with-gnu-ld --with-pgport=5432
--with-system-tzdata=/usr/share/zoneinfo --with-llvm --with-gssapi
--with-ldap --with-icu --with-tcl --with-perl --with-python --with-pam
--with-openssl --with-libxml --with-libxslt

and running with the following configuration:

listen_addresses = '*'
port = 5432
max_connections = 1000
unix_socket_directories = '/opt/pg/data'
superuser_reserved_connections = 3
shared_buffers = 6GB
temp_buffers = 32MB
max_prepared_transactions = 100
work_mem = 1146kB
maintenance_work_mem = 1792MB
max_stack_depth = 4MB
dynamic_shared_memory_type = posix
archive_command = '/opt/pg/bin/pgbackrest --stanza=aws-prdan archive-push %p'
archive_mode = on
max_wal_senders = 10
min_wal_size = 2GB
max_wal_size = 3GB
wal_level = logical
checkpoint_completion_target = 0.9
effective_cache_size = 18GB
log_destination = 'stderr'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d.log'
log_truncate_on_rotation = on
log_rotation_age = 1d
log_rotation_size = 0
log_checkpoints = on
log_line_prefix = '%m [%p] - [%a - %u - %d] [%h] : %e'
log_min_duration_statement = 0
stats_temp_directory = '/opt/pg/pg_stat_tmp'
autovacuum = on
autovacuum_max_workers = 3
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50
datestyle = 'iso, dmy'
default_text_search_config = 'pg_catalog.simple'
jit = on
jit_above_cost = 10
jit_debugging_support = off
jit_dump_bitcode = off
jit_expressions = on
jit_inline_above_cost = 50
jit_optimize_above_cost = 50
jit_profiling_support = off
jit_provider = llvmjit
jit_tuple_deforming = on
max_worker_processes = 8
max_parallel_workers_per_gather = 2
max_parallel_workers = 2
default_statistics_target = 100
synchronous_commit = off
random_page_cost = 1.1
effective_io_concurrency = 200
track_activity_query_size = 1
pg_stat_statements.track = all

Sometimes my cluster nodes hang and stop responding. The logs end
without any error:

2022-07-02 00:42:51.755 P00   INFO: archive-push command begin 2.39: [pg_wal/00015596002C] --archive-async --compress-type=lz4 --exec-id=787394-196ba324 --log-level-console=info --log-level-file=detail --pg1-path=/opt/pg/data --process-max=4 --repo1-cipher-pass= --repo1-cipher-type=aes-256-cbc --repo1-path=/repo-path --repo1-s3-bucket=backup-postgresql --repo1-s3-endpoint=s3.eu-central-1.amazonaws.com --repo1-s3-key= --repo1-s3-key-secret= --repo1-s3-region=eu-central-1 --repo1-type=s3 --stanza=aws-prdan
2022-07-02 00:42:51.755 P00   INFO: pushed WAL file '00015596002C' to the archive asynchronously
2022-07-02 00:42:51.755 P00   INFO: archive-push command end: completed successfully (1ms)
2022-07-02 07:54:34.333 GMT [15] - [ -  - ] [] : 0LOG:  starting PostgreSQL 13.7 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit
2022-07-02 07:54:34.335 GMT [15] - [ -  - ] [] : 0LOG:  listening on IPv4 address "0.0.0.0", port 5432

I can log in to the database using psql but can't execute any
commands, and I can't stop the database with kill -INT, kill -QUIT, or
docker stop. The database on the ZFS volume is already 3.8 TB. In the
end I restart the entire host and everything returns to normal. The
situation occurs on both master and slave nodes.
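
Since psql logins still succeed, when it happens again I can try a
query like the following (a sketch that uses only the standard
pg_stat_activity view) to see what each backend is waiting on, though
queries themselves may also hang:

 SELECT pid, state, wait_event_type, wait_event,
        now() - query_start AS running_for,
        left(query, 60) AS query
   FROM pg_stat_activity
  WHERE state <> 'idle'
  ORDER BY query_start;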

Please tell me what I should do to trace and fix the problem.

Thanks for your attention.
Regards
BS


Re: AIX and EAGAIN on open()

2022-07-04 Thread Christoph Berg
Re: Thomas Munro
> > Does anything of that ring a bell for someone? Is that an AIX bug, a
> > PG bug, or something else?
> 
> No clue here.  Anything unusual about the file system (NFS etc)?  Can
> you truss/strace the system calls, to sanity check the flags arriving
> into open(), and see if there's any unexpected other activity around
> open() calls that might be coming from something you're linked
> against?

Hi,

it's local storage on a 16Gb SAN (Unity 500 array); all data is on SSD
disks, and the file system is JFS2 (mount options rw,log=INLINE).

Good point about the flags, but we don't have access to the servers,
so I'm not sure whether it will be possible to retrieve strace
information. I'll try asking.

Thanks,
Christoph
-- 
Senior Consultant, Tel.: +49 2166 9901 187
credativ GmbH, commercial register HRB Mönchengladbach 12080, VAT ID DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Managing directors: Dr. Michael Meskes, Geoff Richardson, Peter Lilley
Our handling of personal data is subject to the following provisions:
https://www.credativ.de/datenschutz




Getting data from a record variable dynamically

2022-07-04 Thread Rhys A.D. Stewart
Greetings All,

I have a trigger that is attached to several different tables. In the
trigger function I retrieve a single row, and I want the info from a
specific column. This column depends on the table in question, and I
have the column name stored in a variable. Without writing a
conditional for each table, what is the best way to dynamically get
the data from the record variable?

The only workaround I can think of is to wrap the query that populates
the record variable in a to_jsonb function and then subscript that.
Like so:

DO $$
 DECLARE
  rec record;
  colname text;
 BEGIN
  -- Wrap the row in jsonb so any column can be addressed by name.
  SELECT to_jsonb(n.*) INTO rec FROM kgn21.__nodes n LIMIT 1;
  colname := 'lw_id';  -- colname is different for each table
  -- jsonb subscripting needs PostgreSQL 14+; on 13, use rec.to_jsonb ->> colname
  RAISE NOTICE '%', rec.to_jsonb[colname];
 END;
 $$ LANGUAGE plpgsql;

Is there a better way?

Regards,


Rhys
Peace & Love | Live Long & Prosper




Re: Getting data from a record variable dynamically

2022-07-04 Thread David G. Johnston
On Mon, Jul 4, 2022 at 4:03 PM Rhys A.D. Stewart wrote:

> I have a trigger that is attached to several different tables. In the
> trigger function I retrieve a single row, and I want the info from a
> specific column. This column depends on the table in question.
>
> The only workaround I can think of is to wrap the query that populates
> the record variable in a to_jsonb function and then subscript that.
>
> Is there a better way?
Not in the strongly typed language that is SQL; you will be unable to
use first-tier types in that way. You indeed need to create a type
that is itself dynamic in terms of the information it stores (jsonb
works), which also involves throwing away the static typing of the
outer layer and devolving everything to basically text.
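
A minimal sketch of what that looks like as a generic trigger function
(the function name and the example trigger are hypothetical; to_jsonb,
TG_ARGV, and TG_TABLE_NAME are standard PostgreSQL facilities):

-- Generic trigger function: the column to read is passed as a trigger
-- argument, and the row is converted to jsonb so the column can be
-- looked up by name at runtime.
CREATE OR REPLACE FUNCTION log_keyed_column() RETURNS trigger AS $$
DECLARE
  colname text := TG_ARGV[0];        -- per-table column name
  val     text;
BEGIN
  val := to_jsonb(NEW) ->> colname;  -- the value comes back as text
  RAISE NOTICE 'table %, column % = %', TG_TABLE_NAME, colname, val;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attached per table with that table's column name, e.g.:
-- CREATE TRIGGER nodes_log BEFORE INSERT ON kgn21.__nodes
--   FOR EACH ROW EXECUTE FUNCTION log_keyed_column('lw_id');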

David J.