Re: Postgres Wal Full
At Thu, 3 Mar 2022 07:46:45 +, pgdba pgdba wrote in
> I have a problem with pg_wal. I am using PostgreSQL version 11 and taking
> backups, and I have set archive_command in postgresql.conf, but it did not
> archive WAL and my disk is full from pg_wal. I researched why my WAL is
> full and found nothing. Could you help me please?

The immediate cause of the WAL retention is the archive failure. While archiving is enabled, PostgreSQL won't remove a WAL file until it has been successfully archived. You will see lines like the following in the server log if an archive failure happened:

> 2022-03-03 17:15:21.314 JST LOG: archive command failed with exit code 1

If the archive command emitted any error messages, they are captured in the server log as well. The following error is emitted by the archive_command "cp %p /archive/%f":

> cp: cannot create regular file '/archive/00010001': No such
> file or directory

regards.

-- Kyotaro Horiguchi
   NTT Open Source Software Center
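The key contract described above — PostgreSQL keeps every WAL segment until the archive command exits with status zero — is worth making explicit. Below is a minimal sketch of such a command, written as a shell function; the function name, paths, and the ARCHIVE_DIR variable are assumptions for illustration, not from the thread.

```shell
# A minimal archive_command sketch (hypothetical name archive_wal).
# PostgreSQL would invoke the equivalent script as:
#   archive_command = 'archive_wal.sh %p %f'
# The contract: return non-zero on ANY failure, so PostgreSQL retries
# later and keeps the segment in pg_wal instead of deleting it.

archive_wal() {
    src="$1"                          # %p: path of the WAL segment
    fname="$2"                        # %f: segment file name only
    dest_dir="${ARCHIVE_DIR:-/archive}"

    # Never overwrite an already-archived segment.
    if [ -e "$dest_dir/$fname" ]; then
        echo "archive_wal: $dest_dir/$fname already exists" >&2
        return 1
    fi

    # Copy under a temporary name, then rename, so readers never see
    # a half-written segment. Any failure propagates as non-zero.
    cp "$src" "$dest_dir/.$fname.tmp" || return 1
    mv "$dest_dir/.$fname.tmp" "$dest_dir/$fname" || return 1
}
```

With this shape, the `cp: cannot create regular file` failure from the quoted log would make the command return non-zero, PostgreSQL would log "archive command failed", and the segments would pile up in pg_wal — exactly the symptom reported.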
Re: Postgres Wal Full
Hi, thanks for your feedback, but I don't have that error in my log file. I am using streaming replication with 1 master and 2 slave nodes in async mode. The master node has a different number of WAL files than the slave nodes:

[root@master1 pg_wal]# ls | wc -l
6400
[root@slave2 pg_wal]# ls | wc -l
332
[root@slave3 pg_wal]# ls | wc -l
344

wal_keep_segments = 512
max_wal_size = 2GB
min_wal_size = 1GB
data size: 290G
cpu: 8
ram: 16

From: Kyotaro Horiguchi
Sent: Thursday, March 3, 2022 11:22
To: postgresdb...@outlook.com
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Postgres Wal Full

> The immediate cause of the WAL retention is the archive failure.
> While archiving is enabled, PostgreSQL won't remove a WAL file until it
> has been successfully archived. You will see lines like the following in
> the server log if an archive failure happened:
>
> > 2022-03-03 17:15:21.314 JST LOG: archive command failed with exit code 1
> [...]

regards.

-- Kyotaro Horiguchi
   NTT Open Source Software Center
Re: Postgres Wal Full
On Thu, Mar 3, 2022 at 2:17 PM pgdba pgdba wrote:
> Hi, thanks for your feedback, but I don't have that error in my log file.
> I am using streaming replication with 1 master and 2 slave nodes in async
> mode. The master node has a different number of WAL files than the slave
> nodes:
> [root@master1 pg_wal]# ls | wc -l
> 6400
> [root@slave2 pg_wal]# ls | wc -l
> 332
> [root@slave3 pg_wal]# ls | wc -l
> 344
>
> wal_keep_segments = 512
> max_wal_size = 2GB
> min_wal_size = 1GB
> data size: 290G
> cpu: 8
> ram: 16

A few things to check when WAL files accumulate:
1) archive failures
2) inactive replication slots (use: select * from pg_replication_slots;)
3) infrequent checkpoints (use: select * from pg_control_checkpoint; to see the last checkpoint time)
4) high-write (WAL-generating) workloads

Regards, Bharath Rupireddy.
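The four checks above can be run as catalog queries. A sketch (views and functions as they exist in PostgreSQL 11; adapt filters to your setup):

```sql
-- 1) Archive failures: is the archiver erroring or falling behind?
SELECT archived_count, failed_count, last_failed_wal, last_failed_time
FROM pg_stat_archiver;

-- 2) Inactive replication slots: any slot with active = false pins WAL
--    from its restart_lsn onward.
SELECT slot_name, slot_type, active, restart_lsn
FROM pg_replication_slots
WHERE NOT active;

-- 3) Infrequent checkpoints: when did the last one happen?
SELECT checkpoint_time FROM pg_control_checkpoint();

-- 4) High write workload: sample the WAL insert position twice and
--    compare to estimate the generation rate.
SELECT pg_current_wal_lsn();
```

For check 2, note that dropping a stale slot (pg_drop_replication_slot) lets PostgreSQL recycle the retained segments at the next checkpoint.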
Couldn't cast to record[]
Hi, I created a table with a composite type array as the datatype for one column. When I query that table I need pg_typeof(column) to be record[] instead of composite_type[]. I tried creating a separate function returning record[], but the following error is thrown:

ERROR: PL/pgSQL functions cannot return type record[]

Below is a sample of my requirement:

pali=# create type address as (number bigint, city varchar);
CREATE TYPE
pali=# create table person(name varchar, addresses address[]);
CREATE TABLE
pali=# insert into person values ('Michael', array[(82, 'NYC'),(24, 'Dunkirk')]::address[]);
INSERT 0 1
pali=# select pg_typeof(addresses::record[]) from person;
 pg_typeof
-----------
 address[]

Here I expect record[] instead of address[]. Thanks in advance.
Re: Couldn't cast to record[]
Hi

On Thu, Mar 3, 2022 at 11:01, Suresh Kumar R wrote:
> Hi, I created a table with a composite type array as the datatype for one
> column. When I query that table I need pg_typeof(column) to be record[]
> instead of composite_type[]. I tried creating a separate function
> returning record[], but the following error is thrown:
>
> ERROR: PL/pgSQL functions cannot return type record[]
> [...]
> Here I expect record[] instead of address[]. Thanks in advance.

The record type is not a real PostgreSQL type. It is a PL/pgSQL-only type; you cannot use it outside PL/pgSQL.

Regards
Pavel
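A sketch of the usual workaround, reusing the address type and person table from the quoted example (the function name person_addresses is made up for illustration): a PL/pgSQL function can declare and return the concrete composite array type, which works where record[] does not.

```sql
-- Assumes: create type address as (number bigint, city varchar);
--          create table person(name varchar, addresses address[]);
-- Returning the concrete array type avoids the record[] restriction.
CREATE OR REPLACE FUNCTION person_addresses(p_name varchar)
RETURNS address[]
LANGUAGE plpgsql AS $$
DECLARE
    result address[];
BEGIN
    SELECT addresses INTO result FROM person WHERE name = p_name;
    RETURN result;
END;
$$;

-- SELECT person_addresses('Michael');
-- → {"(82,NYC)","(24,Dunkirk)"}
```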
oid2name: Why table and not relation?
Hi, given this simple example:

postgres=# create table t ( a int );
CREATE TABLE
postgres=# create index i on t(a);
CREATE INDEX
postgres=# select pg_relation_filepath('i');
 pg_relation_filepath
----------------------
 base/5/16388
(1 row)

postgres=# \! oid2name --oid 16388
From database "postgres":
  Filenode  Table Name
----------------------
     16388           i

postgres=# \! oid2name --table=i%
From database "postgres":
  Filenode  Table Name
----------------------
     16388           i

Is there a reason why the heading is called "Table Name" instead of "Relation Name"? The same is true for the parameters:

postgres=# \! oid2name --help | grep -iw table
  -f, --filenode=FILENODE    show info for table with given file node
  -o, --oid=OID              show info for table with given OID
  -t, --table=TABLE          show info for named table

This works with indexes as well, not only tables.

Regards
Daniel
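As a catalog-level alternative to oid2name, the same filenode-to-relation mapping can be done from SQL, and there the relation kind is reported explicitly (16388 is the filenode from the example above):

```sql
-- Map a file node back to its relation. pg_relation_filenode() is used
-- instead of comparing relfilenode directly, so mapped catalogs are
-- handled correctly too.
SELECT relname, relkind
FROM pg_class
WHERE pg_relation_filenode(oid) = 16388;
-- relkind: 'r' = ordinary table, 'i' = index, 't' = TOAST table, ...
```

For the index `i` from the example this would return relkind 'i', making it clear the object is an index rather than a table.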
Interesting fail when migrating Pg from Ubuntu Bionic to Focal
Hi, I know it's most likely going to be due to glibc and locales, but I found an interesting case that I can't figure out how to fix.

We have Pg 12.6 on Bionic. Works. Added a Focal replica (binary). It replicates OK, but then fails when I try pg_dump -s. The error is:

pg_dump: error: query failed: server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
pg_dump: error: query was: SELECT p.tableoid, p.oid, p.proname, p.prolang, p.pronargs, p.proargtypes, p.prorettype, (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(p.proacl,pg_catalog.acldefault('f',p.proowner))) WITH ORDINALITY AS perm(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('f',p.proowner))) AS init(init_acl) WHERE acl = init_acl)) as foo) AS proacl, (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('f',p.proowner))) WITH ORDINALITY AS initp(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(p.proacl,pg_catalog.acldefault('f',p.proowner))) AS permp(orig_acl) WHERE acl = orig_acl)) as foo) AS rproacl, NULL AS initproacl, NULL AS initrproacl, p.pronamespace, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = p.proowner) AS rolname FROM pg_proc p LEFT JOIN pg_init_privs pip ON (p.oid = pip.objoid AND pip.classoid = 'pg_proc'::regclass AND pip.objsubid = 0) WHERE p.prokind <> 'a' AND NOT EXISTS (SELECT 1 FROM pg_depend WHERE classid = 'pg_proc'::regclass AND objid = p.oid AND deptype = 'i') AND ( pronamespace != (SELECT oid FROM pg_namespace WHERE nspname = 'pg_catalog') OR EXISTS (SELECT 1 FROM pg_cast WHERE pg_cast.oid > 16383 AND p.oid = pg_cast.castfunc) OR EXISTS (SELECT 1 FROM pg_transform WHERE pg_transform.oid > 16383 AND (p.oid = pg_transform.trffromsql OR p.oid = pg_transform.trftosql)) OR p.proacl IS DISTINCT FROM pip.initprivs)

Based on https://wiki.postgresql.org/wiki/Locale_data_changes I wrote:

SELECT 'reindex index ' || indexrelid::regclass::text
FROM (SELECT indexrelid, indrelid, indcollation[i] coll
      FROM pg_index, generate_subscripts(indcollation, 1) g(i)) s
JOIN pg_collation c ON coll = c.oid
WHERE collprovider IN ('d', 'c') AND collname NOT IN ('C', 'POSIX') \gexec

and let it run (80k+ indexes). Afterwards - same problem. So I did "reindex system". And the query still fails.

I tried running a simple "select * from table" for each of:
- pg_catalog.pg_roles
- pg_catalog.pg_proc
- pg_catalog.pg_init_privs
- pg_catalog.pg_depend
- pg_catalog.pg_namespace
- pg_catalog.pg_cast
- pg_catalog.pg_transform

and it worked, so I'm kinda at a loss here. I have a test system, can test anything. Any idea on what could be the reason, and if the system is fixable afterwards?

Ah, one more thing - I straced the backend when it was doing its thing. Last 50 lines:

15:01:31.229198 futex(0xfffe49b8cb3c, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229233 futex(0xfffe49b8c41c, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229268 futex(0xfffe49b8cdfc, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229303 futex(0xfffe49b8ce00, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229338 futex(0xfffe49b8ce08, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229379 futex(0xfffe49b8dde4, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229415 futex(0xfffe49b91524, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229450 futex(0xfffe49b8998c, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229485 futex(0xfffe49b8cf9c, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229521 futex(0xfffe49b882e4, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229556 futex(0xfffe49b8e548, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229596 futex(0xfffe49b8d75c, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229634 futex(0xfffe49b8cfe8, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229670 futex(0xfffe49b8dfe8, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229705 futex(0xfffe49b8dff0, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229740 futex(0xfffe49b8dff8, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229775 futex(0xfffe49b8ddf0, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229810 futex(0xfffe49b8e0a8, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229845 futex(0xfffe49b8e200, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229881 futex(0xfffe49b8e534, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229925 futex(0xfffe49b8e734, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229961 futex(0xfffe49b8e7dc, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.12>
15:01:31.229998 futex(0xfffe49b8e924, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.0
Re: Interesting fail when migrating Pg from Ubuntu Bionic to Focal
On Thu, Mar 03, 2022 at 04:04:28PM +0100, hubert depesz lubaczewski wrote:
> and it worked, so I'm kinda at a loss here.

Based on some talk on IRC, I was able to get a stack trace from the fail:

(gdb) bt
#0  0xfffe4a36e4d8 in ?? ()
#1  0xbe03ffb8 in ExecProcNode (node=0xe4f87cf8) at ./build/../src/include/executor/executor.h:242
#2  ExecMergeJoin (pstate=0xe4f87da8) at ./build/../src/backend/executor/nodeMergejoin.c:656
#3  0xbe03ffb8 in ExecProcNode (node=0xe4f87ae8) at ./build/../src/include/executor/executor.h:242
#4  ExecMergeJoin (pstate=0xe4f876c8) at ./build/../src/backend/executor/nodeMergejoin.c:656
#5  0xbe039b1c in ExecProcNode (node=0xe4f876c8) at ./build/../src/include/executor/executor.h:242
#6  ExecHashJoinImpl (parallel=false, pstate=0xe4f87408) at ./build/../src/backend/executor/nodeHashjoin.c:262
#7  ExecHashJoin (pstate=0xe4f87408) at ./build/../src/backend/executor/nodeHashjoin.c:591
#8  0xbe01ed5c in ExecProcNode (node=0xe4f87408) at ./build/../src/include/executor/executor.h:242
#9  ExecutePlan (execute_once=, dest=0xfffe49be0828, direction=, numberTuples=0, sendTuples=, operation=CMD_SELECT, use_parallel_mode=, planstate=0xe4f87408, estate=0xe4ce4180) at ./build/../src/backend/executor/execMain.c:1632
#10 standard_ExecutorRun (queryDesc=0xe4e9da70, direction=, count=0, execute_once=) at ./build/../src/backend/executor/execMain.c:350
#11 0x7d1981fc in pgss_ExecutorRun (queryDesc=0xe4e9da70, direction=ForwardScanDirection, count=0, execute_once=true) at ./build/../contrib/pg_stat_statements/pg_stat_statements.c:893
#12 0x7d182688 in explain_ExecutorRun (queryDesc=0xe4e9da70, direction=ForwardScanDirection, count=0, execute_once=true) at ./build/../contrib/auto_explain/auto_explain.c:320
#13 0xbe1754c8 in PortalRunSelect (portal=portal@entry=0xe4d476e0, forward=forward@entry=true, count=0, count@entry=9223372036854775807, dest=dest@entry=0xfffe49be0828) at ./build/../src/backend/tcop/pquery.c:938
#14 0xbe176a64 in PortalRun (portal=portal@entry=0xe4d476e0, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0xfffe49be0828, altdest=altdest@entry=0xfffe49be0828, completionTag=completionTag@entry=0xc89001f8 "") at ./build/../src/backend/tcop/pquery.c:779
#15 0xbe172a18 in exec_simple_query (query_string=query_string@entry=0xe4c6c7b0 "SELECT\np.tableoid,\n p.oid,\np.proname,\np.prolang,\np.pronargs,\np.proargtypes,\n p.prorettype,\n(\nSELECT\n", ' ' , "pg_catalog.array_agg(acl ORDER BY row_n)\nF"...) at ./build/../src/backend/tcop/postgres.c:1215
#16 0xbe1736ac in PostgresMain (argc=, argv=argv@entry=0xe4cc9018, dbname=, username=) at ./build/../src/backend/tcop/postgres.c:4271
#17 0xbe0fa768 in BackendRun (port=0xe4cb88e0, port=0xe4cb88e0) at ./build/../src/backend/postmaster/postmaster.c:4510
#18 BackendStartup (port=0xe4cb88e0) at ./build/../src/backend/postmaster/postmaster.c:4193
#19 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1725
#20 0xbe0fb74c in PostmasterMain (argc=, argv=) at ./build/../src/backend/postmaster/postmaster.c:1398
#21 0xbde8c8a8 in main (argc=5, argv=0xe4c65ea0) at ./build/../src/backend/main/main.c:228

And one more thing - if it matters - it's on the aarch64 architecture (Graviton EC2 in AWS).

Best regards,
depesz
Re: Couldn't cast to record[]
Suresh Kumar R writes:
> When I query that table I need pg_typeof(column) to be record[] instead of
> composite_type[].

Why? record[] is considered a pseudotype (polymorphic type), which is why casting to it doesn't do anything.

regards, tom lane
Re: Interesting fail when migrating Pg from Ubuntu Bionic to Focal
On Thu, Mar 03, 2022 at 04:11:56PM +0100, hubert depesz lubaczewski wrote:
> On Thu, Mar 03, 2022 at 04:04:28PM +0100, hubert depesz lubaczewski wrote:
> > and it worked, so I'm kinda at a loss here.
>
> based on some talk on IRC, I was able to get a stack trace from the fail:

Based on the stack trace, I was able to get it to break using a simple query:

select p.proname, (SELECT rolname from pg_catalog.pg_roles where oid = p.proowner) from pg_proc p;

I took a quick look at the ranges of oid/proowner, and they look fine:

=# select min(proowner), max(proowner), count(*) from pg_proc;
 min │ max │ count
─────┼─────┼───────
  10 │  10 │  2970
(1 row)

16:38:34 db: postgres@postgres, pid:1991057
=# select min(oid), max(oid), count(*) from pg_roles;
 min │    max    │ count
─────┼───────────┼───────
  10 │ 310235824 │   244
(1 row)

Also, as I didn't mention it before: it's Pg 12.9.

Best regards,
depesz
Re: Interesting fail when migrating Pg from Ubuntu Bionic to Focal
On Thu, Mar 03, 2022 at 05:39:21PM +0100, hubert depesz lubaczewski wrote:
> Based on the stack trace, I was able to get it to break using a simple query:
> select p.proname, (SELECT rolname from pg_catalog.pg_roles where oid =
> p.proowner) from pg_proc p;
> [...]
> Also, as I didn't mention it before: it's Pg 12.9.

OK. Traced it back to JIT. With JIT enabled:

=# show jit;
 jit
─────
 on
(1 row)

=# explain select p.proname, (SELECT rolname from pg_catalog.pg_roles where oid = p.proowner) from pg_proc p;
                                      QUERY PLAN
──────────────────────────────────────────────────────────────────────────────────────
 Seq Scan on pg_proc p  (cost=0.00..156507.84 rows=63264 width=128)
   SubPlan 1
     ->  Index Scan using pg_authid_oid_index on pg_authid  (cost=0.14..2.36 rows=1 width=64)
           Index Cond: (oid = p.proowner)
 JIT:
   Functions: 8
   Options: Inlining false, Optimization false, Expressions true, Deforming true
(7 rows)

=# select p.proname, (SELECT rolname from pg_catalog.pg_roles where oid = p.proowner) from pg_proc p;
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

16:42:44 canvas test, cluster 999, standalone db: @, pid:2014255
?!>

But when I disable jit - the query works fine.
Versions of things that I think are relevant:

=$ dpkg -l | grep -E 'llvm|clang|gcc|glibc'
ii  gcc                  4:9.3.0-1ubuntu2        arm64  GNU C compiler
ii  gcc-10-base:arm64    10.3.0-1ubuntu1~20.04   arm64  GCC, the GNU Compiler Collection (base package)
ii  gcc-9                9.3.0-17ubuntu1~20.04   arm64  GNU C compiler
ii  gcc-9-base:arm64     9.3.0-17ubuntu1~20.04   arm64  GCC, the GNU Compiler Collection (base package)
ii  libgcc-9-dev:arm64   9.3.0-17ubuntu1~20.04   arm64  GCC support library (development files)
ii  libgcc-s1:arm64      10.3.0-1ubuntu1~20.04   arm64  GCC support library
ii  libllvm9:arm64       1:9.0.1-12              arm64  Modular compiler and toolchain technologies, runtime library

Best regards,
depesz
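For anyone hitting the same crash while it is being debugged, the standard knobs for turning JIT off (not specific advice from the thread, just the documented settings) look like this:

```sql
-- Per session: affects only the current connection.
SET jit = off;

-- Cluster-wide, persisted across restarts (superuser required;
-- a config reload applies it to new queries).
ALTER SYSTEM SET jit = off;
SELECT pg_reload_conf();
```

Since the crash here only reproduces with jit = on, disabling it trades some query speed for a working backend until the underlying LLVM/JIT issue is fixed.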
Re: Power loss test
On Thu, Mar 3, 2022 at 09:29:56AM +0200, Levente Birta wrote:
> Hello
>
> For many years, when I get a new server, every time I tested the storage
> system with diskchecker.pl for power loss.
>
> I'm just curious if there is any other/newer tool to test?

I have never found one, and had to hunt to find a link to add to the docs for that tool. If you find something better, please let us know.

-- Bruce Momjian  https://momjian.us
   EDB            https://enterprisedb.com
   If only the physical world exists, free will is an illusion.
Re: Postgres Wal Full
On 3 March 2022 08:46:45 CET, pgdba pgdba wrote:
> Hi,
>
> I have a problem with pg_wal. I am using PostgreSQL version 11 and taking
> backups, and I have set archive_command in postgresql.conf, but it did not
> archive WAL and my disk is full from pg_wal. I researched why my WAL is
> full and found nothing. Could you help me please?

Check if your archive_command works. Read the log.

-- 2ndQuadrant - The PostgreSQL Support Company
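Both checks suggested above can also be done from SQL while the disk is filling (views and functions as available in PostgreSQL 11):

```sql
-- How much is sitting in pg_wal right now?
SELECT count(*) AS segments, pg_size_pretty(sum(size)) AS total
FROM pg_ls_waldir();

-- Is the archiver succeeding? A growing failed_count together with a
-- stale last_archived_wal means archive_command is broken, and the
-- server log will contain its error output.
SELECT last_archived_wal, last_archived_time,
       last_failed_wal, failed_count
FROM pg_stat_archiver;
```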