Logical Streaming Replication stuck at specific LSN

2023-08-20 Thread Kristjan Mustkivi
Hello,

I am doing a major version upgrade from Postgres standard version
11.15 (Debian 11.15-1.pgdg90+1) to Cybertec PGEE version 14.8_EE_1.1.5
(Ubuntu 14.8ee1.1.5-1.cybertec22.04+1). While I am waiting for the
support case to be processed, I was hoping to get advice from the
community on how to investigate the situation on my own in the meantime.

The initial copy of the tables completed nicely, and from Aug 18 6pm
until Aug 19 6am the streaming flowed without issues. Then suddenly the
WAL send/write/flush/replay positions got stuck:

 application_name │   state   │    sent_lsn    │   write_lsn    │   flush_lsn    │   replay_lsn
──────────────────┼───────────┼────────────────┼────────────────┼────────────────┼────────────────
 sub_whupgrade    │ streaming │ 3F5B3/7FBDDCD0 │ 3F5B3/7FBD49E0 │ 3F5B3/7FBD49E0 │ 3F5B3/7FBD49E0

I restarted the subscriber, but the only thing that happened was that
the LSN is now stuck at exactly the same place, with the state showing "catchup".
I upgraded PGEE to version 14.9 EE 1.1.6 (Ubuntu
14.9ee1.1.6-1.cybertec22.04+1) but that also did not change the
situation.
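
A minimal set of publisher-side checks might look like the following sketch
(assuming the replication slot is named after the subscription, which is the
default):

-- Walsender progress as reported by the publisher
SELECT application_name, state, sent_lsn, write_lsn, flush_lsn, replay_lsn
FROM pg_stat_replication;

-- The logical slot backing the subscription: is it active, how far has it confirmed?
SELECT slot_name, active, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_name = 'sub_whupgrade';

-- What, if anything, the walsender process is currently waiting on
SELECT pid, state, wait_event_type, wait_event
FROM pg_stat_activity
WHERE backend_type = 'walsender';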

With best regards,
-- 
Kristjan Mustkivi

Email: kristjan.mustk...@gmail.com


JDBC + PG-15 but not psql

2023-08-20 Thread Amn Ojee Uw

Hello.

I have searched the net in an attempt to find whether others have had and
resolved this challenge, but most of the sites talk about this error arising
when using psql. In my case, the error arises only when accessing PG-15
through JDBC.
JDBC connects to the database, but when trying to execute a schema statement,
be it to create a database or to create a tablespace, I get this error:


StackTrace : [Ljava.lang.StackTraceElement;@7a69b07
Message : ERROR: CREATE TABLESPACE cannot run inside a transaction block

I have used the same algorithm with SQLite, now modified to accommodate
PG-15. So the JDBC code might not be the problem, but rather the
requirements imposed by PG-15.
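
The restriction itself can be reproduced at the SQL level, independently of
JDBC; the tablespace name and directory below are only placeholders:

-- Inside an explicit transaction block the statement is rejected:
BEGIN;
CREATE TABLESPACE demo_space LOCATION '/var/lib/postgresql/demo_space';
-- ERROR:  CREATE TABLESPACE cannot run inside a transaction block
ROLLBACK;

-- Issued on its own (i.e. autocommitted), the same statement is accepted:
CREATE TABLESPACE demo_space LOCATION '/var/lib/postgresql/demo_space';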


A change of perspective, and I resolved the concern!

Here is what changed the outcome:
--- snip ---

public void connectToDatabase() throws ClassNotFoundException, SQLException {

    try {
        Class.forName(this.getDatabaseClass().toString());
        this.conn = DriverManager.getConnection(this.getDatabaseUrl().toString(),
                this.getUserID().toString(),
                this.getUserPassword().toString());

        this.conn.setAutoCommit(true);  // game changer!!

        this.pout("Connected to the PostgreSQL server, success!!!");

        this.stmt = this.conn.createStatement();
    } catch (final SQLException | ClassNotFoundException e) {
        throw e;
    }
}

--

All I had to do was set autoCommit to true; with PG-15 my connection was
behaving as if autoCommit defaulted to false, so every statement was being
run inside a transaction block.



I hope my experience can help others.



Re: Why doesn't autovacuum/analyze run in due time after calling pg_stat_reset?

2023-08-20 Thread Rihad

On 8/17/23 13:01, rihad wrote:


Hi, all. After calling pg_stat_reset all statistics used by autovacuum 
got zeroed, and started accumulating from scratch. Some tables get 
acted upon properly, some don't.



Self-replying: yup, it seems there's an arbitrary limit of around 100K
n_live_tup after which autovacuum/analyze kicks in.


There are still many tables waiting for their turn, which is long overdue.

Yet some tables with only 60-70 (not 60-70K) n_live_tup have had autovacuum
run on them. Weird.




foo=> select relname,
             n_live_tup,
             n_dead_tup,
             cast(round(n_live_tup * cast(current_setting('autovacuum_vacuum_scale_factor') as float)
                  + current_setting('autovacuum_vacuum_threshold')::int) as int) - n_dead_tup as left,
             n_mod_since_analyze,
             cast(round(n_live_tup * cast(current_setting('autovacuum_analyze_scale_factor') as float)
                  + current_setting('autovacuum_analyze_threshold')::int) as int) - n_mod_since_analyze as left,
             n_ins_since_vacuum,
             last_autovacuum,
             last_autoanalyze
      from pg_stat_all_tables
      where n_live_tup > 0
        and n_dead_tup >= current_setting('autovacuum_vacuum_threshold')::int
      order by 8 nulls first, 4;
    relname    | n_live_tup | n_dead_tup |   left   | n_mod_since_analyze |  left   | n_ins_since_vacuum |        last_autovacuum        |       last_autoanalyze
---------------+------------+------------+----------+---------------------+---------+--------------------+-------------------------------+------------------------------
 fooo          |      32781 |     240663 |  -234057 |              513265 | -509937 |             270291 |                               |
 fooo          |         40 |      24891 |   -24833 |               49822 |  -49768 |              24931 |                               |
 f             |         46 |      18991 |   -18932 |               19099 |  -19044 |                 46 |                               |
 fo            |          1 |      12687 |   -12637 |               40795 |  -40745 |                  1 |                               |
 fo            |       2393 |          5 |   -10586 |              137599 | -137310 |               2393 |                               |
 fo            |       9465 |      11919 |    -9976 |              352888 | -351892 |               9466 |                               |
 fooo          |         26 |       2558 |    -2503 |                 188 |    -135 |               2584 |                               |
 user_sessions |        118 |       1231 |    -1157 |               19114 |  -19052 |                118 |                               |
 fo            |         32 |        562 |     -506 |                 226 |    -173 |                594 |                               |
 fooo          |         53 |        537 |     -476 |                 644 |    -589 |                 53 |                               |
 fo            |        327 |        524 |     -409 |                 804 |    -721 |                520 |                               |
 f             |         46 |        104 |      -45 |                 457 |    -402 |                183 |                               |
 foo           |         34 |         93 |      -36 |                 158 |    -105 |                 34 |                               |
 f             |         47 |         95 |      -36 |                 364 |    -309 |                 47 |                               |
 fo            |         84 |         91 |      -24 |                 177 |    -119 |                 84 |                               |
 f             |  290504401 |    9540832 | 48560098 |            26663449 | 2387041 |            8319194 | 2023-08-17 08:03:19.159161+00 | 2023-08-17 07:43:54.982375+00
 fo            |      43449 |       3823 |     4917 |                4190 |     205 |                377 | 2023-08-17 08:31:14.5573+00   | 2023-08-17 08:31:15.19454+00
 fo            |       3913 |        715 |      118 |                 200 |     241 |                  0 | 2023-08-17 08:44:04.721272+00 | 2023-08-17 08:47:04.875214+00
 f             |         73 |         63 |        2 |                  31 |      26 |                 35 | 2023-08-17 08:45:05.115824+00 | 2023-08-17 08:47:05.032251+00
 f             |     790249 |     126240 |    31860 |                4149 |   74926 |             119413 | 2023-08-17 08:46:10.625275+00 | 2023-08-17 08:49:04.867481+00
 pg_shdepend   |       1885 |        286 |      141 |                 116 |     122 |                270 | 2023-08-17 08:47:03.638639+00 | 2023-08-17 08:48:42.903096+00
 pg_index      |        993 |         79 |      170 |                  10 |     139 |                 72 | 2023-08-17 08:48:03.6726

Re: Why doesn't autovacuum/analyze run in due time after calling pg_stat_reset?

2023-08-20 Thread Adrian Klaver

On 8/18/23 22:35, Rihad wrote:

On 8/17/23 13:01, rihad wrote:


Hi, all. After calling pg_stat_reset all statistics used by autovacuum 
got zeroed, and started accumulating from scratch. Some tables get 
acted upon properly, some don't.



Self-replying: yup, it seems there's an arbitrary limit of around 100K
n_live_tup after which autovacuum/analyze kicks in.


To know rather than guess, read:

https://www.postgresql.org/docs/current/routine-vacuuming.html#AUTOVACUUM



There are still many tables waiting for their turn, which is long overdue.

Yet some tables with only 60-70 (not 60-70K) n_live_tup have had autovacuum
run on them. Weird.







--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Why doesn't autovacuum/analyze run in due time after calling pg_stat_reset?

2023-08-20 Thread Adrian Klaver

On 8/20/23 12:10, Rihad wrote:

On 8/20/23 20:22, Adrian Klaver wrote:

On 8/18/23 22:35, Rihad wrote:

On 8/17/23 13:01, rihad wrote:


Hi, all. After calling pg_stat_reset all statistics used by 
autovacuum got zeroed, and started accumulating from scratch. Some 
tables get acted upon properly, some don't.



Self-replying: yup, it seems there's an arbitrary limit of around 100K
n_live_tup after which autovacuum/analyze kicks in.


To know rather than guess, read:

https://www.postgresql.org/docs/current/routine-vacuuming.html#AUTOVACUUM



Sure, I read it before asking.

Taking the first table in the list as an example:


    relname    | n_live_tup | n_dead_tup |   left   | n_mod_since_analyze |  left
---------------+------------+------------+----------+---------------------+---------
 fooo          |      32781 |     240663 |  -234057 |              513265 | -509937



n_dead_tup (not the actual value, but the value some time after calling
pg_stat_reset) is much larger than 20% of n_live_tup (32781), and
n_mod_since_analyze is much larger than 10% of it.
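
Plugging the first row into the documented formulas (a rough check, assuming
the default scale factors of 0.2 and 0.1 and a threshold of 50 for both)
gives trigger points far below the observed counters:

-- vacuum threshold  = 50 + 0.2 * 32781 = 6606   (n_dead_tup = 240663 is far above it)
-- analyze threshold = 50 + 0.1 * 32781 = 3328   (n_mod_since_analyze = 513265 is far above it)
SELECT 50 + 0.2 * n_live_tup AS vacuum_threshold,
       50 + 0.1 * n_live_tup AS analyze_threshold
FROM (VALUES (32781)) AS t(n_live_tup);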


Yet it is kept unvacuumed and unanalyzed for a long time.

autovacuum_(vacuum|analyze)_threshold is 50.

What am I missing?


Hard to say without seeing the actual settings in postgresql.conf that 
match:


https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-SCALE-FACTOR


Most importantly:

autovacuum

and

track_counts

https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-COUNTS
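
A quick way to see the values actually in effect on the running server
(rather than reading postgresql.conf) is something along these lines:

SELECT name, setting, source
FROM pg_settings
WHERE name IN ('autovacuum',
               'track_counts',
               'autovacuum_vacuum_scale_factor',
               'autovacuum_vacuum_threshold',
               'autovacuum_analyze_scale_factor',
               'autovacuum_analyze_threshold');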








There are still many tables waiting for their turn, which is long overdue.

Yet some tables with only 60-70 (not 60-70K) n_live_tup have had autovacuum
run on them. Weird.











--
Adrian Klaver
adrian.kla...@aklaver.com