I guess I am looking for a "yes, this is ok"; I figured it's on by default for a
reason, so I was hesitant to change it.
Jason Ralph
I am not using logical replication with this setting:
max_logical_replication_workers = 0 # taken from max_worker_processes
# (change requires restart)
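In case it helps anyone reading later, a quick sanity check before zeroing the
setting (a sketch; pg_stat_subscription will simply be empty if logical
replication is not in use):

SHOW max_logical_replication_workers;
SHOW max_worker_processes;
-- any logical replication workers actually running?
SELECT pid, subname, received_lsn FROM pg_stat_subscription;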
Jason Ralph
-Original Message-
From: Luca Ferrari
Sent: Wednesday, September 4, 2019 12:16 PM
To: Jason Ralph
Cc:
in my initial question, but I was not able to see how it could affect my setup.
Please school me if I am being naive.
Jason Ralph
From: Benoit Lobréau
Sent: Friday, September 6, 2019 3:41:44 AM
To: Luca Ferrari
Cc: Jason Ralph ;
pgsql-general
2019-09-12 23:53:12.407 EDT [12969] LOG: database system is shut down
2019-09-12 23:53:19.029 EDT [16953] LOG: database system was shut down at
2019-09-12 23:53:12 EDT
2019-09-12 23:53:19.041 EDT [16950] LOG: database system is ready to accept
connections
Jason Ralph
Hello Lists,
DB1=# select version();
-[ RECORD 1 ]
version | PostgreSQL 11.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-23), 64-bit
I recently upgraded a neglected database. Since pg11 on both the target and
source, the run time has decreased a lot; I chalk it up to the parallel index
creation in pg11, which was a very time consuming process on pg9.3.
The process finished almost 10 hours earlier than on pg9.3. So thank you for
your hard work and dedication to this awesome piece of software.
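For reference, the parallel index builds in pg11 are capped by
max_parallel_maintenance_workers; a small sketch of checking and raising it per
session (the value 4 is just illustrative):

SHOW max_parallel_maintenance_workers;      -- default is 2 on pg11
SET max_parallel_maintenance_workers = 4;   -- illustrative, size to your cores
-- subsequent CREATE INDEX on large tables can then use parallel workers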
Jason Ralph
> On Wed, Oct 2, 2019 at 8:41 AM Jason Ralph wrote:
> Since pg11 on both the target and source, the run time has decreased a lot, I
> chalk it up to the parallel index creations in pg11 which was a very time
> consuming process on pg9.3.
> The process has finished almost
last_vacuum       | 2019-10-13 15:42:06.043385-04
last_autovacuum   | 2019-11-01 12:24:45.575283-04
last_analyze      | 2019-10-13 15:42:17.370086-04
last_autoanalyze  | 2019-11-01 12:25:17.181133-04
vacuum_count      | 2
autovacuum_count  | 15
analyze_count     | 2
autoanalyze_count | 17
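(These counters presumably come from pg_stat_user_tables; a sketch of such a
query, with 'mytable' as a placeholder name:)

SELECT last_vacuum, last_autovacuum, last_analyze, last_autoanalyze,
       vacuum_count, autovacuum_count, analyze_count, autoanalyze_count
FROM pg_stat_user_tables
WHERE relname = 'mytable';  -- placeholder table name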
Thanks for yo
>time.
I agree, this is excellent advice, I overlooked the fact that this is a sample
and the new rows may not even be included in this sample. I will adjust
accordingly.
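For the archives, the sample size is controlled by the statistics target; a
hedged sketch of adjusting it for one column (placeholder names, illustrative
value):

ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 1000;  -- default 100
ANALYZE mytable;  -- re-sample so the larger target takes effect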
-Original Message-
From: Jason Ralph
Sent: Friday, November 1, 2019 2:59 PM
To: pgsql-general@lists.postgresql.org
So QUERY: autovacuum: ANALYZE table only analyzes, and QUERY: autovacuum: VACUUM
table only vacuums, which would make sense.
Thanks as always and hope this is clear.
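(Those QUERY strings are what autovacuum workers report in pg_stat_activity; a
sketch of one way to watch them:)

SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE query LIKE 'autovacuum:%';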
Jason Ralph
I also know that pg_upgrade will reset statistics, so does the table remain
bloated while the statistics show otherwise? Can someone please help me answer
this, or link to where it's outlined in the manual? Thanks as always.
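(What I do know is that pg_upgrade itself does not transfer planner statistics,
so after the upgrade they have to be rebuilt, e.g. with vacuumdb --all
--analyze-in-stages or, as a minimal sketch, a plain database-wide run:)

ANALYZE VERBOSE;  -- rebuilds planner statistics for every table in the database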
Jason Ralph
Unlike with pg_dump / pg_restore, it's a hard link to the previous data
location; pg_upgrade with --link will not recreate the table.
Jason Ralph
-Original Message-
From: Adrian Klaver
Sent: Thursday, February 13, 2020 10:46 AM
To: Jason Ralph ;
pgsql-general@lists.postgresql.org
Subject: Re: pg_upg
Using the query from the wiki page /wiki/Show_database_bloat, it still shows
bloat. I'm thinking it may be left over from before the pg_upgrade and
autovacuum tuning.
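As a cross-check on the wiki estimate, the pgstattuple extension measures bloat
directly; a sketch with a placeholder table name:

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('mytable');         -- exact figures, scans the table
SELECT * FROM pgstattuple_approx('mytable');  -- cheaper estimate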
Best,
Jason Ralph
From: Michael Lewis
Sent: Thursday, February 13, 2020 1:02 PM
To: Adrian Klaver
Cc: Jason Ralph ;
pgsql-general@lists.postgresql.org
Subject:
.587534-04
last_autovacuum   | 2020-02-13 02:25:22.533372-05
last_analyze      | 2019-10-13 01:01:41.916929-04
last_autoanalyze  | 2020-02-13 13:44:46.273096-05
vacuum_count      | 15
autovacuum_count  | 92
analyze_count     | 15
autoanalyze_count | 243
Jason Ralph
-Original Message-
Fr
lation as possible. Also, I would like to save space wherever possible. I have
received a *bloat* load of information in this thread, so thanks. 😊
@Michael Lewis, thanks for the idea of pg_repack; this looks awesome and I can't
wait to test it.
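For anyone following along, pg_repack needs its extension created in the target
database before the client-side binary can rewrite a table online; a sketch
(table and database names are placeholders):

CREATE EXTENSION IF NOT EXISTS pg_repack;
-- the rewrite itself is then driven from the shell, e.g.:
--   pg_repack --table=mytable mydb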
Jason Ralph
-Original Message-
From: Adrian Klaver
Should I run the UPDATE on 20 million records inside a single transaction? How
would you guys do it?
Thanks,
Jason Ralph
Thanks Adrian,
> You could break it down into multiple transactions if there is a way to
> specify ranges of records.
Say I couldn't break it up, would it be faster in or out of a transaction?
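(If it could be keyed off an integer primary key, the multiple-transaction
version is simple to sketch; placeholder names, and each statement commits on
its own when run outside an explicit BEGIN:)

UPDATE mytable SET mycol = 'newval' WHERE id >= 1       AND id < 1000000;
UPDATE mytable SET mycol = 'newval' WHERE id >= 1000000 AND id < 2000000;
-- ...and so on, one range per transaction, letting autovacuum clean up between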
Jason Ralph
-Original Message-
From: Adrian Klaver
Sent: Tuesday, June 23, 2020
ted, which columns are
> being updated and how big the records are.
Please see above, thanks
Jason Ralph
From: Ron
Sent: Tuesday, June 23, 2020 10:57 AM
To: pgsql-general@lists.postgresql.org
Subject: Re: UPDATE on 20 Million Records Transaction or not?
On 6/23/20 8:32 AM, Jason Ralph wrote:
> get diagnostics num_rows = row_count;
> raise notice 'deleted % rows', num_rows;
> exit when num_rows = 0;
> end loop;
> end;$_$;
Thanks for all the suggestions; I really like the function and I will test it. I
have autovacuum fully tuned for this table, so sho
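For completeness, a self-contained sketch of that style of batched loop, adapted
to an UPDATE (placeholder table/column names; COMMIT inside a DO block needs
pg11 or later and must not run inside an outer transaction):

DO $_$
DECLARE
    num_rows bigint;
BEGIN
    LOOP
        -- update one slice of not-yet-updated rows
        UPDATE mytable
        SET    mycol = 'newval'
        WHERE  id IN (SELECT id
                      FROM   mytable
                      WHERE  mycol IS DISTINCT FROM 'newval'
                      LIMIT  50000);
        GET DIAGNOSTICS num_rows = ROW_COUNT;
        RAISE NOTICE 'updated % rows', num_rows;
        EXIT WHEN num_rows = 0;
        COMMIT;  -- release locks and let autovacuum work between batches
    END LOOP;
END;
$_$;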