Re: Slow statement using parallelism after 9.6>11 upgrade

2019-09-04 Thread Arnaud L.

On 03/09/2019 at 15:43, Tom Lane wrote:

"Arnaud L."  writes:
We have upgraded our database from 9.6 to 11 (and updated PostGIS from 
2.3 to 2.5 as well).

...


Have you re-ANALYZEd the database?  The problem with this query
seems to be the spectacularly awful rowcount estimate here:

   ->  Bitmap Index Scan on planet_osm_ways_nodes_idx
         (cost=0.00..11190.36 rows=1420982 width=0)
         (actual time=0.268..0.268 rows=1 loops=1)
         Index Cond: (nodes && '{1}'::bigint[])


The planner should be able to do better than that, given up-to-date
statistics on the "nodes" column.



Tom, I can confirm that with up-to-date statistics the planner is still
lost.
I did a REINDEX to rule out a broken index and the estimate is still in
the 100k+ range.



Regards
--
Arnaud




Re: Slow statement using parallelism after 9.6>11 upgrade

2019-09-04 Thread Arnaud L.

On 04/09/2019 at 09:04, Arnaud L. wrote:

Tom, I can confirm that with up-to-date statistics the planner is still
lost.
I did a REINDEX to rule out a broken index and the estimate is still in
the 100k+ range.


Sorry, I meant 1M+ range.

EXPLAIN ANALYZE select id from planet_osm_ways WHERE nodes && 
ARRAY[123456789::bigint];


(parallel_workers = 0 on the table as per Paul's recommendation):

Bitmap Heap Scan on planet_osm_ways
  (cost=11582.45..3535447.30 rows=1419000 width=8)
  (actual time=0.198..0.199 rows=1 loops=1)
  Recheck Cond: (nodes && '{123456789}'::bigint[])
  Heap Blocks: exact=1
  ->  Bitmap Index Scan on planet_osm_ways_nodes_idx
        (cost=0.00..11227.70 rows=1419000 width=0)
        (actual time=0.151..0.151 rows=1 loops=1)
        Index Cond: (nodes && '{123456789}'::bigint[])
Planning Time: 0.260 ms
Execution Time: 0.249 ms
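
For reference, the per-table override mentioned above is a plain storage
parameter (a sketch, table name from this thread):

ALTER TABLE planet_osm_ways SET (parallel_workers = 0);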


Regards
--
Arnaud




Re: Bad Estimate for multi tenant database queries

2019-09-04 Thread Peter Grman
Hello Michael,

I dug a bit deeper and found an even simpler query that perfectly
embodies the problem:

explain (ANALYZE, FORMAT text)
select *
from "Reservation"."Reservation" r
join "Rates"."TimeSliceDefinition" t on r."TimeSliceDefinitionId" = t."Id"
where r."AccountCode" = 'OXHV' and t."AccountCode" = 'OXHV'

Nested Loop  (cost=448.56..6918.67 rows=54 width=657)
             (actual time=1.207..26.874 rows=9322 loops=1)
  ->  Bitmap Heap Scan on "TimeSliceDefinition" t
        (cost=4.39..39.99 rows=14 width=131)
        (actual time=0.013..0.029 rows=14 loops=1)
        Recheck Cond: ("AccountCode" = 'OXHV'::text)
        Heap Blocks: exact=7
        ->  Bitmap Index Scan on "IX_TimeSliceDefinition_AccountCode_EntityId"
              (cost=0.00..4.39 rows=14 width=0)
              (actual time=0.009..0.009 rows=14 loops=1)
              Index Cond: ("AccountCode" = 'OXHV'::text)
  ->  Bitmap Heap Scan on "Reservation" r
        (cost=444.17..491.21 rows=12 width=526)
        (actual time=1.021..1.755 rows=666 loops=14)
        Recheck Cond: (("TimeSliceDefinitionId" = t."Id") AND
                       ("AccountCode" = 'OXHV'::text))
        Heap Blocks: exact=4858
        ->  BitmapAnd  (cost=444.17..444.17 rows=12 width=0)
                       (actual time=0.980..0.980 rows=0 loops=14)
              ->  Bitmap Index Scan on "IX_Reservation_TimeSliceDefinitionId"
                    (cost=0.00..13.82 rows=187 width=0)
                    (actual time=0.057..0.057 rows=692 loops=14)
                    Index Cond: ("TimeSliceDefinitionId" = t."Id")
              ->  Bitmap Index Scan on "IX_Reservation_AccountCode_EntityId"
                    (cost=0.00..427.72 rows=9507 width=0)
                    (actual time=0.980..0.980 rows=9327 loops=13)
                    Index Cond: ("AccountCode" = 'OXHV'::text)
Planning Time: 0.353 ms
Execution Time: 27.311 ms

Above is the query with wrong estimates (~200x off); below is the query
with correct estimates:

explain (ANALYZE, FORMAT text)
select *
from "Reservation"."Reservation" r
join "Rates"."TimeSliceDefinition" t on r."TimeSliceDefinitionId" = t."Id"
where r."AccountCode" = 'OXHV'

Hash Join  (cost=557.83..12519.98 rows=9507 width=657)
           (actual time=3.290..15.174 rows=9322 loops=1)
  Hash Cond: (r."TimeSliceDefinitionId" = t."Id")
  ->  Bitmap Heap Scan on "Reservation" r
        (cost=430.10..12367.25 rows=9507 width=526)
        (actual time=1.931..10.821 rows=9322 loops=1)
        Recheck Cond: ("AccountCode" = 'OXHV'::text)
        Heap Blocks: exact=4666
        ->  Bitmap Index Scan on "IX_Reservation_AccountCode_EntityId"
              (cost=0.00..427.72 rows=9507 width=0)
              (actual time=1.398..1.398 rows=9327 loops=1)
              Index Cond: ("AccountCode" = 'OXHV'::text)
  ->  Hash  (cost=96.77..96.77 rows=2477 width=131)
            (actual time=1.312..1.313 rows=2511 loops=1)
        Buckets: 4096  Batches: 1  Memory Usage: 466kB
        ->  Seq Scan on "TimeSliceDefinition" t
              (cost=0.00..96.77 rows=2477 width=131)
              (actual time=0.004..0.550 rows=2511 loops=1)
Planning Time: 1.394 ms
Execution Time: 15.641 ms

Given that the AccountCode is actually the same in all referenced rows,
the extra predicate is really just a double check, in case we have
cross-tenant (cross-account) references. The extra t."AccountCode" =
'OXHV' condition is added by the ORM as a safety net; we'd like to keep
it that way, but Postgres shouldn't consider it for the row estimates.

I've tried creating the following statistics:

CREATE STATISTICS MT_ReservationBucket on "AccountCode", "DepartureUtc",
"ArrivalUtc" from "Reservation"."Reservation";
CREATE STATISTICS MT_ReservationDepartureUtc on "AccountCode",
"DepartureUtc" from "Reservation"."Reservation";
CREATE STATISTICS MT_ReservationArrivalUtc on "AccountCode", "ArrivalUtc"
from "Reservation"."Reservation";
CREATE STATISTICS MT_ReservationNoShowFeeId on "AccountCode", "NoShowFeeId"
from "Reservation"."Reservation";
CREATE STATISTICS MT_ReservationTimeSliceDefinitionId on "AccountCode",
"TimeSliceDefinitionId" from "Reservation"."Reservation";
CREATE STATISTICS MT_TimeSliceDefinition on "AccountCode", "Id" from
"Rates"."TimeSliceDefinition";
CREATE STATISTICS MT_NoShowFee on "AccountCode", "Id" from
"Reservation"."NoShowFee";

I tried creating indexes on Id+AccountCode and
TimeSliceDefinitionId+AccountCode.

I tried setting up a foreign key using both columns instead of just one:

alter table "Reservation"."Reservation"
add constraint
"FK_Reservation_TimeSliceDefinition_TimeSliceDefinitionId_test"
foreign key ("AccountCode", "TimeSliceDefinitionId") references
"Rates"."TimeSliceDefinition" ("AccountCode", "Id")
on delete restrict;

I also tried switching default_statistics_target to 1 and running a full
"analyze" again afterwards, with those statistics and other indexes in
place, but nothing had any effect. (Maybe the row estimate grew to 55
instead of 54.)

In the end, I found the following presentation:
https://www.postgresql.eu/events/pgconfeu2018/sessions/session/2124/slides/122/Towards%20more%20efficient%20query%20plans%20(2).pdf
with a reference to this discussion:
https://www.postgresql.org/message-id/flat/3f

Re: Slow statement using parallelism after 9.6>11 upgrade

2019-09-04 Thread Arnaud L.

On 04/09/2019 at 09:04, Arnaud L. wrote:

On 03/09/2019 at 15:43, Tom Lane wrote:

The planner should be able to do better than that, given up-to-date
statistics on the "nodes" column.


Tom, I can confirm that with up-to-date statistics the planner is still
lost.
I did a REINDEX to rule out a broken index and the estimate is still in
the 100k+ range.



Setting the STATISTICS target to 10000 on the nodes column and re-analyzing
did not help either. Estimates are still ~1.4M rows when the result is 1 row.
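
For the record, the steps were along these lines (a sketch with the names
from this thread; 10000 is the maximum per-column target):

ALTER TABLE planet_osm_ways ALTER COLUMN nodes SET STATISTICS 10000;
ANALYZE planet_osm_ways;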


Regards
--
Arnaud




Re: Postgres HA issue - Standby server does not start after Master compute host is shut down

2019-09-04 Thread Jehan-Guillaume (ioguix) de Rorthais
On Tue, 3 Sep 2019 21:31:23 +0530
Nagendra Bandi  wrote:

> Hi Jehan,
> Thanks for your quick response.
> Standby is built from the primary using pg_basebackup. i.e. Initial copy of
> the primary database is taken with pg_basebackup command

pg_basebackup is supposed to make a non-exclusive backup, so this error
shouldn't happen.

Did you check that the online backup actually succeeded?

> and then restarted the server.

What is this server restart about? Do you restart the primary? Is it a simple
startup of the standby?

There is something wrong somewhere in your procedure. Could you share your
precise procedure steps, please?
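
For comparison, a typical 9.4-era standby build looks like this (host,
user and data directory are illustrative):

  pg_basebackup -h primary-host -U replicator \
      -D /var/opt/rh/rh-postgresql94/lib/pgsql/data -X stream -R -P

The -R switch writes a minimal recovery.conf (standby_mode,
primary_conninfo) into the data directory.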

> On Tue, Sep 3, 2019 at 8:43 PM Jehan-Guillaume (ioguix) de Rorthais <
> iog...@free.fr> wrote:  
> 
> > On Tue, 3 Sep 2019 20:27:37 +0530
> > Nagendra Bandi  wrote:
> > ...  
> > > *Problem Description :*
> > >
> > > PostgreSQL database is set up for High Availability and *asynchronous
> > > streaming replication* with *hot standby* as described in
> > >  
> > https://www.postgresql.org/docs/9.4/warm-standby.html#STREAMING-REPLICATION
> > .  
> > > There is only 1 master and 1 standby server set up. The deployment is on
> > > Openstack cloud where master database is running on one compute host and
> > > standby server is running on another compute host. When master is brought
> > > down (*compute host is powered off*), stand by is *not* able to take  
> > over.  
> > > The PostgreSQL database server on the standby fails to start with errors
> > > messages given below.
> > >
> > > *Error logs from Standby Postgres DB:*
> > >
> > > *Logs form postgres log :
> > > *< 2019-08-22 16:00:04.110 UTC >LOG: invalid record length at C/513BF150
> > >
> > > < 2019-08-22 16:00:04.110 UTC >LOG: redo done at C/513BF118
> > >
> > > < 2019-08-22 16:00:04.115 UTC >FATAL: WAL ends before end of online  
> > backup
> >
> > It looks like the standby never received the end of online backup record
> > from
> > the master.
> >
> > Could you describe how you built your standby?
> >
> > Regards,
> >  



-- 
Jehan-Guillaume de Rorthais
Dalibo




Re: Postgres HA issue - Standby server does not start after Master compute host is shut down

2019-09-04 Thread Luca Ferrari
On Tue, Sep 3, 2019 at 6:08 PM Nagendra Bandi  wrote:
> Standby is built from the primary using pg_basebackup. i.e. Initial copy of 
> the primary database is taken with pg_basebackup command and then restarted 
> the server.

This puzzles me:

< 2019-09-01 15:43:56.440 UTC >LOG:  trigger file found:
/var/opt/rh/rh-postgresql94/lib/pgsql/trigger_switch
< 2019-09-01 15:43:56.440 UTC >FATAL:  terminating walreceiver process
due to administrator command

< 2019-09-01 15:43:56.443 UTC >DEBUG:  switched WAL source from stream
to archive after failure
< 2019-09-01 15:43:56.443 UTC >LOG:  record with zero length at C/95193A70
< 2019-09-01 15:43:56.443 UTC >LOG:  redo done at C/95193A38
< 2019-09-01 15:43:56.443 UTC >LOG:  last completed transaction was at
log time 2019-09-01 15:39:18.804265+00
< 2019-09-01 15:43:56.443 UTC >DEBUG:  resetting unlogged relations:
cleanup 0 init 1
< 2019-09-01 15:43:56.448 UTC >FATAL:  WAL ends before end of online backup
< 2019-09-01 15:43:56.448 UTC >HINT:  All WAL generated while online
backup was taken must be available at recovery.

So the standby found the trigger file and terminated the WAL receiver,
but then tried to get WALs from the local archive. Why?
It seems to me the local WALs are incomplete or have been removed, so the
standby cannot get the last part. I suspect this is because there's a
restore_command configured in addition to streaming (and that should be
fine, since it is optional for streaming).
Also, the version is quite old.
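
For reference, the kind of recovery.conf I would expect here (values are
illustrative, trigger file path taken from your log):

  standby_mode = 'on'
  primary_conninfo = 'host=primary-host user=replicator'
  restore_command = 'cp /path/to/archive/%f %p'   # optional when streaming
  trigger_file = '/var/opt/rh/rh-postgresql94/lib/pgsql/trigger_switch'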

Luca




RE: Running a Simple Update Statement Fails, Second Time Succeeds.

2019-09-04 Thread Kumar, Virendra
Here is function code:
--
CREATE OR REPLACE FUNCTION sddf_update.tps_update_1(p_pres_id_in bigint, 
p_last_audit_update_dt_in timestamp without time zone, OUT p_err_code_out 
bigint, OUT p_err_mesg_out text, p_expiration_dt_in timestamp without time zone 
DEFAULT '1600-01-01'::date, p_audit_insert_user_name_tx_in character varying 
DEFAULT '0.1'::character varying, p_audit_update_user_name_tx_in 
character varying DEFAULT '0.1'::character varying, 
p_latest_version_in_in character varying DEFAULT '0'::character varying, 
p_delete_in_in character varying DEFAULT '0'::character varying, 
p_class_code_id_in character varying DEFAULT '0.1'::character varying, 
p_major_classification_cd_in character varying DEFAULT '.1'::character 
varying, p_major_class_name_tx_in character varying DEFAULT 
'0.1'::character varying, p_coverage_short_name_tx_in character varying 
DEFAULT '0.1'::character varying, p_coverage_name_tx_in character 
varying DEFAULT '0.1'::character varying, p_cdt_source_system_cd_in 
character varying DEFAULT '0.1'::character varying, 
p_cdt_submitting_country_cd_in character varying DEFAULT 
'0.1'::character varying, p_cdt_status_cd_in character varying DEFAULT 
'0.1'::character varying, p_effective_dt_in timestamp without time zone 
DEFAULT '1600-01-01'::date)
 RETURNS record
 LANGUAGE plpgsql
 SECURITY DEFINER
AS $function$
DECLARE
ora2pg_rowcount int;
lv_audit_update_dt   
product_owner.external_sys_class_code_pres.audit_update_dt%TYPE;
--lv_audit_update_dt   
product_owner.external_sys_class_code_pres.audit_update_dt%TYPE;
BEGIN
  UPDATE product_owner.external_sys_class_code_pres
 SET class_code_id =
CASE WHEN p_class_code_id_in='0.1' THEN  class_code_id  
ELSE p_class_code_id_in END ,
 major_classification_cd =
CASE WHEN p_major_classification_cd_in='.1' THEN  
major_classification_cd  ELSE p_major_classification_cd_in END ,
 major_classification_name_tx =
CASE WHEN p_major_class_name_tx_in='0.1' THEN  
major_classification_name_tx  ELSE p_major_class_name_tx_in END ,
 coverage_short_name_tx =
CASE WHEN p_coverage_short_name_tx_in='0.1' THEN  
coverage_short_name_tx  ELSE p_coverage_short_name_tx_in END ,
 coverage_name_tx =
CASE WHEN p_coverage_name_tx_in='0.1' THEN  
coverage_name_tx  ELSE p_coverage_name_tx_in END ,
 cdt_source_system_cd =
CASE WHEN p_cdt_source_system_cd_in='0.1' THEN  
cdt_source_system_cd  ELSE p_cdt_source_system_cd_in END ,
 cdt_submitting_country_cd =
CASE WHEN p_cdt_submitting_country_cd_in='0.1' THEN  
cdt_submitting_country_cd  ELSE p_cdt_submitting_country_cd_in END ,
 cdt_status_cd =
CASE WHEN p_cdt_status_cd_in='0.1' THEN  cdt_status_cd  
ELSE p_cdt_status_cd_in END ,
 effective_dt =
CASE WHEN p_effective_dt_in=TO_DATE('01/01/1600', 'mm/dd/yyyy')
THEN  effective_dt  ELSE p_effective_dt_in END ,
 expiration_dt =
CASE WHEN p_expiration_dt_in=TO_DATE('01/01/1600',
'mm/dd/yyyy') THEN  expiration_dt  ELSE p_expiration_dt_in END ,
 audit_insert_user_name_tx =
CASE WHEN p_audit_insert_user_name_tx_in='0.1' THEN  
audit_insert_user_name_tx  ELSE p_audit_insert_user_name_tx_in END ,
 audit_update_dt = CURRENT_TIMESTAMP,
 audit_update_user_name_tx =
CASE WHEN p_audit_update_user_name_tx_in='0.1' THEN  
audit_update_user_name_tx  ELSE p_audit_update_user_name_tx_in END ,
 latest_version_in =
CASE WHEN p_latest_version_in_in='0' THEN  latest_version_in  
ELSE p_latest_version_in_in END ,
 delete_in =
CASE WHEN p_delete_in_in='0' THEN  delete_in  ELSE 
p_delete_in_in END
   WHERE pres_id = p_pres_id_in
 AND audit_update_dt = p_last_audit_update_dt_in;

  GET DIAGNOSTICS ora2pg_rowcount = ROW_COUNT;

  p_err_code_out := 0;
   EXCEPTION
  WHEN SQLSTATE '50001' THEN
 p_err_code_out := -20999;
 p_err_mesg_out :=
   'Record has been modified since last retrieved - Resubmit 
transaction for parameter(s)  '
|| ' p_pres_id_in  values of which are => '
|| p_pres_id_in;
 RAISE EXCEPTION '%', p_err_mesg_out USING ERRCODE = '45999';
   END;

$function$

Table definition is below:
--
product_master_fdw=# \d product_owner.external_sys_class_code_pres;
 Table "product_owner.external_sys_class_code_pres"
     Column | Type   | Collation | Nullable | Default
------------+--------+-----------+----------+---------
 pres_id    | bigint

PG11.2 - wal_level =minimal max_wal_senders = 0

2019-09-04 Thread Jason Ralph
Hello,
I have recently upgraded 10 Postgres databases to 11.2. These databases are
used for data warehousing / ETL. They are standalone systems, no
replication, and they perform a lot of inserts, table drops and table
creates. I have been working on tuning the databases, and I am seeing the
following in the log:
2019-09-03 10:40:21.660 EDT [5635] LOG:  checkpoints are occurring too 
frequently (13 seconds apart)
2019-09-03 10:40:21.660 EDT [5635] HINT:  Consider increasing the configuration 
parameter "max_wal_size".

My first thought was to increase max_wal_size, which may take care of the
logged message. Then I looked at the wal_level setting. I see that it
defaults to replica out of the box, and max_wal_senders = 10.
postgres=# show wal_level;
wal_level
---
replica
(1 row)
postgres=# show max_wal_senders;
max_wal_senders
-
10
(1 row)

The question I have is: for a production database not doing replication, can
I safely set the following parameters? I understand that minimal will also
disable WAL archiving, so I am concerned about that as well.
wal_level = minimal
max_wal_senders = 0

I have been looking at the documentation, and I guess I am looking for a
"yes, this is OK". I figured it's on by default for a reason, so I was
hesitant to change it.




Jason Ralph



Re: Running a Simple Update Statement Fails, Second Time Succeeds.

2019-09-04 Thread Tom Lane
"Kumar, Virendra"  writes:
> Here is function code:

Hm, nothing very exciting in there.  But the known reasons for this
type of error involve something changing a table rowtype that the
function uses (that is, ALTER TABLE ADD COLUMN or the like).  Or
dropping/recreating such a table altogether.  Is there any part
of your workflow that redefines the type of
product_owner.external_sys_class_code_pres.audit_update_dt or
product_owner.external_sys_class_code_pres ?
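
That is, something like this happening between calls (hypothetical column
name, purely for illustration):

ALTER TABLE product_owner.external_sys_class_code_pres
    ADD COLUMN some_new_column text;

would invalidate plans the function has cached; the first call after such a
change can fail once, while a retry replans and succeeds.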

Also, you still didn't tell us the server's version.

regards, tom lane




Re: PG11.2 - wal_level =minimal max_wal_senders = 0

2019-09-04 Thread Luca Ferrari
On Wed, Sep 4, 2019 at 5:15 PM Jason Ralph  wrote:
> The question I have is: for a production database not doing replication, can
> I safely set the following parameters? I understand that minimal will also
> disable WAL archiving, so I am concerned about that as well.
>
> wal_level = minimal
>
> max_wal_senders = 0
>
>
>
> I have been looking at the documentation, and I guess I am looking for a
> "yes, this is OK". I figured it's on by default for a reason, so I was
> hesitant to change it.

I don't see why you should not set them as you described.
The idea behind the default is that, nowadays, the overhead of replica over
minimal is low enough compared to the benefit of being ready for
replication and for backup tools (e.g., pgbackrest).
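
If you do go to minimal, the consistent trio is (a sketch; each of these
requires a restart):

wal_level = minimal
max_wal_senders = 0   # must be 0 when wal_level = minimal
archive_mode = off    # the server won't start with archiving on at minimal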

Luca




RE: PG11.2 - wal_level =minimal max_wal_senders = 0

2019-09-04 Thread Jason Ralph
Thank you Luca,
Can I ask one more related question? I have no need for replication, as
noted below. Can I safely disable the worker process via the setting below?
In my sandbox it does turn off the logical replication launcher; I just
wanted to be sure I am not affecting anything other than replication with
this setting.


max_logical_replication_workers = 0 # taken from max_worker_processes
# (change requires restart)


Jason Ralph

-Original Message-
From: Luca Ferrari 
Sent: Wednesday, September 4, 2019 12:16 PM
To: Jason Ralph 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: PG11.2 - wal_level =minimal max_wal_senders = 0

On Wed, Sep 4, 2019 at 5:15 PM Jason Ralph  wrote:
> The question I have is: for a production database not doing replication, can
> I safely set the following parameters? I understand that minimal will also
> disable WAL archiving, so I am concerned about that as well.
>
> wal_level = minimal
>
> max_wal_senders = 0
>
>
>
> I have been looking at the documentation, and I guess I am looking for a
> "yes, this is OK". I figured it's on by default for a reason, so I was
> hesitant to change it.

I don't see why you should not set them as you described.
The idea behind the default is that, nowadays, the overhead of replica over
minimal is low enough compared to the benefit of being ready for
replication and for backup tools (e.g., pgbackrest).

Luca


Re: Running a Simple Update Statement Fails, Second Time Succeeds.

2019-09-04 Thread Adrian Klaver

On 9/4/19 8:02 AM, Kumar, Virendra wrote:

Here is function code:




Table definition is below:
--
product_master_fdw=# \d product_owner.external_sys_class_code_pres;
  Table "product_owner.external_sys_class_code_pres"
 Column|Type | Collation | 
Nullable | Default


I see ora2pg in the function and product_master_fdw above.

Is this a foreign table?

--
Adrian Klaver
adrian.kla...@aklaver.com




floating point output

2019-09-04 Thread Rob Sargent
I've found the description of floating point types in the documentation, but
I'm looking for the rationale of the output format, particularly with respect
to the total digits presented (variable in a single select's output) and the
dropping of a trailing zero (to some, implying a loss of precision). Is the
code my only guide here?
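
For instance (a sketch of the pre-v12 behavior, where float8 output is
rounded to 15 significant digits unless extra_float_digits is raised):

SELECT 0.1::float8 + 0.2;    -- prints 0.3
SET extra_float_digits = 3;
SELECT 0.1::float8 + 0.2;    -- prints 0.30000000000000004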





Re: floating point output

2019-09-04 Thread Adrian Klaver

On 9/4/19 5:23 PM, Rob Sargent wrote:
I've found the description of floating point types in the documentation, but
I'm looking for the rationale of the output format, particularly with respect
to the total digits presented (variable in a single select's output) and the
dropping of a trailing zero (to some, implying a loss of precision). Is the
code my only guide here?


Some examples would help explain your concerns.

Bottom line, if you want precision use numeric.







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: PG11.2 - wal_level =minimal max_wal_senders = 0

2019-09-04 Thread Luca Ferrari
On Wed, Sep 4, 2019 at 10:44 PM Jason Ralph
 wrote:
>
> Thank you Luca,
> Can I ask one more related question? I have no need for replication, as
> noted below. Can I safely disable the worker process via the setting below?
> In my sandbox it does turn off the logical replication launcher; I just
> wanted to be sure I am not affecting anything other than replication with
> this setting.
>
>
> max_logical_replication_workers = 0 # taken from max_worker_processes
> # (change requires restart)
>

Quite frankly I've never done it, but I don't see why you should not turn it off.
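
For completeness, the change being discussed (requires a restart):

max_logical_replication_workers = 0

Logical replication needs wal_level = logical anyway, so with minimal it is
unavailable regardless, and zero workers is consistent with your other
settings.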

Luca