Hi all,
During PITR-based recovery of a Postgres instance, we are getting the
following error -
'2023-06-21 23:52:52.232 PDT [24244] FATAL: hot standby is not possible
because max_connections = 150 is a lower setting than on the master server
(its value was 500)'
Here are the st
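The usual fix for that FATAL is to raise max_connections in the recovered
instance's postgresql.conf to at least the value the primary had (500 in the
quoted log) and restart. A minimal check once it is back up:

-- run on the recovered instance after the restart; the value must be
-- greater than or equal to the primary's setting (500 in the log above)
SHOW max_connections;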
Julien Rouhaud writes:
> On Thu, Feb 03, 2022 at 05:39:57PM +0530, Bharath Rupireddy wrote:
>> ... Instead, it would be better
>> if the server emits a single log with all the insufficient
>> parameters (max_connections, max_worker_processes, max_wal_senders,
>> max_prepared_transactions and max_locks_per_transaction) values and
>> crashes FATALly. The use
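A quick way to compare those parameters on the two servers is to query
pg_settings on each side; a minimal sketch:

-- run on both primary and standby; the standby's values must be
-- greater than or equal to the primary's for hot standby to start
SELECT name, setting
FROM pg_settings
WHERE name IN ('max_connections',
               'max_worker_processes',
               'max_wal_senders',
               'max_prepared_transactions',
               'max_locks_per_transaction');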
Hi all,
running PostgreSQL 14, physical replication with slot, after changing
(increasing) the max_connections on the primary, I had this message at
a restart from the standby:
DETAIL: max_connections = 100 is a lower setting than on the primary
server, where its value was 300.
and the standby
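The usual sequence when raising these settings is standbys first, primary
last, so a restarted standby never sees a lower value than its primary. A
rough sketch (300 mirrors the value from the DETAIL line; if ALTER SYSTEM is
not usable on the standby, editing postgresql.conf directly works too):

-- 1. on each standby:
ALTER SYSTEM SET max_connections = 300;
-- ...then restart that standby (pg_ctl restart, outside of SQL)

-- 2. only after all standbys are back up, on the primary:
ALTER SYSTEM SET max_connections = 300;
-- ...then restart the primary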
I have two dumb questions.
1)
I know the max_connections value change requires a restart.
I also read a thread which explains why that is the case, assuming it still
holds true.
Jean Arnaud writes:
> I'm looking for a way to change the "max_connections" parameter without
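That restart requirement is visible in pg_settings: the context column
reports when a parameter can change, and 'postmaster' means it is only read
at server start:

-- 'postmaster' context means changing the value always needs a full restart
SELECT name, context
FROM pg_settings
WHERE name = 'max_connections';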
On Fri, Feb 7, 2020 at 1:56 PM Sam Gendler wrote:
> ... conventional wisdom for starting number was
> 2*cores + 1*spindles, if memory serves. You can set max_connections higher,
> but that was the number you wanted to have active, and then adjust for
> workload - OLTP vs warehouse, how much disk access vs buffer cache, etc.
> Benchmarks, at the time, showed that performance started to fall off due
> to contention if the number of processes got much larger. I imagine that
> the speed of storage today would maybe make 3 or 4x core count a pretty
> reasonable place to
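As a worked example of that rule of thumb (the 16 cores and 8 spindles below
are made-up numbers, not a recommendation):

-- 2*cores + 1*spindles for a hypothetical 16-core box with 8 spindles
SELECT 2 * 16 + 1 * 8 AS suggested_active_connections;  -- 40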
On Fri, Feb 7, 2020 at 6:29 AM Justin wrote:
> work_mem is the biggest consumer of resources; let's say it's set to 5 megs
> per connection - at 1000 connections, that's 5,000 megs that can be allocated.
>
Clarification - work_mem is used per operation (sort, hash, etc) and could
be used many, many times with
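A small self-contained illustration of that clarification (throwaway temp
table, made-up sizes): a single query can contain both a hash aggregate and
a sort, and each of those nodes gets its own work_mem budget.

-- work_mem applies per sort/hash operation, not per connection
SET work_mem = '4MB';

-- scratch data, purely hypothetical
CREATE TEMP TABLE demo AS
SELECT g AS id, g % 100 AS grp, md5(g::text) AS payload
FROM generate_series(1, 100000) AS g;

-- one statement, two memory-consuming nodes (HashAggregate + Sort),
-- so it can use up to roughly 2 x work_mem on its own
EXPLAIN ANALYZE
SELECT grp, count(*)
FROM demo
GROUP BY grp
ORDER BY count(*) DESC;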
... a database connection pooler can still work, but the configuration is
going to be difficult, ...
On 07/02/2020 12:49, Chris Ellis wrote:
What's "too much" for max_connections? What happens when you set it too
high? What factors affect that number?
When sizing max_connections you need to trade off how many connections
your application will use at peak vs how much RAM a
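One way to ground that trade-off is to watch how close the application
actually gets to the ceiling at peak; a simple snapshot query:

-- backends currently in use vs. the configured limit
SELECT count(*) AS connections_in_use,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity;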
Hi Chris
On Fri, 7 Feb 2020, 08:36 Chris Withers, wrote:
> Hi All,
>
> What's a sensible way to pick the number to use for max_connections?
>
Sensible in this context is somewhat variable. Each connection in
PostgreSQL will be allocated a backend process. These are not th
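Since every connection is served by its own backend process, any session can
report the OS process id of the backend handling it (the same PID shows up in
ps or top on the server):

-- process id of the backend serving the current connection
SELECT pg_backend_pid();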
Hi All,
What's a sensible way to pick the number to use for max_connections?
I'm looking after a reasonable size multi-tenant cluster, where the
master handles all the load and there's a slave in case of hardware
failure in the master.
The machine is used to host what I sus
1. How can we decide on an optimal value for max_connections for a given
setup/server? I checked many posts saying that even 1000 is considered a
very high value, but we are hitting the error too_many_connections due to
the max_connections value limit.
I have one set at 1000 but I usually top out
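When too_many_connections does hit, a first diagnostic step is to see which
users and states are holding the slots; a sketch:

-- group current backends by user and state to find the heavy consumers
SELECT usename, state, count(*)
FROM pg_stat_activity
GROUP BY usename, state
ORDER BY count(*) DESC;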
Hello,
We are working on a payments system which uses PostgreSQL 9.6 as the backend
DB and blockchain technology. The database is set up in HA in master-standby
mode using pacemaker on Linux 7.6.
We are new to postgres and need help in deciding how to set the value for
max_connections on the DB.
1. How
Hi,
If you reckon the application initiates such a large number of concurrent
connections, I'd suggest you configure a connection pooler to avoid the
per-connection overhead in PostgreSQL. max_connections will be the parameter
you are looking to configure, but ensure it is configur
Hello team,
We have migrated our database from Oracle 12c to Postgres 11. I need your
suggestions: we have a sessions limit in Oracle of 3024. Do we need to set
the same connection limit in Postgres as well? How can we decide the
max_connections limit for Postgres? Are there any differences in
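Besides the global max_connections, Postgres can also cap connections per
role or per database, which is often a closer match to an Oracle-style
session limit; a hedged sketch (the role and database names are made up):

-- hypothetical names; limits one application account and one database
ALTER ROLE app_user CONNECTION LIMIT 300;
ALTER DATABASE appdb CONNECTION LIMIT 500;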