On Fri, Feb 7, 2020 at 3:52 PM Colin Adler wrote:
> Seems to work now. My question is, is this something I should report to the
> maintainers?
I am one of the people who maintain the B-Tree code.
You didn't mention what version of Postgres you're using here. That
could be important. Please let
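(For anyone following this thread: a minimal sketch of how one might report
the version and verify a suspect B-Tree index, assuming PostgreSQL 10 or
later where the contrib amcheck extension is available; 'extensions_pkey'
is only a placeholder index name.)

    -- Report the exact server version first:
    SELECT version();

    -- amcheck (contrib, PostgreSQL 10+) verifies B-Tree invariants.
    -- 'extensions_pkey' is a placeholder index name.
    CREATE EXTENSION IF NOT EXISTS amcheck;
    SELECT bt_index_check('extensions_pkey');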
Hi all,
Earlier today we were trying to debug why a row wasn't being deleted from a
table and we ran into some interesting behavior.
This is the table in question:
coder=# \d+ extensions
Table "public.extensions"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
On Fri, Feb 7, 2020 at 11:14 AM Justin wrote:
>
> On Fri, Feb 7, 2020 at 1:56 PM Sam Gendler
> wrote:
>
>> Benchmarks, at the time, showed that performance started to fall off due
>> to contention if the number of processes got much larger. I imagine that
>> the speed of storage today would maybe make 3 or 4x core count a pretty
>> reasonable place to
On 2/7/20 8:48 AM, Zwettler Markus (OIZ) wrote:
hi,
we have to refresh our DEV and UAT stages periodically with PRD data.
We will do this by cloning the whole cluster using pg_basebackup or
restoring our filesystem backup (data + archive).
I saw that the database system identifier stays the same. Is there any
complication on this?
On Fri, Feb 7, 2020 at 1:56 PM Sam Gendler
wrote:
> Benchmarks, at the time, showed that performance started to fall off due
> to contention if the number of processes got much larger. I imagine that
> the speed of storage today would maybe make 3 or 4x core count a pretty
> reasonable place to
On Fri, Feb 7, 2020 at 5:36 AM Steve Atkins wrote:
> What's a good number of active connections to aim for? It probably depends
> on whether they tend to be CPU-bound or IO-bound, but I've seen the rule of
> thumb of "around twice the number of CPU cores" tossed around, and it's
probably a decent
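The often-quoted starting point from the PostgreSQL wiki makes that rule of
thumb concrete; a sketch of the arithmetic, with the core and disk counts as
made-up inputs:

    -- Wiki heuristic: connections = (core_count * 2) + effective_spindle_count
    -- e.g. a 16-core box with a single SSD:
    SELECT (16 * 2) + 1 AS suggested_active_connections;  -- 33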
On Fri, Feb 7, 2020 at 6:29 AM Justin wrote:
> work_mem is the biggest consumer of resources. Let's say it's set to 5 megs
> per connection: at 1000 connections, that's 5,000 megs that can be allocated.
>
Clarification: work_mem is used per operation (sort, hash, etc.) and could
be allocated many times with
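To illustrate the clarification above: a single query can allocate work_mem
once per sort or hash node in its plan, not once per connection. A sketch,
with hypothetical table names ('orders', 'customers'):

    SET work_mem = '5MB';

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT o.customer_id, sum(o.total) AS spend
    FROM orders o                         -- 'orders'/'customers' are made up
    JOIN customers c USING (customer_id)  -- hash join: one work_mem allocation
    GROUP BY o.customer_id                -- hash aggregate: another allocation
    ORDER BY spend DESC;                  -- sort: a third allocation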
On Thu, Feb 6, 2020, 20:29 Ken Tanzer wrote:
On Wed, Feb 5, 2020 at 23:55, Vik Fearing
wrote:
Please answer +1 if you want or don't mind seeing transaction status by
default in psql or -1 if you would prefer to keep the current default.
+1
I liked the idea just reading about it, but
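For anyone who wants this behaviour today without waiting on a default
change, psql's standard %x prompt escape already reports transaction status:

    -- In psql or ~/.psqlrc: %x is empty outside a transaction, '*' inside
    -- one, and '!' in a failed (aborted) transaction block.
    \set PROMPT1 '%/%R%x%# '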
hi,
we have to refresh our DEV and UAT stages periodically with PRD data.
We will do this by cloning the whole cluster using pg_basebackup or
restoring our filesystem backup (data + archive).
I saw that the database system identifier stays the same. Is there any
complication on this?
thx
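The identifier in question can be inspected from SQL (PostgreSQL 9.6+),
which makes it easy to confirm that a clone carries the same value as PRD:

    SELECT system_identifier FROM pg_control_system();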
On 07/02/2020 13:18, Chris Withers wrote:
On 07/02/2020 12:49, Chris Ellis wrote:
What's "too much" for max_connections? What happens when you set it to
high? What factors affect that number?
When sizing max_connections you need to trade off how many
connections your application will use at peak vs how much RAM and CPU you
have.
Hi Chris Withers
As stated, each connection uses X amount of resources, and it's very easy
to configure PostgreSQL so that even a small number of connections will eat
up all the RAM.
work_mem is the biggest consumer of resources. Let's say it's set to 5 megs
per connection: at 1000 connections, that's 5,000 megs that can be allocated.
On 07/02/2020 12:49, Chris Ellis wrote:
What's "too much" for max_connections? What happens when you set it to
high? What factors affect that number?
When sizing max_connections you need to trade off how many connections
your application will use at peak vs how much RAM and CPU you have.
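A rough way to put numbers on that trade-off; the final comment is only a
loose upper-bound sketch, since each backend can allocate work_mem more than
once:

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('max_connections', 'work_mem',
                   'maintenance_work_mem', 'shared_buffers');

    -- Loose worst-case RAM sketch (understates multi-allocation of work_mem):
    --   shared_buffers + max_connections * work_mem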
Hi Chris
On Fri, 7 Feb 2020, 08:36 Chris Withers, wrote:
> Hi All,
>
> What's a sensible way to pick the number to use for max_connections?
>
Sensible in this context is somewhat variable. Each connection in
PostgreSQL will be allocated a backend process. These are not the lightest
weight of
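You can watch those backend processes directly; each client connection shows
up as a row in pg_stat_activity:

    -- Count backends by state to see how many connections are really active:
    SELECT state, count(*) AS backends
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY backends DESC;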
Hi All,
What's a sensible way to pick the number to use for max_connections?
I'm looking after a reasonably sized multi-tenant cluster, where the
master handles all the load and there's a slave in case of hardware
failure in the master.
The machine is used to host what I suspect are mainly Django