Hi
I have both HDD and SSD disks on the Postgres server. The cluster is
currently created on the HDD only. I am considering using a tablespace
to put some heavily used Postgres objects on the SSD disk. Of course the
SSD is small compared to the HDD, so I need to choose carefully which
objects are stored there.
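A minimal sketch of what that could look like (the tablespace name, mount point, and object names here are hypothetical; the SSD path must exist and be owned by the postgres OS user):

```sql
-- Create a tablespace on the SSD mount point (path is an assumption).
CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd/pgdata';

-- Move a heavily used table and one of its indexes onto it.
-- These object names are placeholders.
ALTER TABLE hot_table SET TABLESPACE fast_ssd;
ALTER INDEX hot_table_pkey SET TABLESPACE fast_ssd;

-- Find candidates: the largest relations in the current database.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind IN ('r', 'i')
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 20;
```

Note that ALTER ... SET TABLESPACE takes an exclusive lock and physically rewrites the relation, so it is best done in a maintenance window.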
On Tue, Feb 18, 2020 at 18:27 Laurenz Albe
wrote:
> On Mon, 2020-02-17 at 19:41 +0100, Pavel Stehule wrote:
> > I tested
> >
> > CREATE OR REPLACE FUNCTION public.fx(integer)
> > RETURNS void
> > LANGUAGE plpgsql
> > AS $function$
> > begin
> > for i in 1..$1 loop
> > begin
> insert into foo values(i);
Hi Merlin,
It's configured with a high value for max_connections, but active and idle sessions have
never crossed a count of 50.
DB size: 20 GB
Table size: 30 MB
RAM: 16 GB
vCores: 4
Yes, it's the view I posted earlier, and here is the query plan for the new actual
view:
"Append (cost=0.00..47979735.57 rows=3194327000 width
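The plan line above is truncated; when reposting, a full plan with buffer and timing counts is usually more useful. A generic way to produce one (the view name is a placeholder, not the original query):

```sql
EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
SELECT * FROM my_view;  -- substitute the actual view being investigated
```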
On Tue, Feb 18, 2020 at 12:40 PM Justin Pryzby wrote:
> This is almost certainly unrelated. It looks like that query did a seq scan
> and accessed a large number of tuples (and pages from "shared_buffers"), which
> the OS then shows as part of that process's memory, even though *shared*
> buffers
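To confirm that the memory in question is shared_buffers (shared across backends, not per-process), the pg_buffercache extension can show which relations currently occupy the cache; a sketch:

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Count 8 kB buffers per relation currently held in shared_buffers
-- for the current database.
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS in_cache
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```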
Please don't cross post to different lists.
On Tue, Feb 18, 2020 at 12:10 PM Nagaraj Raj wrote:
>
> Below are the same configurations in the .conf file before and after the upgrade
>
> show max_connections; = 1743
> show shared_buffers = "4057840kB"
> show effective_cache_size = "8115688kB"
> show maintenance_work_mem = "259MB"
> show checkpoint_completion_target = "0.9"
Below are the same configurations in the .conf file before and after the upgrade:
show max_connections; = 1743
show shared_buffers = "4057840kB"
show effective_cache_size = "8115688kB"
show maintenance_work_mem = "259MB"
show checkpoint_completion_target = "0.9"
show wal_buffers = "16MB"
show default_stat
On Tue, Feb 18, 2020 at 05:46:28PM +, Nagaraj Raj wrote:
After upgrading Postgres from v9.6.9 to v9.6.11, the DB is running into out-of-memory
issues; no workload has changed before and after the upgrade.
Spec: 16 GB RAM, 4 vCores
Any bug reported like this, or suggestions on how to fix this issue? I
appreciate it.
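max_connections = 1743 on a 16 GB machine is very high, and per-backend allocations add up quickly regardless of the minor-version upgrade. A back-of-the-envelope check (the work_mem value is an assumption, since the post does not show it; 4MB is the default):

```sql
-- Worst case if every allowed connection performed one work_mem-sized
-- sort at the same time, on top of ~4 GB of shared_buffers.
SELECT pg_size_pretty(1743 * pg_size_bytes('4MB')) AS worst_case_work_mem;
```

That alone is close to 7 GB; a connection pooler such as pgbouncer with a much lower max_connections is the usual remedy.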
On Mon, 2020-02-17 at 19:41 +0100, Pavel Stehule wrote:
> I tested
>
> CREATE OR REPLACE FUNCTION public.fx(integer)
> RETURNS void
> LANGUAGE plpgsql
> AS $function$
> begin
> for i in 1..$1 loop
> begin
> insert into foo values(i);
> exception when others then
> raise notice 'yyy';
> end;
> end loop;
> end;
> $function$
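Each begin ... exception block in PL/pgSQL starts a subtransaction, so a loop like the one above pays that cost on every iteration. For comparison, a hedged variant without the handler (same hypothetical foo table) avoids the per-iteration subtransaction entirely:

```sql
CREATE OR REPLACE FUNCTION public.fx_plain(integer)
RETURNS void
LANGUAGE plpgsql
AS $function$
begin
  for i in 1..$1 loop
    -- no exception block: no subtransaction per iteration,
    -- but any error aborts the whole call
    insert into foo values (i);
  end loop;
end;
$function$;
```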