Hello,
we used to monitor the replication slot size on postgres 9.6.6 with the
following query:
SELECT pg_xlog_location_diff(pg_current_xlog_location(), restart_lsn) FROM
pg_replication_slots WHERE slot_name = 'SLOT NAME';
We are moving to postgres 10.4 and we saw that pg_xlog_location_diff is not
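In 10 the pg_xlog_* functions were renamed to pg_wal_* (and "location" became
"lsn"), so a direct equivalent of the 9.6 query should be something like:
SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) FROM
pg_replication_slots WHERE slot_name = 'SLOT NAME';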
Hello,
we used to monitor waiting connections with postgres 9.6.6 via this query:
select count (*) from pg_stat_activity where wait_event IS NOT NULL
Now with postgres 10 it seems a bit harder to understand when a query from the
application is waiting.
I built this query but not sure I'm
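For context: on 10, pg_stat_activity also lists background processes, and
idle client backends normally show a wait event such as ClientRead, so a
non-NULL wait_event no longer means a query is blocked. A possible
replacement, assuming "waiting" should mean an active client backend waiting
on something other than the normal client/idle waits:
SELECT count(*) FROM pg_stat_activity
WHERE backend_type = 'client backend'
AND state = 'active'
AND wait_event IS NOT NULL
AND wait_event_type NOT IN ('Client', 'Activity');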
Hello,
we recently moved from postgres 9.6.6 to 10.4
We perform a pg_dump in production to restore daily in a preprod env.
This process used to work perfectly, but now we have a tiny problem.
We first restore the data, then we perform a vacuum, and then we restore the matviews.
When restoring the matviews we now get:
>
> > On 25 Jul 2018, at 9:43, Nicola Contu wrote:
> >
> > Hello,
> > we recently moved from postgres 9.6.6 to 10.4
> >
> > We perform a pg_dump in production to restore daily in a preprod env.
> > This process used to work perfectly, but now we have a ti
Thanks Laurenz
I will try that.
Btw, I'm just wondering why it works if I refresh it later, even though the
definition is still without public
2018-07-25 12:06 GMT+02:00 Laurenz Albe :
> Nicola Contu wrote:
> > we recently moved from postgres 9.6.6 to 10.4
> >
> > We perform a pg_
Yeah, we updated that function in production to say public.all_days.
I will let you know at the next restore.
Thanks guys, appreciated.
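For anyone hitting the same thing: recent pg_dump/pg_restore versions run the
restore with an empty search_path, so an unqualified reference inside a
function that a matview calls fails during the restore, while a later manual
REFRESH works because that session has the normal search_path. The fix is to
qualify the reference, or to pin the search_path on the function itself; a
sketch, with a made-up zero-argument signature since the real one isn't shown
here:
-- either reference public.all_days(...) explicitly in the definition, or:
ALTER FUNCTION public.all_days() SET search_path = public, pg_catalog;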
2018-07-25 16:28 GMT+02:00 Tom Lane :
> Laurenz Albe writes:
> > Nicola Contu wrote:
> >> pg_restore: [archiver (db)] could not ex
That worked.
Thanks guys.
2018-07-25 16:33 GMT+02:00 Nicola Contu :
> Yeah, we updated that function in production to say public.all_days.
> I will let you know at the next restore.
>
> Thanks guys, appreciated.
>
> 2018-07-25 16:28 GMT+02:00 Tom Lane :
>
>> Lauren
Hello,
I'm trying to get pgbouncer working but I'm getting this error:
2018-09-17 12:20:15.304 87772 NOISE safe_accept(12) = Resource temporarily
unavailable
2018-09-17 12:20:36.183 87772 NOISE safe_accept(11) = 13 (127.0.0.1:59100)
2018-09-17 12:20:36.183 87772 NOISE new fd from accept=13
201
t 2018 at 16:11 Adrian Klaver <adrian.kla...@aklaver.com> wrote:
> On 9/17/18 7:05 AM, Nicola Contu wrote:
> > The log is from pgbouncer.
> >
> > I installed updates after this problem, but they did not resolve
> anything.
>
> I was thinking more
\n\tTCP/IP connections on port 6543? in
/home/cmd3/adodb5/drivers/adodb-postgres64.inc.php on line 727
So basically the same as what I get when connecting via psql on port 6543.
On Mon, Sep 17, 2018 at 16:23 Adrian Klaver <adrian.kla...@aklaver.com> wrote:
> On 9/17/18 7:19 AM, Nicola C
.
I can connect via telnet on that port, but when I try with anything else
(like Apache) it makes pgbouncer crash.
On Mon, Sep 17, 2018 at 15:44 Adrian Klaver <adrian.kla...@aklaver.com> wrote:
> On 9/17/18 5:33 AM, Nicola Contu wrote:
> > Hello,
> >
) = Resource temporarily
unavailable
On Mon, Sep 17, 2018 at 17:46 Adrian Klaver <adrian.kla...@aklaver.com> wrote:
> On 9/17/18 8:39 AM, Nicola Contu wrote:
> Please reply to list also.
> Ccing list.
>
> > No wait, IP_ADDR has been modified by me, bu
wrote:
> On 9/17/18 8:59 AM, Nicola Contu wrote:
> > Yeah, I replaced the IP of the machine with the word IP_ADDR.
> >
> > On the pgbouncer log I get this while trying to connect via psql:
> >
> > 2018-09-17 15:56:49.452 67611 WARNING tune_socket(16) failed: Operat
Hello,
we are running pgbouncer 1.9.1 connected to postgres 10.5
Sometimes we are seeing a lot of waiting connections with this query:
DISCARD ALL
This is our pgbouncer config:
[databases]
dev = host=10.10.10.1 port=5432 dbname=dev auth_user=pgbouncer pool_size=120
[pgbouncer]
listen_port =
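As an aside, DISCARD ALL is what pgbouncer issues as its default
server_reset_query when a server connection is returned to the pool, so
seeing it in pg_stat_activity is not unusual by itself. One way to check what
those sessions are actually doing on the Postgres side (just a sketch):
SELECT pid, state, wait_event_type, wait_event, now() - state_change AS in_state_for
FROM pg_stat_activity
WHERE query = 'DISCARD ALL';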
Hello,
we are running Postgres 10.5 with master slave replication.
These are our custom params
archive_command = 'pgbackrest --stanza=cmdprod archive-push %p' # command
to use to archive a logfile segment
archive_mode = on # enables archiving; off, on, or always
checkpoint_completion_target = 0.7
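For a setup like this, the built-in pg_stat_archiver view is a quick way to
confirm that archive_command is keeping up (a generic sketch, not something
from the original thread):
SELECT archived_count, last_archived_wal, last_archived_time,
       failed_count, last_failed_wal
FROM pg_stat_archiver;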
--sort -rss -o comm,pmem | awk '
> NR == 1 { print; next }
> { a[$1] += $2 }
> END {
> for (i in a) {
> printf "%-15s\t%s\n", i, a[i];
> }
> }
> '
>
>
> On Tue, Oct 16, 2018 at 11:04 AM Nicola Contu
> wrote:
>
>>
Hello,
Is this normal? Can anyone help?
Thanks a lot for your help in advance.
Nicola
On Wed, Oct 10, 2018 at 17:03 Nicola Contu <nicola.co...@gmail.com> wrote:
> Hello,
> we are running pgbouncer 1.9.1 connected to postgres 10.5
>
> Sometimes we are seein
wapped. Beware: if the system is actively
> swapping then swapoff can take some time. But it seems not in your
> case.
> On Tue, 16 Oct 2018 at 10:48, Nicola Contu wrote:
> >
> > No, it is probably not used, because I can't find it in any way, as I said.
> >
> >
2492534808 | dev | 7355 | 1833427130 | pgbouncer | | 10.151.2.145 | | 60570 | 2018-10-16 14:13:05.151015+00 | | 2018-10-16 15:10:40.309993+00 | 2018-10-16 15:10:40.310038+00 | Client | ClientRead | idle |
Hello,
we noticed that, with a simple matview we have, refreshing it with the
CONCURRENTLY option always increases the space used by about 120MB.
This only happens if I am reading from that matview and at the same time I
am refreshing it.
cmdv3=# SELECT
pg_size_pretty(pg_relation_size('public.matview_
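A sketch of how the growth could be measured more completely (the matview
name below is just a placeholder; pg_relation_size counts only the main heap
fork, so including indexes and TOAST makes the 120MB jumps easier to see):
SELECT pg_size_pretty(pg_relation_size('public.my_matview'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('public.my_matview')) AS with_indexes_and_toast;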
P.S.: I am on postgres 11.3
On Fri, Jul 12, 2019 at 16:32 Nicola Contu <nicola.co...@gmail.com> wrote:
> Hello,
> we noticed that, with a simple matview we have, refreshing it with the
> CONCURRENTLY option always increases the space used by about 120MB.
> This on
It does not. That's the issue.
It always increases by 120MB and it has reached 12GB instead of just 180MB.
On Sun, Jul 14, 2019, 21:34 Kaixi Luo wrote:
>
> On Fri, Jul 12, 2019 at 4:34 PM Nicola Contu
> wrote:
>
>> P.S.: I am on postgres 11.3
>>
>> Il gi
On Sun, Jul 14, 2019 at 22:23 Tom Lane wrote:
> [ please do not top-post in your replies, it makes the conversation hard
> to follow ]
>
> Nicola Contu writes:
> > On Sun, Jul 14, 2019, 21:34 Kaixi Luo wrote:
> >> This is normal and som
Hello,
We are running postgres 11.5 and in the last two weeks we did:
- upgraded postgres to 11.5 from 11.4
- increased shared_buffers to 1/3 of the memory
- increased effective_cache_size = 160GB from 120
- increased checkpoint_completion_target = 0.9 from 0.7
- increased checkpoint_timeout = 1
-mem.
> Did you see any other errors in the logs? Have you upgraded your kernel in
> the last two weeks?
>
> Best,
> --
> *From:* Nicola Contu
> *Sent:* Wednesday, September 11, 2019 09:47
> *To:* pgsql-generallists.postgresql.org <
> pgsql-general@li
If the error persists I will try to revert the work_mem.
Thanks a lot
On Wed, Sep 11, 2019 at 10:10 Pavel Stehule <pavel.steh...@gmail.com> wrote:
> Hi
>
> On Wed, Sep 11, 2019 at 9:48, Nicola Contu
> wrote:
>
>> Hello,
>> We are running
On Thu, Sep 12, 2019 at 01:01 Thomas Munro <thomas.mu...@gmail.com> wrote:
> On Wed, Sep 11, 2019 at 11:20 PM Nicola Contu
> wrote:
> > If the error persists I will try to revert the work_mem.
> > Thanks a lot
>
> Hi Nicola,
>
> It's hard to say
We also reverted this param:
cmdv3=# show max_parallel_workers_per_gather;
 max_parallel_workers_per_gather
---------------------------------
 2
(1 row)
It was set to 8.
On Tue, Jan 21, 2020 at 16:06 Nicola Contu <nicola.co...@gmail.com> wrote:
> Hey Thomas,
>
Hello, may I ask you for feedback?
Thanks a lot
On Tue, Jan 21, 2020 at 17:14 Nicola Contu <nicola.co...@gmail.com> wrote:
> We also reverted this param :
>
> cmdv3=# show max_parallel_workers_per_gather;
> max_parallel_
omas Munro <thomas.mu...@gmail.com> wrote:
> On Wed, Jan 22, 2020 at 4:06 AM Nicola Contu
> wrote:
> > after a few months, we started having this issue again.
> > So we reverted the work_mem parameter to 600MB instead of 2GB.
> > But the issue is still there. A q
build a core dump file.
Do you still recommend increasing max_conn?
On Thu, Jan 30, 2020 at 23:41 Thomas Munro <thomas.mu...@gmail.com> wrote:
> On Wed, Jan 29, 2020 at 11:53 PM Thomas Munro
> wrote:
> > On Wed, Jan 29, 2020 at 10:37 PM Nicola Contu
> wrot
Hello,
I have two servers connected to the same switch, both running postgres 11.5.
I am trying to replicate one of those servers after planned work on the
master, as the replica has been lost. It has always worked, but now I get
this:
pg_basebackup: could not receive data from WAL stream: server clos
,user= DETAIL: Last
completed transaction was at log time 2020-03-11 09:47:14.069731+00.
2020-03-11 09:47:44 GMT [] [12598]: [4531-1] db=,user= LOG: restartpoint
starting: time
On Wed, Mar 11, 2020 at 01:53 Adrian Klaver <adrian.kla...@aklaver.com> wrote:
> On 3/10/2
Adrian Klaver wrote:
> On 3/11/20 2:54 AM, Nicola Contu wrote:
> > These are the lines before
> >
> > 2020-03-11 09:05:08 GMT [127.0.0.1(40214)] [43853]: [1-1]
> > db=cmdv3,user=zabbix_check ERROR: recovery is in progress
> > 2020-03-11 09:05:08 GMT [127.0.
CPU load on the server to be built? No.
System logs don't show anything relevant, unfortunately.
On Wed, Mar 11, 2020, 21:34 Adrian Klaver wrote:
> On 3/11/20 11:59 AM, Nicola Contu wrote:
> > I am actually cascading.
> > The master is in nyh, the first slave is in Dalla
kla...@aklaver.com> wrote:
> On 3/11/20 2:12 PM, Nicola Contu wrote:
> > CPU load on the server to be built? No.
>
> CPU load, I/O load on the servers in the replication chain.
>
> Basically you just recently, it seems, imposed extra overhead to the
> process by encr
The encryption is at the OS level, so the drives where the DB saves data are
encrypted with a password.
On Thu, Mar 12, 2020, 15:51 Adrian Klaver wrote:
> On 3/12/20 4:31 AM, Nicola Contu wrote:
> > The replicator is ok and the replicated as well.
> > %Cpu(s): 0.2 us, 1.0 sy,
:56 Nicola Contu <nicola.co...@gmail.com> wrote:
> The encryption is at the OS level, so the drives where the DB saves data are
> encrypted with a password.
>
> On Thu, Mar 12, 2020, 15:51 Adrian Klaver wrote:
>
>> On 3/12/20 4:31 AM, Nicola Contu wrote:
>
n 3/13/20 4:11 AM, Nicola Contu wrote:
> > So in the logs I now see this :
> >
> > 2020-03-13 11:03:42 GMT [10.150.20.22(45294)] [27804]: [1-1]
> > db=[unknown],user=replicator LOG: terminating walsender process due to
> > replication timeout
>
> Yeah that's
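The "terminating walsender process due to replication timeout" above usually
means wal_sender_timeout expired because the standby (or the base backup) did
not report progress in time. One hedged mitigation while the copy runs over
the slower encrypted disks is to relax the timeouts; the values are only
examples:
-- on the upstream server
ALTER SYSTEM SET wal_sender_timeout = '5min';
-- on the standby
ALTER SYSTEM SET wal_receiver_timeout = '5min';
SELECT pg_reload_conf();  -- both settings are reloadable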
Hello,
I am running postgres 11.5 and we were having issues with shared memory segments.
So I increased max_connections as suggested by you guys and reduced my
work_mem to 600MB.
Right now, instead, it is the second time I see this error:
ERROR: could not resize shared memory segment "/PostgreSQL.213
So that seems to be a bug, correct?
Just to confirm, I am not using NFS, it is directly on disk.
Other than that, is there a particular option we can set in
postgresql.conf to mitigate the issue?
Thanks a lot for your help.
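For reference, the dynamic shared memory segments used by parallel query
normally live in /dev/shm (with dynamic_shared_memory_type = posix), so this
error usually means that mount is too small or the parallel plans need more
memory than expected. A test sketch, not a confirmed fix for this case:
SHOW dynamic_shared_memory_type;
-- as an experiment, disable parallel query and see whether the error disappears
ALTER SYSTEM SET max_parallel_workers_per_gather = 0;
SELECT pg_reload_conf();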
On Sat, Apr 4, 2020 at 02:49 Thomas Munro wrote:
>
The only change we made on the disk is the encryption at the OS level.
Not sure whether this could be related.
On Tue, Apr 7, 2020 at 10:58 Nicola Contu wrote:
> So that seems to be a bug, correct?
> Just to confirm, I am not using NFS, it is directly on disk.
>
> Oth
Hello,
I am a bit confused about the settings in pgbouncer
What's exactly the pool_size?
If I set 3, and I tried to connect from 4 shells, I am still able to
connect.
Same thing for max_db_connections. I set this to 1 and I am able to connect
from 2 shells.
This is kind of confusing and I'm not r
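For what it's worth, pool_size and max_db_connections cap the server-side
connections pgbouncer opens to Postgres, not the number of clients allowed to
connect to pgbouncer (that is max_client_conn); extra clients simply queue
until a server connection is free. A way to watch this from the pgbouncer
admin console (a sketch):
-- e.g. psql -p 6543 -U pgbouncer pgbouncer, then:
SHOW POOLS;    -- cl_active / cl_waiting vs sv_active / sv_idle per database
SHOW CLIENTS;  -- each client connection and the pool it belongs to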
ery, but connected to the shell.
So that's what my doubt is.
On Thu, Jan 17, 2019 at 16:32 David G. Johnston <david.g.johns...@gmail.com> wrote:
> On Thu, Jan 17, 2019 at 8:18 AM Nicola Contu
> wrote:
> >
> > Hello,
> > I am a bit confused about th
Hello,
we are trying to move our postgres 10.5 instance to 11.2
We are trying to restore a DB in a pre-production env but we get an error
on the pg_restore command
[root@STAGING]# /usr/local/pgsql11.2/bin/pg_restore -v -U postgres -L
/data/cofinder_restore.lst -d cofinder -j 16
/home/ncontu/da
I guess it depends on this
https://www.postgresql.org/message-id/15466-0b90383ff69c6e4b%40postgresql.org
Thanks
On Fri, Mar 1, 2019 at 09:17 Nicola Contu wrote:
> Hello,
> we are trying to move our postgres 10.5 instance to 11.2
>
> We are trying to restore a
Hello,
is there any reason why I am getting worse write results using pgsql 11.2
compared with pgsql 10.6?
I have two instances, both just restored, so no bloat.
Running read queries I get pretty much the same results, a little bit better
on pg11. Running writes, the difference is in favour o
r 2, 2019 at 5:02 AM Ray O'Donnell wrote:
> > On 01/03/2019 15:01, Nicola Contu wrote:
> > > Hello,
> > > is there any reason why I am getting worse results using pgsql11.2 in
> > > writing comparing it with pgsql 10.6?
> > >
> > > I have two
e PG11.2 with PG10.7 *( similar
> bug Fixes and Improvements + same fsync() behavior )
>
> *"This release changes the behavior in how PostgreSQL interfaces with
> fsync() and includes fixes for partitioning and over 70 other bugs that
> were reported over the past three months"
Not sure exactly what you are requesting, but here is the strace for the
start of pg_ctl.
On Mon, Mar 4, 2019 at 21:55 Thomas Munro wrote:
> On Tue, Mar 5, 2019 at 5:01 AM Nicola Contu
> wrote:
> > Attached a part of the strace running the pgbench command for p
Here is the strace as requested for pg11
Thanks
On Tue, Mar 5, 2019 at 17:47 Thomas Munro wrote:
> On Wed, Mar 6, 2019 at 4:22 AM Nicola Contu
> wrote:
> >
> > Not sure what you are requesting exactly but here is the strace for the
> start of the pg_ctl
This instead is the strace from another server running the same compiled
version, but that one is even slower.
On Wed, Mar 6, 2019 at 11:14 Nicola Contu wrote:
> Here is the strace as requested for pg11
>
> Thanks
>
> On Tue, Mar 5, 2019 at 17:
st to
let you know, comparing the same insert script between the two servers,
the first server takes 2m and the second one takes 5-7m.
Thanks a lot,
On Thu, Mar 7, 2019 at 04:47 Thomas Munro wrote:
> On Thu, Mar 7, 2019 at 12:12 AM Nicola Contu
> wrote:
> > This is i
Hello,
do you have any advice on this?
Thanks a lot in advance
On Thu, Mar 7, 2019 at 09:39 Nicola Contu wrote:
> So the first file is on Postgres11.2 on a test server (and where I compare
> 10 vs 11)
> The second file, is our preprod machine running Postgres 11.2 (
Hello,
I just have a few questions on replication slots:
- is it possible to get the size of a slot?
- if the slave is down and the table grows, will the table be flushed after
pushing the WALs once the slave comes up again?
- will they impact performance on the master?
I'm just worried about the siz
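On the first point: a slot has no size of its own, but you can measure how
much WAL it is holding back; a sketch for version 10 and later (on 9.6 the
functions are pg_xlog_location_diff / pg_current_xlog_location, as in the
monitoring query earlier in this digest):
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;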
Hello,
we recently upgraded the OS from centos 6.9 to a new server with centos 7.
The centos 6.9 server has now become the preproduction server.
We are running postgres 9.6.6 on both servers.
They are both on SSD disks; these are the only differences:
- the DB partition on centos 7 is on a RAID 10
- fil
These are the timings in centos 7 :
Time: 4.248 ms
Time: 2.983 ms
Time: 3.027 ms
Time: 3.298 ms
Time: 4.420 ms
Time: 2.599 ms
Time: 2.555 ms
Time: 3.008 ms
Time: 6.220 ms
Time: 4.275 ms
Time: 2.841 ms
Time: 3.699 ms
Time: 3.387 ms
These are the timings in centos 6:
Time: 1.722 ms
Time: 1.670 ms
Time: 1.980 ms
Time: 2.240 ms
Time: 2.947 ms
Time: 2.828 ms
Time: 2.227 ms
Time: 1.998 ms
Time: 1.990 ms
Time: 2.643 ms
Time: 2.143 ms
Time: 2.919 ms
Time: 2.246 ms
I never got the same results as on the centos 6.9 machine.
2017-12-04 15:40 GMT+01:00 Nicola Contu :
> These are the timings in cen
as Vondra
> wrote:
> >
> > On 12/04/2017 02:19 PM, Nicola Contu wrote:
> > ...>
> >> centos 7 :
> >>
> >> dbname=# \timing Timing is on. cmdv3=# SELECT id FROM
> >> client_billing_account WHERE name = 'name'; id --- * (1
Hello,
We are running postgres 9.6.6 on centos 7.
We have a large DB (180GB) with about 1200 tables.
We have autovacuum set with default values and we are seeing that for some
tables the free percent goes really high (51%), and we need to run a daily
VACUUM FULL on those tables.
dbname=# SELECT * FROM pgs
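If the check above is pgstattuple (or a similar bloat estimate), a sketch of
inspecting one table and making autovacuum more aggressive for it; the table
name is a placeholder:
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT table_len, dead_tuple_percent, free_percent
FROM pgstattuple('public.some_big_table');
ALTER TABLE public.some_big_table
  SET (autovacuum_vacuum_scale_factor = 0.02, autovacuum_vacuum_cost_delay = 10);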
higher setting if the available RAM allows it, to give more RAM to the
> autovacuum daemon.
>
>
>
> 2017-12-13 9:49 GMT-06:00 Nicola Contu :
>
>> Hello,
>> We are running postgres 9.6.6 on centos 7.
>>
>> We have a large DB (180GB) with about 1200 tables.
>
d what we can suggest DEV from the query/db perspective to
> alleviate/fix the issue.
>
> Again, thanks so much.
>
>
>
> Thanks and kind regards
> Aste - alessandro.a...@gtt.net
>
> -Original Message-
> From: Stephen Frost [mailto:sfr...@snowman.net]
> Se