>> David
>
> Just wondering why there is a hard coded limit.
>
> While I agree it might be good to be able to specify the number of workers,
> surely it would be possible to derive a suitable default based on the number
> of effective processors available?
>
I had the same problem, and my conclusion was that going above 8 workers
brought no further gain, which is consistent with Amdahl's law on parallel
computing. More here:
https://en.wikipedia.org/wiki/Amdahl%27s_law
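In short, if a fraction p of the work is parallelizable over n cores, the
speedup is bounded by 1 / ((1 - p) + p/n). A worked example with illustrative
numbers: for p = 0.95 and n = 8 the speedup is 1 / (0.05 + 0.95/8) ≈ 5.9, and
even with infinitely many cores the ceiling is only 1 / 0.05 = 20.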
regards,
fabio pardi
Hi Rama,
On 02/10/2020 01:42, Rama Krishnan wrote:
> Hi Friends,
>
> By using pgbouncer can we split read and write queries
>
pgbouncer is just a connection pooler.
The logic of where to send the reads and where to send the writes should be in
your application.
regards,
fabio pardi
Hi,
use of a connection pooler usually helps in such cases. It will not reduce
latency, but it will mitigate the problem when the app can benefit from
recycling connections.
regards,
fabio pardi
On 09/09/2020 06:59, J . Mageshwaran wrote:
> Hi Team, I am performing some benchm
things here.
The first query returns all the columns matching "like 'instr%_ref'" present
in the statistics (so in the whole cluster), while the second counts the
actual number of distinct rows in bigtable.
regards,
fabio pardi
, you will probably avoid many of your problems
with disk performance.
Do not forget disks, RAID controllers, networking, SLA, SLO, HA, DR..
OT: I would use a newer Postgres than 9.6 if I were you, unless you have good
reasons to stay on 9.6.
regards,
fabio pardi
On 04/06/2020 11:36, Praveen
Hi,
if you have access to the OS, then you have plenty of options.
Else, I think pg_cron might do the job
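If pg_cron is available, a minimal sketch (schedule and command are only
examples) that runs a nightly VACUUM from inside the database:

  SELECT cron.schedule('0 3 * * *', 'VACUUM');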
regards,
fabio pardi
On 28/08/2019 08:58, Sonam Sharma wrote:
> Is there any option to run reindex or vaccum in background?
> Every time the session gets logged off in between.
e data produced by the background job
(and, maybe, checkpoints are happening more frequently?).
>
> Anyway, I'll test more and report back some more results.
good, let us know and do not forget to provide the log lines produced by the
checkpoints too.
regards,
fabio pardi
point to something.
Also the Postgres logs should be able to tell you more, e.g. when a checkpoint
starts, when it finishes, and how much it wrote.
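If you do not see those lines, make sure checkpoint logging is enabled in
postgresql.conf:

  log_checkpoints = on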
I hope I gave you enough inputs to better understand what is going on.
regards,
fabio pardi
can
reach a 3rd host, you might work around the problem using an SSH tunnel.
I discourage you from this latter option, anyway.
> If possible, I would prefer partial replication (only some tables) to
> full base replication (all instances).
you could have a look at pglogical
regards,
fabio pardi
Hi Daulat,
I believe that the error message is referring to the system user, not
the database one.
docker exec --help | grep -- -u
  -u, --user string    Username or UID (format: <name|uid>[:<group|gid>])
regards,
fabio pardi
On 5/21/19 2:42 PM, Daulat Ram wrote:
> Hello team,
>
>
Hi Karl Martin,
you could set log_min_duration_statement=0 at the global level (in the config
file) or at session level too.
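For example (same setting in both places):

  # postgresql.conf: log every completed statement with its duration
  log_min_duration_statement = 0

  -- or only for the current session:
  SET log_min_duration_statement = 0;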
regards,
fabio pardi
On 09/04/2019 11:26, Karl Martin Skoldebrand wrote:
> Hi,
>
>
>
> Is there a way to track “transactions” by default (i.e.
o pgbouncer? a total (globally,
including to pgbouncer db) of max_client_conn
That means 'cl_active' can go up to 10 in my example, but as soon as the 6th
client initiates a transaction, it cannot be served and has to wait for a
connection to be freed.
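A sketch of the pgbouncer.ini settings behind those numbers (assuming 10 and 5
were the values in the example):

  [pgbouncer]
  max_client_conn = 10      ; total client connections pgbouncer accepts
  default_pool_size = 5     ; server connections per user/database pair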
regards,
fabio pardi
On 1/17
rn. If you connect manually, you will notice an open connection (socket is
opened) but no database shell.
regards,
fabio pardi
>
>> Same thing for max_db_connections. I set this to 1 and I am able to connect
>> from 2 shells.
>
> Same as pool_size but basically a fail-s
Hi,
In my opinion, repmgr is worth a look.
https://repmgr.org/
regards,
fabio pardi
On 17/01/2019 14:32, AI Rumman wrote:
> Hi,
>
> I am planning to use Postgresql with TimescaleDb extension. I have to design
> a system similar to AWS RDS which supports automated fail
log).
I'm not sure other clients are able to read from the WAL buffer, therefore I'm
not sure the data is available to other clients at that specific point in time.
Maybe somebody else on the ML knows the details by heart?
regards,
fabio pardi
On 15/01/2019 12:15, Fabio Pardi wrote:
>
Hi,
all clients will get the latest version of the row (from RAM, that is). The
only thing is that in case of a server crash, not-yet-written-to-disk commits
will be lost.
A detailed explanation can be found here:
https://www.postgresql.org/docs/current/wal-async-commit.html
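The behaviour is governed by synchronous_commit, which you can change per
session or per transaction; a minimal sketch:

  SET synchronous_commit = off;  -- commits return before their WAL is flushed to disk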
regards,
fabio
available rows are retrieved. In practice a
node's parent node might stop short of reading all available rows (see the
LIMIT example below).
As you can read here:
https://www.postgresql.org/docs/current/using-explain.html
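You can observe this yourself (table and column names are hypothetical):

  EXPLAIN ANALYZE SELECT * FROM big_table LIMIT 10;
  -- the child node reports far fewer 'actual rows' than estimated,
  -- because the Limit node stops pulling rows after the 10th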
regards,
fabio pardi
when not in use.
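If you go for the cron job, a sketch of the query it could run (the interval
and filters are only examples):

  SELECT pg_terminate_backend(pid)
  FROM pg_stat_activity
  WHERE state = 'idle'
    AND state_change < now() - interval '10 minutes';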
regards,
fabio pardi
On 12/12/2018 10:37, Oygun Josef wrote:
>
> Hi,
>
>
>
> Is it possible to terminate idle connections/sessions automatically through a
> timeout in AWS or do I need to run a periodical cron job for this?
>
>
>
> Postgres v
@Nicola, while I'm glad you got further help.. I was asking myself if you maybe
missed my answer to your question? (mail from 10/10/2018 at 15:31)
https://www.postgresql.org/message-id/a22f8385-2a49-30a7-b1d2-fc743c2f3245%40portavita.eu
regards,
fabio pardi
On 16/10/2018 14:12, Nicola
? Did somebody perhaps forget to close the
connection to the db in your application, and they are only waiting to be
timed out?
About 'DISCARD ALL', that is the default 'server_reset_query'.
regards,
fabio pardi
On 10/10/18 17:03, Nicola Contu wrote:
> Hello,
> we a
Hi,
I think you are looking for:
https://www.postgres-xl.org/
regards,
fabio pardi
On 05/10/18 11:54, deepikags wrote:
> Hi,
>
> Does postgreSQL community edition supports data distribution across
> nodes(distributed architecture) or do we need to buy licence for the same ?
you perform a 'vacuum full' on your old
db, then the size should be very close or equal to the newly imported one.
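For a fair comparison, something along these lines (note that VACUUM FULL
takes an exclusive lock on each table while it rewrites it):

  VACUUM FULL;
  -- then compare the sizes again with \l+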
hope it clarifies.
regards,
fabio pardi
On 03/10/18 14:51, Sonam Sharma wrote:
> Hello Ben,
>
> When we do \l+ , it is different than source, when we load
=' -I/usr/local/lib/include'
4690c4703
< LDFLAGS='-g -Wl,--as-needed'
---
> LDFLAGS='-g -Wl,--as-needed -L/usr/local/lib/lib'
together with the socket problem mentioned earlier makes me think that
we are overlooking something or missing some important piece of
On 19/09/18 15:34, Adrian Klaver wrote:
> On 9/19/18 5:40 AM, ncontu wrote:
>> Hi Fabio,
>> It used to work before, even without installing updates.
>
> Exactly, the suspicion is that something in the updates changed the behavior
> of your system.
>
Yes, that's the suspicion. Or at least was..
x on Postgres and is there on Oracle?)
- If you are doing sorts and the data does not fit in work_mem, then you are
making use of disk space, slowing down operations
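You can check whether a sort spills to disk (table and column names are
hypothetical):

  EXPLAIN ANALYZE SELECT * FROM t ORDER BY some_col;
  -- 'Sort Method: external merge  Disk: ...' means work_mem was too small
  SET work_mem = '64MB';  -- raise it for this session, then re-check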
regards,
fabio pardi
On 19/09/18 05:29, Thomas Munro wrote:
> On Wed, Sep 19, 2018 at 1:35 PM jimmy wrote:
>> I use sel
grep
6543'. No line should return.
Are you running pgbouncer from systemctl or from the command line? Maybe it is
a long shot, but one more thing you can try is to remove the line
'user=postgres' and let it run as the 'pgbouncer' user from root/systemctl.
Is selinux enabled or disabled on
Hi Achilleas,
I'm glad you like the article.
Probably I will find the time to come back to the topic when I'm done
comparing MongoDB with PostgreSQL.
regards,
fabio pardi
On 07/09/18 11:18, Achilleas Mantzios wrote:
> Nice read! Wonder if you could repeat the tests on pgsql 1
You are welcome.
Regards,
fabio pardi
I have doubts about the security implications of that setup, though.
regards,
fabio pardi
On 31/07/18 13:27, Lu, Dan wrote:
>
> Hi David,
>
>
>
> Which command? Can you give me an example?
>
>
>
> Say, I want to put the config file on a share drive so al
potentially accumulating more WAL segments on the primary,
you might want to look into the
max_standby_archive_delay/max_standby_streaming_delay parameters to have more
control over query cancellation.
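A sketch for the standby's postgresql.conf (values are only illustrative):

  max_standby_streaming_delay = 300s  # wait up to 5 min before cancelling conflicting queries
  max_standby_archive_delay = 300s    # same, for WAL replayed from the archive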
regards,
fabio pardi
On 25/07/18 15:14, basti wrote:
> Thanks a lot for this tip. I don't know
d for live database for the
> tables which have more dead tuples (greater than)?
>
nope
> 1. Does it cause any *adverse *effect?
>
>
exclusively locks the table. A recipe for disaster.
What's wrong with the normal operation of autovacuum?
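If autovacuum does not keep up on a hot table, you can make it more aggressive
for that table instead (table name and values are hypothetical):

  ALTER TABLE mytable SET (autovacuum_vacuum_scale_factor = 0.05,
                           autovacuum_vacuum_threshold = 1000);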
regards,
fabio pardi
> Pleas
you have your points, my friend.
On 07/17/2018 11:23 AM, Jehan-Guillaume (ioguix) de Rorthais wrote:
> On Tue, 17 Jul 2018 11:03:08 +0200
> Fabio Pardi wrote:
>
>> On 07/17/2018 10:21 AM, Jehan-Guillaume (ioguix) de Rorthais wrote:
>>> On Tue, 17 Jul 2018 10:11:50 +02
's purpose, please note
that check_pgactivity is **only** checking for btree indexes (which are
the default ones, and the ones proven to get bloated quickly).
If I were you (both), I would monitor **all** indexes (and yes! tables
too), since one day you might realize it was actually a good idea to do so.
regards,
fabio pardi
If you monitor using nagios (or if you want to make a wrapper around it):
https://bucardo.org/check_postgres/check_postgres.pl.html#bloat
works like a charm.
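One way to invoke it, per the linked page (thresholds are only examples):

  check_postgres.pl --action=bloat --dbname=mydb --warning='100 M' --critical='200 M'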
regards,
fabio pardi
On 07/16/2018 05:16 PM, Alessandro Aste wrote:
> Hello, I am trying to put togheter a query to monitor the in
anything else to make the situation clearer (like errors in the logfile, or
anything else you think is relevant)
regards,
fabio pardi
On 02/07/18 16:51, Marco Fochesato wrote:
> Dear all,
> I have a table with 21 columns.
> Primary key is done with 20 of these colums.
>
> I hav
you can have a look at this page, which I find a good
starting point for proper tuning:
https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
regards,
fabio pardi
On 26/06/18 12:13, Magnus Hagander wrote:
>
>
> On Tue, Jun 26, 2018 at 11:40 AM, Ivar F
think is relevant in the logs
* tell us more about your installation (from package or compiled, which version
of Postgres and OS, whether you did anything special after installation)
* post the permissions of the folder 'base' and those of base/16395/19739338
* tell us under which user is post
Hi Shore,
Have a look at:
https://www.postgresql.org/docs/current/static/functions-admin.html
'pg_terminate_backend' is probably what you are looking for
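For example, find the pid in pg_stat_activity, then:

  SELECT pg_terminate_backend(12345);  -- 12345 being the pid of the backend to kill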
regards,
fabio pardi
On 21/06/18 11:32, a wrote:
> Hi
>
> I'm using pgAdmin 4, pgAgent and postgresql 10 on win
Hi Louis,
I think 'alter user' can do the job for you.
https://www.postgresql.org/docs/current/static/sql-alteruser.html
Else, as an alternative: before running pg_restore, you could edit the dump and
replace the string 'ROLE postgres' with the correct user on the RDS instance
psql $pglog_db -c "SELECT pglogical.create_node(node_name :=
'provider.$pglog_db', dsn := 'host=dbfabio port=5432 dbname=$pglog_db');"
psql $pglog_db -c "SELECT pglogical.replication_set_add_all_tables('default',
ARRAY['public']);"
Subscriber:
Same as test 1
Streaming replication setup looks like:
Master:
wal_level = hot_standby
Standby:
hot_standby = on
+ recovery.conf
any help is appreciated.
regards,
fabio pardi
ng purpose.
But the real point here is why are you running the command every ~3 seconds?
(and perhaps, why 200 databases?)
I would focus on that issue, and keep using your query.
regards,
fabio pardi
On 15/06/18 02:29, Sam Saffron wrote:
> Hi there,
>
> At the moment we ar
Hi Atul,
Please do not cross-post over mailing lists.
As per your problem:
on a streaming replication setup, all changes applied to the master are
propagated to the standby(s).
If the standby is stopped or temporarily cannot reach the master, then it will
pick up the changes when started or when it can reach the master again
sure it is best
practice.
Else, we use monitoring for that, but in most cases the timestamps will
not be that accurate.
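One low-tech option, assuming the default log messages are not filtered out:
grep the server log for the final shutdown line, e.g.:

  grep 'database system is shut down' /path/to/postgresql.log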
regards,
fabio pardi
On 06/04/2018 03:17 PM, pavan95 wrote:
> Hi all,
>
> Is there any way to get postmaster(postgres database) shutdown time? Please
> help!!
&g
my notes are wrong, or provide pointers to the
documentation.
regards,
fabio pardi
On 31/05/18 16:20, Fabio Pardi wrote:
> As far as I know, the OS cache is shared, and shared_buffers too.
>
> Back to the matter of contention, your statement i think might be true only
> in cases when y
make room for new blocks fetched from disk, while the concurrent query is
trying to do the same. Then disk contention might happen.
regards,
fabio pardi
On 31/05/18 16:09, Melvin Davidson wrote:
>
>
> On Thu, May 31, 2018 at 10:04 AM, Ron <ronljohnso...@gmail.com>
...
regards,
fabio pardi
On 31/05/18 14:26, Ashu Pachauri wrote:
> Thanks Fabio for the reply.
> The queries are blocked in the sense that I can see them in pg_stat_activity.
>
> Please find the query and its output for correlating the blocked activity
> with bloc
hey are waiting in pgbouncer pool?
regards,
fabio pardi
On 31/05/18 12:38, Ashu Pachauri wrote:
> We have been using Postgres 9.5.12 behind PGBouncer and facing some weird
> issues. Whenever we run long-running DDL statements (e.g. 'add index
> concurrently' or 'Alte
oh! I was late!
well..you have one extra link now! :)
On 03/05/18 11:00, Fabio Pardi wrote:
> Hi Craig,
>
>
> are those good?
>
>
> http://mirror1.net.one.com/postgres/pool/main/p/postgresql-9.6/postgresql-9.6-dbg_9.6.5-1.pgdg80+1_amd64.deb
>
> http://ata
Hi Craig,
are those good?
http://mirror1.net.one.com/postgres/pool/main/p/postgresql-9.6/postgresql-9.6-dbg_9.6.5-1.pgdg80+1_amd64.deb
http://atalia.postgresql.org/morgue/p/postgresql-9.6/postgresql-9.6-dbg_9.6.5-1.pgdg80%2b1_amd64.deb
regards,
fabio pardi
On 03/05/18 10:23, Craig
are happy with your upgrade, you can clean up the leftovers by running:
delete_old_cluster.sh
regards,
fabio pardi
On 04/20/2018 11:24 AM, Akshay Ballarpure wrote:
> Hi Fabio,
> *Thanks so much for figuring out an issue..!!! much appreciated.*
> i have stopped both postgres version (8.
"help" for help.
I think you might be incurring in the same problem.
Can you confirm it?
regards,
fabio pardi
On 04/19/2018 09:37 AM, Akshay Ballarpure wrote:
> Hi Fabio,
> Yes i ran initdb on new database and able to start as below.
>
> [root@ms-esmon root]# su -
psql -p 50432 -h /var/run/postgresql -U your_user _db_
regards,
fabio pardi
On 04/18/2018 02:02 PM, Akshay Ballarpure wrote:
> Hi Fabio,
> sorry to bother you again, its still failing with stopping both server
> (8.4 and 9.4)
>
> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin
upgrade because the old and new clusters will not
be running at the same time. However, when checking an old running
server, the old and new port numbers must be different.'
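A sketch of the call with distinct ports (paths are placeholders):

  pg_upgrade -b /old/bin -B /new/bin -d /old/data -D /new/data -p 50432 -P 50433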
Hope it helps,
Fabio Pardi
On 04/18/2018 10:34 AM, Akshay Ballarpure wrote:
> Thanks Fabio for instant reply.
>
Hi,
please avoid crossposting to multiple mailing lists.
You need to run both versions of the database, the old and the new.
They need to run on different ports (note that it is impossible to run 2
different processes on the same port; that is not a PostgreSQL thing)
On 04/18/2018 09:30 AM, A
Hi Vaibhav,
I think this might come of help:
https://www.postgresql.org/docs/current/static/tutorial-select.html
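In its simplest form (mytable is a hypothetical table name):

  SELECT * FROM mytable;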
On 04/18/2018 09:22 AM, vaibhav zaveri wrote:
> Hi,
>
> How to fetch data from tables in PostgreSQL.
>
> Regards,
> Vaibhav Zaveri