Hello, when maintenance_work_mem and autovacuum_work_mem are set, my
understanding is that vacuum and autovacuum sessions should be limited
to the memory specified by these parameters. But I am seeing autovacuum
sessions use more memory than these limits; any reason why this woul
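One thing worth checking (a sketch; this is the usual explanation rather than a diagnosis): the memory a monitoring tool attributes to an autovacuum worker typically includes the shared_buffers pages it has touched, which can dwarf these limits — the two parameters only cap the worker's private dead-tuple work area.

```sql
-- Inspect the limits actually in effect:
SHOW maintenance_work_mem;
SHOW autovacuum_work_mem;  -- -1 means fall back to maintenance_work_mem
```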
There is a table in the db whose idx_scan count from pg_stat_all_tables
seems to be too high; there are not that many queries being executed
against this table. Wondering how this count can be so high.
1. The db has been up for 80 days, so I assume these are cumulative stats
si
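If the counter looks implausible, it can help to confirm when the stats were last reset, and then watch the counter grow from zero; a sketch (the table name is a placeholder). Note that idx_scan can also be incremented by internal index lookups, e.g. unique-constraint and foreign-key checks, not just by user queries.

```sql
-- When were this database's cumulative stats last reset?
SELECT stats_reset FROM pg_stat_database WHERE datname = current_database();

-- Reset counters for one table to observe idx_scan from zero (superuser):
SELECT pg_stat_reset_single_table_counters('my_table'::regclass);
```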
For an RDS Aurora PostgreSQL v11.7 database I want to set up DDL,
truncate, and delete auditing.
I am setting log_statement to 'ddl' to log all DDL (this excludes truncate
statements). I also want to set up auditing only for deletes and truncates
on tables. I don't want to set log_statement to 'all'.
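The usual tool for this on RDS/Aurora is the pgaudit extension; a parameter-group sketch (the rds_pgaudit role name follows the RDS convention — treat the details as assumptions to verify against the pgaudit docs):

```
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'ddl'            # session auditing: DDL incl. cases log_statement='ddl' misses
pgaudit.role = 'rds_pgaudit'   # object auditing driven by this role's grants
```

Then `GRANT DELETE ON some_table TO rds_pgaudit;` logs only DELETEs on the granted tables, without setting log_statement to 'all'. One caveat to verify: TRUNCATE is not covered by pgaudit's object auditing, so it may still need a session-level log class.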
It's a simple one-line sequential scan plan, just reading the base table
sequentially.
On Mon, Oct 26, 2020, 9:21 AM Philip Semanchuk
wrote:
>
>
> > On Oct 25, 2020, at 10:52 PM, Ayub M wrote:
> >
> > Thank you both.
> >
> > As for the mview refr
Thank you both.
As for the mview refresh taking long --
• The mview sometimes refreshes in a couple of minutes and sometimes
takes hours. When it runs longer, there are no locks and no resource
shortage; the number of records in the base table is 6m (7.5 GB), which is
not huge, so why does it
There is a table t which is used in a mview mv, this is the only table in
the mview definition.
create table t (c1 int, ..., c10 int);  -- there is a pk on, say, the c1 column
create materialized view mv as select c1, c2, ..., c10 from t;
-- there is a unique index on, say, c5 and a bunch of other indexes on the mv
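Given that unique index on the mview, the refresh can be done without blocking concurrent readers; a sketch:

```sql
-- Requires at least one unique index on the materialized view:
REFRESH MATERIALIZED VIEW CONCURRENTLY mv;
```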
Hello, while running aggregate queries on a partitioned table there is an
improvement when enable_partitionwise_join is turned on. By default this
parameter is disabled, and the documentation
(https://www.postgresql.org/docs/11/runtime-config-query.html) says:

> Because partitionwise group
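For aggregate queries specifically there is a companion setting; a session-level sketch (both default to off because partitionwise planning consumes extra CPU and memory at plan time):

```sql
SET enable_partitionwise_join = on;
SET enable_partitionwise_aggregate = on;
```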
Created a materialized view in parallel by setting
max_parallel_workers_per_gather to 4. I could see parallel sessions kicking
off and creating the mview much faster than serial execution (10 mins
parallel vs over an hour serial).
Now I want the refresh of the mview to take around the same t
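A session-level sketch of settings that make a parallel plan more likely for the refresh (whether REFRESH itself runs in parallel can depend on the server version, so verify with EXPLAIN on the mview's defining query; mv is a placeholder name):

```sql
SET max_parallel_workers_per_gather = 4;
SET parallel_setup_cost = 0;   -- bias the planner toward parallelism; tune with care
REFRESH MATERIALIZED VIEW mv;
```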
On Wed, Aug 12, 2020 at 8:06 PM Tom Lane wrote:
> Ayub M writes:
> > This is PostgreSQL 11 on AWS, there is a mview query in this OLAP
> database,
> > the tables involved are huge - 50-100m records on average records
> hundreds
> > of columns in most cases. The quer
This is PostgreSQL 11 on AWS; there is a mview query in this OLAP database,
the tables involved are huge - 50-100m records on average, with hundreds
of columns in most cases. The query runs for a while and then errors out
saying "No space left on device". I could see it generating around 500gb of
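Two settings that help diagnose and contain this kind of temp-file blowup (a postgresql.conf / parameter-group sketch; the 100GB cap is an arbitrary example, not a recommendation):

```
log_temp_files = 0        # log every temp file created, with its size
temp_file_limit = 100GB   # per-session cap: the query fails cleanly instead of filling the device
```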
I have an RDS PostgreSQL v11.6 with primary and standby. On the primary,
vacuum/analyze were running, and pg_stat_all_tables's
last_vacuum/last_analyze/last_autovacuum/last_autoanalyze columns had
values from when vacuum/analyze were run.
1. A switchover to the standby happened; now when I look at
pg_stat_all_tables (on the act
I would like to get suggestions and feedback on AWS SCT/DMS vs ora2pg for
an Oracle (on-prem) to PostgreSQL (AWS RDS) migration project.
One big difference I see between them is that DMS supports
incremental loads from Oracle to Postgres (I don't think ora2pg supports
that, but I could be wr
minates this buffered IO, hence it is not required that you
> stick to the 25%-of-RAM restriction.
>
> 2. work_mem is just a setting; if you sort something, your session uses
> that memory, otherwise it just stays with the OS.
>
> On Mon, 10 Feb, 2020, 1:34 PM Ayub M, wrote:
>
>>
1. shared_buffers - In a regular PostgreSQL installation, say I am
allocating 25% of my memory to shared_buffers; that means it leaves 75% for
the rest, such as the OS, page cache, work_mem, etc. Is my understanding
correct? If so, AWS Aurora for Postgres uses 75% of memory for shared_buffers,
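As a concrete arithmetic sketch (the numbers are illustrative only): on a self-managed 64 GB instance the conventional split would be

```
shared_buffers = 16GB   # ~25% of 64 GB
# the remaining ~48 GB covers the OS, its page cache, and per-session work_mem
# Aurora's storage layer bypasses the OS page cache, which is why it can
# afford ~75% of RAM for shared_buffers without double-buffering
```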
Does the pg_stat_statements.total_time include the time it takes for all
fetches of a cursor query. Or is it only the db time taken to execute the
query?
--
Regards,
Ayub
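As far as I understand it (worth verifying against the pg_stat_statements docs for your version): with pg_stat_statements.track_utility on, DECLARE and each FETCH are tracked as separate entries, so a cursor's fetch time is not folded into the underlying SELECT's total_time. A quick way to look:

```sql
SELECT query, calls, total_time, rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```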
Yes I did set that; here is what the pgbouncer binary looks like ---
-rwsrwsr-x. 1 root root 2087504 Sep 13 00:45 pgbouncer
On Fri, Sep 13, 2019 at 6:50 AM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:
> On 13/9/19 10:19 a.m., Ayub M wrote:
>
> Stumbled in the f
var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 5432
auth_type = pam
Am I missing something? Any permissions?
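For reference, a minimal pgbouncer.ini combining the quoted settings (the [databases] host is a placeholder; PAM auth also requires pgbouncer to be built with PAM support and to run with root privileges, which the setuid bit shown above suggests):

```
[databases]
* = host=mydb.example.rds.amazonaws.com port=5432

[pgbouncer]
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 5432
auth_type = pam
```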
On Thu, Sep 12, 2019 at 4:54 AM Ayub M wrote:
> Okay, thanks for the response. Unfortunately Aurora does not expose these
> files or I should say there is no concept
wrote:
> On 11/9/19 2:47 p.m., Ayub M wrote:
>
> Achilleas, for this setup to work are changes to postgresql.conf and
> pg_hba.conf needed? I am trying to implement this for AWS rds Aurora where
> these files are not accessible.
>
> Those files are needed in any case
Achilleas, for this setup to work are changes to postgresql.conf and
pg_hba.conf needed? I am trying to implement this for AWS rds Aurora where
these files are not accessible.
On Mon, Sep 9, 2019, 6:46 AM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:
> On 9/9/19 12:41 p.m., Laurenz Al
Thank you Achilleas and Laurenz for the points, will try this out.
On Mon, Sep 9, 2019, 6:46 AM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:
> On 9/9/19 12:41 p.m., Laurenz Albe wrote:
> > Christoph Moench-Tegeder wrote:
> >>> It has hba and via hba file one can specify ldap connecti
It has hba, and via the hba file one can specify LDAP connections:
https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html
On Mon, Sep 9, 2019, 3:40 AM Christoph Moench-Tegeder
wrote:
> ## Ayub M (hia...@gmail.com):
>
> > Hello, I am trying to setup pgbouncer with ldap to conn
Hello, I am trying to set up pgbouncer with LDAP to connect to RDS
Postgres. Downloaded the latest version of pgbouncer (1.11) and using hba
auth_type. Getting the below error when starting the pgbouncer daemon. Am I
missing something?
[centos@ip-x.x.x.x etc]$ pgbouncer -V
PgBouncer version 1.11.0
[centos@i
Hello, I am using Postgres 10.6 and have a huge table with ~30m records
which needs to be partitioned. What's the best strategy for it with minimal
downtime?
1. Rename the existing table and create a new partitioned table with the
original name, and keep inserting data from the old renamed table into the
new one. This will i
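A sketch of option 1 using declarative (PG10) range partitioning; the table and column names are placeholders, and in practice the backfill would be done in batches:

```sql
ALTER TABLE big_table RENAME TO big_table_old;
CREATE TABLE big_table (LIKE big_table_old)
  PARTITION BY RANGE (created_at);          -- assumed partition key
CREATE TABLE big_table_2019 PARTITION OF big_table
  FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
INSERT INTO big_table SELECT * FROM big_table_old;
```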