Hello,
I'd like to ask for your opinion and any experience you might have with
partitioned tables on a production OLTP database.
We have a couple of tables reaching 1T and beyond and would like to have them
partitioned.
We built a full-volume pg14 database.
To get an impression of the new performance we're execu
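For reference, a minimal sketch of the kind of declarative range partitioning we
are evaluating (table and column names are made up, not our real schema):

CREATE TABLE transactions (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    amount     numeric
) PARTITION BY RANGE (created_at);

-- one partition per month; further partitions get attached the same way
CREATE TABLE transactions_2022_02
    PARTITION OF transactions
    FOR VALUES FROM ('2022-02-01') TO ('2022-03-01');

-- a partitioned index cascades to every partition (pg11+)
CREATE INDEX ON transactions (created_at, id);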
Hello all,
We're stuck on a wrong plan that the planner insists on (pg 14.2).
It's an aggregation over a single table.
The planner always goes for an extra scan over a partial index.
We have set the statistics target on the relevant columns to the maximum and
still could not get correct row estimates. None of t
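For context, this is roughly what we have tried so far, plus the extended-statistics
variant we are considering next (table and column names are made up, the real schema
is not shown here):

-- per-column statistics target; 10000 is the hard maximum
ALTER TABLE events ALTER COLUMN status SET STATISTICS 10000;
ANALYZE events;

-- extended statistics, in case the misestimate comes from correlated columns
CREATE STATISTICS events_status_category (ndistinct, dependencies) ON status, category FROM events;
ANALYZE events;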
Just an update.
If the target is upgraded to pg13 then the initial copy goes fine.
Unfortunately it is a 40T db that is being sunset, so we'd rather not upgrade.
On Fri, Feb 11, 2022 at 4:50 PM Radoslav Nedyalkov
wrote:
> Hello All,
> It is a bit specific logical replication setup
Hello All,
It is a somewhat specific logical replication setup where we try to replicate
a partitioned table (pg14.1) to a non-partitioned one (pg11.14).
After setting everything up, the subscriber fails on the initial copy with
ERROR: table "public.tab01" not found on publisher
If the subscription i
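One thing that seems relevant here is the pg13+ publication option
publish_via_partition_root, which makes changes stream under the root table's name
rather than under the leaf partitions. A rough sketch of how the publication would
look with it (the publication name is made up):

CREATE PUBLICATION pub_tab01
    FOR TABLE public.tab01
    WITH (publish_via_partition_root = true);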
Hello all,
We have a busy database whose OID counter is increasing by 50-100M per month.
There is no oid-typed column among the user attributes anywhere, nor any user
tables WITH OIDS.
Not sure where this generation comes from. If you can give me a hint, it would
be great.
Also, what happens when pg_class.oid reaches 4B?
pg_cl
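For what it's worth, the only rough checks I know of (large objects, and as far as I
understand TOASTed values too, draw from the same cluster-wide OID counter):

-- every large object takes one OID
SELECT count(*) FROM pg_largeobject_metadata;

-- a crude way to watch the counter move: create a throwaway object and read its OID
CREATE TEMP TABLE oid_probe ();
SELECT 'oid_probe'::regclass::oid AS current_oid_position;
DROP TABLE oid_probe;

(My current understanding is that the counter simply wraps and new OIDs only have to
be unique within the catalog or TOAST table they land in, but I'd like that confirmed.)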
We have wal_log_hints = off.
So I thought hint bits are never WAL-logged.
Or am I missing something - are they WAL-logged upon first read?
Thanks again
Rado
On Wed, May 19, 2021 at 3:56 PM Laurenz Albe
wrote:
> On Wed, 2021-05-19 at 14:16 +0300, Radoslav Nedyalkov wrote:
> > Hello Forum,
Hello Forum,
I know it sounds weird, but we have faced it a couple of times already and
have no idea what causes it.
We're trying to establish logical replication from RDS(10.9) to RDS(12.5).
We're adding tables to the publication one by one.
During the initial copy for some tables we experience huge WAL generation.
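The per-table procedure is essentially the following (publication and subscription
names are made up here):

-- on the publisher (RDS 10.9)
ALTER PUBLICATION pub_rds ADD TABLE public.some_big_table;

-- on the subscriber (RDS 12.5): pick up the new table and start its initial copy
ALTER SUBSCRIPTION sub_rds REFRESH PUBLICATION;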
Hi all,
So we have a master (pg11.8) and standbyA (pg11.11), 24T of data, quite a busy
data warehouse on Amazon Linux.
We've built a standbyB (pg11.11) by taking a basebackup from standbyA and
collecting WALs from the master.
When basebackup completed we started recovery from collected wals.
When standbyB
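For anyone following along, the plain recovery-progress check we would run on
standbyB is just:

SELECT pg_is_in_recovery(),
       pg_last_wal_receive_lsn(),
       pg_last_wal_replay_lsn(),
       pg_last_xact_replay_timestamp();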
On Tue, Mar 9, 2021 at 8:13 PM Tom Lane wrote:
> Radoslav Nedyalkov writes:
> > On Tue, Mar 9, 2021 at 6:53 PM Tom Lane wrote:
> >> The only thing I'm aware of that could consume unbounded memory
> >> on the server side is hash aggregation. (v13 has impro
On Wed, Mar 10, 2021 at 3:56 AM Martín Fernández
wrote:
> Hello,
>
> I’m troubleshooting a problem at my company with a pg 12 cluster that we
> run.
>
> We are using Amazon DMS to replicate data from our database into S3
> buckets. DMS replicates data by using logical replication slots.
>
> Afte
On Tue, Mar 9, 2021 at 6:53 PM Tom Lane wrote:
> Radoslav Nedyalkov writes:
> > On Tue, Mar 9, 2021 at 6:03 PM Tom Lane wrote:
> >> Perhaps the accumulation is happening on the client side? libpq doesn't
> >> have any provision for spilling a result set to d
On Tue, Mar 9, 2021 at 6:03 PM Tom Lane wrote:
> Radoslav Nedyalkov writes:
> > Occasionally we get bad queries on our db that consume a lot of memory.
> > These typically are full joins by mistake or just too large result sets.
> > My understanding is these should
Hi all,
Occasionally we get bad queries on our db that consume a lot of memory.
These are typically accidental full joins or just too-large result sets.
My understanding is that these should spill to a temp file, but apparently memory
allocation is preferred.
Last time a statement hit 150GB RAM and did no
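For what it's worth, the per-statement knobs we know of look like this (values are
only illustrative), though work_mem notably does not bound hash aggregation before
v13:

SET work_mem = '64MB';            -- per sort/hash node, not per statement
SET temp_file_limit = '50GB';     -- caps how much a session may spill to disk (superuser-only setting)
SET statement_timeout = '30min';  -- eventually kills runaway statements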
On Sun, Nov 15, 2020 at 12:48 AM Mohamed Wael Khobalatte <
mkhobala...@grubhub.com> wrote:
>
>
> On Sat, Nov 14, 2020 at 2:46 PM Radoslav Nedyalkov
> wrote:
>
>>
>>
>> On Fri, Nov 13, 2020 at 8:13 PM Radoslav Nedyalkov
>> wrote:
>>
>
On Fri, Nov 13, 2020 at 8:13 PM Radoslav Nedyalkov
wrote:
>
>
> On Fri, Nov 13, 2020 at 7:37 PM Laurenz Albe
> wrote:
>
>> On Fri, 2020-11-13 at 15:24 +0200, Radoslav Nedyalkov wrote:
>> > On a very busy master-standby setup which runs typical olap processing -
>
On Fri, Nov 13, 2020 at 7:37 PM Laurenz Albe
wrote:
> On Fri, 2020-11-13 at 15:24 +0200, Radoslav Nedyalkov wrote:
> > On a very busy master-standby setup which runs typical olap processing -
> > long-living, massive write statements, we're getting on the standby:
>
Hi Forum, all
On a very busy master-standby setup which runs typical olap processing -
long-living, massive write statements - we're getting on the standby:
ERROR: canceling statement due to conflict with recovery
FATAL: terminating connection due to conflict with recovery
The weird thing is
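For reference, the usual standby-side knobs, with values that are only illustrative
(each has a trade-off noted in the comment):

ALTER SYSTEM SET max_standby_streaming_delay = '30min';  -- let standby queries run longer before being cancelled
ALTER SYSTEM SET hot_standby_feedback = on;              -- standby reports its xmin upstream (can cause bloat on the master)
SELECT pg_reload_conf();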
>
> On Jul 18, 2020, at 14:18, Radoslav Nedyalkov
> wrote:
>
>
> Well. the vacuum full failed with
>
> vacuumdb: vacuuming of table "olap.transactions_and_fees_2020_01" in
> database "db" failed: ERROR: found xmin 3916900817 from before
> relf
Well, the vacuum full failed with
vacuumdb: vacuuming of table "olap.transactions_and_fees_2020_01" in
database "db" failed: ERROR: found xmin 3916900817 from before
relfrozenxid 80319533
On Sun, Jul 19, 2020 at 12:01 AM Radoslav Nedyalkov
wrote:
> Hi Forum,
> We
Hi Forum,
We see
pg_dump: Dumping the contents of table "transactions_and_fees_2020_01"
failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: could not access status of
transaction 3916900817
DETAIL: Could not open file "pg_xact/0E97": No such file or directory.
(pg11.8 on cent
chael,
Best
Rado
On Wed, Jul 15, 2020 at 1:06 AM Tom Lane wrote:
> Radoslav Nedyalkov writes:
> > Ah, I could have messed up the examples I gave. Row numbers are
> different.
> > Once again the plans , sorry about that.
>
> Given that it works at 100 entries and not 101, I
me: 3.430 ms
Execution Time: 15.954 ms
(4 rows)
Time: 20.257 ms
On Tue, Jul 14, 2020 at 10:31 PM Radoslav Nedyalkov
wrote:
> Hi Michael,
> full output from the query is attached.
> here is the truncated lists version.
>
> frac_mcv | 0.00306267
Hi Forum,
I'm scratching my head around the following case:
*te* is an 80M-row, 100GB table. It is a bare simple select over an indexed
attribute of it.
EXPLAIN SELECT te.id FROM te WHERE te.current_pid IN (240900026,
240900027,
240900028,
-- 200 entries ...
Gather (cost=1000.00..61517367.85 r
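For comparison, two equivalent ways to write the same predicate (value list
shortened here) whose plans could be checked against the one above:

-- = ANY over an array
EXPLAIN
SELECT te.id FROM te
WHERE te.current_pid = ANY (ARRAY[240900026, 240900027, 240900028]);

-- join against a VALUES list
EXPLAIN
SELECT te.id
FROM te
JOIN (VALUES (240900026), (240900027), (240900028)) AS v(pid)
  ON te.current_pid = v.pid;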
One more detail: pg_replication_slots.catalog_xmin shows the biggest age
in the system.
How could a transaction on the catalogs affect a logical slot? Any clue
here?
Thank you,
Rado
On Wed, Jul 1, 2020 at 2:09 AM Radoslav Nedyalkov
wrote:
> Hello All,
> we are in a situation where resta
Hello All,
we are in a situation where restart_lsn for logical replication slots is
not advancing.
We have two slots. Both are active, confirmed_flush_lsn also is updated.
Client side all looks okay; remote_lsn for the subscriptions is advancing.
The effect of restart_lsn being 'stopped' is the disk i
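This is roughly the view we keep watching (pg10+ column and function names):

SELECT slot_name, slot_type, active,
       restart_lsn, confirmed_flush_lsn,
       xmin, catalog_xmin,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;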
Sat, Apr 25, 2020 at 5:03 AM Peter Geoghegan wrote:
> On Fri, Apr 24, 2020 at 7:32 AM Radoslav Nedyalkov
> wrote:
> > We 're rebuilding a big table which has set parallel_workers = 6
> > system has
> > max_parallel_maintenance_workers | 6 |
> /var/lib/pgs
Hello everybody,
We're rebuilding a big table which has parallel_workers = 6 set.
The system has:
max_parallel_maintenance_workers | 6 |
/var/lib/pgsql/11/data/postgresql.sumup.conf
max_parallel_workers | 16 |
/var/lib/pgsql/11/data/postgresql.sumup.conf
max_parallel_workers_p
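For completeness, the pieces that gate a parallel index build on pg11 look roughly
like this (values are illustrative, table and column names are made up):

ALTER TABLE big_table SET (parallel_workers = 6);  -- per-table cap, already set in our case
SET max_parallel_maintenance_workers = 6;
SET maintenance_work_mem = '8GB';                  -- split across the leader and worker processes
CREATE INDEX big_table_col_idx ON big_table (col);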
On Tue, Mar 19, 2019 at 10:37 PM Adrian Klaver
wrote:
> On 3/19/19 9:35 AM, Radoslav Nedyalkov wrote:
> > Hi All,
> > We're testing upgrade from postgres 10.6 to postgres 11.2 with
> pg_upgrade.
> > Before stopping pg10 we disabled subscription.
> > Aft
Hi All,
We're testing upgrade from postgres 10.6 to postgres 11.2 with pg_upgrade.
Before stopping pg10 we disabled the subscription.
After the upgrade and launching pg11, the existing logical replication
subscription is still there and disabled.
stest=# \dRs+
List of s
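For reference, the statements involved in bringing a carried-over subscription back
look roughly like this (subscription name and connection string are made up; whether
copy_data should really be false here is exactly what we want to confirm):

ALTER SUBSCRIPTION sub1 CONNECTION 'host=publisher dbname=stest';
ALTER SUBSCRIPTION sub1 ENABLE;
-- pick up publication changes without repeating the initial copy
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION WITH (copy_data = false);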
On Thu, Jan 31, 2019 at 4:09 PM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:
> On 31/1/19 3:55 p.m., Radoslav Nedyalkov wrote:
> > Hi All,
> > We want to setup a logical replica with initial load from physical
> standby. (all pg10.5)
> > Basically w
we're not that confident. Does the procedure look okay?
What if the standby is paused at a COMMIT lsn?
Thanks!
Radoslav Nedyalkov
Vacuum won't be able to remove rows younger than session2's
backend_xmin.
Well, we planned to use pg_advisory_lock() as a boot phase in a hot-standby
appserver, and apparently this will be problematic, as session2 might wait for
weeks.
Any thoughts? Are we missing something?
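A variant we are considering so that session2 does not sit in a multi-week wait
(the lock key and the retry policy are made up):

SELECT pg_try_advisory_lock(42) AS got_lock;  -- returns immediately instead of blocking
-- the appserver polls / backs off while got_lock is false ...
SELECT pg_advisory_unlock(42);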
Thanks and Regard