Hi, I got this message on a standby after a failover of a PG14 cluster with 3 nodes.
user=,db=,client=,application= LOG: new timeline 20 forked off current
database system timeline 19 before current recovery point CC8/164E9350
This message comes from the rescanLatestTimeLine function in xlog.c:
if (currentTle-
Thanks for your explanation and for the links
On Tue, Mar 19, 2024 at 11:17 AM Aleksander Alekseev <
aleksan...@timescale.com> wrote:
> Hi Fabrice,
>
> > I do not understand why hot_updates value is not 0 for pg_database?
> Given that reloptions is empty for this table that means it has a defaul
Hi,
I do not understand why the hot_updates value is not 0 for pg_database. Given
that reloptions is empty for this table, that means it has a default value
of 100%.
Regards,
Fabrice
SELECT
relname AS table_name,
seq_scan AS sequential_scans,
idx_scan AS index_scans,
n_tup_ins AS ins
Hello,
postgres [1264904]=# select 123456789.123456789123456::double precision;
┌────────────────────┐
│       float8       │
├────────────────────┤
│ 123456789.12345679 │
└────────────────────┘
(1 row)
I do not understand why this number is truncated at 123456789.12345679, which
is 17 digits, and n
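For what it's worth, this is standard IEEE 754 double behaviour rather than anything PostgreSQL-specific. A quick sketch outside the database (Python here, purely for illustration) shows the same shortest round-trip output:

```python
# float8 is an IEEE 754 double: 53 mantissa bits, ~15.95 decimal digits.
# 17 significant digits always suffice to round-trip any double, so modern
# printers (psql with shortest-precise output, Python's repr) emit the
# shortest string that maps back to the exact same bits.
x = 123456789.123456789123456   # literal has more digits than a double stores
s = repr(x)
print(s)                        # -> 123456789.12345679
assert float(s) == x            # the short printout loses no information
```

So nothing is "truncated" in storage; the value simply never had more than ~16 significant decimal digits of precision to begin with, and the printer stops as soon as the string uniquely identifies the stored bits.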
Hi,
When a table is reloaded with pg_restore, it is recreated without indexes or
constraints. They are automatically skipped. Is there a reason for this?
pg_restore -j 8 -v -d zof /shared/pgdump/aq/backup/dbtest/shtest --no-owner
--role=test -t mytable 2>&1 | tee -a dbest.log
pg_restore: skipping
Hi,
The --clean option of pg_restore allows you to drop an object before it is
re-imported. However, dependencies such as foreign keys or views can prevent
the object from being dropped. Is there a way to add a CASCADE option to
force the deletion?
Thanks for helping
Fabrice
Ok, thanks for all these details.
Regards
Fabrice
On Tue, Dec 19, 2023 at 2:00 PM Matthias van de Meent <
boekewurm+postg...@gmail.com> wrote:
> On Tue, 19 Dec 2023, 12:27 Fabrice Chapuis,
> wrote:
> >
> > Hi,
> > Is it possible to visualize the DDL with the
Hi,
Is it possible to visualize DDL with the pg_waldump tool? I created a
Postgres user but I cannot find the creation command in the WALs.
Thanks for help
Fabrice
Regards
Fabrice
On Sun, Oct 8, 2023 at 3:57 PM Christoph Moench-Tegeder
wrote:
> ## Fabrice Chapuis (fabrice636...@gmail.com):
>
> > From a conceptual point of view I think that specific wals per
> subscription
> > should be used and stored in the pg_replslot folder
ing directly on the wals of the instance.
What do you think about this proposal?
Regards
Fabrice
On Mon, Oct 2, 2023 at 12:06 PM Christoph Moench-Tegeder
wrote:
> Hi,
>
> ## Fabrice Chapuis (fabrice636...@gmail.com):
>
> > on the other hand there are 2 slots for logical repl
│ f │
Regards
Fabrice
On Thu, Sep 28, 2023 at 7:59 PM Christoph Moench-Tegeder
wrote:
> ## Fabrice Chapuis (fabrice636...@gmail.com):
>
> > We have a cluster of 2 members (1 primary and 1 standby) with Postgres
> > version 14.9 and 2 barman server, slots are onl
Hello,
I have a question about the automatic removal of unused WAL files. When
loading data with pg_restore (200 Gb) we noticed that a lot of WAL files
are generated, and they are not purged automatically nor recycled despite
frequent checkpoints; the pg_wal folder (150 Gb) then fills and becomes out of
s
Where in the code is the isolation mechanism implemented when DROP TABLE
is executed inside a transaction?
Thanks for your help
Fabrice
w.f...@fujitsu.com> wrote:
> On Tue, Oct 18, 2022 at 22:35 PM Fabrice Chapuis
> wrote:
> > Hello Amit,
> >
> > In version 14.4 the timeout problem for logical replication happens
> again despite
> > the patch provided for this issue in this version. When bulky
&
2 at 22:35 PM Fabrice Chapuis
> wrote:
> > Hello Amit,
> >
> > In version 14.4 the timeout problem for logical replication happens
> again despite
> > the patch provided for this issue in this version. When bulky
> materialized views
> > are reloaded it bro
Hello Amit,
In version 14.4 the timeout problem for logical replication happens again,
despite the patch provided for this issue in this version. When bulky
materialized views are reloaded, logical replication breaks. It is
possible to solve this problem by using your new "streaming" option.
Have
Thanks for your patch, it works well in my test lab.
I added the declaration *extern int wal_sender_timeout;* in the
*output_plugin.h* file so that compilation works.
I tested the patch for version 10, which is currently in production on our
systems.
The functions below are only in master branch:
pgoutput
Thanks for your new fix Wang.
TimestampTz ping_time = TimestampTzPlusMilliseconds(sendTime,
wal_sender_timeout / 2);
shouldn't we use receiver_timeout in place of wal_sender_timeout, because the
problem comes from the consumer?
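For context, the arithmetic in that line is simple enough to sketch outside the C code. Here is a minimal Python illustration (the names loosely mirror the C snippet and are not PostgreSQL's API) of scheduling a ping at half of whichever timeout applies:

```python
from datetime import datetime, timedelta

# Sketch of the scheduling rule above: a keepalive ping is due once half of
# the configured timeout has elapsed since the last send. Whether the
# sender-side or receiver-side timeout should feed this is exactly the
# question being asked.
def ping_due_at(send_time, timeout_ms):
    return send_time + timedelta(milliseconds=timeout_ms / 2)

last_send = datetime(2022, 1, 26, 12, 0, 0)
print(ping_due_at(last_send, 60_000))   # 60 s timeout -> due 30 s later
```

The point of halving is to leave the ping a full round trip's worth of slack before the peer's timeout would fire.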
On Wed, Jan 26, 2022 at 4:37 AM wangw.f...@fujitsu.com <
wangw.f...@f
frequently.
*/
...
Regards
Fabrice
On Fri, Jan 21, 2022 at 2:17 PM Fabrice Chapuis
wrote:
> Thanks for your patch, it also works well when executing our use case, the
> timeout no longer appears in the logs. Is it necessary now to refine this
> patch and make as few changes as possible
Thanks for your patch, it also works well when executing our use case, the
timeout no longer appears in the logs. Is it necessary now to refine this
patch and make as few changes as possible in order for it to be released?
On Fri, Jan 21, 2022 at 10:51 AM wangw.f...@fujitsu.com <
wangw.f...@fujits
Hello Amit,
If it is not too much work for you, could you please send me a piece of code
with the change needed to do the test?
Thanks
Regards,
Fabrice
On Fri, Jan 14, 2022 at 1:03 PM Amit Kapila wrote:
> On Fri, Jan 14, 2022 at 3:47 PM Fabrice Chapuis
> wrote:
> >
> > I
If it is not too much work for you, could you please send me a piece of code
with the change needed to do the test?
Thanks
Regards,
Fabrice
On Fri, Jan 14, 2022 at 1:03 PM Amit Kapila wrote:
> On Fri, Jan 14, 2022 at 3:47 PM Fabrice Chapuis
> wrote:
> >
> > If I can follow you,
LOG: 0: worker process: logical
replication worker for subscription 26994 (PID 82232) exited with exit code
1
2022-01-13 11:20:46.421 CET [82224] LOCATION: LogChildExit,
postmaster.c:3625
Thanks a lot for your help.
Fabrice
On Thu, Jan 13, 2022 at 2:59 PM Amit Kapila wrote:
> On Thu, Jan 13
minute timeout.
On Wed, Jan 12, 2022 at 11:54 AM Amit Kapila
wrote:
> On Tue, Jan 11, 2022 at 8:13 PM Fabrice Chapuis
> wrote:
>
>> Can you explain why you think this will help in solving your current
>> problem?
>>
>> Indeed your are right this function won&
stop
with a timeout.
I can set a breakpoint to check whether a timeout is sent to the worker
process. Do you have any other clue?
Thank you for your help
Fabrice
On Fri, Jan 7, 2022 at 11:26 AM Amit Kapila wrote:
> On Wed, Dec 29, 2021 at 5:02 PM Fabrice Chapuis
> wrote:
>
>> I p
loop* in worker.c help to find a
solution?
rc = WaitLatchOrSocket(MyLatch,
                       WL_SOCKET_READABLE | WL_LATCH_SET |
                       WL_TIMEOUT | WL_POSTMASTER_DEATH,
                       fd, wait_time,
                       WAIT_EVENT_LOGICAL_APPLY_MAIN);
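To make the control flow concrete, here is a rough userland analogue of that wait, with Python's selectors module standing in for WaitLatchOrSocket (the function and return values are illustrative, not PostgreSQL's): the call blocks until the socket becomes readable or wait_time expires, and the caller then inspects which of the two happened.

```python
import selectors
import socket

# Rough analogue of the WaitLatchOrSocket call above: block until the file
# descriptor is readable or the timeout expires, then report which occurred.
def wait_socket_or_timeout(sock, wait_time_ms):
    sel = selectors.DefaultSelector()
    sel.register(sock, selectors.EVENT_READ)
    try:
        events = sel.select(timeout=wait_time_ms / 1000.0)
    finally:
        sel.close()
    return "WL_SOCKET_READABLE" if events else "WL_TIMEOUT"

a, b = socket.socketpair()
print(wait_socket_or_timeout(a, 100))   # nothing pending -> WL_TIMEOUT
b.send(b"k")                            # make the socket readable
print(wait_socket_or_timeout(a, 100))   # data pending -> WL_SOCKET_READABLE
```

In the real apply loop the returned event mask also carries latch and postmaster-death bits, which is why the worker can distinguish "nothing arrived before wait_time" from "data is ready".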
Thanks for your help
Fabrice
On Thu, Dec 23, 2021 at 11:52 AM Amit Kapila
wrote:
> On Wed, Dec 22, 2021 at 8:50 PM Fabri
7;1', publication_names
'"pub008_s00"')
-rw-------. 1 postgres postgres 16270723 Dec 22 16:02
xid-14312-lsn-23-9900.snap
-rw-------. 1 postgres postgres 16145717 Dec 22 16:02
xid-14312-lsn-23-9A00.snap
-rw-------. 1 postgres postgres 10889437 Dec 22 16:02
xid-14312-
ri, Nov 12, 2021 at 7:23 AM Amit Kapila wrote:
> On Thu, Nov 11, 2021 at 11:15 PM Fabrice Chapuis
> wrote:
> >
> > Hello,
> > Our lab is ready now. Amit, I compiled Postgres 10.18 with your
> patch. Tang, I used your script to configure logical replication between 2
&
ion worker exit when physical replication is configured?
Thanks for your help
Fabrice
On Fri, Oct 8, 2021 at 9:33 AM Fabrice Chapuis
wrote:
> Thanks Tang for your script.
> Our debugging environment will be ready soon. I will test your script and
> we will try to reproduce the problem
英
wrote:
> On Friday, September 24, 2021 12:04 AM, Fabrice Chapuis <
> fabrice636...@gmail.com> wrote:
>
> >
>
> > Thanks for your patch, we are going to set up a lab in order to debug
> the function.
>
>
>
> Hi
>
>
>
> I tried to reproduc
Thanks for your patch, we are going to set up a lab in order to debug the
function.
Regards
Fabrice
On Thu, Sep 23, 2021 at 3:50 PM Amit Kapila wrote:
> On Wed, Sep 22, 2021 at 9:46 PM Fabrice Chapuis
> wrote:
> >
> > If you would like I can test the patch you send to me.
&g
If you would like, I can test the patch you send me.
Regards
Fabrice
On Wed, Sep 22, 2021 at 11:02 AM Amit Kapila
wrote:
> On Tue, Sep 21, 2021 at 9:12 PM Fabrice Chapuis
> wrote:
> >
> > > IIUC, these are called after processing each WAL record so not
> > sure
heses values appropriate from your point of view?
Best Regards
Fabrice
On Tue, Sep 21, 2021 at 11:52 AM Amit Kapila
wrote:
> On Tue, Sep 21, 2021 at 1:52 PM Fabrice Chapuis
> wrote:
> >
> > If I understand, the instruction to send keep alive by the wal sender
> has not
stgres: aq: bgworker:
logical replication worker for subscription 24651602
postgres 55681 12546 0 Sep20 ?00:00:00 postgres: aq: wal sender
process repuser 127.0.0.1(57930) idle
Kind Regards
Fabrice
On Tue, Sep 21, 2021 at 8:38 AM Amit Kapila wrote:
> On Mon, Sep 20, 2021 at 9:
2021 at 4:10 PM Fabrice Chapuis
> wrote:
> >
> > Hi Amit,
> >
> > We can replay the problem: we load a table of several Gb in the schema
> of the publisher, this generates the worker's timeout after one minute from
> the end of this load. The table on which this
9, 2021 at 6:25 AM Amit Kapila wrote:
> On Fri, Sep 17, 2021 at 8:08 PM Fabrice Chapuis
> wrote:
> >
> > the publisher and the subscriber run on the same postgres instance.
> >
>
> Okay, but there is no log corresponding to operations being performed
> by the publ
the publisher and the subscriber run on the same postgres instance.
Regards,
Fabrice
On Fri, Sep 17, 2021 at 12:26 PM Amit Kapila
wrote:
> On Fri, Sep 17, 2021 at 3:29 PM Fabrice Chapuis
> wrote:
> >
> > Hi,
> >
> > Logical replication is configured on one instan
Hi,
Logical replication is configured on one instance in version 10.18. Timeout
errors occur regularly and the worker process exits with exit code 1:
2021-09-16 12:06:50 CEST [24881]: [1-1] user=postgres,db=foo,client=[local]
LOG: duration: 1281408.171 ms statement: COPY schem.tab (col1, col2)