Hi Kuroda-san,
> I think even if LRG is implemented as contrib modules or any extensions,
> it will deeply depend on the subscription option "origin" proposed in [1].
> So LRG cannot be used for older version, only PG16 or later.
Sorry, I misunderstood.
I understand now.
Regards,
Ryohei Takahashi
Hi,
In addition to the use cases mentioned above, some users want to use n-way
replication of a partial database.
The following is a typical use case.
* There are several data centers.
(ex. Japan and India)
* The database in each data center has its unique data.
(ex. the database in Japan h
Hi Fujii-san,
Thank you for updating the patch.
I have no additional comments.
Regards,
Ryohei Takahashi
Hi,
Thank you for updating the patch.
I agree with the documentation and the code changes.
How about adding a test for %c (Session ID)?
(Adding a test for %C (cluster_name) seems difficult.)
Regards,
Ryohei Takahashi
Hi,
Thank you for developing this feature.
I think adding an escape sequence for cluster_name is useful too.
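For illustration, here is a rough sketch (not the actual patch) of how a %C escape for cluster_name could be emitted, mirroring the pattern the existing log_line_prefix escapes such as %a (application_name) follow in src/backend/utils/error/elog.c. The helper name append_cluster_name and the extern declaration are placeholders of mine, not part of the proposal:

```c
/*
 * Hypothetical sketch: append cluster_name for a new %C escape, following
 * the same padding handling elog.c already uses for %a.  "padding" is the
 * field-width value the log_line_prefix() loop computes for each escape.
 */
#include "postgres.h"

#include "lib/stringinfo.h"

extern char *cluster_name;		/* the existing GUC variable */

static void
append_cluster_name(StringInfo buf, int padding)
{
	if (cluster_name != NULL && cluster_name[0] != '\0')
	{
		if (padding != 0)
			appendStringInfo(buf, "%*s", padding, cluster_name);
		else
			appendStringInfoString(buf, cluster_name);
	}
	else if (padding != 0)
	{
		/* keep column alignment even when cluster_name is not set */
		appendStringInfoSpaces(buf, padding > 0 ? padding : -padding);
	}
}
```

In elog.c this would presumably sit behind a new case in the switch over the format string, e.g. case 'C': append_cluster_name(buf, padding); break;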
> Is the reason for 'C' in upper-case to avoid possible conflict with
> 'c' of log_line_prefix? I'm not sure that preventive measure is worth
> doing. Looking the escape-sequence spec alon
Hi Nagata-san,
> Ok. I'll fix _copyIntoClause() and _equalIntoClause() as well as
> _readIntoClause()
> and _outIntoClause().
OK.
> > ivm=# create table t (c1 int, c2 int);
> > CREATE TABLE
> > ivm=# create incremental materialized view ivm_t as select distinct c1 from
> > t;
> > NOTICE: cre
Hi Nagata-san,
Sorry for the late reply.
> However, even if we create triggers recursively on the parents or children,
> we would still
> need more consideration. This is because we will have to convert the format
> of tuple of
> modified table to the format of the table specified in the view for
Hi Nagata-san,
I'm still reading the patch.
I have additional comments.
(1)
In v23-0001-Add-a-syntax-to-create-Incrementally-Maintainabl.patch, the ivm
member is added to the IntoClause struct.
I think it is also necessary to modify the _copyIntoClause() and
_equalIntoClause() functions (a sketch follows below).
(2)
By executing pg
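To illustrate comment (1): assuming the patch adds a bool ivm field to IntoClause, the node support functions would need matching one-line additions. This is only a sketch of the expected shape, with the handling of the existing fields elided:

```c
/*
 * Hypothetical sketch for comment (1): copy and compare the new "ivm"
 * field of IntoClause.  The existing COPY_*/COMPARE_* calls of
 * copyfuncs.c and equalfuncs.c are elided here.
 */

/* in src/backend/nodes/copyfuncs.c */
static IntoClause *
_copyIntoClause(const IntoClause *from)
{
	IntoClause *newnode = makeNode(IntoClause);

	/* ... COPY_* calls for the existing fields ... */
	COPY_SCALAR_FIELD(ivm);		/* new field from the patch */

	return newnode;
}

/* in src/backend/nodes/equalfuncs.c */
static bool
_equalIntoClause(const IntoClause *a, const IntoClause *b)
{
	/* ... COMPARE_* calls for the existing fields ... */
	COMPARE_SCALAR_FIELD(ivm);	/* new field from the patch */

	return true;
}
```

(The thread also discusses updating _outIntoClause() and _readIntoClause() in the same way.)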
Hi Nagata-san,
Thank you for your reply.
> I'll investigate this more, but we may have to prohibit views on partitioned
> table and partitions.
I think this restriction is too strict.
This feature is useful when the base table is large, and partitioning is also
useful in such a case.
I have several
Hi Nagata-san,
I am interested in this patch since it is a good feature.
I ran some simple tests and found the following problems.
(1)
Failed to "make world".
I think there are extra "" in
doc/src/sgml/ref/create_materialized_view.sgml
(line 110 and 117)
(2)
In the case of partition, it seems
Hi,
I noticed that anti-virus software slows down open().
I stopped the anti-virus software and re-ran the test.
(Average of 10 runs)
master: 1924 tps
Hold_xlogreader.patch: 1993 tps (+3.5%)
Read_from_walbuffer.patch: 1954 tps (+1.5%)
Therefore, the effect of my patch is limited.
I'm sorry for
I noticed that the previous Read_from_walbuffer.patch has a mistake in
xlogreader.c.
Could you please use the attached v2 patch?
The performance results in the previous mail were measured with the v2 patch.
Regards,
Ryohei Takahashi
Hi Sawada-san,
Thank you for your reply.
> BTW did you test on the local? That is, the foreign servers are
> located on the same machine?
Yes, I tested locally, since I cannot prepare a good network environment right now.
> I guess it would be better to start a new thread for this improvement.
Thank you
Hi,
I noticed in the discussion [1] that the COMMIT PREPARED command is slow.
First, I made the following simple script for pgbench.
``` prepare.pgbench
\set id random(1, 100)
BEGIN;
UPDATE test_table SET md5 = md5(clock_timestamp()::text) WHERE id = :id;
PREPARE TRANSACTION 'prep_:client_id'
Hi,
> Wouldn't it be better to explicitly initialize the pointer with NULL?
Thank you for your advice.
You are correct.
Anyway, I fixed it and re-ran the performance test; of course, it does not
affect the tps.
Regards,
Ryohei Takahashi
Hi Sawada-san,
Thank you for your reply.
> Not sure but it might be possible to keep holding an xlogreader for
> reading PREPARE WAL records even after the transaction commit. But I
> wonder how much open() for wal segment file accounts for the total
> execution time of 2PC. 2PC requires 2 netwo
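To make the quoted idea concrete, here is a minimal, hypothetical sketch (not the actual Hold_xlogreader.patch) of keeping one xlogreader allocated and reusing it for PREPARE record lookups, instead of allocating and freeing a reader on every COMMIT PREPARED. It assumes the PG13-era xlogreader API used by twophase.c; the function and variable names are made up:

```c
/*
 * Hypothetical sketch: reuse a single xlogreader across 2PC commits
 * instead of XLogReaderAllocate()/XLogReaderFree() per COMMIT PREPARED.
 */
#include "postgres.h"

#include "access/xlog.h"		/* wal_segment_size */
#include "access/xlogreader.h"
#include "access/xlogutils.h"	/* read_local_xlog_page, wal_segment_open/close */

static XLogReaderState *cached_2pc_reader = NULL;

static XLogRecord *
read_prepare_record(XLogRecPtr lsn)
{
	XLogRecord *record;
	char	   *errormsg;

	if (cached_2pc_reader == NULL)
	{
		cached_2pc_reader =
			XLogReaderAllocate(wal_segment_size, NULL,
							   XL_ROUTINE(.page_read = &read_local_xlog_page,
										  .segment_open = &wal_segment_open,
										  .segment_close = &wal_segment_close),
							   NULL);
		if (cached_2pc_reader == NULL)
			ereport(ERROR,
					(errcode(ERRCODE_OUT_OF_MEMORY),
					 errmsg("out of memory")));
	}

	XLogBeginRead(cached_2pc_reader, lsn);
	record = XLogReadRecord(cached_2pc_reader, &errormsg);
	if (record == NULL)
		ereport(ERROR,
				(errmsg("could not read two-phase state from WAL at %X/%X",
						(uint32) (lsn >> 32), (uint32) lsn)));

	return record;
}
```

Reusing the reader would let the already-open WAL segment file be kept between calls, which is what would avoid the repeated open() discussed in this thread.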
Hi,
I'm interested in this patch, and I also ran the same test with Ikeda-san's
fxact_update.pgbench.
In my environment (a low-spec VM), the results are as follows.
* foreign_twophase_commit = disabled
363 tps
* foreign_twophase_commit = required (It is necessary to set -R ${RATE} as
Ikeda-san said
Hi Michael-san,
> Attached is a patch to do that, which should go down to v12 where
> tcp_user_timeout has been introduced. Takahashi-san, what do you
> think?
Thank you for creating the patch.
This patch is what I expected.
I'm not sure whether this patch should be applied to postgres below 1
Hi,
pg_basebackup -F t fails when fsync takes longer than tcp_user_timeout in the
following environment.
[Environment]
Postgres 13dev (master branch)
Red Hat Enterprise Linux 7.4
[Error]
$ pg_basebackup -F t --progress --verbose -h -D
pg_basebackup: initiating base backup, waiting for chec