git.postgresql.org is down/unreachable
( git://git.postgresql.org/git/postgresql.git )
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
In the 'ftp' listing, v10 appears at the bottom:
https://www.postgresql.org/ftp/source/
With all the other v10* directories at the top, we could get a lot of
people installing wrong binaries...
Maybe it can be fixed so that it appears at the top.
Thanks,
Erik Rijkers
comments improvements
--- src/backend/optimizer/prep/prepunion.c.orig 2017-09-24 17:40:34.888790877 +0200
+++ src/backend/optimizer/prep/prepunion.c 2017-09-24 17:41:39.796748743 +0200
@@ -2413,7 +2413,7 @@
* Find AppendRelInfo structures for all relations specified by relids.
*
* The Appen
app.
For me, such a list would be even more useful than any subsequently
processed results.
thanks,
Erik Rijkers
-formatting.
Erik Rijkers
mitfest-app. I would think it would be best to make it so that when
the thread gets set to state 'committed', the actual commit/hash is
added somewhere at the same time.
thanks,
Erik Rijkers
The feature matrix table in high-availability.sgml had a column added, so the
column count should also be increased (patch attached).
thanks,
Erik Rijkers
--- doc/src/sgml/high-availability.sgml.orig 2017-08-17 15:04:32.535819637 +0200
+++ doc/src/sgml/high-availability.sgml 2017-08-17 15:04:46.528122345
On 2017-08-01 20:43, Robert Haas wrote:
In commit 054637d2e08cda6a096f48cc99696136a06f4ef5, I updated the
parallel query documentation to reflect recently-committed parallel
Barring objections, I'd like to commit this in the next couple of days
I think that in this bit:
occurrence is fr
On 2017-07-27 21:08, Mark Rofail wrote:
On Thu, Jul 27, 2017 at 7:15 PM, Erik Rijkers wrote:
It would help (me at least) if you could be more explicit about what
exactly each instance is.
I apologize, I thought it was clear through the context.
Thanks a lot. It's just really eas
ething you posted earlier?
I guess it could be distilled from the earlier posts but when I looked
those over yesterday evening I still didn't get it.
A link to the post where the 'original patch' is would be ideal...
thanks!
Erik Rijkers
With two tables a PK table with 5 rows and
On 2017-07-24 23:31, Mark Rofail wrote:
On Mon, Jul 24, 2017 at 11:25 PM, Erik Rijkers wrote:
This patch doesn't apply to HEAD at the moment ( e2c8100e6072936 ).
My bad, I should have mentioned that the patch is dependent on the
original
patch.
Here is a *unified* patch that I
ers a 41.68% slow down, while the
new
patch with the index has a 95.18% speed up!
[elemOperatorV4.patch]
This patch doesn't apply to HEAD at the moment ( e2c8100e6072936 ).
Can you have a look?
thanks,
Erik Rijkers
patching file doc/src/sgml/ref/create_table.sgml
Hunk #1 succeeded a
content-tree becomes that much
easier again.
(By the way (unrelated), I also noticed only today that the new process
now wraps many of the too-long-lines; lines that were previously
unceremoniously cut off in 'mid-sentence'. That wrapping, although not
always pretty, is a really usefu
On 2017-06-18 00:27, Peter Eisentraut wrote:
On 6/17/17 06:48, Erik Rijkers wrote:
On 2017-05-28 12:44, Erik Rijkers wrote:
re: srsubstate in pg_subscription_rel:
No idea what it means. At the very least this value 'w' is missing
from the documentation, which only mentions:
i =
On 2017-05-28 12:44, Erik Rijkers wrote:
re: srsubstate in pg_subscription_rel:
No idea what it means. At the very least this value 'w' is missing
from the documentation, which only mentions:
i = initialize
d = data copy
s = synchronized
r = (normal replication)
Shouldn
tablesync.c - comment improvements
--- src/backend/replication/logical/tablesync.c.orig 2017-06-10 10:20:07.617662465 +0200
+++ src/backend/replication/logical/tablesync.c 2017-06-10 10:45:52.620514397 +0200
@@ -12,18 +12,18 @@
* logical replication.
*
* The initial data synchronization i
On 2017-06-07 23:18, Alvaro Herrera wrote:
Erik Rijkers wrote:
Now, looking at the script again I am thinking that it would be
reasonable
to expect that after issuing
delete from pg_subscription;
the other 2 tables are /also/ cleaned, automatically, as a
consequence. (Is
this
patch was never really related to
the problem, but rather than by the time Erik got around to testing
this patch, other fixes had made the problems relatively rare, and the
apparently-improved results with this patch were just chance. If that
theory is wrong, it would be good to hear about it. ]
e? All this is probably vague and I am only posting
in the hope that Petr (or someone else) perhaps immediately understands
what goes wrong, with even his limited amount of info.
In the meantime I will try to dig up more detailed info...
thanks,
Erik Rijkers
don't know if it worked but I'm glad that it is solved ;)
Thanks,
Erik Rijkers
On 2017-05-31 16:20, Erik Rijkers wrote:
On 2017-05-31 11:16, Petr Jelinek wrote:
[...]
Thanks to Mark's offer I was able to study the issue as it happened
and
found the cause of this.
[0001-Improve-handover-logic-between-sync-and-apply-worker.patch]
This looks good:
-- out_20170531
've moved back to longer (1-hour) runs (no errors so
far), and I'd like to keep track of what the most 'vulnerable'
parameters are.
thanks,
Erik Rijkers
definitely does look as if you fixed it.
Thanks!
Erik Rijkers
On 2017-05-26 08:10, Erik Rijkers wrote:
If you run a pgbench session of 1 minute over a logical replication
connection and repeat that 100x this is what you get:
At clients 90, 64, 8, scale 25:
-- out_20170525_0944.txt
100 -- pgbench -c 90 -j 8 -T 60 -P 12 -n -- scale 25
7 -- Not
Could you give the params for the successful runs?
(ideally, a grep | sort | uniq -c of the pgbench lines that were run)
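Such a tally could look like this; the log file name and line format here are made-up stand-ins for the real per-run output of the test script:

```shell
# Tally how often each distinct pgbench invocation occurs in a results log.
# /tmp/pgbench_runs.log and its contents are illustrative assumptions.
cat > /tmp/pgbench_runs.log <<'EOF'
pgbench -c 90 -j 8 -T 60 -P 12 -n
pgbench -c 64 -j 8 -T 60 -P 12 -n
pgbench -c 90 -j 8 -T 60 -P 12 -n
EOF
# Print each distinct command line prefixed with its count, most frequent first.
sort /tmp/pgbench_runs.log | uniq -c | sort -rn
```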
Can you say anything about hardware?
Thanks for repeating my lengthy tests.
Erik Rijkers
stupid (although I'm not glad that there seems
to be a bug, and an elusive one at that)
Erik Rijkers
cumentation, which only mentions:
i = initialize
d = data copy
s = synchronized
r = (normal replication)
Erik Rijkers
On 2017-05-28 01:15, Mark Kirkwood wrote:
Also, any idea which rows are different? If you want something out of
the box that will do that for you see DBIx::Compare.
I used to save the content-diffs too but in the end decided they were
useless (to me, anyway).
On 2017-05-28 01:21, Mark Kirkwood wrote:
Sorry - I see you have done this already.
On 28/05/17 11:15, Mark Kirkwood wrote:
Interesting - might be good to see your test script too (so we can
better understand how you are deciding if the runs are successful or
not).
Yes, in pgbench_derail2.s
On 2017-05-27 17:11, Andres Freund wrote:
On May 27, 2017 6:13:19 AM EDT, Simon Riggs
wrote:
On 27 May 2017 at 09:44, Erik Rijkers wrote:
I am very curious at your results.
We take your bug report on good faith, but we still haven't seen
details of the problem or how to recrea
On 2017-05-27 10:30, Erik Rijkers wrote:
On 2017-05-27 01:35, Mark Kirkwood wrote:
Here is what I have:
instances.sh:
testset.sh
pgbench_derail2.sh
pubsub.sh
To be clear:
( Apart from that standalone call like
./pgbench_derail2.sh $scale $clients $duration $date_str
)
I normally run
On 2017-05-27 01:35, Mark Kirkwood wrote:
On 26/05/17 20:09, Erik Rijkers wrote:
The idea is simple enough:
startup instance1
startup instance2 (on same machine)
primary: init pgbench tables
primary: add primary key to pgbench_history
copy empty tables to replica by dump/restore
primary
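The steps above could be sketched as a dry run that only prints each command; the ports, data directories, scale, and the choice of key for pgbench_history are all assumptions here, not the actual pgbench_derail2.sh:

```shell
#!/bin/sh
# Dry-run sketch of the test outline; run() only echoes each step.
# Swap the body for: eval "$@"  to actually execute them.
run() { echo "+ $*"; }

run pg_ctl -D /tmp/inst1 -o "-p 6972" start   # startup instance1
run pg_ctl -D /tmp/inst2 -o "-p 6973" start   # startup instance2 (same machine)
run pgbench -i -s 25 -p 6972 postgres         # primary: init pgbench tables
# primary: add a key to pgbench_history (it has none by default; the
# bigserial column is an assumed choice, logical replication needs some key)
run psql -p 6972 -c "alter table pgbench_history add column hid bigserial primary key"
# copy empty tables to replica by (schema-only) dump/restore
run "pg_dump -s -p 6972 -t 'pgbench_*' postgres | psql -p 6973 postgres"
run psql -p 6972 -c "create publication pub1 for table pgbench_accounts, pgbench_branches, pgbench_tellers, pgbench_history"
run psql -p 6973 -c "create subscription sub1 connection 'port=6972 dbname=postgres' publication pub1"
```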
On 2017-05-27 01:35, Mark Kirkwood wrote:
On 26/05/17 20:09, Erik Rijkers wrote:
this whole thing 100x
Some questions that might help me get it right:
- do you think we need to stop and start the instances every time?
- do we need to init pgbench each time?
- could we just drop the
1-minute
pgbench-derail version exactly because of the earlier problems with
snapbuild: I wanted a test that does a lot of starting and stopping of
publication and subscription.
Erik Rijkers
On 2017-05-26 10:29, Mark Kirkwood wrote:
On 26/05/17 20:09, Erik Rijkers wrote:
On 2017-05-26 09:40, Simon Riggs wrote:
If we can find out what the bug is with a repeatable test case we can
fix it.
Could you provide more details? Thanks
I will, just need some time to clean things up a
On 2017-05-26 09:40, Simon Riggs wrote:
If we can find out what the bug is with a repeatable test case we can
fix it.
Could you provide more details? Thanks
I will, just need some time to clean things up a bit.
But what I would like is for someone else to repeat my 100x1-minute
tests, ta
On 2017-05-26 08:58, Simon Riggs wrote:
On 26 May 2017 at 07:10, Erik Rijkers wrote:
- Do you agree this number of failures is far too high?
- Am I the only one finding so many failures?
What type of failure are you getting?
The failure is that in the result state the replicated tables
o others do? Could we somehow concentrate results and
method somewhere?
Thanks,
Erik Rijkers
PS
The core of the 'pgbench_derail' test (bash) is simply:
echo "drop table if exists pgbench_accounts;
drop table if exists pgbench_branches;
drop table if exists pgbench_tellers;
drop
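The quoted echo is cut off in this listing; as a guess at the shape of that reset step (the table list is the standard pgbench set, and the psql invocation is assumed), it would drop all four tables so the next iteration starts clean:

```shell
#!/bin/sh
# Guessed reconstruction of the reset step; the original is truncated above,
# so the fourth drop and the psql target are assumptions.
reset_sql='drop table if exists pgbench_accounts;
drop table if exists pgbench_branches;
drop table if exists pgbench_tellers;
drop table if exists pgbench_history;'
echo "$reset_sql"   # in the real test: echo "$reset_sql" | psql -p 6972 postgres
```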
On 2017-05-21 06:37, Erik Rijkers wrote:
On 2017-05-20 14:40, Michael Paquier wrote:
On Fri, May 19, 2017 at 3:01 PM, Masahiko Sawada
wrote:
Also, as Horiguchi-san pointed out earlier, walreceiver seems need
the
similar fix.
Actually, now that I look at it, ready_to_display should as well
t make the beta.
thanks,
Erik Rijkers
On 2017-05-09 21:00, Petr Jelinek wrote:
On 09/05/17 19:54, Erik Rijkers wrote:
On 2017-05-09 11:50, Petr Jelinek wrote:
Ah okay, so this is same issue that's reported by both Masahiko Sawada
[1] and Jeff Janes [2].
[1]
https://www.postgresql.org/message-id/CAD21AoBYpyqTSw%2B
ers 107165 107145 0 17:11 ?  Ss 0:00 \_ postgres: bgworker: logical replication launcher
rijkers 108349 107145 0 17:12 ?  Ss 0:27 \_ postgres: wal sender process rijkers [local] idle
rijkers 108351 107145 0 17:12 ?  Ss 0:26 \_ postgres: wal sender process rijkers
On 2017-05-09 11:50, Petr Jelinek wrote:
On 09/05/17 10:59, Erik Rijkers wrote:
On 2017-05-09 10:50, Petr Jelinek wrote:
On 09/05/17 00:03, Erik Rijkers wrote:
On 2017-05-05 02:00, Andres Freund wrote:
Could you have a look?
[...]
I rebased the above mentioned patch to apply to the
On 2017-05-09 10:50, Petr Jelinek wrote:
On 09/05/17 00:03, Erik Rijkers wrote:
On 2017-05-05 02:00, Andres Freund wrote:
Could you have a look?
Running tests with these three patches:
0001-WIP-Fix-off-by-one-around-GetLastImportantRecPtr.patch+
0002-WIP-Possibly-more-robust-snapbuild
I
believe this has been true for all failure cases that I've seen (except
the much more rare stuck-DROP-SUBSCRIPTION which is mentioned in another
thread).
Sorry, I have not been able to get any thing more clear or definitive...
thanks,
Erik Rijkers
On 2017-05-08 13:13, Masahiko Sawada wrote:
On Mon, May 8, 2017 at 7:14 PM, Erik Rijkers wrote:
On 2017-05-08 11:27, Masahiko Sawada wrote:
FWIW, running
0001-WIP-Fix-off-by-one-around-GetLastImportantRecPtr.patch+
0002-WIP-Possibly-more-robust-snapbuild-approach.patch +
fix
n top of 44c528810)
I have encountered the same condition as well in the last few days, a
few times (I think 2 or 3 times).
Erik Rijkers
' anymore.
(If there is any other configuration of patches worth testing please let
me know)
thanks
Erik Rijkers
On 2017-04-17 15:59, Stas Kelvich wrote:
On 17 Apr 2017, at 10:30, Erik Rijkers wrote:
On 2017-04-16 20:41, Andres Freund wrote:
On 2017-04-16 10:46:21 +0200, Erik Rijkers wrote:
On 2017-04-15 04:47, Erik Rijkers wrote:
>
> 0001-Reserve-global-xmin-for-create-slot-snasphot-export
On 2017-04-16 20:41, Andres Freund wrote:
On 2017-04-16 10:46:21 +0200, Erik Rijkers wrote:
On 2017-04-15 04:47, Erik Rijkers wrote:
>
> 0001-Reserve-global-xmin-for-create-slot-snasphot-export.patch +
> 0002-Don-t-use-on-disk-snapshots-for-snapshot-export-in-l.patch+
> 0003-Prev
On 2017-04-15 04:47, Erik Rijkers wrote:
0001-Reserve-global-xmin-for-create-slot-snasphot-export.patch +
0002-Don-t-use-on-disk-snapshots-for-snapshot-export-in-l.patch+
0003-Prevent-snapshot-builder-xmin-from-going-backwards.patch +
0004-Fix-xl_running_xacts-usage-in-snapshot-builder.patch
*)0))", File:
"pgstat.c", Line: 828)
reliably (often within a minute).
The test itself does not fail, at least not that I saw (but I only ran a
few).
thanks,
Erik Rijkers
mechanical message by a computer. I
dislike it strongly.
I would prefer the line to be more terse:
DETAIL: 90 transactions to finish.
Am I the only one who is annoyed by this phrase?
Thanks,
Erik Rijkers
es - I
don't know which ones are essential and which may not be).
If it's at all useful I can repeat tests to show how often current
master still fails (easily 50% or so failure-rate).
This would be the pgbench-over-logical-replication test that I did so
often earlier on.
thank
On 2017-04-07 22:50, Andres Freund wrote:
On 2017-04-07 22:47:55 +0200, Erik Rijkers wrote:
monitoring.sgml has one tag missing
Is that actually an issue? SGML allows skipping certain close tags, and
IIRC row is one them. We'll probably move to xml at some point not too
far away,
monitoring.sgml has one tag missing
--- doc/src/sgml/monitoring.sgml.orig 2017-04-07 22:37:55.388708334 +0200
+++ doc/src/sgml/monitoring.sgml 2017-04-07 22:38:16.582047695 +0200
@@ -1275,6 +1275,7 @@
ProcArrayGroupUpdate
Waiting for group leader to clear transaction
fine now; I hope the above patches can all be committed soon.
thanks,
Erik Rijkers
On 2017-03-09 11:06, Erik Rijkers wrote:
I use three different machines (2 desktop, 1 server) to test logical
replication, and all three have now at least once failed to correctly
synchronise a pgbench session (amidst many successful runs, of course)
(At the moment using these patches for
Small fry gathered while reading walsender.c ...
(to be applied to master)
Thanks,
Erik Rijkers
--- src/backend/replication/walsender.c.orig 2017-03-28 08:34:56.787217522 +0200
+++ src/backend/replication/walsender.c 2017-03-28 08:44:56.486327700 +0200
@@ -14,11 +14,11 @@
* replication-mode
On 2017-03-24 10:45, Mark Kirkwood wrote:
However one minor observation - as Michael Banck noted - the elapsed
time for slave to catch up after running:
$ pgbench -c8 -T600 bench
on the master was (subjectively) much longer than for physical
streaming replication. Is this expected?
I think
On 2017-03-23 03:28, Michael Paquier wrote:
On Thu, Mar 23, 2017 at 12:51 AM, Erik Rijkers wrote:
While trying to test pgbench's stderr (looking for 'creating tables'
in
output of the initialisation step) I ran into these two bugs (or
perhaps
better 'oversights'
for a non-zero return value
(which seems to make sense, but in this case not possible: pgbench
returns 0 on init with output on stderr).
make check-world passes without error
Thanks,
Erik Rijkers
--- src/test/perl/TestLib.pm.orig 2017-03-22 11:34:36.948857255 +0100
+++ src/test/perl/TestLib.pm
On 2017-03-18 06:37, Erik Rijkers wrote:
Studying logrep yielded some more improvements to the comments in
snapbuild.c
(to be applied to master)
Attached the actual file
thanks,
Erik Rijkers
--- src/backend/replication/logical/snapbuild.c.orig 2017-03-18 05:02:28.627077888 +0100
+++ src
Studying logrep yielded some more improvements to the comments in
snapbuild.c
(to be applied to master)
thanks,
Erik Rijkers
ing is done yet", but I can't
tell from your email so I thought I'd report it anyway. Ignore as
appropriate...
Thanks,
Erik Rijkers
Improvements (grammar/typos) in the comments in snapbuild.c
To be applied to master.
thanks,
Erik Rijkers
--- src/backend/replication/logical/snapbuild.c.orig 2017-03-14 21:53:42.590196415 +0100
+++ src/backend/replication/logical/snapbuild.c 2017-03-14 21:57:57.906539208 +0100
@@ -34,7 +34,7
On 2017-03-09 11:06, Erik Rijkers wrote:
On 2017-03-08 10:36, Petr Jelinek wrote:
On 07/03/17 23:30, Erik Rijkers wrote:
On 2017-03-06 11:27, Petr Jelinek wrote:
0001-Reserve-global-xmin-for-create-slot-snasphot-export.patch +
0002-Don-t-use-on-disk-snapshots-for-snapshot-export-in-l.patch
On 2017-03-09 11:06, Erik Rijkers wrote:
file Name:
logrep.20170309_1021.1.1043.scale_25.clients_64.NOK.log
20170309_1021 is the start-time of the script
1 is master (2 is replica)
1043 is the time, 10:43, just before the pg_waldump call
Sorry, that might be confusing. That 10:43 is the
lcome.
thanks,
Erik Rijkers
20170307_1613.tar.bz2
On 2017-03-06 16:10, Erik Rijkers wrote:
On 2017-03-06 11:27, Petr Jelinek wrote:
Hi,
updated and rebased version of the patch attached.
I compiled with /only/ this one latest patch:
0001-Logical-replication-support-for-initial-data-copy-v6.patch
Is that correct, or are other patches
?
Anyway, with that one patch, and even after
alter role ... set synchronous_commit = off;
the process is very slow. (sufficiently slow that I haven't
had the patience to see it to completion yet)
What am I doing wrong?
thanks,
Erik Rijkers
On 2017-03-03 01:30, Petr Jelinek wrote:
With these patches:
0001-Use-asynchronous-connect-API-in-libpqwalreceiver.patch
0002-Fix-after-trigger-execution-in-logical-replication.patch
0003-Add-RENAME-support-for-PUBLICATIONs-and-SUBSCRIPTION.patch
snapbuild-v5-0001-Reserve-global-xmin-for-create-
On 2017-02-28 07:38, Erik Rijkers wrote:
On 2017-02-27 15:08, Petr Jelinek wrote:
0001-Use-asynchronous-connect-API-in-libpqwalreceiver.patch
+
0002-Fix-after-trigger-execution-in-logical-replication.patch
+
0003-Add-RENAME-support-for-PUBLICATIONs-and
is a slowish machine but that's a really spectacular difference.
It's the difference between keeping up or getting lost.
Would you remind me why synchronous_commit = on was deemed a better
default? This thread isn't very clear about it (not the 'logical
replication WIP'
on. I set max_sync_workers_per_subscription to 6 (from
default 2) but it doesn't help much (at all).
To be continued...
Thanks,
Erik Rijkers
On 2017-02-26 10:53, Erik Rijkers wrote:
Not yet perfect, but we're getting there...
Sorry, I made a mistake: I was running the newest patches on master but
the older versions on replica (or more precise: I didn't properly
shutdown the replica so the older version remained up a
On 2017-02-26 01:45, Petr Jelinek wrote:
Again, much better... :
-- out_20170226_0724.txt
25 -- pgbench -c 1 -j 8 -T 10 -P 5 -n
25 -- All is well.
-- out_20170226_0751.txt
25 -- pgbench -c 4 -j 8 -T 10 -P 5 -n
25 -- All is well.
-- out_20170226_0819.txt
25 -- pgbench -c
10 -P 5 -n
25 -- All is well.
QED.
(By the way, no hung sessions so far, so that's good)
thanks
Erik Rijkers
On 2017-02-25 00:08, Petr Jelinek wrote:
There is now a lot of fixes for existing code that this patch depends
on. Hopefully some of the fixes get committed soonish.
Indeed - could you look over the below list of 8 patches; is it correct
and in the right (apply) order?
0001-Use-asynchronous
On 2017-02-24 22:58, Petr Jelinek wrote:
On 23/02/17 01:41, Petr Jelinek wrote:
On 23/02/17 01:02, Erik Rijkers wrote:
On 2017-02-22 18:13, Erik Rijkers wrote:
On 2017-02-22 14:48, Erik Rijkers wrote:
On 2017-02-22 13:03, Petr Jelinek wrote:
0001-Skip-unnecessary-snapshot-builds.patch
0002
On 2017-02-22 18:13, Erik Rijkers wrote:
On 2017-02-22 14:48, Erik Rijkers wrote:
On 2017-02-22 13:03, Petr Jelinek wrote:
0001-Skip-unnecessary-snapshot-builds.patch
0002-Don-t-use-on-disk-snapshots-for-snapshot-export-in-l.patch
0003-Fix-xl_running_xacts-usage-in-snapshot-builder.patch
0001
-execution-in-logical-replication.patch
0003-Add-RENAME-support-for-PUBLICATIONs-and-SUBSCRIPTION.patch
0001-Logical-replication-support-for-initial-data-copy-v5.patch
It works well now, or at least my particular test case seems now solved.
thanks,
Erik Rijkers
content of each table
(an ordered select *)
I repeated this a few times: of course, the number of rows in
pgbench_history varies a bit but otherwise it is always the same: 3
empty replica tables, pgbench_history replicated correctly.
Something is not right.
thanks,
Erik Rijkers
On 2017-02-19 23:24, Erik Rijkers wrote:
0001-Use-asynchronous-connect-API-in-libpqwalreceiver-v2.patch
0002-Always-initialize-stringinfo-buffers-in-walsender-v2.patch
0003-Fix-after-trigger-execution-in-logical-replication-v2.patch
0004-Add-RENAME-support-for-PUBLICATIONs-and-SUBSCRIPTION-v2
0001-Use-asynchronous-connect-API-in-libpqwalreceiver-v2.patch
0002-Always-initialize-stringinfo-buffers-in-walsender-v2.patch
0003-Fix-after-trigger-execution-in-logical-replication-v2.patch
0004-Add-RENAME-support-for-PUBLICATIONs-and-SUBSCRIPTION-v2.patch
0001-Logical-replication-support-for-in
On 2017-02-19 17:21, Erik Rijkers wrote:
0001-Use-asynchronous-connect-API-in-libpqwalreceiver-v2.patch
0002-Always-initialize-stringinfo-buffers-in-walsender-v2.patch
0003-Fix-after-trigger-execution-in-logical-replication-v2.patch
0004-Add-RENAME-support-for-PUBLICATIONs-and-SUBSCRIPTION-v2
-initial-data-copy-v4.patch
Improve readability of comment blocks
in src/backend/replication/logical/origin.c
thanks,
Erik Rijkers
On 2017-02-11 11:16, Erik Rijkers wrote:
On 2017-02-08 23:25, Petr Jelinek wrote:
0001-Use-asynchronous-connect-API-in-libpqwalreceiver-v2.patch
0002-Always-initialize-stringinfo-buffers-in-walsender-v2.patch
0003-Fix-after-trigger-execution-in-logical-replication-v2.patch
0004-Add-RENAME
Maybe add this to the 10 Open Items list?
https://wiki.postgresql.org/wiki/PostgreSQL_10_Open_Items
It may garner a bit more attention.
Ah sorry, it's there already.
On 2017-02-16 00:43, Petr Jelinek wrote:
On 13/02/17 14:51, Erik Rijkers wrote:
On 2017-02-11 11:16, Erik Rijkers wrote:
On 2017-02-08 23:25, Petr Jelinek wrote:
0001-Use-asynchronous-connect-API-in-libpqwalreceiver-v2.patch
0002-Always-initialize-stringinfo-buffers-in-walsender-v2.patch
On 2017-02-16 00:43, Petr Jelinek wrote:
On 13/02/17 14:51, Erik Rijkers wrote:
On 2017-02-11 11:16, Erik Rijkers wrote:
On 2017-02-08 23:25, Petr Jelinek wrote:
0001-Use-asynchronous-connect-API-in-libpqwalreceiver-v2.patch
0002-Always-initialize-stringinfo-buffers-in-walsender-v2.patch
On 2017-02-11 11:16, Erik Rijkers wrote:
On 2017-02-08 23:25, Petr Jelinek wrote:
0001-Use-asynchronous-connect-API-in-libpqwalreceiver-v2.patch
0002-Always-initialize-stringinfo-buffers-in-walsender-v2.patch
0003-Fix-after-trigger-execution-in-logical-replication-v2.patch
0004-Add-RENAME
On 2017-02-09 02:25, Erik Rijkers wrote:
On 2017-02-08 23:25, Petr Jelinek wrote:
0001-Use-asynchronous-connect-API-in-libpqwalreceiver-v2.patch
0002-Always-initialize-stringinfo-buffers-in-walsender-v2.patch
0003-Fix-after-trigger-execution-in-logical-replication-v2.patch
0004-Add-RENAME
I don't think I've managed to successfully run the script with more than
1 client yet.
Can you have a look whether this is reproducible elsewhere?
thanks,
Erik Rijkers
#!/bin/sh
# assumes both instances are running, on port 6972 and 6973
logfile1=$HOME/pg_stuff/pg_installatio
/ the interactive
output.
I would vote to just make it remain silent if there is no error. (and
if there is an error, issue a message and exit)
thanks,
Erik Rijkers
nyway.
-Erik
On Thu, Feb 9, 2017 at 3:56 PM, Tom Lane wrote:
> Erik Nordström writes:
> > Thanks for the insightful feedback. You are right, the patch does suffer
> > from overflow (and other possible issues when I think about it). Using
> > rint(), a
to_timestamp
---
2017-02-07 16:12:04.236538+01
(1 row)
test=# select to_timestamp(14864803242.312311);
to_timestamp
---
2441-01-17 09:00:42.312311+01
(1 row)
--Erik
On Wed, Feb 8, 2017 at 9:52 PM, Tom Lane wrote:
> I wrote:
> > I wonder
.patch
0001-Logical-replication-support-for-initial-data-copy-v4.patch
test 'object_address' fails, see atachment.
That's all I found in a quick first trial.
thanks,
Erik Rijkers
*** /home/aardvark/pg_stuff/pg_sandbox/pgsql.logical_replication/src/test/regress/expected/object_a
the given floating
point value to a microsecond integer and then doing the epoch conversion,
rather than doing the conversion using floating point and finally casting
to an integer/timestamp.
I am attaching a patch that fixes the above issue.
Regards,
Erik
to_timestamp_fix.patch
-uninitialized]
rel->rd_amcache = cache;
^~~
which hopefully can be prevented...
thanks,
Erik Rijkers
1 - 100 of 370 matches