On Fri, Oct 20, 2017 at 5:47 AM, Amit Kapila wrote:
> I think what we need here is a way to register a satisfies function
> (SnapshotSatisfiesFunc) in SnapshotData for different storage engines.
I don't see how that helps very much. SnapshotSatisfiesFunc takes a
HeapTuple as an argument, and it ca
On Wed, Oct 25, 2017 at 6:07 AM, Michael Paquier wrote:
> Hi all,
>
> After thinking a bit on the subject, I have decided to submit a patch
> to do $subject. This makes pg_receivewal more consistent with
> pg_basebackup. This option is mainly useful for testing, something
> that becomes way more d
On Tue, Oct 24, 2017 at 10:10 PM, Thomas Munro wrote:
> Here is an updated patch set that does that ^.
It's a bit hard to understand what's going on with the v21 patch set I
posted yesterday because EXPLAIN ANALYZE doesn't tell you anything
interesting. Also, if you apply the multiplex_gather pa
On 2017-10-25 07:33:46 +0200, Robert Haas wrote:
> On Tue, Oct 24, 2017 at 9:28 PM, Tom Lane wrote:
> > I don't like changing well-defined, user-visible query behavior for
> > no other reason than a performance gain (of a size that hasn't even
> > been shown to be interesting, btw). Will we chang
On Tue, Oct 24, 2017 at 10:20 PM, Justin Pryzby wrote:
> I think you must have compared these:
Yes, I did. My mistake.
> On Tue, Oct 24, 2017 at 03:11:44PM -0500, Justin Pryzby wrote:
>> ts=# SELECT * FROM bt_page_items(get_raw_page('sites_idx', 1));
>>
>> itemoffset | 48
>> ctid | (1,37)
On Tue, Oct 24, 2017 at 9:28 PM, Tom Lane wrote:
> I don't like changing well-defined, user-visible query behavior for
> no other reason than a performance gain (of a size that hasn't even
> been shown to be interesting, btw). Will we change it back in another
> ten years if the performance trade
On Tue, Oct 24, 2017 at 02:57:47PM -0700, Peter Geoghegan wrote:
> On Tue, Oct 24, 2017 at 1:11 PM, Justin Pryzby wrote:
> > ..which I gather just verifies that the index is corrupt, not sure if
> > there's
> > anything else to do with it? Note, we've already removed the duplicate
> > rows.
>
On Tue, Oct 24, 2017 at 7:24 PM, Tsunakawa, Takayuki wrote:
> (3)
> Should we change the default value of max_wal_size from 1 GB to a smaller
> size? I vote for "no" for performance.
The default has just changed in v10, and changing it again could be
confusing, so I agree with your position.
--
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Simon Riggs
> This
> * reduces disk space requirements on master
> * removes a minor bug in fast failover
> * simplifies code
I welcome this patch. I was wondering why PostgreSQL retains the previo
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Michael Paquier
> On Tue, Oct 24, 2017 at 5:58 PM, Tsunakawa, Takayuki
> wrote:
> > If the latest checkpoint record is unreadable (the WAL
> segment/block/record is corrupt?), recovery from the prev
On Tue, Oct 24, 2017 at 5:58 PM, Tsunakawa, Takayuki wrote:
> If the latest checkpoint record is unreadable (the WAL segment/block/record
> is corrupt?), recovery from the previous checkpoint would also stop at the
> latest checkpoint. And we don't need to replay the WAL records between the
>
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Tom Lane
> Doesn't it also make crash recovery less robust? The whole point
> of that mechanism is to be able to cope if the latest checkpoint
> record is unreadable.
If the latest checkpoint recor
Hi all,
After thinking a bit on the subject, I have decided to submit a patch
to do $subject. This makes pg_receivewal more consistent with
pg_basebackup. This option is mainly useful for testing, something
that becomes way more doable since support for --endpos has been
added.
Unsurprisingly, --
On Fri, Oct 20, 2017 at 9:01 AM, Justin Pryzby wrote:
> This was briefly scary but seems to have been limited to my psql session (no
> other errors logged). Issue with catcache (?)
>
> I realized that the backup job I'd kicked off was precluding the CLUSTER from
> running, but that CLUSTER was st
On Tue, Oct 24, 2017 at 02:57:47PM -0700, Peter Geoghegan wrote:
> On Tue, Oct 24, 2017 at 1:11 PM, Justin Pryzby wrote:
> > ..which I gather just verifies that the index is corrupt, not sure if
> > there's
> > anything else to do with it? Note, we've already removed the duplicate
> > rows.
> M
On Tue, Oct 24, 2017 at 01:48:55PM -0500, Kenneth Marshall wrote:
> I just dealt with a similar problem with pg_repack and a PostgreSQL 9.5 DB,
> the exact same error. It seemed to be caused by a tuple visibility issue that
> allowed the "working" unique index to be built, even though a duplicate row
On Mon, Oct 23, 2017 at 11:20 PM, Simon Riggs wrote:
> Remove the code that maintained two checkpoints' WAL files and all
> associated stuff.
>
> Try to avoid breaking anything else
>
> This
> * reduces disk space requirements on master
> * removes a minor bug in fast failover
> * simplifies code
On Tue, Oct 24, 2017 at 1:11 PM, Justin Pryzby wrote:
> ..which I gather just verifies that the index is corrupt, not sure if there's
> anything else to do with it? Note, we've already removed the duplicate rows.
Yes, the index itself is definitely corrupt -- this failed before the
new "heapalli
I wrote:
> Anyway, PFA an updated patch that also fixes some conflicts with the
> already-committed arrays-of-domains patch.
I realized that the pending patch for jsonb_build_object doesn't
actually have any conflict with what I needed to touch here, so
I went ahead and fixed the JSON functions th
On Tue, Oct 24, 2017 at 12:31:49PM -0700, Peter Geoghegan wrote:
> On Tue, Oct 24, 2017 at 11:48 AM, Kenneth Marshall wrote:
> >> Really ? pg_repack "found" and was victim to the duplicate keys, and
> >> rolled
> >> back its work. The CSV logs clearly show that our application INSERTed
> >> ro
On Tue, Oct 24, 2017 at 11:48 AM, Kenneth Marshall wrote:
>> Really ? pg_repack "found" and was victim to the duplicate keys, and rolled
>> back its work. The CSV logs clearly show that our application INSERTed rows
>> which are duplicates.
>>
>> [pryzbyj@database ~]$ rpm -qa pg_repack10
>> pg_r
Robert Haas writes:
> On Tue, Oct 24, 2017 at 4:36 PM, Tom Lane wrote:
>> Yeah, but I lost the argument. For better or worse, our expected
>> behavior is now that we throw errors. You don't get to change that
>> just because it would save a few cycles.
> I don't know that we can consider the r
On Tue, Oct 24, 2017 at 01:30:19PM -0500, Justin Pryzby wrote:
> On Tue, Oct 24, 2017 at 01:27:14PM -0500, Kenneth Marshall wrote:
> > On Tue, Oct 24, 2017 at 01:14:53PM -0500, Justin Pryzby wrote:
>
> > > Note:
> > > I run a script which does various combinations of ANALYZE/VACUUM
> > > (FULL/AN
On Tue, Oct 24, 2017 at 01:27:14PM -0500, Kenneth Marshall wrote:
> On Tue, Oct 24, 2017 at 01:14:53PM -0500, Justin Pryzby wrote:
> > Note:
> > I run a script which does various combinations of ANALYZE/VACUUM
> > (FULL/ANALYZE)
> > following the upgrade, and a script runs nightly with REINDEX an
On Tue, Oct 24, 2017 at 01:14:53PM -0500, Justin Pryzby wrote:
> I upgraded another instance to PG10 yesterday and this AM found unique key
> violations.
>
> Our application is SELECTing FROM sites WHERE site_location=$1, and if it
> doesn't find one, INSERTs one (I know that's racy and not ideal).
On Tue, Oct 24, 2017 at 4:36 PM, Tom Lane wrote:
>> Does it? In plenty of cases getting infinity rather than an error is
>> just about as useful.
>> This was argued by a certain Tom Lane a few years back ;)
>> http://archives.postgresql.org/message-id/19208.1167246902%40sss.pgh.pa.us
>
> Yeah, but
I upgraded another instance to PG10 yesterday and this AM found unique key
violations.
Our application is SELECTing FROM sites WHERE site_location=$1, and if it
doesn't find one, INSERTs one (I know that's racy and not ideal). We ended up
with duplicate sites, despite a unique index. We removed t
Parallel execution of ALTER SUBSCRIPTION REFRESH PUBLICATION at several
nodes may cause deadlock:
knizhnik 1480 0.0 0.1 1417532 16496 ? Ss 20:01 0:00 postgres: bgworker: logical replication worker for subscription 16589 sync 16720 waiting
knizhnik 1481 0.0 0.1 1417668 17668
This sounds broken on its face --- if you want stuff to survive to
top-level commit, you need to keep it in TopTransactionContext.
CurTransactionContext might be a subtransaction's context that will
go away at subtransaction commit/abort.
https://github.com/postgres/postgres/blob/
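The advice above can be sketched as backend-style pseudocode (not standalone-compilable; it assumes the PostgreSQL server headers, and `CacheEntry`/`cache_dml` are hypothetical names for the in-memory-index cache being described, not anything in the tree):

```c
/* Sketch only: allocate per-transaction cache entries in
 * TopTransactionContext so they survive subtransaction commit/abort. */
#include "postgres.h"
#include "lib/ilist.h"
#include "utils/memutils.h"

static dlist_head cached_dmls = DLIST_STATIC_INIT(cached_dmls);

typedef struct CacheEntry
{
    dlist_node  node;
    /* ... per-DML payload ... */
} CacheEntry;

static void
cache_dml(void)
{
    /* CurTransactionContext may belong to a subtransaction and go away
     * at subxact commit/abort; TopTransactionContext lives until the
     * top-level transaction ends. */
    MemoryContext oldcxt = MemoryContextSwitchTo(TopTransactionContext);
    CacheEntry *entry = palloc0(sizeof(CacheEntry));

    dlist_push_tail(&cached_dmls, &entry->node);
    MemoryContextSwitchTo(oldcxt);
}
```

The list head itself is static, so only the entries need a long-lived context; the cache would still need to be reset in a transaction callback at top-level commit/abort.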
On 24 October 2017 at 09:50, Tom Lane wrote:
> Simon Riggs writes:
>> Remove the code that maintained two checkpoints' WAL files and all
>> associated stuff.
>
>> Try to avoid breaking anything else
>
>> This
>> * reduces disk space requirements on master
>> * removes a minor bug in fast failover
Andres Freund writes:
> On 2017-10-24 10:09:09 -0400, Tom Lane wrote:
>> There's an ancient saying that code can be arbitrarily fast if it
>> doesn't have to get the right answer. I think this proposal falls
>> in that category.
> Does it? In plenty of cases getting infinity rather than an error
Gaddam Sai Ram writes:
> We are implementing an in-memory index. As part of that, during
> index callbacks, under CurTransactionContext, we cache all the DMLs of a
> transaction in a dlist (Postgres's doubly linked list).
> We registered a transaction callback, and in transacti
On 2017-10-24 10:09:09 -0400, Tom Lane wrote:
> Andres Freund writes:
> > There's no comparable overflow handling to the above integer
> > intrinsics. But I think we can still do a lot better. Two very different
> > ways:
>
> > 1) Just give up on detecting overflows for floats. Generating inf in
Hello people,
We are implementing an in-memory index. As part of that, during
index callbacks, under CurTransactionContext, we cache all the DMLs of a
transaction in a dlist (Postgres's doubly linked list).
We registered a transaction callback, and in transaction pre-commi
Andres Freund writes:
> There's no comparable overflow handling to the above integer
> intrinsics. But I think we can still do a lot better. Two very different
> ways:
> 1) Just give up on detecting overflows for floats. Generating inf in
>these cases actually seems entirely reasonable. We al
On 2017-10-24 09:50:12 -0400, Tom Lane wrote:
> Simon Riggs writes:
> > Remove the code that maintained two checkpoints' WAL files and all
> > associated stuff.
>
> > Try to avoid breaking anything else
>
> > This
> > * reduces disk space requirements on master
> > * removes a minor bug in fast
Hi Munro,
Thanks for cautioning us about possible memory leaks (during error cases)
in the case of long-lived DSA segments (have a look at the thread below for
more details).
https://www.postgresql.org/message-id/CAEepm%3D3c4WAtSQG4tAF7Y_VCnO5cKh7KuFYZhpKbwGQOF%3DdZ4A%40mail.gmail.com
Simon Riggs writes:
> Remove the code that maintained two checkpoints' WAL files and all
> associated stuff.
> Try to avoid breaking anything else
> This
> * reduces disk space requirements on master
> * removes a minor bug in fast failover
> * simplifies code
Doesn't it also make crash recover
There are some points I should mention:
1. {"1":1}::jsonb is transformed into the HV {"1"=>"1"}, while
["1","2"]::jsonb is transformed into the AV ["1", "2"]
2. If a numeric value appears in the jsonb, it will be transformed
to an SVnv through a string (Numeric->String->SV->SVnv). Not the best
solution, b
We already know this integer overflow checking is non-standard and
compilers keep trying to optimize it out. Our only strategy to
defeat that depends on compiler flags like -fwrapv that vary by
compiler and may or may not be working on less well-tested compilers.
So if there's a nice readable an
On Tue, Oct 24, 2017 at 10:56 AM, Ivan Kartyshov wrote:
> Hello. I made some bugfixes and rewrote the patch.
>
> Simon Riggs wrote on 2017-09-05 14:44:
>
>> As Alexander says, simply skipping truncation if standby is busy isn't
>> a great plan.
>>
>> If we defer an action on standby replay, when and
On Tue, Oct 24, 2017 at 5:00 PM, Andres Freund wrote:
> On 2017-10-24 12:43:12 +0530, amul sul wrote:
>> I tried to get suggested SMHasher[1] test result for the hash_combine
>> for 32-bit and 64-bit version.
>>
>> SMHasher works on hash keys of the form {0}, {0,1}, {0,1,2}... up to
>> N=255, usin
On 2017-10-24 12:43:12 +0530, amul sul wrote:
> I tried to get suggested SMHasher[1] test result for the hash_combine
> for 32-bit and 64-bit version.
>
> SMHasher works on hash keys of the form {0}, {0,1}, {0,1,2}... up to
> N=255, using 256-N as the seed, for the hash_combine testing we
> needed
Hello.
Please check out the jsonb transform
(https://www.postgresql.org/docs/9.5/static/sql-createtransform.html)
for the PL/Perl language that I've implemented.

diff --git a/contrib/Makefile b/contrib/Makefile
index 8046ca4..53d44fe 100644
--- a/contrib/Makefile
+++ b/contrib/Makefile
@@ -75,9 +75,9 @@ ALWAYS_
Alvaro Herrera wrote:
> Before pushing, I'll give a look to the regular autovacuum path to see
> if it needs a similar fix.
Reading that one, my conclusion is that it doesn't have the same problem
because the strings are allocated in AutovacuumMemCxt which is not reset
by error recovery. This ga
Hi,
In analytics queries that involve large amounts of integers and/or
floats (i.e. a large percentage) it's quite easy to see the functions
underlying the operators in profiles. Partially that's the function call
overhead, but even *after* removing most of that via JITing, they're
surprisingly
On Tue, Sep 19, 2017 at 8:06 AM, Robert Haas wrote:
> On Thu, Sep 14, 2017 at 10:01 AM, Thomas Munro
> wrote:
>> 3. Gather Merge and Parallel Hash Join may have a deadlock problem.
>
> [...]
>
> Thomas and I spent about an hour and a half brainstorming about this
> just now.
>
> [...]
>
> First,
Hello. I made some bugfixes and rewrote the patch.
Simon Riggs wrote on 2017-09-05 14:44:
As Alexander says, simply skipping truncation if standby is busy isn't
a great plan.
If we defer an action on standby replay, when and who will we apply
it? What happens if the standby is shutdown or crashes
On Fri, Oct 13, 2017 at 3:00 AM, Andres Freund wrote:
> On 2017-10-12 17:27:52 -0400, Robert Haas wrote:
>> On Thu, Oct 12, 2017 at 4:20 PM, Andres Freund wrote:
>> >> In other words, it's not utterly fixed in stone --- we invented
>> >> --load-via-partition-root primarily to cope with circumstan