On 04/03/2018 04:07 PM, Stas Kelvich wrote:
>
>
>> On 3 Apr 2018, at 16:56, Tomas Vondra wrote:
>>
>>
>> So I think we need a subscription parameter to enable/disable this,
>> defaulting to 'disabled’.
>
> +1
>
> Also, current value for
On 04/03/2018 04:37 PM, Alvaro Herrera wrote:
> Tomas Vondra wrote:
>
>> Yes, that is a good point actually - we need to test that replication
>> between PG10 and PG11 works correctly, i.e. that the protocol version is
>> correctly negotiated, and features are disabled
to disk when
reaching the memory limit. It needs to be allocated ad-hoc only when
actually needed.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 04/03/2018 02:05 PM, Magnus Hagander wrote:
> On Sun, Apr 1, 2018 at 2:04 PM, Magnus Hagander <mag...@hagander.net> wrote:
>
> On Sat, Mar 31, 2018 at 5:38 PM, Tomas Vondra
> <tomas.von...@2ndquadrant.com>
> wrote:
>
>
group on the first call and
then we only tweak the decodeLocked flag.
7) I propose minor changes to a couple of comments.
regards
diff --git a/src/backend/repli
On 04/01/2018 10:01 AM, Pavel Stehule wrote:
>
>
> 2018-04-01 1:00 GMT+02:00 Tomas Vondra <tomas.von...@2ndquadrant.com>:
>
>
>
> On 03/31/2018 08:28 PM, Tomas Vondra wrote:
> >
> >
> > On 03/31/2018 07:56 PM, Tom
nd more like the rest of
the commands. Most of the patch is boilerplate to support the grammar,
and the one interesting piece exec_stmt_set seems fine to me.
Barring any objections, I'll mark it as RFC tomorrow morning.
regards
On 04/04/2018 07:54 PM, Tom Lane wrote:
> Tomas Vondra writes:
>> This should do the trick - I've failed to realize exec_stmt_call may
>> exit by calling elog(ERROR) too, in which case the plan pointer was not
>> reset.
>
>> This does fix the failures present
On 03/31/2018 04:42 PM, David Rowley wrote:
> On 30 March 2018 at 02:55, Tomas Vondra wrote:
>> On 03/29/2018 03:09 PM, David Rowley wrote:
>>> I meant to mention earlier that I coded
>>> agg_args_have_sendreceive_funcs() to only check for send/receive
>>>
On 03/30/2018 10:51 PM, Matheus de Oliveira wrote:
> Hi all.
>
> On Wed, Mar 21, 2018 at 1:47 PM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
>
>
> Do you plan to post an updated version of the patch, of what is your
> response
it for the OPTIONAL test_decoding test cases
> (which AFAIK we don't plan to commit in that state) which demonstrate
> concurrent rollback interlocking with the lock/unlock APIs. The first
> ELOG was enough to catch the interaction. If we think these elogs
> should be present
On 4/5/18 11:07 AM, Magnus Hagander wrote:
>
>
> On Wed, Apr 4, 2018 at 12:11 AM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
>
> ...
>
> It however still fails to initialize the attempts field after allocating
> the db entry in BuildDa
sts
> fail:
>
> CREATE EXTENSION btree_gin;
> + ERROR: could not find function "gin_extract_value_uuid" in file
> "/usr/local/pgsql/lib/btree_gin.so"
>
>
On 04/05/2018 09:10 PM, Tels wrote:
> Moin,
>
> On Wed, April 4, 2018 11:41 pm, David Rowley wrote:
>> Hi Tomas,
>>
>> Thanks for taking another look.
>>
>> On 5 April 2018 at 07:12, Tomas Vondra
>> wrote:
>>> Other than that, the patch s
On 04/05/2018 05:41 AM, David Rowley wrote:
> Hi Tomas,
>
> Thanks for taking another look.
>
> On 5 April 2018 at 07:12, Tomas Vondra wrote:
>> Seems fine to me, although we should handle the anyarray case too, I
>> guess. That is, get_agg_clause_costs_
of overhead as
> throwing a barrier in there?
>
Perhaps the easiest thing we could do is walk shared buffers and do
LockBufHdr/UnlockBufHdr, which would guarantee no session is in the
process of writing out a buffer with a possibly stale checksum flag. Of
course, it's a bit brute-force-ish, but it's not that different from the
waits for running transactions and temporary tables.
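A minimal sketch of that walk (non-runnable pseudocode against PostgreSQL's buffer-manager internals; the function name is hypothetical, and the exact interlock guarantee would need to be verified against FlushBuffer):

```c
/*
 * Sketch only: acquire and release each buffer header spinlock once,
 * so any write-out that started under the old checksum flag has
 * either completed or will observe the new state.
 */
void
WaitForBufferWrites(void)
{
	int			i;

	for (i = 0; i < NBuffers; i++)
	{
		BufferDesc *bufHdr = GetBufferDescriptor(i);
		uint32		state = LockBufHdr(bufHdr);

		UnlockBufHdr(bufHdr, state);
	}
}
```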
regards
On 04/06/2018 01:43 AM, Alexander Korotkov wrote:
> Hi!
>
> On Tue, Apr 3, 2018 at 2:10 PM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
>
> I think solving this may be fairly straight-forward. Essentially, until
> now we only had one way to d
On 04/06/2018 07:22 PM, Andres Freund wrote:
> Hi,
>
> On 2018-04-06 14:34:43 +0200, Tomas Vondra wrote:
>>> Oh, that's not my intention either -- I just wanted to make sure I
>>> was thinking about the same issue you were.
>
>> I agree we shouldn't
On 04/06/2018 07:46 PM, Andres Freund wrote:
> On 2018-04-06 19:40:59 +0200, Tomas Vondra wrote:
>> In any case, I wouldn't call LockBufHdr/UnlockBufHdr a "side channel"
>> interlock. It's a pretty direct and intentional interlock, I think.
>
On 04/06/2018 08:13 PM, Andres Freund wrote:
> On 2018-04-06 19:59:17 +0200, Tomas Vondra wrote:
>> On 04/06/2018 07:46 PM, Andres Freund wrote:
>>>> Sure. But what would that be? I can't think of anything. A process that
>>>> modifies a buffer (or any other
On 04/06/2018 08:13 PM, Andres Freund wrote:
> On 2018-04-06 19:59:17 +0200, Tomas Vondra wrote:
>> On 04/06/2018 07:46 PM, Andres Freund wrote:
>>>> Sure. But what would that be? I can't think of anything. A process that
>>>> modifies a buffer (or an
sting on the latest patch
version, unfortunately, certainly not before the CF end.
So I guess the ultimate review / decision is up to you ...
regards
at I'll make every effort to
> keep reviewing it, and I hope Tomas will persist, so that it has a
> better chance in PG12.
>
Thank you for the effort and for the reviews, of course.
regards
ff that
> needs, or even just might need, follow-on work.
>
+1 to that
FWIW I'm willing to spend some time on the patch for PG12, particularly
on the planner / costing part. The potential gains are too interesting.
regards
ow how to do that correctly and we simply
don't have the manpower to implement it (portable, reliable, handling
different types of storage, ...).
One has to wonder how many applications actually use this correctly,
considering PostgreSQL cares about data durability/consistency so much
and yet we've been misunderstanding how it works for 20+ years.
regards
nrelated device, so
we'd need to understand which devices are related to PostgreSQL.
regards
've probably ignored a fair
number of cases demonstrating this issue. It kinda reminds me of the
wisdom that not seeing planes with bullet holes in the engine does not
mean engines don't need armor [1].
[1]
https://medium.com/@penguinpress/an-excerpt-from-how-not-to-be-wrong-by-jordan-ellenberg-664e708cfc3d
regards
thin provisioning are likely to get
even more common, increasing the incidence of these issues.
regards
On 04/09/2018 04:22 PM, Anthony Iliopoulos wrote:
> On Mon, Apr 09, 2018 at 03:33:18PM +0200, Tomas Vondra wrote:
>>
>> We already have dirty_bytes and dirty_background_bytes, for example. I
>> don't see why there couldn't be another limit defining how much dirty
what you suggested, and simply mark
the inode as failed. In which case the next fsync can't possibly retry
the writes (e.g. after freeing some space on a thin-provisioned system),
but we'd get a reliable failure mode.
regards
On 04/09/2018 10:04 PM, Andres Freund wrote:
> Hi,
>
> On 2018-04-09 21:54:05 +0200, Tomas Vondra wrote:
>> Isn't the expectation that when a fsync call fails, the next one will
>> retry writing the pages in the hope that it succeeds?
>
> Some people expect tha
to WAL
(2) the page is written out to page cache
(3) writeback of that page fails (and gets discarded)
(4) we attempt to modify the page again, but we read the stale version
(5) we modify the stale version, writing the change to WAL
The standby will get the full-page image, and then WAL generated from
the stale page version. That doesn't seem like a story with a happy
ending, I guess. But I might be easily missing some protection built
into the WAL ...
regards
In any case, that certainly does not count as data corruption spreading
from the master to standby.
. Not sure it's worth it.
I've found this by fairly trivial stress testing - running pgbench and
pg_basebackup in a loop. It was failing pretty reliably (~75% of runs).
With the proposed change I see no further failures.
regards
On 04/10/2018 11:24 PM, Tomas Vondra wrote:
> Hi,
>
> I think there's a bug in sendFile(). We do check checksums on all pages
> that pass this LSN check:
>
> /*
> * Only check pages which have not been modified since the
> * start of the base backu
FWIW this is the same issue that I reported in 2016:
https://www.postgresql.org/message-id/3f62f24e-51b3-175c-9222-95f25fd2a9d6%402ndquadrant.com
regards
Hash (cost=27.35..27.35 rows=7 width=4) (actual
> time=0.089..0.089 rows=7 loops=3)
> Buckets: 1024 Batches: 1 Memory Usage: 9kB
>
>
What happens when you disable sequential scans on pg10?
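For reference, that can be done per-session with the planner GUC, then re-running the plan (the `SELECT ...` placeholder stands for the query from the report):

```sql
SET enable_seqscan = off;
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;
```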
regards
ct a part of the
issue might be that the join is misestimated - it's expected to produce
~29k rows, but produces 0.
Can you check if this query has the same issue? It's just the
problematic join, and it should be simpler to investigate:
SELECT count(*)
FROM f_ticketupdate_aad5jtwal0ayaax AS f
INNER JOIN
dwh_dm_aabv5kk9rxac4lz_aaonw7nchsan2n1_aad8xhr0m_aaewg8j61iagl1z AS d
ON (f.dt_event_id = d.id)
regards
Apologies, the reduced query was missing a where condition on id_week:
SELECT count(*)
FROM f_ticketupdate_aad5jtwal0ayaax AS f
INNER JOIN
dwh_dm_aabv5kk9rxac4lz_aaonw7nchsan2n1_aad8xhr0m_aaewg8j61iagl1z AS d
ON (f.dt_event_id = d.id)
WHERE ( 6171 = d."id_euweek" )
regards
ularly principled way to fix this - if the cost
difference gets a bit larger (or if you increase the number of parallel
workers) it's probably going to use the parallel plan again.
Obviously, PostgreSQL 9.5 doesn't have parallel queries, so it does not
have a chance of making this
On Wed, Sep 04, 2019 at 11:37:48AM +0200, Rafia Sabih wrote:
On Tue, 30 Jul 2019 at 02:17, Tomas Vondra
wrote:
On Sun, Jul 21, 2019 at 01:34:22PM +0200, Tomas Vondra wrote:
>
> ...
>
>I wonder if we're approaching this wrong. Maybe we should not reverse
>engineer que
On Fri, Sep 06, 2019 at 04:19:16PM +0200, Tomas Vondra wrote:
FWIW I've done some benchmarking on this too, with a single pgbench client
running select-only test on a tiny database, in different modes (simple,
extended, prepared). I've done that on two systems with different CPUs
(s
s attached).
I don't see any performance regression - there are some small variations
in both directions (say, ~1%) but that's well within the noise. So I think
the patch is fine in this regard.
regards
On Wed, Sep 04, 2019 at 09:17:10PM +0200, Tomas Vondra wrote:
On Wed, Sep 04, 2019 at 11:37:48AM +0200, Rafia Sabih wrote:
On Tue, 30 Jul 2019 at 02:17, Tomas Vondra
wrote:
On Sun, Jul 21, 2019 at 01:34:22PM +0200, Tomas Vondra wrote:
...
I wonder if we're approaching this wrong. May
itting tuples is not referencing the inner side hashtable at all
and only the outer batch file and the combined bitmap.
Why would the workers need to wait for the lone worker to scan their
bitmap file? Or do the files disappear with the workers, or something
like that?
regards
I think having an actual patch to look at would be helpful.
regards
On Thu, Sep 05, 2019 at 09:54:33AM -0700, Melanie Plageman wrote:
On Tue, Sep 3, 2019 at 9:36 AM Alvaro Herrera
wrote:
On 2019-Jul-11, Tomas Vondra wrote:
> On Wed, Jul 10, 2019 at 04:51:02PM -0700, Melanie Plageman wrote:
> > I think implementing support for parallel hashjoin or e
On Tue, Sep 10, 2019 at 04:30:41AM +0300, Nikita Glukhov wrote:
On 04.09.2019 1:02, Alvaro Herrera wrote:
On 2019-Jun-11, Tomas Vondra wrote:
1) We need a better infrastructure to parse opclass parameters. For
example the gtsvector_options does this:
I think this is part of what Nikolay
On Wed, Sep 11, 2019 at 12:03:58AM +0200, Tomas Vondra wrote:
On Tue, Sep 10, 2019 at 04:30:41AM +0300, Nikita Glukhov wrote:
On 04.09.2019 1:02, Alvaro Herrera wrote:
On 2019-Jun-11, Tomas Vondra wrote:
1) We need a better infrastructure to parse opclass parameters. For
example the
On Tue, Sep 03, 2019 at 02:38:56PM -0400, Alvaro Herrera wrote:
On 2019-Aug-01, Tomas Vondra wrote:
I'll move it to the next CF. Aside from the issues pointed out by Kyotaro-san
in his review, I still haven't made up my mind about whether to base the use
statistics targets set for the
On Wed, Sep 11, 2019 at 01:44:28AM +0300, Nikita Glukhov wrote:
On 11.09.2019 1:03, Tomas Vondra wrote:
On Tue, Sep 10, 2019 at 04:30:41AM +0300, Nikita Glukhov wrote:
2. New AM method amattoptions().
amattoptions() is used to specify per-column AM-specific options.
The example is
acking offset seems reasonable.
As a sidenote - in the other thread about streaming, one of the patches
does change how we log subxact assignments. In the end, this allows using
just a single file for the top-level transaction, instead of having one
file per subxact. That would also solve this.
On Thu, Sep 12, 2019 at 12:49:29PM -0300, Alvaro Herrera wrote:
On 2019-Jul-30, Tomas Vondra wrote:
On Sun, Jul 21, 2019 at 01:34:22PM +0200, Tomas Vondra wrote:
>
> I wonder if we're approaching this wrong. Maybe we should not reverse
> engineer queries for the various places,
On Thu, Sep 12, 2019 at 11:34:01AM -0700, Andres Freund wrote:
Hi,
On 2019-09-12 09:57:55 -0300, Alvaro Herrera wrote:
On 2019-Sep-12, Tomas Vondra wrote:
> On Wed, Sep 11, 2019 at 09:51:40AM -0300, Alvaro Herrera from 2ndQuadrant
wrote:
> > On 2019-Sep-11, Amit Khandekar wrote:
>
to be able to catch up on this again
soon.
Good! I'm certainly looking forward to a new patch version.
As discussed in the past, this patch is pretty sensitive (large, touches
planning, ...), so we should try getting most of it in not too late in
the cycle. For example 2019-11 would be ni
ror messages, tell us which PostgreSQL version you are
actually using, etc.
regards
On Sat, Sep 14, 2019 at 09:26:26PM +0200, Thomas Rosenstein wrote:
Hi Tomas,
I'm using Postgresql 10.10 on the standbys and 10.5 on the primary.
On 14 Sep 2019, at 21:16, Tomas Vondra wrote:
On Sat, Sep 14, 2019 at 06:03:34PM +0200, Thomas Rosenstein wrote:
Hi,
so I got two question
, so I can't really judge how useful
that is in practice, but it seems like a fairly natural extension of the
existing operators. I mean, if I understand it correctly, the proposed
behavior is equal to treating the element as a "collapsed range".
regards
On Sun, Sep 15, 2019 at 10:16:30AM +0100, Dean Rasheed wrote:
On Sat, 14 Sep 2019 at 05:25, Tom Lane wrote:
Tomas Vondra writes:
> On Wed, Aug 14, 2019 at 05:24:26PM +1200, Thomas Munro wrote:
>> On Wed, Aug 14, 2019 at 5:06 PM Tom Lane wrote:
>>> Oh, hmm --- yeah, tha
On Sun, Sep 15, 2019 at 11:27:19AM +0100, Dean Rasheed wrote:
On Sun, 15 Sep 2019 at 11:11, Tomas Vondra wrote:
On Sun, Sep 15, 2019 at 10:16:30AM +0100, Dean Rasheed wrote:
>
>Ah sorry, I missed this thread before. As author of that commit, it's
>really on me to fix it, and t
default is 60s, but if you
tune it down it should send keepalives more often.
See WalSndKeepaliveIfNecessary in [1]:
[1]
https://github.com/postgres/postgres/blob/master/src/backend/replication/walsender.c#L3425
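So, assuming the default setup, lowering the timeout on the primary should make the walsender send keepalives sooner (a sketch; the exact timing is decided by WalSndKeepaliveIfNecessary, which sends a keepalive once part of the timeout has elapsed without a reply):

```sql
-- on the primary; the walsenders pick this up after a reload
ALTER SYSTEM SET wal_sender_timeout = '10s';
SELECT pg_reload_conf();
```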
regards
nced by the index expression are not
changing, but some additional columns are updated.
regards
On Sun, Sep 15, 2019 at 09:33:33PM -0400, James Coleman wrote:
On Sat, Jul 20, 2019 at 11:25 AM Tomas Vondra
wrote:
>> ...
>>
>> I think this may be a thinko, as this plan demonstrates - but I'm not
>> sure about it. I wonder if this might be penalizing
ansactions in advance, so it would
have to be implemented as optimistic apply, with a detection and
recovery from conflicts.
I'm not against doing that, and I'm willing to spend some time on
reviews etc. but it seems like a completely separate effort.
regards
formance, but I'm not sure it'll be
even measurable. Also, the other detoast macros right before this new
one are also ultimately just function calls.
regards
On Tue, Sep 17, 2019 at 12:39:33PM -0400, Tom Lane wrote:
Robert Haas writes:
On Mon, Aug 26, 2019 at 9:29 AM Tomas Vondra
wrote:
This is one of the remaining open items, and we don't seem to be moving
forward with it :-(
Why exactly is this an open item, anyway?
The reason it
eckpoint is triggered, hence no issue.
Maybe aggressively triggering checkpoints on the running cluster from
another session would do the trick ...
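A crude way to do that would be to force checkpoints repeatedly from a second session while the test runs, e.g. using \watch in psql (sketch):

```sql
-- in a separate psql session; \watch re-runs the command every second
CHECKPOINT;
\watch 1
```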
regards
can try to
reproduce and investigate the issue?
regards
On Thu, Sep 19, 2019 at 01:23:05PM +0900, Michael Paquier wrote:
On Wed, Sep 18, 2019 at 11:58:08PM +0200, Tomas Vondra wrote:
I kinda suspect it might be just a coincidence that it fails during that
particular test. What likely plays a role here is a checkpoint timing
(AFAICS that's the
he table schema (e.g. have a special column representing a
combination of those columns), so that there's just a single condition
(thus no misestimate due to correlation)
regards
On Mon, Sep 23, 2019 at 03:48:50PM +0800, Thunder wrote:
Is this an issue?
Can we fix like this?
Thanks!
I do think it is a valid issue. No opinion on the fix yet, though.
The report was sent on Saturday, so patience ;-)
regards
make it more like the
direct parameter (which is only evaluated once). I don't really need
those extra parameters in the transition function at all, it's fine to
just get it to the final function (and there should be far fewer calls
to those).
regards
[1] https://www.postgresql.or
On Mon, Sep 23, 2019 at 12:53:36PM -0400, Tom Lane wrote:
Tomas Vondra writes:
I've been working on a custom aggregate, and I've run into some fairly
annoying overhead due to casting direct parameters over and over. I'm
wondering if there's a way to eliminate this, someho
On Tue, Sep 24, 2019 at 02:21:40PM +0900, Michael Paquier wrote:
On Wed, Jul 24, 2019 at 11:52:28PM +0200, Tomas Vondra wrote:
I think Heikki was asking about places with a lot of sub-contexts, which
is a completely different issue. It used to be the case that some
aggregates created a separate
but I might be wrong. I need to think
about that for a while.
Maybe we should focus on the 0001 part for now - it can be committed
independently and does provide a useful feature.
regards
On Thu, Sep 26, 2019 at 06:58:17PM +0530, Amit Kapila wrote:
On Tue, Sep 3, 2019 at 4:16 PM Tomas Vondra
wrote:
On Mon, Sep 02, 2019 at 06:06:50PM -0400, Alvaro Herrera wrote:
>In the interest of moving things forward, how far are we from making
>0001 committable? If I understand cor
tion for the replica (hot-standby) with information about
recent data removed from the table by vacuum, so that the standby may
abort user queries that'd need the data etc. See heap_xlog_clean()
function in heapam.c.
regards
ta to fix it in [1]. Until that happens, apply the patches on
top of caba97a9d9 for review.
Thanks
[1] https://commitfest.postgresql.org/24/2183/
lat/CA%2BTgmobnu7XEn1gRdXnFo37P79bF%3DqLt46%3D37ajP3Cro9dBRaA%40mail.gmail.com#3e2dc9e70a9f9eb2d695ab94a580c5a2
, of course, but I wouldn't be surprised if it
was significant e.g. for small transactions that don't get spilled. And
creating/destroying the contexts is not free either, I think.
regards
On Thu, Sep 26, 2019 at 04:33:59PM -0300, Alvaro Herrera wrote:
On 2019-Sep-26, Tomas Vondra wrote:
Hi,
Attached is an updated patch series, rebased on current master. It does
fix one memory accounting bug in ReorderBufferToastReplace (the code was
not properly updating the amount of memory
idea to backpatch this till 9.4?
By "inconsistent" you mean that pre-10 versions will have different
expected output than versions with WAL-logged hash indexes? I don't see
why that would be a reason not to backpatch to all supported versions,
considering we already have
upid while running the smaller one, or maybe it's
just noise (the queries were just a couple of ms in that test). I do
plan to rerun the benchmarks and investigate a bit - if I find the patch
is fine, I'd like to commit it shortly.
regards
On Fri, Sep 27, 2019 at 02:33:32PM +0530, Amit Kapila wrote:
On Tue, Jan 9, 2018 at 7:55 AM Peter Eisentraut
wrote:
On 1/3/18 14:53, Tomas Vondra wrote:
>> I don't see the need to tie this setting to maintenance_work_mem.
>> maintenance_work_mem is often set to very large va
On Sat, Sep 28, 2019 at 01:36:46PM +0530, Amit Kapila wrote:
On Fri, Sep 27, 2019 at 4:55 PM Tomas Vondra
wrote:
On Fri, Sep 27, 2019 at 02:33:32PM +0530, Amit Kapila wrote:
>On Tue, Jan 9, 2018 at 7:55 AM Peter Eisentraut
> wrote:
>>
>> On 1/3/18 14:53, Tomas Vondra wrote:
On Thu, Sep 26, 2019 at 01:36:46PM -0700, Jeff Davis wrote:
On Thu, 2019-09-26 at 21:22 +0200, Tomas Vondra wrote:
It's worth mentioning that those benchmarks (I'm assuming we're talking
about the numbers Robert shared in [1]) were done on patches that used
the eager accountin
On Sun, Sep 29, 2019 at 12:12:49AM +0200, Tomas Vondra wrote:
On Thu, Sep 26, 2019 at 01:36:46PM -0700, Jeff Davis wrote:
On Thu, 2019-09-26 at 21:22 +0200, Tomas Vondra wrote:
It's worth mentioning that those benchmarks (I'm assuming we're talking
about the numbers Robert shar
eries, it's already pretty large.
regards
On Fri, Sep 27, 2019 at 08:31:30PM -0400, James Coleman wrote:
On Fri, Sep 13, 2019 at 10:51 AM Tomas Vondra
wrote:
On Thu, Sep 12, 2019 at 12:49:29PM -0300, Alvaro Herrera wrote:
>On 2019-Jul-30, Tomas Vondra wrote:
>> I've decided to do a couple of experiments, trying to make
On Fri, Sep 27, 2019 at 01:50:30PM -0400, James Coleman wrote:
On Mon, Sep 9, 2019 at 5:55 PM Tomas Vondra
wrote:
The "patched" column means all developer GUCs disabled, so it's expected
to produce the same plan as master (and it is). And then there's one
column for each d
On Sun, Sep 29, 2019 at 02:30:44PM -0300, Alvaro Herrera wrote:
On 2019-Sep-29, Amit Kapila wrote:
On Sun, Sep 29, 2019 at 12:39 AM Tomas Vondra
wrote:
> So that's what I did in the attached patch - I've renamed the GUC to
> logical_decoding_work_mem, detached it from m
Hmm. I don't think that's a bribe, that's a threat. However, maybe it will
work.
IMHO the patch is ready to go - I think the global barrier solves the
issue in the previous version, and that's the only problem I'm aware of.
So +1 from me to go ahead and pu
On Fri, Sep 27, 2019 at 01:00:36AM +0200, Tomas Vondra wrote:
On Wed, Sep 25, 2019 at 05:38:34PM -0300, Alvaro Herrera wrote:
Hello, can you please update this patch?
I'm not the patch author, but I've been looking at the patch recently
and I have a rebased version at hand - s
On Mon, Sep 30, 2019 at 09:20:22PM +0500, Andrey Borodin wrote:
On 30 Sep 2019, at 20:56, Tomas Vondra
wrote:
I mean this:
/*
* Use int64 to prevent overflow during calculation.
*/
compressed_size = (int32) ((int64) rawsize * 9 + 8) / 8;
I'm not very familiar with
On Mon, Sep 30, 2019 at 01:34:13PM -0700, Jeff Davis wrote:
On Sun, 2019-09-29 at 00:22 +0200, Tomas Vondra wrote:
Notice that when CLOBBER_FREED_MEMORY is defined, the code first
> calls wipe_mem and then accesses fields of the (wiped) block.
> Interestingly enough, the regr
On Tue, Oct 01, 2019 at 11:20:39AM +0500, Andrey Borodin wrote:
On 30 Sep 2019, at 22:29, Tomas Vondra wrote:
On Mon, Sep 30, 2019 at 09:20:22PM +0500, Andrey Borodin wrote:
On 30 Sep 2019, at 20:56, Tomas Vondra wrote:
I mean this:
/*
* Use int64 to prevent overflow
On Tue, Oct 01, 2019 at 12:08:05PM +0200, Tomas Vondra wrote:
On Tue, Oct 01, 2019 at 11:20:39AM +0500, Andrey Borodin wrote:
On 30 Sep 2019, at 22:29, Tomas Vondra wrote:
On Mon, Sep 30, 2019 at 09:20:22PM +0500, Andrey Borodin wrote:
On 30 Sep 2019, at 20:56, Tomas Vondra
On Tue, Oct 01, 2019 at 02:34:20PM +0200, Tomas Vondra wrote:
On Tue, Oct 01, 2019 at 12:08:05PM +0200, Tomas Vondra wrote:
On Tue, Oct 01, 2019 at 11:20:39AM +0500, Andrey Borodin wrote:
On 30 Sep 2019, at 22:29, Tomas Vondra wrote:
On Mon, Sep 30, 2019 at 09:20:22PM +0500, Andrey
allows features you can't easily achieve with fs encryption,
because the filesystem only sees opaque data files. So having keys per
database/user/... is easier from within the database.
regards
ld not encrypt the server log, in the end.
But yes, you're right it's a challenging topic.
regards