On 1/18/25 22:31, Thomas Munro wrote:
> On Sun, Jan 19, 2025 at 5:51 AM Tomas Vondra wrote:
>> * Does it still make sense to default to eic=1? For this particular test
>> increasing eic=4 often cuts the duration in half (especially on nvme
>> storage).
>
> Maybe
On 1/13/25 17:32, Melanie Plageman wrote:
> On Sat, Jan 11, 2025 at 7:42 PM Tomas Vondra wrote:
>>
>> I had a quiet evening yesterday, so I decided to take a stab at this and
>> see how hard would it be, and how bad would the impact be. Attached is
>> an experime
re equal.
But it's still confusing; it took me a while to understand why my
experimental patch fails. So I think it'd be good to correct it.
regards
--
Tomas Vondra
diff --git a/src/backend/storage/file/buffile.c b/src/backend/storage/file/buffile.c
index 6449f82a72b..a4541b87b8f 100644
--- a/src/backend/stora
On 1/10/25 15:54, Melanie Plageman wrote:
> On Thu, Jan 9, 2025 at 6:59 PM Tomas Vondra wrote:
>>
>> ...
>
>> Robert's idea kept using buffered files, but limited how many we can
>> fill at any phase. Say we'd use a limit of 1024 batches, but we actua
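The capped-batches scheme quoted above lends itself to a quick back-of-the-envelope sketch (purely illustrative, not the actual executor code): if each pass may fill at most `limit` batch files, reaching more batches than that takes multiple partitioning passes.

```python
# Illustrative sketch of the capped-batches idea described above:
# each partitioning pass can split data into at most `limit` files,
# so a larger target batch count needs several passes. This is not
# the actual hash join code, just the arithmetic behind the idea.
def phases_needed(target_batches, limit):
    """How many partitioning passes until target_batches are reachable."""
    phases = 1
    reachable = limit
    while reachable < target_batches:
        reachable *= limit  # each coarse batch is re-split next pass
        phases += 1
    return phases

# With a cap of 1024 files per pass, a million batches takes two
# passes (1024 * 1024 >= 1,000,000):
print(phases_needed(1_000_000, 1024))  # 2
```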
On 1/11/25 00:09, Melanie Plageman wrote:
> On Fri, Jan 10, 2025 at 11:18 AM Tomas Vondra wrote:
>>
>> On 1/10/25 15:54, Melanie Plageman wrote:
>>> On Thu, Jan 9, 2025 at 6:59 PM Tomas Vondra wrote:
>>> I think this is because we get the batch based on
>
On 1/10/25 15:54, Melanie Plageman wrote:
> On Thu, Jan 9, 2025 at 6:59 PM Tomas Vondra wrote:
>>
>>
>>
>> On 1/9/25 21:42, Melanie Plageman wrote:
>>>
>>> I was excited about your raw file experiment. As Robert and you point
>>> out --
On 1/9/25 21:42, Melanie Plageman wrote:
> On Tue, Dec 31, 2024 at 6:07 PM Tomas Vondra wrote:
>>
>> This means that ultimately it's either (1) or (3), and the more I've
>> been looking into this the more I prefer (1), for a couple reasons:
>>
>> *
On 1/9/25 23:18, Melanie Plageman wrote:
> On Sun, Jan 5, 2025 at 10:00 PM Tomas Vondra wrote:
>>
>> I think the general idea and formula explained in [1] is right, but
>> while working on the PoC patch I started to think about how to formalize
>> this. And I ended up
On 1/9/25 17:17, Melanie Plageman wrote:
> On Tue, Dec 31, 2024 at 6:07 PM Tomas Vondra wrote:
>>
>> So I decided to revisit the three patches from 2019. Attached are
>> rebased and cleaned up versions. A couple comments on each one:
>>
>>
>> 1) v20241
No opinion. I need to do some testing / benchmarking myself.
> TESTS:
>
> Currently working on tests for a second version of the patch.
>
> DOCUMENTATION:
>
> Added documentation for the new view pg_stat_progress_explain,
> new GUCs and a new item in section 14.1:
>
> 14.1. Using EXPLAIN
> 14.1.1. EXPLAIN Basics
> 14.1.2. EXPLAIN ANALYZE
> 14.1.3. Caveats
> 14.1.4. Progressive EXPLAIN
>
> FURTHER DISCUSSION:
>
> Considering that this patch introduces a new major feature with
> several new components (view, GUCs, etc), there is open room for
> discussion such as:
>
> - Do the columns in pg_stat_progress_explain make sense? Are we
> missing or adding unnecessary columns?
>
> - Do the new GUCs make sense and are their default values appropriate?
>
> - Do we want progressive explain to print plans of regular queries
> started without EXPLAIN if progressive_explain is enabled or should
> the feature be restricted to instrumented queries (EXPLAIN ANALYZE)?
>
> - Is the size of explainHash based on max_connections + max_parallel_workers
> large enough or are there other types of backends that use the
> executor and will print plans too?
>
I've commented on some of these items earlier.
regards
--
Tomas Vondra
like
prometheus.
I can imagine a more complicated system, aggregating the data after into
a lower resolution (e.g. per day). But that's not a complete solution,
because e.g. what if there are many relations that happen to be related
only once per day?
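The aggregation step itself is simple enough; a minimal sketch of per-day downsampling (structures and names made up for illustration):

```python
# Illustrative sketch of aggregating per-event stats entries into
# per-day resolution, as discussed above. The entry format
# (day, relation, duration) is hypothetical.
from collections import defaultdict

def aggregate_per_day(entries):
    """Collapse (day, relation, duration) entries into per-day sums."""
    agg = defaultdict(lambda: {"count": 0, "total_duration": 0.0})
    for day, relation, duration in entries:
        bucket = agg[(day, relation)]
        bucket["count"] += 1
        bucket["total_duration"] += duration
    return dict(agg)

entries = [
    ("2024-12-01", "pgbench_accounts", 1.5),
    ("2024-12-01", "pgbench_accounts", 2.5),
    ("2024-12-02", "pgbench_accounts", 3.0),
]
result = aggregate_per_day(entries)
print(result[("2024-12-01", "pgbench_accounts")])
```

Note the limitation raised above: a relation with one event per day gains nothing from per-day buckets, so this alone doesn't bound the data volume.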
regards
--
Tomas Vondra
On 1/6/25 22:07, Jim Nasby wrote:
> On Jan 2, 2025, at 9:21 PM, Tomas Vondra wrote:
>>
>>> That said, I do think a workload manager would be more effective than
>>> trying to limit total connections.
>>
>> The "workload management" concept is s
On 1/6/25 19:50, Robert Haas wrote:
> On Mon, Jan 6, 2025 at 11:51 AM Tomas Vondra wrote:
>> I wonder if maybe a better solution would be to allow BufFiles with
>> smaller buffers, not just hard-coded 8kB. OTOH I'm not sure how much
>> that helps, before the bufferin
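To make the buffering overhead concrete, here is a rough sketch of the per-batch BufFile buffer arithmetic (assuming the hard-coded 8kB buffer mentioned above and, purely for illustration, one file per batch on each side of the join; the real executor accounting differs):

```python
# Sketch of per-batch BufFile buffer memory in a hash join.
# BUFFILE_BUFFER matches the hard-coded 8kB discussed above;
# files_per_batch=2 (one per side) is an illustrative assumption.
BUFFILE_BUFFER = 8192  # bytes

def batch_buffer_overhead(nbatches, files_per_batch=2):
    """Memory spent just on BufFile buffers across all batches."""
    return nbatches * files_per_batch * BUFFILE_BUFFER

# With 1024 batches the buffers alone take 16MB -- easily more
# than a modest work_mem, which is the overhead being discussed.
print(batch_buffer_overhead(1024))  # 16777216
```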
ory_contexts?
19) Minor comment and formatting of MemCtxShmemSize / MemCtxShmemInit.
20) MemoryContextInfo etc. need to be added to typedefs.list, so that
pgindent can do the right thing.
21) I think ProcessGetMemoryContextInterrupt has a bug because it uses
get_summary before reading it f
On 1/6/25 16:42, Robert Haas wrote:
> Hi Tomas,
>
> Thanks for working on this. I haven't studied this problem recently,
> but here are some ideas that occur to me:
>
> 1. Try to reduce the per-batch overhead.
>
Yeah. The "use files without buffering"
On 1/2/25 22:09, Jim Nasby wrote:
>
>> On Dec 31, 2024, at 5:41 PM, Tomas Vondra wrote:
>>
>> On 12/31/24 21:46, Jim Nasby wrote:
>>> On Dec 30, 2024, at 7:05 PM, James Hunter
>>> wrote:
>>>>
>>>> On Sat, Dec 28, 2024 at 11:24 PM Ji
need to adjust any of these GUCs (other than
> max_backends) you don’t need to restart - the new limits would just apply to
> new connection requests.
I don't quite understand how max_backends helps with anything except
allowing the connection limit to be changed without a restart, or why
it would be needed for introducing a memory limit. To me those seem very
much like two separate features.
regards
--
Tomas Vondra
0karst
[3]
https://www.postgresql.org/message-id/bc138e9f-c89e-9147-5395-61d51a757b3b%40gusw.net
[4]
https://www.postgresql.org/message-id/20190428141901.5dsbge2ka3rxmpk6%40development
[5]
https://www.postgresql.org/message-id/caakru_yswm7gc_b2nbgwfpe6wuhdolfc1lbz786duzacpud...@mail.gmail.com
--
Tomas Vondra
From
On 12/30/24 22:40, Jim Nasby wrote:
> On Dec 25, 2024, at 11:25 AM, Tomas Vondra wrote:
>> But maybe it'd be possible to just write the entries to a file. We don't
>> need random access to past entries (unlike e.g. pg_stat_statements), and
>> people won
On 12/31/24 02:06, Michael Paquier wrote:
> On Sat, Dec 28, 2024 at 02:25:16AM +0100, Tomas Vondra wrote:
>> And the more I think about it the more I'm convinced we don't need to
>> keep the data about past runs in memory, a file should be enough (except
>> maybe
On 12/29/24 16:39, Robert Treat wrote:
> On Fri, Dec 27, 2024 at 8:25 PM Tomas Vondra wrote:
>> On 12/27/24 05:00, Michael Paquier wrote:
>>> On Thu, Dec 26, 2024 at 06:58:11PM +0100, Tomas Vondra wrote:
>>>> If 128MB is insufficient, why would 256MB be OK? A fac
On 12/28/24 13:36, Anton A. Melnikov wrote:
> Hi!
>
> On 28.12.2024 04:48, Tomas Vondra wrote:
>> On 12/27/24 20:14, James Hunter wrote:
>>> Reviving this thread, because I am thinking about something related --
>>> please ignore the "On Fri, D
from January 28, 2024.
>
> On Fri, Dec 27, 2024 at 11:02 AM Tomas Vondra
> wrote:
>>
>> Firstly, I agree with the goal of having a way to account for memory
>> used by the backends, and also ability to enforce some sort of limit.
>> It's difficult to track t
On 12/27/24 05:00, Michael Paquier wrote:
> On Thu, Dec 26, 2024 at 06:58:11PM +0100, Tomas Vondra wrote:
>> If 128MB is insufficient, why would 256MB be OK? A factor of 2x does not
>> make a fundamental difference ...
>>
>> Anyway, the 128MB value is rather arbitra
sufficient, why would 256MB be OK? A factor of 2x does not
make a fundamental difference ...
Anyway, the 128MB value is rather arbitrary. I don't mind increasing the
limit, or possibly removing it entirely (and accepting anything the
system can handle).
regards
--
Tomas Vondra
Hi,
On 12/23/24 07:35, wenhui qiu wrote:
> Hi Tomas
> This is a great feature.
> + /*
> + * Define (or redefine) custom GUC variables.
> + */
> + DefineCustomIntVariable("stats_history.size",
> + "Sets the amount of memory available for past events.
On 12/19/24 23:05, Masahiko Sawada wrote:
> On Sat, Dec 14, 2024 at 1:24 PM Tomas Vondra wrote:
>>
>> On 12/13/24 00:04, Tomas Vondra wrote:
>>> ...
>>>
>>> The main difference is here:
>>>
>>>
>>> master / no parallel worke
e event (and I'd like to do this always), and (b) it'd
require parsing the server log. So it's not much better than just doing
that, I think ...
Opinions?
--
Tomas Vondra
From 87e457629d382c3071c7f91ec47863c15e6c249d Mon Sep 17 00:00:00 2001
From: Tomas Vondra
Date: Sat, 21 Dec
rozenxid advancement is really only a problem for
> big tables. If the table is small, the eventual aggressive vacuum
> doesn't cost that much.
>
Yeah, I agree with this.
regards
--
Tomas Vondra
On 12/17/24 18:06, Melanie Plageman wrote:
> On Tue, Dec 17, 2024 at 9:11 AM Tomas Vondra wrote:
>>
>>
>>
>> On 12/16/24 19:49, Melanie Plageman wrote:
>>
>>> No, I'm talking about the behavior of causing small pockets of
>>> all-frozen pa
ct
the function to change very often to justify this.
regards
--
Tomas Vondra
, but for the overall maintenance
overhead - essentially accepting the vacuum gets slower, in exchange for
lower cost of maintenance later.
But I think that (a) is going to be fairly complex, because how do you
cost the future vacuum? And (b) somewhat misses my point that on
modern NVMe SSD storage SKIP_PAGES_THRESHOLD > 1 doesn't seem to be a
win *ever*.
So why shouldn't we reduce the SKIP_PAGES_THRESHOLD value (or perhaps
make it configurable)? We can still do the other stuff (decide how
aggressively to freeze stuff etc.) independently of that.
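For reference, the heuristic under discussion can be sketched like this (a simplified illustration, not the actual vacuumlazy.c logic): a run of all-visible pages is skipped only when it is at least SKIP_PAGES_THRESHOLD pages long, the idea being that short skips break up sequential reads.

```python
# Simplified sketch of vacuum's page-skipping heuristic. The value
# 32 matches the current hard-coded SKIP_PAGES_THRESHOLD; the run
# representation is invented for illustration.
SKIP_PAGES_THRESHOLD = 32

def pages_to_scan(runs, threshold=SKIP_PAGES_THRESHOLD):
    """Pages vacuum still reads, given runs of (length, all_visible)."""
    scanned = 0
    for length, all_visible in runs:
        if all_visible and length >= threshold:
            continue  # run long enough -> skipped entirely
        scanned += length  # short all-visible runs are read anyway
    return scanned

# Many short all-visible runs (length 16) are all read with the
# default threshold, but skipped with threshold 1:
runs = [(16, True), (4, False)] * 100
print(pages_to_scan(runs))      # 2000
print(pages_to_scan(runs, 1))   # 400
```

This is the asymmetry the argument above is about: on storage where random reads are cheap, a threshold above 1 only adds reads.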
regards
--
Tomas Vondra
On 12/8/24 00:06, Tomas Vondra wrote:
> Hi,
>
> On 11/16/24 17:59, Andreas Karlsson wrote:
>> Hi,
>>
>> Here is a set of small patches which improve the tab completion of joins
>> in psql. The completion of select queries and DML is very primitive in
On 12/8/24 05:22, Kirill Reshke wrote:
> On Sun, 8 Dec 2024 at 03:35, Tomas Vondra wrote:
>>
>> Hi,
>> I took a quick look at this patch series, and it looks generally fine to
>> me. Barring objections, I'll get it committed. Yes, there's a couple
>> co
On 12/13/24 00:04, Tomas Vondra wrote:
> ...
>
> The main difference is here:
>
>
> master / no parallel workers:
>
> pages: 0 removed, 221239 remain, 221239 scanned (100.00% of total)
>
> 1 parallel worker:
>
> pages: 0 removed, 221239 r
est
it, which most likely means it'd need postgres_fdw to support it. And
I'd guess adding that would be roughly comparable to actually adding the
"proper" MERGE planning into PlanForeignModify.
So in short, I agree with Álvaro.
regards
--
Tomas Vondra
that is a fault of this patch, and I don't expect this patch to fix
that. But it's hard to get excited about new fields added to this log
message, when it'd be most useful aggregated for vacuums over some time
interval. I really wish we had some way to collect and access these
runtime stats in a structured way.
regards
--
Tomas Vondra
On 12/13/24 00:04, Tomas Vondra wrote:
>
> ...
> Attached are results.csv with raw data, and a PDF showing the difference
> between master and patched build with varying number of workers. The
> columns on the right show timing relative to master (with no parallel
> workers). G
On 12/9/24 19:47, Tomas Vondra wrote:
> Hi,
>
> Thanks for working on this. I took a quick look at this today, to do
> some basic review. I plan to do a bunch of testing, but that's mostly to
> get a better idea of what kind of improvements to expect - the initial
> resu
; have your patches reviewed, please respond to this email.
>
Thanks for organizing this. Count me in ;-)
I don't have any "logistical caveats" except that I'm not a morning
person, so would prefer a session at a bearable time.
thanks
--
Tomas Vondra
ot great, but everything
else seems like a lot of effort just to make this one test pass under
valgrind, and I don't think it's worth it.
Can we make the sleep conditional on valgrind, so that regular builds
are not affected? I guess regular builds could fail too, but I don't
think we've seen such failures until now.
regards
--
Tomas Vondra
th the patch we get only
options that are actually set for the user, which is much better.
Two comments:
1) Does it make sense to still show "ALL" when the query returns
nothing? Not sure if we already have a way to handle this.
2) Should we do the same thing for ALTER DATABASE? That also allows
setting options.
thanks
--
Tomas Vondra
out invoking
index and heap vacuuming" but isn't the whole point of this block to do
that cleanup so that the TidStore can be discarded? Maybe I just don't
understand how the work is divided between the leader and workers ...
11) Why does GlobalVisState need to move to snapmgr.h? If I undo this
the patch still builds fine for me.
thanks
--
Tomas Vondra
pport/kb/doc/?id=18219> for
> Your AG range
>
But this can be reproduced on a brand new filesystem - I just tried
creating a 1GB image, creating XFS on it, mounting it, and fallocating
a 600MB file twice. That fails, and there can't be any real fragmentation.
regards
--
Tomas Vondra
On 12/9/24 13:30, Heikki Linnakangas wrote:
> On 09/12/2024 01:12, Tomas Vondra wrote:
>> On 11/14/24 15:13, Heikki Linnakangas wrote:
>>> On 09/10/2024 23:40, Heikki Linnakangas wrote:
>>>> I pushed the first three patches, with the new test and one of the
rs of Red Hat and the usual historical
> developers, of course!!!
>
Yes, I think that's a better place to report this. I don't think we're
doing anything particularly weird / wrong with fallocate().
regards
--
Tomas Vondra
eports bogus out-of-space,
is there even something we can do?
What is not clear to me is why this would affect pg_upgrade at all. We
have the data files split into 1GB segments, and the copy/clone/... goes
one by one. So there shouldn't be more than 1GB "extra" space needed.
Surely you have more free space on the system?
regards
--
Tomas Vondra
k fine. And various other TAP tests with
background_sql() work fine too.
So what's so special about this particular line?
regards
--
Tomas Vondra
On 12/8/24 05:23, Tomas Vondra wrote:
> On 9/18/24 04:56, shawn wang wrote:
>> Thank you very much for your response and suggestions.
>>
>> As you mentioned, the patch here is actually designed for glibc's
>> ptmalloc2 and is not applicable to other platforms.
is pretty easy, but
maybe we could have something that does the same thing using mallopt().
That's what Ronan Dunklau proposed in thread [1] a year ago ... I like
that approach much more, it's much simpler for the user.
[1]
https://www.postgresql.org/message-id/flat/3424675.QJadu78ljV%40aivenlaptop
regards
--
Tomas Vondra
if (vCount > 1 || lCount > 1 || dCount > 1)
return -1;
and
if (!IS_VALID_SUB_COMB(currChar, nextChar))
return -1;
I haven't tried constructing tests to hit those cases, though.
Seems ready to go otherwise.
regards
--
Tomas Vondra
es with logical replication where reproducing may be
expensive (in terms of data amounts, time, ...) but I don't think that's
the case here - this test is trivial/cheap.
But I believe the "costs" mentioned by Amit are more about having to
maintain the tests etc. rather than execution costs. In which case
having a flag does exactly nothing - we'd still have to maintain it.
I propose we simply add the test to 008_diff_schema.pl, per v2. I see no
reason to invent something more here.
regards
--
Tomas Vondra
se all seem reasonable to me. I'll do a bit
more review to make sure I didn't miss anything, and then I intend to
get this committed ...
regards
--
Tomas Vondra
esn't happen. So stuff is not written to
.psql_history, for example. That doesn't seem great.
Overall, I don't quite see why the current behavior is a problem, and/or
why would this be an improvement.
thanks
--
Tomas Vondra
e comments that are too long for one line, and there is a line
> with a trailing space:
>
> + else if (Matches("CREATE", "MATERIALIZED", "VIEW", MatchAny, "AS") ||
>
> Other than that, everything looks fine to me.
>
I took a quick look at this patch series, and it looks generally fine to
me. Barring objections, I'll get it committed. Yes, there's a couple
cosmetic details, and it needs a pgindent run, but I think I can take
care of that ...
regards
--
Tomas Vondra
return true;
+
/* file extension depends on the destination type */
if (target_dest == LOG_DESTINATION_STDERR)
logFileExt = NULL;
In fact, isn't it wrong to do the check outside? logfile_rotate_dest()
is responsible for closing the log files too, and with the check outside
we keep the first .log file open forever (lsof shows that).
FWIW my fix has the same issue, but IMO that just means the logic needs
to be more complex, but still in logfile_rotate_dest().
regards
--
Tomas Vondra
On 12/5/24 12:28, Amit Langote wrote:
> On Thu, Dec 5, 2024 at 3:53 PM Amit Langote wrote:
>> On Thu, Dec 5, 2024 at 2:20 AM Tomas Vondra wrote:
>>> Sure, changing the APIs is allowed, I'm just wondering if maybe there
>>> might be a way to not have this issue
On 12/5/24 07:53, Amit Langote wrote:
> On Thu, Dec 5, 2024 at 2:20 AM Tomas Vondra wrote:
>> ...
>>
>>>> What if an
>>>> extension doesn't do that? What weirdness will happen?
>>>
>>> The QueryDesc.planstate won't contain
On 12/4/24 14:34, Amit Langote wrote:
> Hi Tomas,
>
> On Mon, Dec 2, 2024 at 3:36 AM Tomas Vondra wrote:
>> Hi,
>>
>> I took a look at this patch, mostly to familiarize myself with the
>> pruning etc. I have a bunch of comments, but all of that is minor,
, even if you
> run the function only once per day.
>
> I can reduce the initial segment size to DSA_MIN_SEGMENT_SIZE, which is
> 256KB per process. If needed, this could grow up to 16MB based on the
> current settings.
>
> However, for the scenario you mentioned, it would be ideal to have a
> mechanism
> to mark a pinned DSA (using dsa_pin()) for deletion if it is not used/
> attached within a
> specified duration. Alternatively, I could avoid using dsa_pin()
> altogether, allowing the
> DSA to be automatically destroyed once all processes detach from it, and
> recreate it
> for a new request.
>
> At the moment, I am unsure which approach is most feasible. Any
> suggestions would be
> greatly appreciated.
>
I'm entirely unconcerned about the pg_get_process_memory_contexts()
performance, within some reasonable limits. It's something executed
every now and then - no one is going to complain it takes 10ms extra,
or measure tps with this function, etc.
17-26% seems surprisingly high, but even 256kB is too much, IMHO. I'd
just get rid of this optimization until someone complains and explains
why it's worth it.
Yes, let's make it fast, but I don't think we should optimize it at the
expense of "regular workload" ...
regards
--
Tomas Vondra
's a great "howto" for any patch :-(
Does this answer your question?
[1] https://commitfest.postgresql.org/50/5044/
--
Tomas Vondra
On 11/29/24 00:23, Rahila Syed wrote:
> Hi Tomas,
>
> Thank you for the review.
>
>
>
> 1) I read through the thread, and in general I agree with the reasoning
> for removing the file part - it seems perfectly fine to just dump as
> much as we can
nsive it is to just keep the DSA "just in case"? Imagine
someone asks for the memory context info once - isn't it a waste to still
keep the DSA? I don't recall how many resources that might be.
I don't have a clear opinion on that, I'm more asking for opinions.
8) Two minutes seems pretty arbitrary, and also quite high. If a timeout
is necessary, I think it should not be hard-coded.
regards
--
Tomas Vondra
s://www.postgresql.org/message-id/DD25705F-E75F-4DCA-B49A-5578F4F55D94%40yesql.se
--
Tomas Vondra
int in keeping this in the CF app.
regards
--
Tomas Vondra
On 11/21/24 14:59, Tomas Vondra wrote:
>
> ...
>
> But then there's the SQL API - pg_logical_slot_get_changes(). And it
> turns out it ends up syncing the slot to disk pretty often, because for
> RUNNING_XACTS we call LogicalDecodingProcessRecord() + standby_decode(),
things are at least
two or three steps apart (BIOS <-> OS <-> application).
It's far more likely this is just a traditional hardware issue. If you
search for "dell machine check error" you'll find plenty of similar
reports. I only checked a couple, but it's invariably due to some
hardware issue.
regards
--
Tomas Vondra
On 11/20/24 23:19, Tomas Vondra wrote:
> On 11/20/24 18:24, Tomas Vondra wrote:
>>
>> ...
>>
>> What confuses me a bit is that we update the restart_lsn (and call
>> ReplicationSlotsComputeRequiredLSN() to recalculate the global value)
example. So we actually see the required LSN move during checkpoints
very often. So how come we don't see the issues much more often? Surely
I'm missing something important.
Another option might be that pg_replication_slot_advance() doesn't do
something it should be doing. For example, shouldn't it be marking the
slot as dirty?
regards
--
Tomas Vondra
On 11/20/24 18:24, Tomas Vondra wrote:
>
> ...
>
> What confuses me a bit is that we update the restart_lsn (and call
> ReplicationSlotsComputeRequiredLSN() to recalculate the global value)
> all the time. Walsender does that in PhysicalConfirmReceivedLocation for
> example. S
s WAL to keep. The disadvantage is that physical slots do not
> guarantee WAL keeping starting from their restart LSNs in general.
>
If it's wrong, it doesn't really matter that it has some advantages.
regards
--
Tomas Vondra
ly a Postgres issue would make the whole system crash,
particularly in a way that prevents it from booting again. I don't think
anyone will be able to help you without more info - you need to make it
boot again, inspect the Postgres logs, etc.
regards
--
Tomas Vondra
to form a justification why it's OK :-(
regards
--
Tomas Vondra
From 25678fbbef96965dfb54387dacbe979c920b84e8 Mon Sep 17 00:00:00 2001
From: Tomas Vondra
Date: Tue, 19 Nov 2024 16:21:57 +0100
Subject: [PATCH vasserts 1/3] asserts: restart_lsn
---
src/backend/replication/logical/logical.c
On 11/18/24 22:15, Tomas Vondra wrote:
> ...
>
> So I think the correct solution is to not pass any expressions with
> RestrictInfo to deparse_expression(). Either by stripping the nodes, or
> by not adding them at all.
>
> The patch tries to do the stripping by maybe_e
his :-(
In any case, I think this shows the patch needs more tests.
> 2) It would be great if the STATS flag appeared as an option when
> pressing Tab during query input in the psql command-line interface.
>
True. Tab autocomplete would be nice.
regards
--
Tomas Vondra
On 11/16/24 08:26, Masahiko Sawada wrote:
> On Fri, Nov 15, 2024 at 9:48 AM Tomas Vondra wrote:
>>
>>
>>
>> On 11/15/24 18:40, Masahiko Sawada wrote:
>>> On Thu, Nov 14, 2024 at 10:16 PM Amit Kapila
>>> wrote:
>>>>
>>
On 11/15/24 04:26, Amit Kapila wrote:
> On Wed, Nov 13, 2024 at 5:23 PM Tomas Vondra wrote:
>>
>> On 11/13/24 11:59, Amit Kapila wrote:
>>> On Tue, Nov 12, 2024 at 12:43 PM Ashutosh Bapat
>>> wrote:
>>>>
>>>> On Tue, Nov 12, 2024 at 12:02
r idea. I've attached the updated patch. I'll push
> it early next week unless there are further comments.
>
I'm not particularly attached to how I did this in my WIP patch, it was
simply the simplest way to make it work for experimentation. I'd imagine
it'd be best to just mirror how LogicalIncreaseXminForSlot() does this.
regards
--
Tomas Vondra
On 11/14/24 00:33, Masahiko Sawada wrote:
> On Wed, Nov 13, 2024 at 3:53 AM Tomas Vondra wrote:
>>
>>
>>
>> On 11/13/24 10:38, Amit Kapila wrote:
>>> On Tue, Nov 12, 2024 at 6:29 PM Tomas Vondra wrote:
>>>>
>>>> Sure, maybe fixing L
On 11/13/24 18:20, Peter Geoghegan wrote:
> On Wed, Nov 13, 2024 at 11:07 AM Tomas Vondra wrote:
>> My plan was to apply the patch to both 17 and HEAD, and then maybe do
>> something smarter in HEAD in a separate commit. But then Michael pointed
>> out other pageinspect
*/
if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
elog(ERROR, "too many LWLocks taken");
and not the assert. That suggests your extension does something wrong
with HOLD_INTERRUPTS() or something like that.
regards
--
Tomas Vondra
On 11/12/24 09:04, Hayato Kuroda (Fujitsu) wrote:
> Dear Tomas,
>
>> Here's a fix for pageinspect bug in PG17, reported in [1]. The bug turns
>> out to be introduced by my commit
>
> I could not see the link, but I think it is [1], right?
>
Right. Apologi
On 11/12/24 08:39, Michael Paquier wrote:
> On Mon, Nov 11, 2024 at 07:32:10PM +0100, Tomas Vondra wrote:
>> This adds an out argument to brin_page_items, but I failed to consider
>> the user may still run with an older version of the extension - either
>> after pg_upgrade (
On 11/13/24 11:59, Amit Kapila wrote:
> On Tue, Nov 12, 2024 at 12:43 PM Ashutosh Bapat
> wrote:
>>
>> On Tue, Nov 12, 2024 at 12:02 PM Masahiko Sawada
>> wrote:
>>>
>>> On Mon, Nov 11, 2024 at 2:08 PM Tomas Vondra wrote:
>>>>
could "expand" the ANY/IN clauses into an OR clause, so that
restriction_is_or_clause() returns "true". But I haven't tried and I'm
sure there'd be more stuff to fix to make this work.
regards
--
Tomas Vondra
On 11/13/24 10:38, Amit Kapila wrote:
> On Tue, Nov 12, 2024 at 6:29 PM Tomas Vondra wrote:
>>
>> Sure, maybe fixing LogicalIncreaseRestartDecodingForSlot() is enough to
>> fix this particular case. But I'd be happier if we could also add
>> asserts checking the
ith that candidate LSN. We
> won't access the slot here and the solution will be in line with
> DecodeCommit(), which skips the transactions.
>
Could you maybe write a patch doing this? That would allow proper
testing etc.
regards
--
Tomas Vondra
fprintf(fptr, "Constant value: %s\n", valueStr);
>
> fclose(fptr);
>
> }
>
> }
>
>
> But this does not seems to work for select statements, though this works
> for fine for prepared insert statements. Is there a general/ simpler way
> to do this?
>
I think you'll need to provide much more information. We have no idea
what extract_constants_from_plan() does, it doesn't seem to be a
function defined in Postgres code. Yet it seems to be the part doing the
important stuff.
FWIW I suspect it'd be easier to do this kind of stuff on the parsetree,
i.e. much earlier in query processing.
regards
--
Tomas Vondra
esql.org/
A couple comments:
1) The proper spelling is "length" (not "lenght").
2) The GUC should be added to the "passwordcheck" extension, not to the
core GUC file. See how auto_explain defines options in _PG_init() using
DefineCustomIntVariable.
3) It might be a good idea to add a test to passwordcheck.sql.
regards
--
Tomas Vondra
On 11/12/24 13:19, Amit Kapila wrote:
> On Tue, Nov 12, 2024 at 4:55 PM Tomas Vondra wrote:
>>
>> There's also the question of backpatching - the simpler the better, and
>> this I think just resetting the fields wins in this regard. The main
>> question is whet
On 11/12/24 04:05, Amit Kapila wrote:
> On Mon, Nov 11, 2024 at 9:05 PM Tomas Vondra wrote:
>>
>> On 11/11/24 09:19, Amit Kapila wrote:
>>>
>>> I can't think of a solution other than the current proposal where we
>>> do both the oper
On 11/12/24 10:37, Ashutosh Bapat wrote:
> On Tue, Nov 12, 2024 at 4:54 AM Tomas Vondra wrote:
>>
>>
>>
>> On 11/11/24 23:41, Masahiko Sawada wrote:
>>> On Mon, Nov 11, 2024 at 6:17 AM Tomas Vondra wrote:
>>>
>>> Which made m
On 11/11/24 23:41, Masahiko Sawada wrote:
> On Mon, Nov 11, 2024 at 6:17 AM Tomas Vondra wrote:
>>
>> If this analysis is correct, I think it's rather suspicious we don't
>> reset the candidate fields on restart. Can those "old" values ever be
>
ince 2023/02 :-(
>
> I'm happy to work on this fix. At that time, I was unsure if my fix
> was really correct and there was no further discussion.
>
Thanks. I'm not sure about the correctness either, but I think it's
clear the issue is real, and it's not difficult to reproduce it.
regards
--
Tomas Vondra
On 11/11/24 15:17, Tomas Vondra wrote:
> On 11/11/24 14:51, Ashutosh Bapat wrote:
>> ...
>>
>> I think the problem is about processing older running transactions
>> record and setting data.restart_lsn based on the candidates those
>> records produce. But what
Hi,
Here's a fix for pageinspect bug in PG17, reported in [1]. The bug turns
out to be introduced by my commit
commit dae761a87edae444d11a411f711f1d679bed5941
Author: Tomas Vondra
Date: Fri Dec 8 17:07:30 2023 +0100
Add empty BRIN ranges during CREATE INDEX
...
This adds a
its in that 1MB by
linear approximation.
Of course, there's a lot of questions and details to solve - e.g. how
often would it need to happen, when exactly would it happen, etc. And
also how would that integrate with the logical decoding - it's easy to
just get the timestamp from the WAL record, this would require more work
to actually calculate it. It's only a very rough idea.
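The linear approximation could look roughly like this (an illustrative sketch; the function name and sample format are made up, and real LSNs/timestamps would use the proper PostgreSQL types):

```python
# Illustrative sketch: estimate a timestamp for an arbitrary LSN by
# linear interpolation between two sampled (lsn, timestamp) points,
# e.g. one sample per 1MB of WAL as suggested above. Hypothetical
# names; timestamps here are plain seconds for simplicity.
def interpolate_timestamp(lsn, sample_a, sample_b):
    """Estimate the timestamp at `lsn` between two (lsn, ts) samples."""
    lsn_a, ts_a = sample_a
    lsn_b, ts_b = sample_b
    if lsn_b == lsn_a:
        return ts_a  # degenerate interval, no interpolation possible
    fraction = (lsn - lsn_a) / (lsn_b - lsn_a)
    return ts_a + fraction * (ts_b - ts_a)

# Halfway through a 1MB chunk written over 10 seconds:
print(interpolate_timestamp(1_500_000, (1_000_000, 0.0), (2_000_000, 10.0)))  # 5.0
```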
regards
--
Tomas Vondra
s particular issue, but maybe the right fix would be to reset the
candidate fields on reconnect? And this change would be just hiding the
actual problem. I haven't tried this.
[1]
https://www.postgresql.org/message-id/CAD21AoBVhYnGBuW_o%3DwEGgTp01qiHNAx1a14b1X9kFXmuBe%3Dsg%40mail.gmail.com
--
Tomas Vondra
On 11/8/24 02:35, Peter Geoghegan wrote:
> On Thu, Nov 7, 2024 at 4:34 PM Tomas Vondra wrote:
>> Not sure I understand, but I think I'm somewhat confused by "index AM"
>> vs. indexam. Are you suggesting the individual index AMs should know as
>> little about t
On 11/8/24 19:25, Masahiko Sawada wrote:
> Hi,
>
> Thank you for investigating this issue.
>
> On Thu, Nov 7, 2024 at 10:40 AM Tomas Vondra wrote:
>>
>> Hi,
>>
>> I kept investigating this, but I haven't made much progress. I still
>> don&