It doesn't seem like an easy task, and here is my new attempt at it.
The main idea is to register a syscache callback in cached_function_compile
when one is not yet registered, so it will take effect for all SPL.
It also introduces a new hash table to track the functions for the cache
invalidation callback. The procedure in the callback …
> BTW, it appears to me that doing it this way is O(N^2) in the number
> of active temp tables. So it's not hard to believe that the patch
> as-presented would actually be a fairly serious performance drag for
> some use cases with lots of temp tables. There are certainly ways
> we could do better …
> [ shrug... ] If you create an ON COMMIT DELETE temp table, you
> are explicitly asking for a truncation to happen at every commit.
> I don't think you have much room to beef about the fact that one
> happens.
Yes, an ON COMMIT DELETE temp table will be truncated at every commit.
But if we can cont…
>> It is unfair to add a performance penalty to everyone just because some
>> people write bad code. I concur that adding complexity to the system to
>> gracefully handle this corner-case doesn't seem justified. A use case
> description, not mere existence, is needed to provide such justification.
> I do not think this is something we ought to consider. It might help
> certain corner use-cases, but it's probably a net loss for most.
> In particular, I don't think that creating thousands of temp tables in
> a session but then touching only a few of them in any one transaction
> is a very plausible …
Hi there!
Recently I noticed a performance issue with temporary relations. The issue
happens with ON COMMIT DELETE temporary relations. If one session creates only
a few temporary relations, it's fine. But if one session creates plenty of
ON COMMIT DELETE temporary relations, say 3,0…
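To make the scenario concrete, here is a rough reproduction sketch; the table
count of 3000 and the names t_1 .. t_3000 are only illustrative, since the
exact number in the report above is cut off:

-- Create many ON COMMIT DELETE ROWS temp tables in one session.
DO $$
BEGIN
    FOR i IN 1..3000 LOOP
        EXECUTE format('CREATE TEMP TABLE t_%s (n int) ON COMMIT DELETE ROWS', i);
    END LOOP;
END $$;

-- From now on, every transaction that touches the temp namespace has to
-- process all of these tables at commit, even if it used only one of them.
BEGIN;
INSERT INTO t_1 VALUES (1);
COMMIT;   -- commit-time work grows with the total number of such temp tables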
Vladlen Popolitov 2025-08-19 08:39:50 wrote:
> Hi!
>
> In your example the function will be compiled (the tree is created in
> memory) and executed.
> During execution this function creates a plan for the very simple query 1
> and stores it in the cache, then it creates a plan for query 10 and …
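For readers without the original example at hand, a minimal sketch of that
behaviour (the function demo and its statements are made up, not the ones from
the original mail): plpgsql compiles the function body into a tree on the first
call, and each SQL statement inside gets its own cached plan the first time
that particular statement is executed.

CREATE OR REPLACE FUNCTION demo(flag boolean) RETURNS int AS $$
BEGIN
    IF flag THEN
        RETURN (SELECT 1);                      -- planned and cached on first use
    ELSE
        RETURN (SELECT count(*) FROM a_test);   -- planned only when this branch runs
    END IF;
END $$ LANGUAGE plpgsql;

SELECT demo(true);    -- compiles the function, plans only the first branch
SELECT demo(false);   -- the second statement is planned and cached now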
Yes, of course we can solve this by restoring from backup.
But if the database volume is large, say 100TB or more, that cost is really
too high just because a tiny clog file got corrupted.
Regards,
Jet
Daniel Gustafsson
Human errors, disk errors, or even cosmic rays ...
Regards,
Jet
Andrey Borodin
S1:
When the database shuts down normally and a clog file is missing, the database
cannot restart. And if we create a zero-filled clog file, the database starts,
but some transactions may be lost.
S2:
When the database crashed and a clog file is missing, the database will try to
recover on restart, and everything is OK.
So I think …
Thanks, Tom.
But I think we could provide a better experience. Consider the example below:
[jet@halodev-jet-01 data]$ psql
psql (16.6)
Type "help" for help.
postgres=# CREATE TABLE a_test (n INT);
CREATE TABLE
postgres=# INSERT INTO a_test VALUES (1);
INSERT 0 1
postgres=# 2024-12-23 16…
But think about such a scenario: we INSERT some tuples, and the COMMIT
succeeds as well. After a while a system error occurs and, unfortunately,
corrupts the clog file. So we need to restore the database from backup just
because of a tiny corrupted clog file.
Is there any chance to …
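To illustrate why the clog matters here, a small sketch; the xid value 746 is
made up, and pg_current_xact_id() / pg_xact_status() are the standard functions
in PostgreSQL 13 and later:

BEGIN;
INSERT INTO a_test VALUES (2);
SELECT pg_current_xact_id();         -- e.g. 746 (made-up value)
COMMIT;

-- The commit status of xid 746 is recorded in pg_xact (the clog); visibility
-- checks on the inserted tuple consult it until the hint bits are set, which
-- is why losing or corrupting that file can make committed rows unreadable.
SELECT pg_xact_status('746'::xid8);  -- 'committed'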
Yes, I think you're right. The tuple's HEAP_XMIN_COMMITTED hint bit will be
set during the visibility check, but don't you think that's a little weird? Or
that it may cause some confusion?
Thanks,
Jet
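A minimal way to watch this happen, assuming the pageinspect extension is
available; 256 (0x0100) is the HEAP_XMIN_COMMITTED bit in t_infomask:

CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Right after the INSERT, the hint bit is usually not set yet:
SELECT lp, t_xmin, (t_infomask & 256) <> 0 AS xmin_committed_hint
FROM heap_page_items(get_raw_page('a_test', 0));

SELECT * FROM a_test;   -- the visibility check consults the clog and sets the hint bit

SELECT lp, t_xmin, (t_infomask & 256) <> 0 AS xmin_committed_hint
FROM heap_page_items(get_raw_page('a_test', 0));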
Junwang Zhao
Hi there,
I noticed some slightly strange clog behaviour.
When I create a test table, say a_test, containing only a single INT column:
postgres=# CREATE TABLE a_test (n INT);
CREATE TABLE
and then insert one tuple:
postgres=# INSERT INTO a_test VALUES (1);
INSERT 0 1
An…