> We want patches to be in the pgsql-hackers archives, not temporarily
> accessible via some link.
>
> …Robert
>
Moving to another email going forward.
Reattaching the patch.
0001-Handle-Sleep-interrupts.patch
Description: Binary data
Regards,
Sami Imseih
Amazon Web Services
Thanks for the feedback!
> On Jun 28, 2024, at 4:34 PM, Tom Lane wrote:
>
> Sami Imseih writes:
>> Reattaching the patch.
>
> I feel like this is fundamentally a wrong solution, for the reasons
> cited in the comment for pg_usleep: long sleeps are a bad idea
>
>
>> Therefore, rather than "improving" pg_usleep (and uglifying its API),
>> the right answer is to fix parallel vacuum leaders to not depend on
>> pg_usleep in the first place. A better idea might be to use
>> pg_sleep() or equivalent code.
>
> Yes, that is a good idea to explore and it will
With 50 indexes and 10 parallel workers I can see things like:
2024-07-02 08:22:23.789 UTC [2189616] LOG: expected 1.00, actual 239.378368
2024-07-02 08:22:24.575 UTC [2189616] LOG: expected 0.10, actual 224.331737
2024-07-02 08:22:25.363 UTC [2189616] LOG: expected 1.30, actual 230.462
>
> A more portable approach could be to continue using nanosleep and
> add checks to ensure that nanosleep exits whenever
> it goes past an absolute time. This was suggested by Bertrand in an offline
> conversation. I am not yet fully convinced of this idea, but posting the patch
> that im
> I did a few tests with the patch and did not see any "large" drifts like the
> ones observed above.
Thanks for testing.
> I think it could be "simplified" by making use of instr_time instead of timespec
> for current and absolute. Then I think it would be enough to compare their
> ticks.
Correct, I attached a v2
>
> Correct I attached a v2 of this patch that uses instr_time to check the
> elapsed
> time and break out of the loop. It needs some more benchmarking.
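For readers following along, here is a minimal sketch of the idea (my own illustration, not the attached v2 patch, which uses instr_time and PostgreSQL's own helpers): keep retrying nanosleep() after interrupts, but break out once a monotonic clock says the requested delay has fully elapsed.

    #include <errno.h>
    #include <time.h>

    /* Sleep roughly "microsec" microseconds, bounded by an absolute deadline. */
    static void
    sleep_with_deadline(long microsec)
    {
        struct timespec start;
        struct timespec now;
        struct timespec delay;
        struct timespec remain;

        clock_gettime(CLOCK_MONOTONIC, &start);

        delay.tv_sec = microsec / 1000000L;
        delay.tv_nsec = (microsec % 1000000L) * 1000;

        while (nanosleep(&delay, &remain) == -1 && errno == EINTR)
        {
            long        elapsed_us;

            clock_gettime(CLOCK_MONOTONIC, &now);
            elapsed_us = (now.tv_sec - start.tv_sec) * 1000000L +
                         (now.tv_nsec - start.tv_nsec) / 1000;
            if (elapsed_us >= microsec)
                break;          /* absolute deadline reached, stop retrying */
            delay = remain;     /* otherwise keep sleeping the remaining time */
        }
    }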
My email client has an issue sending attachments it seems. Reattaching
v2-0001-vaccum_delay-with-absolute-time-nanosleep.patch
Description:
>
> I'm curious why we wouldn't just subtract "elapsed_time" from "delay" at
> the bottom of the while loop to avoid needing this extra check.
Can you elaborate further? I am not sure how this will work since delay is a
timespec
and elapsed time is an instr_time.
Also, in every iteration
>
> I'm imagining something like this:
>
> struct timespec delay;
> TimestampTz end_time;
>
> end_time = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), msec);
>
> do
> {
>     long        secs;
>     int         microsecs;
>
>     TimestampDifference(GetCurrentTimes
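The quoted snippet is cut off by the archive; purely as an illustration of the idea being described (my reconstruction, not the original mail's code), the loop would look roughly like this:

    /* Sketch only: sleep until an absolute end time, recomputing the
     * remaining delay after every wakeup. */
    struct timespec delay;
    TimestampTz end_time;

    end_time = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), msec);

    do
    {
        long        secs;
        int         microsecs;

        TimestampDifference(GetCurrentTimestamp(), end_time,
                            &secs, &microsecs);
        if (secs == 0 && microsecs == 0)
            break;              /* the full delay has already elapsed */

        delay.tv_sec = secs;
        delay.tv_nsec = microsecs * 1000;
    } while (nanosleep(&delay, NULL) == -1 && errno == EINTR);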
> What does your testing show when you don't have
> the extra check, i.e.,
>
> struct timespec delay;
> struct timespec remain;
>
> delay.tv_sec = microsec / 1000000L;
> delay.tv_nsec = (microsec % 1000000L) * 1000;
>
> while (nanosleep(&delay, &remain) == -1 && er
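For reference, the complete form of the retry idiom the cut-off quote points at (my transcription of the standard nanosleep/EINTR pattern): restart the sleep with whatever time remains after each interrupt, without any extra elapsed-time check.

    struct timespec delay;
    struct timespec remain;

    delay.tv_sec = microsec / 1000000L;
    delay.tv_nsec = (microsec % 1000000L) * 1000;

    /* restart the sleep with the remaining time whenever a signal interrupts it */
    while (nanosleep(&delay, &remain) == -1 && errno == EINTR)
        delay = remain;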
> As it looks like we have a consensus that reducing the number of interrupts
> also
> makes sense, I just provided a rebase version of the 1 Hz version (see [0],
> that
> also makes clear in the doc that the new field might show slightly old
> values).
That makes sense. However, I suspect the
> From this discussion, there is desire for a sleep function that:
> 1/ Sleeps for the full duration of the requested time
> 2/ Continues to handle important interrupts during the sleep
>
> While something like CF 5118 will take care of point #1,
> I'm not sure, even with CF entry 5118, nanoslee
Sorry for the late reply on this thread.
On 14/8/2024 23:05, Imseih (AWS), Sami wrote:
> There are no tests as this requires more discussion in a separate thread(?)
> Unfortunately, TAP tests don't allow us to keep a connection and
> manually permutate the order of queries sent to different conne
> >> I think the testing discussion should be moved to a different thread.
> >> What do you think?
> > See v4.
> >
> > 0001 deals with reporting queryId in exec_execute_message and
> > exec_bind_message.
> > 0002 deals with reporting queryId after a cache invalidation.
> >
> > There are no tests
> > >> I think the testing discussion should be moved to a different thread.
> > >> What do you think?
> > > See v4.
> > >
> > > 0001 deals with reporting queryId in exec_execute_message and
> > > exec_bind_message.
> > > 0002 deals with reporting queryId after a cache invalidation.
> > >
> > >
I took a look at your patches and here are my comments
> 1) ExecutorRun() misses the reports, which happens when a query
> does an ExecutorStart(), then a series of ExecutorRun() through a
> portal with bind messages. Robert has mentioned that separately a few
> days ago at [1]. But that's not e
> After sleeping on it, I'd tend to slightly favor the last option in
> the back-branches and the second option on HEAD where we reduce the
> number of report calls. This way, we are a bit more careful in
> released branches by being more aggressive in reporting the query ID.
I agree with this beca
> Do you think that we'd better replace the calls reporting the query ID
> in execMain.c by some assertions on HEAD? This won't work for
> ExecutorStart() because PREPARE statements (or actually EXECUTE,
> e.g. I bumped on that yesterday but I don't recall which one) would
Yes, adding the asserts
> On Wed, Jul 17, 2024 at 11:32:49AM +0200, Anthonin Bonnefoy wrote:
>> Wouldn't it be enough to call pgstat_report_query_id in ExecutorRun
>> and ProcessUtility? With those changes [1], both normal statements and
>> utility statements called through extended protocol will correctly
>> report the
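For anyone skimming the thread, the gist of the proposal is to (re)report the query ID when execution is driven through the extended query protocol, not only in ExecutorStart(). A rough sketch of the kind of call involved (illustration only, not the committed change; placement and conditions differ in the real patch):

    #include "executor/execdesc.h"
    #include "utils/backend_status.h"

    /*
     * Illustration only: push the statement's query ID again at execution
     * time so that bind/execute messages keep pg_stat_activity.query_id
     * populated.
     */
    static void
    report_queryid_for_execution(QueryDesc *queryDesc)
    {
        if (queryDesc->plannedstmt->queryId != UINT64CONST(0))
            pgstat_report_query_id(queryDesc->plannedstmt->queryId, false);
    }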
I am attaching v3 of the patch which addresses the comments made
earlier by Bertrand about the comment in the patch [1]. Also, I will stick with
vacuum_sleep as the name as the function will be inside vacuum.c. I am not
sure we should make this function available outside of vacuum, but I would like
to h
v3-0001-vaccum_delay-with-absolute-time-nanosleep.patch
Description: Binary data
attaching the patch again. Something is strange with my email client.
Regards,
Sami
> On Jul 26, 2024, at 3:27 AM, Bertrand Drouvot
> wrote:
>
> Hi,
>
> On Thu, Jul 25, 2024 at 05:27:15PM -0500, Sami Imseih wrote:
>> I am attaching v3 of the patch which addresses the comments made
>> earlier by Bertrand about the comment in the patch [1].
&
v4-0001-vaccum_delay-with-absolute-time-nanosleep.patch
Description: Binary data
> yeah, we already have a few macros that access the .ticks, so maybe we could
> add
> 2 new ones, say:
>
> 1. INSTR_TIME_ADD_MS(t1, msec)
> 2. INSTR_TIME_IS_GREATER(t1, t2)
>
> I think the less operations is don
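To make that concrete, here is a rough sketch of what the two macros could look like (my guess, assuming the nanosecond-tick representation instr_time uses on non-Windows builds):

    /* Sketch only: add "msec" milliseconds to instr_time "x". */
    #define INSTR_TIME_ADD_MS(x, msec) \
        ((x).ticks += (int64) (msec) * 1000000)

    /* Sketch only: true if instr_time "x" is later than instr_time "y". */
    #define INSTR_TIME_IS_GREATER(x, y) \
        ((x).ticks > (y).ticks)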
v5-0001-vaccum_delay-with-absolute-time-nanosleep.patch
Description: Binary data
v5 takes care of the latest comments by Bertrand.
Regards,
Sami
> On Aug 7, 2024, at 1:00 AM, Bertrand Drouvot
> wrote:
>
> add t (in microseconds) to x”
I was attempting to be more verbose in the comment,
but what you have above matches the format of
the other comments. I am ok with your revision.
Regards.
Sami
v7-0001-vaccum_delay-with-absolute-time-nanosleep.patch
Description: Binary data
>
> On Wed, Aug 07, 2024 at 06:00:53AM +, Bertrand Drouvot wrote:
>> +SleepEx((microsec < 500 ? 1 : (microsec + 500) / 1000), FALSE);
>
> I think this deserves a comment.
>
Done
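Something along these lines, for example (my wording, not necessarily the comment that went into the patch):

    /*
     * SleepEx() takes milliseconds.  Round the requested microseconds to the
     * nearest millisecond, but never sleep less than 1 ms so that very short
     * requests still yield the CPU.  FALSE = non-alertable wait.
     */
    SleepEx((microsec < 500 ? 1 : (microsec + 500) / 1000), FALSE);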
>> +#define INS
v8-0001-vaccum_delay-with-absolute-time-nanosleep.patch
Description: Binary data
>
> Yeah, I had the same thought in [1], so +1.
>
> [1]:
> https://www.postgresql.org/message-id/ZpDhS4nFX66ItAze%40ip-10-97-1-34.eu-west-3.compute.internal
>
The intention (see start of the thread) was to ma
v9-0001-vaccum_delay-with-absolute-time-nanosleep.patch
Description: Binary data
v9 has some minor corrections to the comments.
Regards,
Sami
>
> + * Unlike pg_usleep, This function continues
>
> s/This/this/?
>
> Apart from the above, LGTM.
>
v10-0001-vaccum_delay-with-absolute-time-nanosleep.patch
Description: Binary data
Thanks for this catch. Uploaded v10.
Regards,
Sami
My email client attached the last response for
some reason :(
v10 attached in the previous message addresses
Bertrand's last comment and replaces "This" with "this".
Regards,
Sami
On Tue, Aug 13, 2024 at 4:30 PM Nathan Bossart
wrote:
> On Tue, Aug 13, 2024 at 01:12:30PM -0500, Imseih (AWS), Sami wrote:
> >> None of this seems intractable to me. 1 Hz seems like an entirely
> >> reasonable place to start, and it is very easy to change (or to even make
> >> configurable).
> time. I wouldn't say I'm diametrically opposed to this patch, but I do
> think we need to carefully consider whether it's worth the extra code.
FWIW, besides the patch that Bertrand is proposing [1], there is another
parallel
vacuum case being discussed to allow for parallel heap scan [2].
Be
> So, I have applied 0001 down to 14, followed by 0002 on HEAD.
Thank you!
> 0002 is going to be interesting to see moving forward. I am wondering
> how existing out-of-core extensions will react on that and if it will
> help catching up any issues. So, let's see how the experiment goes
> with HE
> By the way, with the main issue fixed as of 933848d16dc9, could it be
> possible to deal with the plan cache part in a separate thread? This
> is from the start a separate thread to me, and we've done quite a bit
> here already.
Agreed, will start a new thread.
--
Sami
> Attached is the patch I am finishing with, with some new tests for
> BRIN and btree to force parallel builds with immutable expressions
> through functions.
Glad to see the asserts are working as expected and finding these issues.
I took a look at the patch and tested it. It looks good. My only c
> I am not sure. The GUCs pretty much enforce this behavior and I doubt
> that these are going to break moving on. Of course they would, but we
> are usually careful enough about that as long as it is possible to
> grep for them. For example see the BRIN case in pageinspect.
Yes, I see pageinspect
> Then, please see attached two lightly-updated patches. 0001 is for a
> backpatch down to v14. This is yours to force things in the exec and
> bind messages for all portal types, with the test (placed elsewhere in
> 14~15 branches). 0002 is for HEAD to add some sanity checks, blowing
> up the test
> would help to grab a query ID. A second option I have in mind would
> be to set up an injection point that produces a NOTICE if a query ID
> is set when we end processing an execute message, then check the
> number of NOTICE messages produced as these can be predictable
> depending on the number
Superuser | true
Client Encoding | UTF8
Server Encoding | UTF8
Session Authorization | postgres
postgres=# set role nosuper;
SET
postgres=> \conninfo+ P
Server Parameter Settings
-[ RECORD 1 ]-+-
Superuser | false
Client Encoding
> I am yet to look very closely, but I think some additional columns that
> will be useful is the number of failsafe autovacuums occurred.
>
> Do you mean when the autovacuum started to prevent wraparound?
>
Specifically vacuum_failsafe_age [1] when autovacuum automatically
performs a vacuum witho
> I guess one question is how realistic it is to try and put everything about
> (auto)vacuum in a single view.
> Given the complexity, the answer to that might just be “no”. In that case
> leaving existing fields in pg_stat_all_tables
> is a lot more reasonable.
Agree. I also think the total_tim
> While backwards compatibility is important, there’s definitely precedent for
> changing
> what shows up in the catalog. IMHO it’s better to bite the bullet and move
> those fields
> instead of having vacuum stats spread across two different views.
Correct, the most recent one that I could thin
+ This is the
+ default state for newly created indexes.
This is not needed in the ALTER INDEX docs, IMO.
+
+ Disable the specified index. A disabled index is not used for
queries, but it
+ is still updated when the underlying table data changes and will still be
+
The last_(auto)vacuum is useful because it allows
the user to monitor vacuum to ensure that vacuums
are completing on a relation at expected intervals.
I am not sure what value a start time will provide.
Can you provide a reason for this?
Regards,
Sami Imseih
1
(1 row)
postgres=# \bind_named stmt 1 \g
?column?
--
1
(1 row)
postgres=# SELECT query, calls FROM pg_stat_statements;
query | calls
---+---
(0 rows)
Regards,
Sami Imseih
Amazon Web Services (AWS)
>> You are right. This is absolutely unexpected for users. Thank you for
>> the review.
>>
>> To fix this, I suggest storing a random number in the [0, 1) range in a
>> separate variable, which will be used for comparisons in any place. We
>> will compare 'sample_rate' and random value not only in
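A minimal sketch of that idea (names are mine; the actual patch may differ): draw the random value once per statement, keep it in a variable, and let every place that needs the sampling decision compare against that same value.

    #include "common/pg_prng.h"

    /* Sketch only: one random draw shared by all sampling checks. */
    static double stmt_sample_rand;     /* value in [0, 1), drawn once per statement */

    static void
    draw_sample_rand(void)
    {
        stmt_sample_rand = pg_prng_double(&pg_global_prng_state);
    }

    /* every hook compares the configured rate against the same draw */
    #define STMT_IS_SAMPLED(sample_rate)  (stmt_sample_rand < (sample_rate))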
I missed one more point earlier.
I don't think "failure" is a good name for the setting as it's
a bit too harsh. What we really have is a "shortage" of
workers.
Instead of
+{"failure", LOG_PARALLEL_WORKERS_FAILURE, false},
what about?
{"shortage", LOG_PARALLEL_WORKERS_SHORTAGE, false},
Re
> I wrote:
> > Sami Imseih writes:
> >> I am attaching a patch that deals with the RTE_JOIN case.
>
> > I'll take a look. Thanks for the test demonstrating that
> > this makes a visible performance difference.
>
> Pushed with some simplification: we
Should this not behave like if you drop (or create) an index
during a prepared statement? I have not yet looked closely at
this code to see what could be done.
Regards,
Sami Imseih
Amazon Web Services (AWS)
[1] https://www.postgresql.org/docs/current/catalogs.html
extension can
have access to PlanState.
Regards,
Sami Imseih
Amazon Web Services (AWS)
> Should this not behave like if you drop (or create) an index
> during a prepared statement? I have not yet looked closely at
> this code to see what could be done.
>
> Regards,
I looked at this a bit more and ATExecEnableDisableIndex
needs some tweaks.
What should be getting invalidated in the
parallel workers (planned: 4)"
This behavior in which workers planned and launched are aggregated in the
log can be described in the documentation.
3/ Also, log_parallel_query_workers as the name of the GUC will make more sense
if we go logging parallel query only.
What do you think?
Regards,
Of course, the time can only be accumulated when the vacuum
completes, which should be good enough.
Regards,
Sami Imseih
the comments do not convince me this is a good idea. I think there
are good cases to allow this considering there is a common use case in
which a single
COPY command can load a large amount of data, making the overhead to check the
partitions worth the value of the FREEZE optimization. I will probably
On Mon, Feb 3, 2025 at 11:29 AM Alex Friedman wrote:
>
> Hi Sami,
>
> Thanks for the feedback.
>
> > 1/ Remove this as
> > "(50% of the maximum, which is about 20GB),"
> >
> > [1] tried to avoid explaining this level of detail, and I
> > agree with that.
>
> I feel it is critical for users to know
> Unless there's some existing way to specify a FREEZE option in the FDW API
> (I assume there isn't), I agree with this.
I am not aware of any way to do this either as postgres_fdw is simply
using libpq.
> > I was also looking at why we block a parent from COPY FREEZE[1], but
> Commit 5c9a551 a
At this point I am planning on withdrawing this patch
from the March commitfest. I don't think fixing the REINDEX
debug1 output makes a whole lot of sense. I still think more logging
for (CREATE|ALTER) (INDEX|TABLE) will be a good to have but there
needs to be more discussion about the best approac
> The "story" I have in mind is: I need to audit an instance I know
> nothing about. I ask the client to adapt the logging parameters for
> pgbadger (including this one), collect the logs and generate a report
> for the said period to have a broad overview of what is happenning.
Let's see if anyon
Thanks for the patch!
This does look like a documentation bug indeed.
I have a few comments:
1/ Remove this as
"(50% of the maximum, which is about 20GB),"
[1] tried to avoid explaining this level of detail, and I
agree with that.
2/ c/"about 10GB"/"10GB" the "about" does not seem necessary he
>> I can work on this if you agree.
> I'd welcome an extra patch to rework a bit the format of the comments
> for the Plan nodes, to ease the addition of pg_node_attr(), making any
> proposed patches more readable.
I'll take care of this also.
Regards,
Sami
> To summarize the results of all benchmarks, I compiled them into a table:
Thanks for compiling the benchmark data above.
The main benefit of this patch will be to give the user
a toggle if they are observing high spinlock contention
due to pg_stat_statements which will likely occur
on larger mac
I confirmed the 20GB value as is described here [1].
An 8kB page can hold 409 member groups, and each
member group can hold 4 members, thus
(2^32 / (409 * 4)) * 8192 ≈ 20GB.
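Spelling out the arithmetic (rounded figures, my expansion):

    409 groups/page * 4 members/group   = 1,636 members per 8kB page
    2^32 members / 1,636 members/page  ~= 2,625,286 pages
    2,625,286 pages * 8,192 bytes/page ~= 21.5 * 10^9 bytes ~= 20 GiB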
I also fixed whitespace issues in v3.
This is ready-for-committer, in my opinion.
Regards,
Sami
[1]
https://github.com/postgres/po
>> Separately I've been thinking how we could best have a discussion/review on
>> whether the jumbling of specific plan struct fields is correct. I was
>> thinking maybe a quick wiki page could be helpful, noting why to jumble/not
>> jumble certain fields?
> Makes sense. This is a complicated top
> Fix CI failure of doc build in v1 patch.
Thanks for the patch! I am +1 for this, but I have a few comments:
1/ In the IDENTITY case, the remote side may not be
able to handle the DEFAULT value. See the example below:
-- on the foreign server
postgres=# CREATE TABLE t2 (id int, c1 text);
CREATE
behavior consistent.
Regards,
Sami
On Mon, Feb 3, 2025 at 7:31 PM Zhang Mingli wrote:
>
> Hi,
>
>
> Zhang Mingli
> www.hashdata.xyz
> On Feb 4, 2025 at 04:18 +0800, Sami Imseih , wrote:
>
>
> This really does not make sense as this
> optimization cannot be applied to
My apologies for the top post in the last reply. I hit send too fast :)
I also created a CF entry for this https://commitfest.postgresql.org/52/5544/
I don't think we will need to backpatch, unless someone has a different
opinion about this.
Regards,
Sami
> Yeah, I'd rather error out than expect users to respond to warnings to the
> effect of "hey, you told us to do something, but we did something else that
> isn't what you asked us to do." That both retains the broken feature and
> adds more noise, neither of which seems desirable.
I agree.
Sami
> so the only reason I can see
> for not back-patching it is that it could lead to somewhat widespread
> breakage for existing scripts, etc. which are arguably kind-of working
> today.
That is my thought for not backpatching. It's not breaking the COPY
command, it's just not applying an optimizati
> Given we don't have any other reports or opinions, I'm probably just going
> to apply the patch to v18.
That makes sense. Thanks!
Sami
> This form sets the storage mode for a column. See the similar form of ALTER
> TABLE for more details.
> Note that the storage mode has no effect unless the table's foreign-data
> wrapper chooses to pay attention to it.
Hi,
It looks like cb1ca4d [1], from nearly 10 years ago, allowed storage
o
> This fixes the long comments in plannodes.h to make it easier to add the
> attribute annotation. It made the most sense to make this the first patch
> in the set.
> A commit that happened last Friday made also this to have conflict.
Thanks for committing v5-0001. I am not sure why there is comm
Another thought that I have is that if we mention that extensions can use
these jumbling (or whatever the final name is) functions outside of
core, it makes
sense to actually show an example of this. What do you think?
--
Sami
> Patch V2 addressed the comments.
Overall this LGTM.
I still see a "no real storage" in v2 that should be removed
from the documentation.
+ Foreign tables have no real storage in PostgreSQL.
+ Inapplicable options: INCLUDING INDEXES,
INCLUDING STORAGE,
I think the test coverage to check for th
> + Foreign tables have no real storage in PostgreSQL.
> + Inapplicable options: INCLUDING INDEXES,
> INCLUDING STORAGE,
>
> Oh, I corrected another one in the code comments, but I forgot about this one.
> Done in patch v3.
I attached v4 with some slight modifications to the wording, otherwise
thi
I have only looked at 0001, but I am wondering why
query_id_const_merge is a pg_stat_statements GUC
rather than a core GUC?
The dependency of pg_stat_statements to take advantage
of this useful feature does not seem right.
For example if the user does not have pg_stat_statements enabled,
but are
I do not have an explanation from the patch yet, but I have a test
that appears to show unexpected results. I only tested a few datatypes,
but from what I observe, some merge as expected and others do not;
i.e. int columns merge correctly but bigint do not.
"""
show pg_stat_statements.query_id_con
> I had the "BEGIN; ALTER INDEX; EXPLAIN; ROLLBACK;" scenario in mind, but
> didn't realise we acquire an AccessExclusiveLock on the index. Therefore, it's
> not possible to change the visibility within a single transaction
> unless you don’t mind blocking all access to the relation.
>
> I read
I was looking at the comments [1] for why COPY FREEZE is
not allowed on a parent table, and it was mainly due
to having to open up partitions to check if they are able
to take the optimization (table created or truncated in the
current transaction ). Obviously as the number of
partitions grow, it w
pgbench_accounts is the only table that should grow to
the point where partitioning can benefit the tpcb-like
benchmark.
It looks like you have customized the pgbench
schema that partitions the other pgbench tables, so
you can use a custom script to initialize the data.
IMO, If there is a good re
> > IMO, If there is a good reason to allow the other pgbench
> > tables to be partitioned, that may be better to think
> > about. I am not sure there is though.
>
> see this thread [1] proposing partitioning pgbench_history last year.
>
> [1]
> https://www.postgresql.org/message-id/flat/CAAKRu_Zo
to handle this.
Any thoughts?
Regards,
Sami Imseih
Amazon Web Services (AWS)
> > Looking further into this improvement, I started to question if it is
> > necessary to make columns unique for EXPLAIN purposes?
> Yes, otherwise references to them elsewhere in the plan will be
> ambiguous.
Explain will qualify the column name with the table name such as the simple
example b
ental patch )
Time: 499.185 ms
Time: 473.118 ms
[1]
https://www.postgresql.org/message-id/flat/1537818224423-0.post%40n3.nabble.com#5d23ed9ab9cb5ed45c79352141fa3e79
[2]
https://github.com/postgres/postgres/commit/52c707483ce4d0161127e4958d981d1b5655865e
[3]
https://github.com/postgres/postgres/blob
> But if the column names are ambiguous within the same RTE, how does
> table-qualification fix that? And it's within-the-same-RTE that
> we're concerned with here
> It only takes one case to mean we have to deal with it ;-). But I'm
> fairly sure that there are many other cases, since the parse
> The requirement for ownership of at least one type means that the
> example you give could only be done by a superuser.
That's correct, and a superuser should be doing the right thing.
> There is currently no way to prevent the usage of a user-defined cast. Should
> there be one?
>> I don't
> > - RETABLE (it's like REINDEX, but for tables)
> > - ALTER TABLE SQUEEZE
> > - SQUEEZE
> > - VACUUM (SQUEEZE)
> > - VACUUM (COMPACT)
> > - MAINTAIN COMPACT
> > - MAINTAIN SQUEEZE
> >
>
> My $0.02:
>
> COMPACT tablename ...
+1 to "COMPACT tablename"
Also the current pg_stat_progress_cluster
Thanks for the updated patch!
>> This got me thinking if dropping the index is the only
>> use case we really care about. For example, you may want
>> to prevent an index that is enforcing a constraint from
>> being used by the planner, but you probably don't want to
>> drop it. In fact, I also th
> s/parallel vacuum workers for index vacuuming/parallel workers for index
> creation/?
Oops, that's my oversight from copying the message from vacuum. Fixed.
> I'd add "in parallel" to match its counterpart "serially" above. That would
> make it more clear in case one just look for "building in
> If we go with the 3 column format, then we will just
> have a bunch of repeated values for Category, which
> looks cluttered, IMO.
"cluttered" is maybe the wrong description. I meant the output
will look overwhelming due to the repeated values :)
Regards,
Sami
Thanks for the review!
> === 1
>
> + endtime = GetCurrentTimestamp();
> pgstat_report_vacuum(RelationGetRelid(rel),
> rel->rd_rel->relisshared,
> Max(vacrel->new_live_tuples,
> 0),
>
> You might be interested by this thread "Thinking about EXPLAIN ALTER TABLE":
> https://www.postgresql.org/message-id/CAM-w4HNm1M5J-ow8UjTcqRe3JPxkVCrGe56tRpPUSePSdGcZ_w%40mail.gmail.com
I reviewed this thread, and the primary issue with the EXPLAIN command lies
in the inability to predict all t
> I was referring to the order of the fields in the structure itself,
> but that's no big deal one way or the other.
I understand your point now. I will group them with the
related counters in the next rev and will use
> This should be one comment for the whole block, or this should use the
> sin
y experiment code for this. The tricky part will be finalizing
which nodes and node fields to use for plan computation.
3. We may want to combine all the jumbling code into
a single jumble.c since the query and plan jumble will
share a lot of the same code, i.e. JumbleState.
_JumbleNode, etc.
Reg
The opinion in this thread is leaning towards a VERBOSE
option and I agree with this as it provides the users with a
much more intuitive way to gather parallel information about
parallel index builds. Also, the current DEBUG1 output is
useless as it is now.
Here is a v1 that implements CREATE INDE
> Hmm. I am reading Tom's opinion that goes toward not going in this
> direction for more commands, with the point to extend EXPLAIN to show
> this kind of information:
> https://www.postgresql.org/message-id/1692530.1736369...@sss.pgh.pa.us
That sounds like the ability to do something like EXPLA
> > > Hmm. I am reading Tom's opinion that goes toward not going in this
> > > direction for more commands, with the point to extend EXPLAIN to show
> > > this kind of information:
> > > https://www.postgresql.org/message-id/1692530.1736369...@sss.pgh.pa.us
> >
> > That sounds like the ability to
| bar
(1 row
If the relation on the parent and child constraint match, that
tells us we don't have inheritance.
So, I am thinking we should add another condition for checking
if a foreign key is inherited by checking if the parent constraint
relation is different from the child const
> Wait a second, why do we have these here? Aren't they already in
> \dconfig?
\dconfig is generated by querying pg_settings and this
requires a healthy connection. The parameters being proposed with
\conninfo+ are set in libpq by the server [1] and can be retrieved
even if the connection breaks.
> should cut the link between the parent constraint and the constraint on
> the partition being detached.
Correct, by setting conparentid to 0 in pg_constraint and deleting
the pg_depend record for partition dependency. But in the repro case,
we don't have a dependency as the table the foreign
> > The patch that Amul and I wrote both achieve the same result.
> > The approach that Amul took builds a list of constraint OIDs,
> > which could grow with the number of partitions and foreign keys
> > on those partitions. Maybe not a big deal?
> Nope, not a big deal. It would be a big deal if