> Attached is my counter-proposal, where I have settled down to four
> categories of PlannedStmt:
> - "standard" PlannedStmt, when going through the planner.
> - internally-generated "fake" PlannedStmt.
> - custom cache
> - generic cache
Thanks for the update! I plan on reviewing this tomorrow.
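For readers following the proposal, here is one way the four categories could be spelled as an enum; the type and constant names below are invented for illustration, not taken from the attached patch:

```c
/* Illustration only: one possible encoding of the four categories. */
typedef enum PlannedStmtOrigin
{
	PLAN_STMT_STANDARD,			/* went through the planner */
	PLAN_STMT_INTERNAL,			/* internally generated "fake" PlannedStmt */
	PLAN_STMT_CACHE_CUSTOM,		/* retrieved from a custom cached plan */
	PLAN_STMT_CACHE_GENERIC,	/* retrieved from a generic cached plan */
} PlannedStmtOrigin;
```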
> The current pg_stat_statements change may be a
> good example of employing such infrastructure, isn't it?
So, the current set of patches will help move forward the specific
use case of this thread. If we need something different that will be
useful for more use cases, perhaps it's be
> Perhaps CachedPlanType is
> misnamed, though, would it be more suited to name that as a sort of
> "origin" or "source" field concept? We want to know from which
> source we have retrieved the plan that a PlannedStmt refers to.
Hmm, I’m not sure I see this as an improvement. In my opinion,
Cached
> One option might be to use a local hash table, keyed the same way as the
> shared pgss hash (excluding dbid), to handle cases where a backend has
> more than one active cached plan. Then at ExecutorEnd, the local entry could
> be looked up and passed to pgss_store. Not sure if this is worth the effort.
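To make the idea concrete, here is a rough sketch of such a backend-local table, assuming dynahash and a key shaped like the shared pgss key minus dbid; every name below is invented for illustration and nothing here is from a posted patch:

```c
#include "postgres.h"
#include "utils/hsearch.h"

/* Invented names throughout; a sketch of the idea, not the patch. */
typedef struct LocalPlanKey
{
	Oid			userid;			/* same components as the shared pgss key,
								 * minus dbid.  NB: with HASH_BLOBS the key
								 * is compared byte-wise, so zero any padding
								 * before lookups. */
	uint64		queryid;
	bool		toplevel;
} LocalPlanKey;

typedef struct LocalPlanEntry
{
	LocalPlanKey key;			/* hash key; must be first */
	bool		generic_plan;	/* what ExecutorEnd would pass to pgss_store */
} LocalPlanEntry;

static HTAB *local_plan_hash = NULL;

static void
local_plan_hash_init(void)
{
	HASHCTL		ctl;

	memset(&ctl, 0, sizeof(ctl));	/* conventional: clear unused fields */
	ctl.keysize = sizeof(LocalPlanKey);
	ctl.entrysize = sizeof(LocalPlanEntry);
	local_plan_hash = hash_create("pgss local plan info", 64, &ctl,
								  HASH_ELEM | HASH_BLOBS);
}
```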
> Sami Imseih writes:
> >> That is not to say that I think 719dcf3c4 was a good idea: it looks
> >> rather useless from here. It seems to me that the right place to
> >> accumulate these sorts of stats is in CachedPlanSources, and I don't
> >> see
> Andrei Lepikhov writes:
> > I see you have chosen a variant with a new enum instead of a pointer to
> > a plan cache entry. I wonder if you could write the arguments
> > supporting this choice?
>
> Pointing to a plan cache entry would often mean that the data
> structure as a whole is circular (
> > I know Michael opposed the idea of carrying these structures,
> > at least CachedPlan, to Executor hooks ( or maybe just not QueryDesc?? ).
> > It will be good to see what he thinks, or if others have an opinion
> > about
> > adding a pointer to CachedPlanSource in PlannedStmt vs setting
> Now wondering if it would be better to spend the effort looking at
> pg_hint_plan and seeing how hard it would be to get global hints added
> which are applied to all queries, and then add a way to disable use of
> a named index. (I don't have any experience with that extension other
> than looki
> > with the types of cached plan. We need to be able to differentiate
> > when cached plans are not used, so a simple boolean is not
> > sufficient.
> Sure. But I modestly hope you would add a CachedPlanSource pointer
> solely to the PlannedStmt and restructure it a little as we discussed
> above.
> > It may be more efficient to set the is_generic_plan option at the top
> > plan node (PlannedStmt) and reference it wherever necessary. To identify
> > a cached plan, we may consider pushing the CachedPlan/CachedPlanSource
> > pointer down throughout pg_plan_query and maintaining a reference to
> > This is already an established pattern that has been used by other
> > RDBMSs. Having worked with such an interface in the past, a combo of
> > ALTER and GUC, I never thought it was awkward, and it's quite simple to
> > understand/maintain. But that is subjective.
> >
>
> It's amazing what people are w
> I would like to oppose the current version a little.
>
> Commits de3a2ea3b264 and 1d477a907e63 introduced elements that are more
> closely related to the execution phase. While the parameter
> es_parallel_workers_to_launch could be considered a planning parameter,
> es_parallel_workers_launche
> >> I was imagining putting the array in one big DSA allocation instead of
> >> carting around a pointer for each tranche name. (Sorry, I realize I am
> >> hand-waving over some of the details.)
> >
> > I understood it like this. Here is a sketch:
> >
> > ```
> > dsa_pointer p;
> >
> > dsa = dsa_
> Note: the size of the change in pg_stat_statements--1.12--1.13.sql
> suggests that we should seriously consider splitting these attributes
> into multiple sub-functions.
So we don't lose track of this. This should be a follow-up thread. I do
agree something has to be done about the exploding list
> Yes, I think that this is a much better idea to isolate the whole
> concept and let pgss grab these values. We have lived with such
> additions for monitoring in EState a few times already, see for
> example de3a2ea3b264 and 1d477a907e63 that are tainted with my
> fingerprints.
correct, there i
> > "plan cache mode" to be accessible in ExecutorEnd somehow.
> I agree with this - adding references to CachedPlan into the QueryDesc
> looks like a kludge.
> The most boring aspect of pg_stat_statements for me is the multiple
> statements case: a single incoming query (the only case in the cache of
> a
> it will still be extremely risky in
> heavy production workloads. In short, we're both walking a bull
> through the china shop, but it would seem mine is much more
> temperamental than yours.
Robert, could you describe the GUC you would like to see?
Also, I'd like to ask: what would be the argu
cost_delay
autovacuum_vacuum_cost_limit
autovacuum_vacuum_insert_scale_factor
autovacuum_vacuum_insert_threshold
autovacuum_vacuum_max_threshold
autovacuum_vacuum_scale_factor
autovacuum_vacuum_threshold
--
Sami Imseih
Amazon Web Services (AWS)
> Hmm, I don't propose modifying costs. The focus is on resetting the plan
> cache decision that PostgreSQL has made in automatic mode. During the
> DBMS operation, various factors may cause a generic plan to be
> suboptimal or make it more desirable as well. Discussions from 2010 to
> 2013 indicat
> Unless someone is willing to try and get “The PostgreSQL team’s blessed guide
> to index management”
> into the documentation
I really doubt we can agree on one set of index management guidelines.
If anything, this thread has proven there are many ways to bake this
cake :) and all approaches h
> My concern with #1 is that when we one day do get query hints, we'll
> be left with a bunch of redundant ways to influence planner behaviour.
The GUC is more than just a hint. It can serve as a hint, but it also offers
operational benefits. For example, if I mark an index as invisible and that
> > Hi,
> >
> > If a dshash table is used to store tranche names and IDs, where would the
> > tranche name for this table
> > be registered?
>
> I guess it could be a new BuiltinTrancheId for this dsa but not sure what
> Nathan
> and Sami have in mind.
Yes, it will be a BuiltinTrancheId for a sha
> this for better tracking. By adding a CachedPlanSource::cplan link, we
> can establish a connection to the entire PlanCache entry instead of only
> CachedPlan within a queryDesc and, consequently, make it accessible from
> the executor. This would give an access to statistics on costs and the
> n
> Ugh. This is plugging a completely different layer into an
> executor-related structure, so that looks like an invasive layer
> violation to me. This is passed through ProcessQuery() from a Portal,
> changing ExplainOnePlan while on it. If we want to get access from a
> cached plan, wouldn't
> Another random thought: I worry that the dshash approach might be quite a
> bit slower, and IIUC we just need to map an integer to a string. Maybe we
> should just use a DSA for LWLockTrancheNames. IOW we'd leave it as a char**
> but put it in shared memory.
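As a rough illustration of that layout, the pointer array and the strings could live in a single DSA area so any backend attached to it can resolve an ID; the struct and function names here are invented, not from any posted patch:

```c
#include "postgres.h"
#include "utils/dsa.h"

/* Invented layout: one DSA area holds the array and the name strings. */
typedef struct TrancheNameTable
{
	int			nnames;			/* slots in use */
	dsa_pointer names[FLEXIBLE_ARRAY_MEMBER];	/* each points to a
												 * NUL-terminated string in
												 * the same area */
} TrancheNameTable;

static const char *
lookup_tranche_name(dsa_area *area, TrancheNameTable *tbl, int tranche_id)
{
	int			idx = tranche_id;	/* real code would offset by the first
									 * dynamically assigned tranche ID */

	if (idx < 0 || idx >= tbl->nnames || !DsaPointerIsValid(tbl->names[idx]))
		return NULL;
	return (const char *) dsa_get_address(area, tbl->names[idx]);
}
```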
To use DSA just for this purpose, we
> Ah, I missed the problem with postmaster. Could we have the first backend
> that needs to access the table be responsible for creating it and
> populating it with the built-in/requested-at-startup entries?
We can certainly maintain a flag in the shared state that is set once
the first backend l
>> Attached is a proof of concept that does not alter the
>> LWLockRegisterTranche API.
> IMHO we should consider modifying the API, because right now you have to
> call LWLockRegisterTranche() in each backend. Why not accept the name as
> an argument for LWLockNewTrancheId() and only require it t
> > and instead reuse the existing static hash table, which is
> > capped at 128 custom wait events:
> >
> > ```
> > #define WAIT_EVENT_CUSTOM_HASH_MAX_SIZE 128
> > ```
>
> That's probably still high enough, thoughts?
I have no reason to believe that this number could be too low.
I am not aware of
Thanks for the feedback!
> > Attached is a proof of concept that does not alter the
> > LWLockRegisterTranche API. Instead, it detects when a registration is
> > performed by a normal backend and stores the tranche name in shared memory,
> > using a dshash keyed by tranche ID. Tranche name lookup
www.postgresql.org/message-id/aEiTzmndOVPmA6Mm%40nathan
--
Sami Imseih
Amazon Web Services (AWS)
0001-Improve-LWLock-tranche-name-visibility-across-backen.patch
Description: Binary data
>> Perhaps we should think about removing this description, what do you think?
> I think it's a good topic for another patch/thread. Chances are it's not
> the only description that could be updated.
Sure, that could be taken up after this patch.
> That's what I mean. I think it should say
>
>
> Sure, I am not against keeping the function in an existing section, but
> we should remove the description mentioned above for clarity.
to be clear, I am suggesting we just remove the second sentence
in the description. Therefore, instead of:
"The functions shown in Table 9.84 provide server tr
> On Mon, Jun 30, 2025 at 04:08:04PM +0300, Sami Imseih wrote:
> > Correct, I originally proposed the "Transaction ID and Snapshot
> > Information Functions" section, but as stated in [0],
> > pg_get_multixact_members does not fit the description of the section as
On Mon, Jun 30, 2025 at 7:29 AM Ashutosh Bapat
wrote:
>
> On Sat, Jun 28, 2025 at 4:02 AM Nathan Bossart
> wrote:
> >
> > On Thu, Jun 12, 2025 at 03:10:56PM -0500, Nathan Bossart wrote:
> > > Barring comments/objections, I'll plan on committing/back-patching this in
> > > the near future.
> >
>
rebased patch.
Regards,
Sami
v6-0001-Normalize-variable-fetch-sizes-in-a-FETCH-command.patch
Description: Binary data
rebased patch.
Only changes to the tests due to the revert of nested query
tracking in f85f6ab051b
Regards,
Sami
v10-0001-Add-plan_cache-counters-to-pg_stat_statements.patch
Description: Binary data
> Okay, I've done this in the attached patch.
Thanks! v7 LGTM.
--
Sami
On Thu, Jun 12, 2025 at 11:32 AM Álvaro Herrera wrote:
>
> Hello,
>
> I have pushed that now,
thanks!
> and here's a rebase of patch 0003 to add support
> for PARAM_EXTERN. I'm not really sure about this one yet ...
see v11. I added a missing test to show how external param
normalization behav
> Attached patch removing extra comma. Otherwise LGTM.
Thanks! For v4, the final comma in the list is grammatically correct,
and it’s the style we use throughout the docs.
I marked the patch RFC.
--
Sami
I tested v6 and I think GetNamedDSA is a good addition. I did
not find any issues with the code. However, I am still convinced
that GetNamedDSMHash should not append " Hash" to the tranche
name of the dshash [0]. I am ok with " DSA" because the DSA tranche
is created implicitly by the API.
Also,
> It's not clear, without some effort, which lock modes go with which row locks
> in that description.
> For example, "keysh" does not have "for" in it but "fornokeyupd" does. How
> about rephrasing this
> as "The lock modes "keysh", "sh", "fornokeyupd", and "forupd" correspond to
> FOR KEY SHARE, FOR SHARE, FOR NO KEY UPDATE, and FOR UPDATE respectively"?
> IMO, having this GUC to force the use of invisible indexes is quite
> strange. In my view, it detracts from the guarantees that you're meant
> to get from disabling indexes. What if some connection has
> use_invisible_index set to true? The DBA might assume all is well
> after having seen nobody
> So, if we were adding named LWLocks today, I suspect we might do it
> differently. The first thing that comes to mind is that we could store a
> shared LWLockTrancheNames table.
+1
> and stop requiring each backend to register them individually.
which will prevent odd behavior when a backend
There is also the issue that dynamic tranche names are stored in a
backend-local look-up table, so if you have some backends that attached
some dynamic hash table
and others that did not, only the ones that registered would be able to
resolve the tranche ID to its name.
This is the case which I encountered ye
> I'm not quite following your uneasiness with the tranche names. For the
> dshash table, we'll need a tranche for the DSA and one for the hash table,
> so presumably any wait events for those locks should be named accordingly,
> right?
I may be alone in this opinion, but I prefer the suffixless
> It is not expected behavior IMO, and I still need to debug this a bit more,
> but it may be something outside the scope of this patch that the patch just
> surfaced.
It seems I got it backward. If the tranche is registered, then the wait event
name is that of the tranche; in that case we will
Thanks, I tested v2 a bit more and did a quick hack of pg_stat_statements just
to get a feel for what it would take to use the new API to move the hash table
from static to dynamic.
One thing I noticed, and I’m not too fond of, is that the wait_event name shows
up with the _dsh suffix:
```
postgres=# select query, wait_event, wait_event_type from pg
> I've spent a bunch of time looking at this series and here's my take on
> the second one.
Thanks!
> I realized that the whole in_expr production in gram.y is pointless, and
> the whole private struct that was added was unnecessary. A much simpler
> solution is to remove in_expr, expand its use
> In that type of environment, the GUC-only method enables you to
> control changes at very precise levels, so you can do things like:
> - run it ad-hoc at the session level to confirm that the explain plans
> you get in production match your expectations.
> - you can stay ad-hoc at the session lev
> Don't get me wrong, it would be an improvement to have some type of
> mechanism that can move you from almost 100% to 100%, but the real
> problem is how do you SAFELY get to almost 100% in the first place?
This big use case is precisely the "almost 100% to 100%" confidence problem.
Usually, us
derstand,
it works only at the EXPLAIN level. It is purely an experimentation tool.
However, the proposed GUC can also be used in more places,
including, pg_hint_plan ( at least with the SET hint without any changes
to pg_hint_plan).
> > P.S. I really do want to thank Shayon for sticking with this;
> +1
+1
--
Sami Imseih
Amazon Web Services (AWS)
Thanks for the review!
> +1. PFA diff of some edits. Please incorporate them in
> your patch if you find them correct.
sure, the diff looks fine to me. will fix.
> We developers may understand the mode text "sh", "keysh" etc. But it may not
> be user friendly.
yes, I can see that being a point
> Thanks for your work on this.
>
> Since we've changed the placement of these parameters
> between parallel_workers and stats_since, we should also update
> the documentation to reflect their new location.
Absolutely. My miss. Here is v9 with the doc updated.
Thanks!
--
Sami
v9-0001-Add-plan_
h_tranche_name);
3/ It will be good to "Assert(dsh)" before "return dsh;" for safety?
```
MemoryContextSwitchTo(oldcontext);
LWLockRelease(DSMRegistryLock);
return dsh;
}
```
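If adopted, the suggested assertion would land just before the return, i.e. (same fragment as above, with only the Assert added):

```c
MemoryContextSwitchTo(oldcontext);
LWLockRelease(DSMRegistryLock);
Assert(dsh);					/* the suggested safety check */
return dsh;
}
```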
--
Sami Imseih
Amazon Web Services (AWS)
> + * If a direction_keyword (i.e., FETCH FORWARD) is used, set this field
> + * to distinguish it from its numeric counterpart (i.e., FETCH 1). This
> + * value is set only within gram.y.
>
> One nitpick comment here is that I would have mentioned that this
> matters for query jumbling.
Done
I realized this thread did not have a CF entry,
so here it is https://commitfest.postgresql.org/patch/5801/
--
Sami
s-postgresql-multixact-member-exhaustion-incidents-may-2025
--
Sami Imseih
Amazon Web Services (AWS)
v2-0001-Document-pg_get_multixact_members.patch
Description: Binary data
ging (or vice versa),
but there may be, and this will be more flexible.
--
Sami Imseih
> Hmm. I was not sure if we'd really need to get down to that, as most
> of the grammar keywords have the same parsed meaning, but there's a
> good point with LAST for example that uses a negative value for
> howMany. If we silence the number, LAST would map with everything
> else that has FETCH_
y I suggest we keep its current behavior, which is to control
logging for both autoanalyze and autovacuum, and instead introduce only one
new GUC, log_autovacuum_analyze_min_duration, which defaults to -1? For
workloads that require different logging for autoanalyze, this new setting
can be enabled.
--
Sami Imseih
Amazon Web Services (AWS)
ues for
n->explicit_direction are nearby each other. It does not change
the behavior.
Also, modified the tests to match and some additional code comments.
--
Sami Imseih
Amazon Web Services (AWS)
v3-0001-Normalize-variable-fetch-sizes-in-a-FETCH-command.patch
Description: Binary data
> Should we offer something more consistent with DeallocateStmt, where
> we have a boolean flag that would be set when ALL is specified,
> included in the jumbling? This would mean two separate entries: one
> for the constants and one for ALL.
Hmm, we could do that to differentiate the keyword ALL.
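For illustration, the DeallocateStmt-style approach could look roughly like the following; the type and field names are invented and FetchStmt's real layout differs:

```c
#include "postgres.h"
#include "nodes/parsenodes.h"	/* NodeTag, FetchDirection */

/*
 * Sketch only: the numeric count is normalized away for the query jumble,
 * while the ALL flag stays part of it, so FETCH ALL gets its own
 * pg_stat_statements entry and all numeric counts share another.
 */
typedef struct FetchStmtSketch
{
	NodeTag		type;
	FetchDirection direction;	/* FETCH_FORWARD, FETCH_BACKWARD, ... */
	long		howMany;		/* would be marked query_jumble_ignore */
	bool		fetch_all;		/* true when ALL was specified; jumbled */
} FetchStmtSketch;
```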
, but that could still be taken up after.
See v8 with the field names reorganized.
--
Sami Imseih
Amazon Web Services (AWS)
v8-0001-Add-plan_cache-counters-to-pg_stat_statements.patch
Description: Binary data
--
Sami Imseih
Amazon Web Services (AWS)
On Mon, Jun 2, 2025 at 3:03 PM Nathan Bossart
wrote:
> On Mon, Jun 02, 2025 at 12:46:51PM -0500, Sami Imseih wrote:
> > v1-0001 is the documentation only patch. I improved upon the description
> > suggested in [0]
>
> Your patch
d tests in pg_stat_statements utility.sql to demonstrate
how queryIds
are grouped for the variants of the FETCH statement.
--
Sami Imseih
Amazon Web Services (AWS)
v2-0001-Normalize-variable-fetch-sizes-in-a-FETCH-command.patch
Description: Binary data
```
);
begin;
select from t for update ;
savepoint s1;
update t set v = v;
select pg_get_multixact_members(a.relminmxid), a.relminmxid from
(select relminmxid from pg_class where relname = 't') a;
commit;
```
Thoughts?
--
Sami Imseih
Amazon Web Services (AWS)
[0] https://www.postgresql.or
c/custom plan counters. I prefer the former since we do it
once in a major version
and do not have to worry about it once new counters are added.
It just feels odd that they sit in between the counters as they have a
high level purpose.
Thanks!
Sami Imseih
Amazon Web Services (AWS)
v7-0001-Add-plan_cache-counters-to-pg_stat_statements.patch
Description: Binary data
Thanks!
> blog posts that recommend it. In any case, I can't think of a reason it
> ought to remain undocumented.
I agree, especially with blogs referencing it.
> Want to put together a patch?
Yes, will do
--
Sami
--
Sami Imseih
Amazon Web Services (AWS)
ested
for "-- Unique query IDs with parameter numbers switched." and what is tested
for "-- Two groups of two queries with the same query ID."
I also added a comment for
```
bool extern_param;
```
v8 addresses the above.
--
Sami Imseih
Amazon Web Services (AWS)
v8-0003-Support-Squashing-of-External-Parameters.patch
Description: Binary data
v8-0002-Fix-Normalization-for-squashed-query-texts.patch
Description: Binary data
v8-0001-Enhanced-query-jumbling-squashing-tests.patch
Description: Binary data
to have a
pg_stat_statements.track_cursor_utility GUC that can toggle the
reporting of utility statements related to explicit cursors, while
still tracking the
underlying statements. I would even suggest making 'on' the default to
maintain the current behavior.
I don’t like that we have to introduce a new GUC for this,
but I can't think of a better alternative.
Thoughts?
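As a sketch of what registering such a GUC might look like; the GUC name comes from the suggestion above, while the backing variable, description, and context are assumptions:

```c
#include "postgres.h"
#include "utils/guc.h"

/* Hypothetical backing variable; defaults to on to keep current behavior. */
static bool pgss_track_cursor_utility = true;

static void
define_track_cursor_utility_guc(void)
{
	DefineCustomBoolVariable("pg_stat_statements.track_cursor_utility",
							 "Selects whether cursor-related utility commands "
							 "are tracked by pg_stat_statements.",
							 NULL,
							 &pgss_track_cursor_utility,
							 true,
							 PGC_SUSET,
							 0,
							 NULL, NULL, NULL);
}
```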
--
Sami Imseih
Amazon Web Services (AWS)
BuildCachedPlan code and is not related
to the part that made CachedPlan available to QueryDesc.
See v6 for the rebase of the patch and addition of testing for EXPLAIN
and EXPLAIN ANALYZE which was missing from v5.
[0] https://www.postgresql.org/message-id/605328.1747710...@sss.pgh.pa.us
--
Sami
```
a reused generic plan? */
```
which we had to account for up until v5. So this simplifies the
tracking a bit more as the only states to track are "generic plan"
or "custom plan".
--
Sami Imseih
Amazon Web Services (AWS)
>> * 0001:
> You have mentioned the addition of tests, but v6-0001 includes nothing
> of the kind. Am I missing something? How much coverage did you
> intend to add here? These seem to be included in squashing.sql in
> patch v6-0002, but IMO this should be moved somewhere else to work
> with th
> I've spent a bit of time looking at this, and I want to
> propose the following patchset.
Sorry about this, but I missed adding a comment in one of the
test cases for 0004 that describes the behavior of parameters
and constants that live outside of the squashed list.
The following 2 cases will
> > therefore, a user supplied query like this:
> > ```
> > select where $5 in ($1, $2, $3) and $6 = $4 and 1 = 2
> > ```
> >
> > will be normalized to:
> > ```
> > select where $1 in ($2 /*...*/) and $3 = $4 and $5 = $6
> > ```
>
> Hmm, interesting.
>
> I think this renumbering should not be a problem.
> If you subtract recently dead from that number within the heap
> implementation, then it will no longer
> reflect non-removable tuples and the log message in the cluster
> function "found %.0f removable, %.0f nonremovable row versions" will no
> longer be correct.
Yes, that's correct. I did no
> In v17, we are a bit smarter with the numbering, with a normalization
> giving the following, starting at $1:
> select where $5 in ($1, $2, $3) and $6 = $4
>
> So your argument about the $n parameters is kind of true, but I think
> the numbering logic in v17 to start at $1 is a less-confusing result.
> On Fri, May 23, 2025 at 04:29:45PM +0200, Dmitry Dolgov wrote:
> > I think it's better to recursively call IsSquashableConst on the nested
> > expression (arg or args for FuncExpr). Something like that was done in
> > the original patch version and was considered too much at that time, but
> > si
> > On Thu, May 22, 2025 at 10:23:31PM GMT, Sami Imseih wrote:
> > > This does not get squashed:
> > > Q: select where 2 in (1, 4) and
> > > 1 in (5, cast(7 as int), 6, (cast(8 as int)), 9, 10, (cast(8 as
> > > text))::int);
> > > R: select where
> For example with bind queries like that:
> select where $1 in ($3, $2) and 1 in ($4, cast($5 as int))
> \bind 0 1 2 3 4
>
> Should we have a bit more coverage, where we use multiple IN and/or
> ARRAY lists with constants and/or external parameters?
I will add more test coverage. All the tests we
s the value set in
> pg_class.reltuples )
> after we process all the tuples, which looks like the best fix to me.
something like the attached.
--
Sami Imseih
Amazon Web Services (AWS)
v1-0001-Correct-reltuples-count-after-a-VACUUM-CLUSTER-op.patch
Description: Binary data
> IMO adding a struct as suggested is okay, especially if it reduces the
> overall code complexity. But we don't want a node, just a bare struct.
> Adding a node would be more troublesome.
In v4, a new private struct is added in gram.y, but we are also adding
additional fields to track the expres
acting
tups_recently_dead from num_tuples ( which is the value set in
pg_class.reltuples )
after we process all the tuples, which looks like the best fix to me.
--
Sami Imseih
Amazon Web Services (AWS)
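If I read the proposal right, the arithmetic is roughly the following; this is a sketch reusing the counter names from the thread, with the surrounding CLUSTER/VACUUM FULL copy loop assumed, not shown:

```c
/*
 * Sketch only: recently-dead tuples are copied (they are non-removable),
 * but the proposal is to exclude them from the count that ends up in
 * pg_class.reltuples once all tuples have been processed.
 */
num_tuples -= tups_recently_dead;
```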
ance_worker' to something like 8. If it
so happens that all
3 tables are auto-vacuumed at the same time, there may not be enough
parallel workers,
so one table will be a loser and be vacuumed in serial. That is
acceptable, and a/v logging
( and perhaps other stat views ) should display this behavior: workers
planned vs workers launched.
thoughts?
--
Sami Imseih
Amazon Web Services (AWS)
select reltuples::int from pg_class where relname = 't';
select n_dead_tup, n_live_tup from pg_stat_all_tables where relname = 't';
reltuples
---
99
(1 row)
postgres=# select n_dead_tup, n_live_tup from pg_stat_all_tables where
relname = 't';
n_dead_tup | n_
is the lack of
visibility into multixact members.
[0]
https://www.postgresql.org/message-id/flat/CALdSSPi3Gh08NtcCn44uVeUAYGOT74sU6uei_06qUTa5rMK43g%40mail.gmail.com#bfd9ae766ef42f7599258183aa8ddb3b
--
Sami Imseih
Amazon Web Services (AWS)
> > > At the same time AFAICT there aren't many more code paths
> > > to worry about in case of a LocationExpr as a node
> >
> > I can imagine there are others like value expressions,
> > row expressions, json array expressions, etc. that we may
> > want to also normalize.
> Exactly. When using a n
ion ParseLoc in
A_ArrayExpr and A_Expr. Doing it this way will keep changes to the parse_expr.c
code to a minimum; only the IN transformation will need to set the values
of the A_Expr into the final A_ArrayExpr.
--
Sami Imseih
Amazon Web Services (AWS)
diff --git a/src/backend/parser/gram.y b/src/backend
```
likely(jumble_len >= JUMBLE_SIZE))
{
-	uint64		start_hash;
+	int64		start_hash;

-	start_hash = DatumGetUInt64(hash_any_extended(jumble,
-												  JUMBLE_SIZE, 0));
+	start_hash = DatumGetInt64(hash_any_extended(jumble,
+												 JUMBLE_SIZE, 0));
	memcpy(jumble, &start_hash, sizeof(start_hash));
	jumble_len = sizeof(start_hash);
```
--
Sami Imseih
Amazon Web Services (AWS)
Tested the patch and it looks good to me.
Not that I thought it would fail, but I also confirmed the pgaudit case
works as expected.
```
LOG: AUDIT: SESSION,10,2,DDL,CREATE TABLE,,,"CREATE TABLE do_table
(""weird name""
INT)",
LOG: AUDIT: SESSION,10,3,DDL,DROP TABLE,,,DROP table do_table,
DO
```
ows)
I am still not sure why this is the case, but wanted to share this
for now.
--
Sami Imseih
Amazon Web Services (AWS)
FWIW, all the hash function SQL interfaces, \df hash*,
have this behavior in which the result is a signed (int/bigint),
but the internal representation of the hash is an unsigned (int/bigint).
I am not sure why a comment is needed specifically for pg_stat_statements.
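Since this comes up repeatedly, here is a toy, standalone C program showing the reinterpretation in question; the hash value is made up:

```c
#include <stdint.h>
#include <stdio.h>

/* The same 64-bit bit pattern, viewed unsigned (internal representation)
 * and signed (what the SQL-level interfaces report). */
int
main(void)
{
	uint64_t	hash = 0xdeadbeefcafebabeULL;	/* made-up hash value */
	int64_t		exposed = (int64_t) hash;		/* SQL-level view */

	printf("internal (unsigned): %llu\n", (unsigned long long) hash);
	printf("exposed  (signed):   %lld\n", (long long) exposed);
	return 0;
}
```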
--
Sami Imseih
Amazon Web Services (AWS)