Thanks for the comments!
On Tue, Apr 15, 2025 at 3:11 AM Andres Freund wrote:
>
> Hi,
>
> On 2025-04-14 09:58:19 -0700, James Hunter wrote:
> > I see two orthogonal problems, in processing Bitmap Heap pages in
> > parallel: (1) we need to prefetch enough pages, far e
On Thu, Apr 10, 2025 at 8:15 PM Thomas Munro wrote:
>
> On Fri, Apr 11, 2025 at 5:50 AM James Hunter
> wrote:
> > I am looking at the pre-streaming code, in PG 17, as I am not familiar
> > with the PG 18 "streaming" code. Back in PG 17, nodeBitmapHeapscan.
On Wed, Apr 9, 2025 at 11:00 PM Thomas Munro wrote:
>
> On Wed, Apr 9, 2025 at 1:46 PM James Hunter wrote:
> > On Mon, Apr 7, 2025 at 7:34 PM Thomas Munro wrote:
> > > On Thu, Feb 13, 2025 at 1:40 PM Melanie Plageman
> > > wrote:
> > > > Thomas menti
er request the next batch of blocks,
when the current block is used up.
Then you can separate "reading" from "prefetching": whoever reads the
last block in the current batch, prefetches the next batch.
This way, you could preserve your existing "reading" logic, and you
wouldn't need to create several new, related queues.
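A minimal sketch of that "last reader prefetches the next batch" idea, in
case it helps make the suggestion concrete (all names here are illustrative,
not the actual bitmap-heap-scan API):

    #define BATCH_SIZE 32

    typedef struct
    {
        unsigned    blocks[BATCH_SIZE];
        int         nblocks;        /* how many entries are valid */
        int         next;           /* next entry to hand out */
    } BlockBatch;

    /* stand-ins for "pull more blocks from the bitmap" and "issue a prefetch" */
    extern int  fill_next_batch(unsigned *blocks, int max);
    extern void prefetch_block(unsigned blkno);

    /*
     * Hand out the next block to read.  Whoever consumes the last block of
     * the current batch refills it and prefetches the new batch, so "reading"
     * stays separate from "prefetching" and no extra queues are needed.
     */
    static unsigned
    next_block_to_read(BlockBatch *cur)
    {
        unsigned    blkno = cur->blocks[cur->next++];

        if (cur->next == cur->nblocks)
        {
            cur->nblocks = fill_next_batch(cur->blocks, BATCH_SIZE);
            cur->next = 0;
            for (int i = 0; i < cur->nblocks; i++)
                prefetch_block(cur->blocks[i]);
        }
        return blkno;
    }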
James
has
stack memory but not its own heap.)
2. For PQ, in particular, how does the cost of serializing +
deserializing the query itself compare to the cost of fork()ing the
process?
James
(sequence) variable both before and
*after* touching the data it protects.
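For reference, a conceptual C11 sketch of the seqlock pattern from [1]
(illustrative only; a real backend implementation would use its own atomics
and barrier primitives):

    #include <stdatomic.h>

    #define NWORDS 4

    typedef struct
    {
        atomic_uint seq;            /* even = stable, odd = writer active */
        atomic_uint data[NWORDS];   /* the protected payload */
    } SeqLocked;

    /* Single writer: make seq odd, update the payload, make seq even again. */
    static void
    seqlocked_write(SeqLocked *s, const unsigned newval[NWORDS])
    {
        unsigned    v = atomic_load_explicit(&s->seq, memory_order_relaxed);

        atomic_store_explicit(&s->seq, v + 1, memory_order_relaxed);
        atomic_thread_fence(memory_order_release);
        for (int i = 0; i < NWORDS; i++)
            atomic_store_explicit(&s->data[i], newval[i], memory_order_relaxed);
        atomic_store_explicit(&s->seq, v + 2, memory_order_release);
    }

    /*
     * Reader: sample seq *before* and *after* copying the payload, and retry
     * if a writer was active (seq odd) or the sequence changed underneath us.
     */
    static void
    seqlocked_read(SeqLocked *s, unsigned out[NWORDS])
    {
        unsigned    before, after;

        do
        {
            before = atomic_load_explicit(&s->seq, memory_order_acquire);
            for (int i = 0; i < NWORDS; i++)
                out[i] = atomic_load_explicit(&s->data[i], memory_order_relaxed);
            atomic_thread_fence(memory_order_acquire);
            after = atomic_load_explicit(&s->seq, memory_order_relaxed);
        } while ((before & 1) != 0 || before != after);
    }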
James
[1] https://en.wikipedia.org/wiki/Seqlock
On Tue, Nov 19, 2024 at 2:16 PM James Coleman wrote:
>
> Hello,
>
> A colleague noticed today that the docs still say that "Scans of
> common table expressions (CTEs)" are "always parallel restricted".
>
> While I think that strictly remains true at
On Wed, Mar 5, 2025 at 12:16 PM Jeff Davis wrote:
>
> On Wed, 2025-03-05 at 11:33 -0800, James Hunter wrote:
> > For a bitfield, however, the CPU has to read from or write to the
> > byte
> > that contains the bit, but then it also has to mask out the *other*
> > bi
be a byte, and
memory is byte-addressable.
For a bitfield, however, the CPU has to read from or write to the byte
that contains the bit, but then it also has to mask out the *other*
bits in that bitfield. This is a data dependency, so it stalls the CPU
pipeline.
So Booleans tend to be faster than bitfields, because they avoid a
pipeline stall.
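To make that concrete, here is a toy illustration (hypothetical structs, not
anything in the tree): setting the plain bool is a simple one-byte store,
while setting the bit-field member forces a load, mask, and store of the
word that contains it.

    #include <stdbool.h>

    typedef struct
    {
        bool    is_sorted;      /* one byte each; a plain store, no masking */
        bool    is_hashed;
    } FlagsAsBools;

    typedef struct
    {
        unsigned int is_sorted:1;   /* packed; a write is load + mask + store */
        unsigned int is_hashed:1;
    } FlagsAsBits;

    void
    set_sorted(FlagsAsBools *b, FlagsAsBits *f)
    {
        b->is_sorted = true;    /* independent store */
        f->is_sorted = 1;       /* read-modify-write of the shared word */
    }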
James Hunter
runtime, that the planner's
assumptions were so wrong that they'll lead us to execute a
sub-optimal plan, then maybe we can re-plan. But I explicitly wouldn't
re-plan a prepared statement, since the customer's expectation is that
it has already been prepared.
James Hunter
[1]
https://www.postgresql.org/message-id/flat/CAJVSvF5n3_uEGW5GZSRehDuTfz7XVDohbn7tVJ%2B2ZnweQEVFrQ%40mail.gmail.com#abc6e69a396bb9f6505bf33260670a1f
I want to store a 4x 4-byte struct in some sort of list /
expandable array. From a human-readability point of view, it's awkward
to split the struct into 4x IntLists; from a CPU point of view, it's
awkward to take up an 8-byte pointer to point to a 16-byte struct,
allocated on the heap.
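As a sketch of what I mean by "some sort of expandable array" (plain C with
realloc here purely for illustration; in the backend this would of course be
palloc/repalloc or an existing utility, and error handling is omitted):

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct
    {
        int32_t a, b, c, d;     /* the 4x 4-byte payload */
    } FourInts;

    typedef struct
    {
        FourInts   *items;      /* elements stored by value: no per-item pointer */
        int         nitems;
        int         capacity;
    } FourIntsArray;

    static void
    four_ints_append(FourIntsArray *arr, FourInts value)
    {
        if (arr->nitems >= arr->capacity)
        {
            arr->capacity = (arr->capacity == 0) ? 8 : arr->capacity * 2;
            arr->items = realloc(arr->items, arr->capacity * sizeof(FourInts));
        }
        arr->items[arr->nitems++] = value;  /* one 16-byte copy, contiguous in memory */
    }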
James
nk the patches would
be as short as you expect. For example, if Agg nodes behaved as in
your example, quoted at the top of this email, then we wouldn't need
Patch 2's additional logic to assign workmem_limit to Aggs (and we
wouldn't need the corresponding logic in Patch 3, to assign workmem
(estimate) to Aggs, either).
But Aggs aren't as simple as in your example -- they have Hash limits
and Sort limits; they have a side-chain of Agg nodes; they have input
sets they need to Sort; etc. And so we need a couple dozen lines of
code to handle them.
Thanks for the feedback,
James Hunter
at
least neutral). Flag this individual unit test, send it to Robert, and
development continues.
Given enough unit tests, along with reasonable pre-commit CI testing,
the probability of two commits causing regressions / diffs in the same
unit test, over a 1-week period (let's say), is very small.
So: +1 to your proposal.
James
h "pgindent" run on it
first. So the patch / diff is much smaller now.
James
0001-PATCH-Optimize-SP-GiST-text-leaf-comparisons-with-me.patch
Description: Binary data
On Tue, Feb 25, 2025 at 9:39 AM Tomas Vondra wrote:
>
> On 2/25/25 17:30, James Hunter wrote:
> > On Wed, Feb 19, 2025 at 12:22 PM Tomas Vondra wrote:
> > -- OK, but the customer *didn't* set their workmem to 32 MB. (If they
> > had, we wouldn't need this pa
t [1] (should be done by end of day today) I
will deal with the case discussed above by:
1. Doubling Plan.workmem_limit whenever we halve nbatches (so we track
the "workmem" needed by the hash table);
2. Displaying Plan.workmem_limit + Hash.nbatches * (2 * BLCKSIZE),
inside EXPLA
On Mon, Feb 24, 2025 at 6:54 PM Jeff Davis wrote:
>
> On Mon, 2025-02-24 at 12:46 -0800, James Hunter wrote:
> > Attached please find the patch set I mentioned, above, in [1]. It
> > consists of 4 patches that serve as the building blocks for and a
> > prototype of th
"workmem_limit" limit.
> Can you write a useful extension with just the above two core patches?
I think so; I will attach a patch for that as well. (This will be
"Patch 4"; note that "Patch 2" is a prerequisite for "Patch 3".)
> Regards,
> Jeff Davis
Thanks,
James Hunter
methods might want to issue batch reads. However, the "inner"
access method might not be aware of the "outer" access method.
For simplicity, then, I just completed the outer batch. Note that this
is not optimal for performance (because a nested batch ends up
stalling the outer batch), but it does keep the code simple...
James
On Tue, Feb 11, 2025 at 10:00 AM Jeff Davis wrote:
>
> On Mon, 2025-02-10 at 19:09 -0800, James Hunter wrote:
> > I think it makes sense to split the work into two parts: one part
> > that
> > improves SQL execution, and a second part that improves the
> >
itself because we don't want to pass explicit I/O
> contexts through the layers), but code that doesn't call those and
> reaches AsyncReadBuffer() or whatever gets an implicit batch of size
> one and that's also OK. Not sure what semantics nesting would have
> but I doubt it matters much.
I like this idea. If we want to submit a batch, then just submit a batch.
James
challenging task for the DBA...” This is
an impossible task for the cloud provider!
Thanks,
James
[1] https://www.vldb.org/conf/2002/S29P03.pdf
[2]
https://docs.aws.amazon.com/ebs/latest/userguide/ebs-io-characteristics.html#ebs-io-size-throughput-limits
On Mon, Feb 10, 2025 at 7:09 PM James
On Fri, Jan 24, 2025 at 5:48 PM Jeff Davis wrote:
>
> On Fri, 2025-01-24 at 17:04 -0800, James Hunter wrote:
> > Generating "high memory" vs. "low memory" paths would be tricky,
> > because the definition of "high" vs. "low" depends on th
|, etc. (In other words, I suspect that considering absolute
cardinalities will end up easier/cleaner than considering ratios of
increases/decreases in cardinalities.) But I have not thought about
this much, so I am not putting too much weight on my suspicions.
James
he join order won't
matter.
4. Continue joining tables in this fixed (arbitrary) order, unless we
come to a Dk where the optimizer thinks joining to Dk will generate a
significant number of rows.
5. Either we join all tables in order (fast compilation!); or we hit
the case in (4), so we just
there's no collision, but worse in
the less-likely case where there is a collision?
What I mean is, regarding benchmarks: what's the best case scenario
for this kind of patch, and what sort of performance difference would
you expect to see?
Thanks,
James
vel is something that would be a reasonable solution
to this problem, or if folks think there's a better way to solve the
problem.
Regards,
James Coleman
On Tue, Jan 21, 2025 at 1:26 PM Jeff Davis wrote:
>
> On Fri, 2025-01-10 at 10:00 -0800, James Hunter wrote:
> > How should “query_work_mem” work? Let’s start with an example:
> > suppose
> > we have an OLAP query that has 2 Hash Joins, and no other operators
> > tha
On Wed, Jan 22, 2025 at 1:13 PM Tomas Vondra wrote:
>
> On 1/10/25 19:00, James Hunter wrote:
> > ...
> > I wouldn’t change the existing planning logic (at least not in the
> > initial implementaton). So the existing planning logic would choose
> > between differe
> > the result expected by the caller of bringetbitmap(), and we know that
> > based on MaxBlockNumber we'll never run out of bits.
>
> That should be simple enough. Are you planning to send a proposal of
> patch?
Attached the proposed one-line fix.
James
0001-Fix-integer
ors in the query,
all operators would be assigned at least work_mem, which would make my
proposal a Pareto improvement.
Last, at runtime, each PlanState would check its plan -> work_mem
field, rather than the global work_mem GUC. Execution would otherwise
be the same as today.
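Concretely, the executor-side check could look something like this sketch
(the per-node work_mem field is the proposed addition; it doesn't exist in
core today, and everything else is current behavior):

    /* Sketch only: Plan.work_mem is the proposed new field, assigned at plan time. */
    static int
    get_node_work_mem(PlanState *ps)
    {
        if (ps->plan->work_mem > 0)
            return ps->plan->work_mem;  /* per-operator limit, in kB */

        return work_mem;                /* fall back to today's global GUC */
    }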
What do you think?
iant Hash Join, for example,
and the hash table is just going to continue to grow...
Before we can solve the problem you describe, we need to be able to
limit the work_mem consumption by an in-progress query.
James
tay within 50 MB of RAM breaks down into splitting that 50 MB into
per-operator "work_mem" limits, which is (as you point out!) at least
an order of magnitude easier than a general workload management
solution. Once we have per-operator "work_mem" limits, existing
PostgreSQL logic takes care of the rest.
Thanks,
James
s either:
* Buy more resources; or
* Reduce your workload.
We can't help with either of those solutions!
However, the (short-term) problem I'd like to solve is: how do we
expose efficient use of resources, in those cases where we *do* have
enough resources to run a workload efficiently, but the existing
"work_mem" and "hash_mem_multiplier" GUCs are insufficient?
Thanks,
James
On Mon, Dec 30, 2024 at 3:12 PM David Rowley wrote:
>
> On Sat, 28 Dec 2024 at 08:14, James Hunter wrote:
> > 2. We use this backend_work_mem to "adjust" work_mem values used by
> > the executor. (I don't care about the optimizer right now -- optimizer
> >
ction-level limit, times "max_connections", yields a cluster-level limit.
Now, which is easier for customers to understand -- that's up for debate!
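(Illustrative arithmetic only: with a 64 MB connection-level limit and
max_connections = 100, the implied cluster-level ceiling is 64 MB * 100 =
6.4 GB; raise max_connections and the ceiling scales with it.)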
James
nce_work_mem_target / (N+2)"
memory...)
[1]
https://www.postgresql.org/message-id/4806d917-c019-49c7-9182-1203129cd295%40vondra.me
[2]
https://www.postgresql.org/message-id/CAJVSvF6i_1Em6VPZ9po5wyTubGwifvfNFLrOYrdgT-e1GmR5Fw%40mail.gmail.com
Thanks,
James
goes to 0.
I think you're right. A per-backend limit is nice because it can be
distributed to individual operators as per-operator working memory,
because operators will just spill. But killing a query based on an
approximation of how much memory we have seems like an overreaction.
Thanks,
James
On Mon, Dec 30, 2024 at 2:56 PM David Rowley wrote:
>
> On Tue, 31 Dec 2024 at 10:11, James Hunter wrote:
> > Does PostgreSQL currently rescan Hash Joins when they are "no longer
> > needed," to free work_mem early? If so, then I would try to reuse this
> > ex
enerate and execute a third
query, etc.?) For example, the first query might call a function that
starts a new portal and executes a second query, and so on. Is this
what you're thinking about? If so, I would model this pattern as each
level of recursion taking up, logically, a "new connection."
Thanks,
James
timizer will allow us to
distribute the total backend_work_mem to individual execution nodes,
with the goal of minimizing spilling, without exceeding the
backend_work_mem limit.
Anyway, I revived this thread to see if there's interest in this sort
of strategy --
Thanks,
James
On Mon, Dec 16, 2024 at 6:17 PM Peter Smith wrote:
>
> While revisiting some old threads, I found this one that seemed to
> reach a conclusion, but then it seemed nothing happened.
>
> After multiple suggestions AFAICT James preferred the docs [1]
> modification sugges
er of pages per range assigned to brin can be 1, still..
Good point -- so the fix should be something like: (a) make totalpages
a BlockNumber or uint32; (b) cast either "totalpages" or "10" to
int64, before returning the result?
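Something like this sketch of option (b), i.e. widen before the
multiplication rather than after (not a drop-in patch for bringetbitmap(),
just to show where the cast goes; the typedef is a stand-in):

    #include <stdint.h>

    typedef uint32_t BlockNumber;   /* stand-in for the real typedef */

    static int64_t
    bitmap_cost_estimate(BlockNumber totalpages)
    {
        /* widen before multiplying, so a large page count can't overflow int32 */
        return (int64_t) totalpages * 10;
    }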
James
On Tue, Nov 26, 2024 at 2:57 AM Evgeniy Gorbanyov
wrote:
>
> Function bringetbitmap() in src/backend/access/brin/brin.c:560 returns type
> int64. But the returned variable 'totalpages' is of type int. Maybe it makes
> sense to change the type of variable 'totalpages' to int64 to avoid possible
since the
change to default to trying to inline CTEs rather than defaulting to
materializing them.
Attached is a patch to slightly modify the language; would be happy to
hear suggestions on a better way to improve this.
Regards,
James Coleman
v1-0001-Update-parallel-safety-docs-for-CTE-optimizat
On Wed, Oct 9, 2024 at 5:26 PM David Rowley wrote:
>
> On Thu, 10 Oct 2024 at 08:38, James Hunter wrote:
> > We had two almost-identical copies of the utility function
> > clause_sides_match_join() -- one in joinpath.c, and one in
> > analyzejoins.c. Both copies were mark
ause it is already #included by both of the .c
files.
This change doesn't help very much, on its own (since the
almost-duplicate functions are both "inline"), but it allows us to use
the same utility function from other source files, without making a
third copy.
James
0001-Move-cla
Our tool has detected that PostgreSQL versions REL9_6_18 through REL9_6_24
may also be affected by the vulnerability CVE-2022-2625. The vulnerability
database does not include these versions, and the fix may not have been
applied to the REL9_6 branch. Is there a need to backport the patch for CVE-2022-2625?
On Thu, Jul 18, 2024 at 2:38 PM Tom Lane wrote:
>
> James Coleman writes:
> > The plan generated by the planner changed suddenly one morning this
> > week, and in a very surprising way: the innermost scan (of "objects")
> > started choosing a seq scan, des
's true I think the early return cost multiplication of
the LIMIT is being applied very naively on the seq scan node. Or
perhaps the issue is that a seq scan like this shouldn't really have a
startup cost of 0 for a single tuple -- that cost
is presumably for tuples being returned _without_ having applied the
filter. That seems slightly odd to me, because the cost of getting the
first row out of that node -- in my naive view thinking about it for
all of 5 seconds -- should be calculated based on applying the filter
(and thus the likelihood that that filter matches right away). If we
did that then this path would never win. But that 0.00 startup cost
for the seq scan with a filter shows up in PG14 and PG11 also, not
just PG16, so that's not something that's changed.
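A toy example of what I mean (made-up numbers): if the filter is expected to
match 1 row in 1,000,000 and the seq scan's total cost is C, then the
expected cost to produce the *first* matching row is roughly C/2 -- you
expect to scan about half the table before the filter passes -- not ~0. With
a startup cost near C/2 instead of 0, this path would never beat the index
scan for a LIMIT 1 query.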
To recap: the estimation of rows is correct, the estimated high
(total) cost of the seq scan is correct, but the seq scan is chosen
over the index scan anyway for a plan that returns a single "random"
row based on the primary key.
Am I right to be surprised here?
James Coleman
re in a kind
of insidious way: the "bare" function call in DEFAULT is *not*
executed as part of the query for DDL like it is with other queries.
Hope this helps explain things.
James Coleman
On Tue, Jun 25, 2024 at 4:59 PM Tom Lane wrote:
>
> James Coleman writes:
> > It's possible I'm the only one who's been in this situation, but I've
> > multiple times found myself explaining to a user how column DEFAULT
> > expressions work: namel
possible I'm
missing something.
Thanks,
James Coleman
On Wed, Jun 5, 2024 at 5:34 AM David Rowley wrote:
>
> On Sun, 2 Jun 2024 at 10:08, James Coleman wrote:
> > See attached for a small patch fixing some typos and grammatical
> > errors in a couple of comments.
>
> Thanks. I pushed this after messing with the comm
of the comment opting to simply
resolve the clear mistakes in the wording here.
Regards,
James Coleman
v1-0001-Fix-comment-grammar-oddities.patch
Description: Binary data
RecordTransactionCommitPrepared() for this to work, and, indeed,
adding some logging to verify, the value of XactLastRecEnd we'd use to
update XactLastCommitEnd is the same at the end of both of those
functions during COMMIT PREPARED.
I'd like to have someone weigh in on whether relying on
nd
2. last_commit_lsn is not advancing, and
3. pg_current_wal_lsn() has advanced a lot
then you can probably infer that there's a large amount of data that
simply cannot be completed by the subscriber, and so there's no "real"
delay. It also gives you an idea of how much data you will need to
churn through (even if not replicated) once the transaction commits.
Certainly understanding the data here will be simplest in the case
where 1.) there's a single database and 2.) all tables are in the
replication set, but I don't think the value is limited to that
situation either.
Regards,
James Coleman
erve to
highlight patches that simply haven't had any review at all.
I'd like to add a feature to the CF app that shows me my current
patches by status, and I'd also like to have the option to have the CF
app notify me when someone changes the status (I've noticed before
that often a status gets changed without notification on list, and
then I get surprised months later when it's stuck in "waiting on
author"). Do either/both of those seem reasonable to add?
Regards,
James Coleman
", I'd far prefer to have that outcome
than some automated process closing out my submission without my input
when, as far as I can tell, the real problem is not my lack of
activity but the required reviewers simply not looking at it.
So I'm genuinely confused when you say "As long as this is not a
surprise ending, I don't see the issue.". Perhaps we're imagining
something different here reading between the lines?
Regards,
James Coleman
On Mon, Apr 1, 2024 at 8:06 AM James Coleman wrote:
>
> On Mon, Mar 25, 2024 at 5:53 PM Tom Lane wrote:
> >
> > James Coleman writes:
> > > [ v6 patchset ]
> >
> > I went ahead and committed 0001 after one more round of review
> >
> >
On Mon, Mar 25, 2024 at 5:53 PM Tom Lane wrote:
>
> James Coleman writes:
> > [ v6 patchset ]
>
> I went ahead and committed 0001 after one more round of review
>
> statements; my bad). I also added the changes in test_predtest.c from
> 0002. I attach a rebased versi
tant, but perhaps future readers of the archives will
> be confused.
I was wondering myself :) so thanks for clarifying.
Regards,
James Coleman
On Thu, Mar 21, 2024 at 1:09 PM Robert Haas wrote:
>
> On Thu, Mar 14, 2024 at 9:07 PM James Coleman wrote:
> > If the goal here is the most minimal patch possible, then please
> > commit what you proposed. I am interested in improving the document
> > further, but I d
On Wed, Mar 20, 2024 at 2:15 PM Robert Haas wrote:
>
> On Thu, Mar 14, 2024 at 9:07 PM James Coleman wrote:
> > Obviously I have reasons for the other changes I made: for example,
> > "no longer visible" improves the correctness, since being an old
> > version i
On Thu, Mar 14, 2024 at 10:28 AM Robert Haas wrote:
>
> On Wed, Oct 4, 2023 at 9:12 PM James Coleman wrote:
> > All right, attached is a v3 which attempts to fix the wrong
> > information with an economy of words. I may at some point submit a
> > separate patch tha
>
> If this is correctly implemented, the overhead in the case where the
> feature isn't used should be essentially zero, I believe.
If I can rephrase this idea: it's basically "delay handling this
interrupt, and process it inline at the next ExecProcNode execution".
That seems pretty promising to me as well.
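Roughly what I'm picturing, as a sketch only (the flag and the logging helper
below are hypothetical; this just mimics the way ExecProcNode can be wrapped):

    #include <signal.h>

    /* hypothetical: set by the signal handler when a plan-log is requested */
    static volatile sig_atomic_t LogPlanPending = false;

    static TupleTableSlot *
    ExecProcNodeWithPlanLogging(PlanState *node)
    {
        /* handle the request inline, at a known-safe point in the executor */
        if (LogPlanPending)
        {
            LogPlanPending = false;
            LogCurrentQueryPlan(node);      /* hypothetical helper */
        }
        return node->ExecProcNodeReal(node);
    }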
Regards,
James Coleman
On Fri, Feb 23, 2024 at 10:23 AM Robert Haas wrote:
>
> On Fri, Feb 23, 2024 at 7:50 PM Julien Rouhaud wrote:
> > On Fri, Feb 23, 2024 at 10:22:32AM +0530, Robert Haas wrote:
> > > On Thu, Feb 22, 2024 at 6:25 AM James Coleman wrote:
> > > > This is potent
ewhere. That's not likely to ever be as stable as we want
> PostgreSQL to be.
This is potentially a bit of a wild idea, but I wonder if having some
kind of argument to CHECK_FOR_INTERRUPTS() signifying we're in
"normal" as opposed to "critical" (using that word differently than
the existing critical sections) would be worth it.
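Very roughly, something like this hypothetical sketch -- the real
CHECK_FOR_INTERRUPTS() takes no argument, so this only illustrates the idea
of tagging call sites:

    /* hypothetical "context" tag for interrupt-check call sites */
    typedef enum
    {
        INTERRUPT_CXT_NORMAL,       /* safe to run extra work, e.g. plan logging */
        INTERRUPT_CXT_CRITICAL      /* only handle cancel/terminate here */
    } InterruptCxt;

    #define CHECK_FOR_INTERRUPTS_IN(cxt) \
        do { \
            if (InterruptPending) \
                ProcessInterruptsIn(cxt);   /* hypothetical variant of ProcessInterrupts() */ \
        } while (0)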
Regards,
James Coleman
> NOTHING
>
> Records no information about the old row. This is equivalent to having
> no replica identity. This is the default for system tables.
This is the simplest change, and it does solve the confusion, so I'd
be happy with it also. The other proposals have the benefit of having
all the information necessary on the publications page rather than
requiring the user to refer to the ALTER TABLE REPLICA IDENTITY page
to understand what's meant.
Regards,
James Coleman
or index that doesn't exist) is added ...
I think that would work also. I was reading the initial suggestion as
"(or with replica identity behavior the same as..." as defining what
"without a replica identity" meant, which would avoid the confusion.
But your proposal is more explicit and more succinct, so I think it's
the better option of the two.
Regards,
James Coleman
On Wed, Feb 7, 2024 at 6:04 PM Peter Smith wrote:
>
> On Thu, Feb 8, 2024 at 9:04 AM James Coleman wrote:
> >
> > On Wed, Feb 7, 2024 at 3:22 PM Laurenz Albe
> > wrote:
> > >
> > > On Wed, 2024-02-07 at 15:12 -0500, James Coleman wrote:
> >
On Wed, Feb 7, 2024 at 3:22 PM Laurenz Albe wrote:
>
> On Wed, 2024-02-07 at 15:12 -0500, James Coleman wrote:
> > We recently noticed some behavior that seems reasonable but also
> > surprised our engineers based on the docs.
> >
> > If we have this setup
ble.
I'm wondering if this might be a surprise to anyone else, and if so,
is there a minor docs tweak that might avoid the confusion?
Thanks,
James Coleman
1:
https://www.postgresql.org/docs/current/sql-altertable.html#SQL-ALTERTABLE-REPLICA-IDENTITY
2: https://www.postgresql.org/docs/current/logical-replication-publication.html
On Thu, Feb 1, 2024 at 1:36 AM Richard Guo wrote:
>
>
> On Thu, Feb 1, 2024 at 11:37 AM David Rowley wrote:
>>
>> On Thu, 1 Feb 2024 at 16:29, Richard Guo wrote:
>> > On Thu, Feb 1, 2024 at 10:04 AM James Coleman wrote:
>> >> I don't see any
On Wed, Jan 31, 2024 at 3:18 PM Robert Haas wrote:
>
> On Tue, Jan 30, 2024 at 9:56 PM James Coleman wrote:
> > I don't follow the "Idle since July" since it just hasn't received
> > review since then, so there's been nothing to reply to.
>
> It
e any reason why this
couldn't happen separately. That being said, on master I don't have a
case showing this is necessary.
Thanks,
James Coleman
1:
https://www.postgresql.org/message-id/flat/CAAaqYe-Aq6oNf9NPZnpPE7SgRLomXXWJA1Gz9L9ndi_Nv%3D94Yw%40mail.gmail.com#e0b1a803d0fdb971
On Tue, Jan 30, 2024 at 10:34 PM Tom Lane wrote:
>
> James Coleman writes:
> > I've finally had a chance to look at this, and I don't believe there's
> > any real failure here, merely drift of how the planner works on master
> > resulting in this query
so there's been nothing to reply to.
That being said, Vignesh's note in January about a now-failing test is
relevant activity, and I've just today responded to that, so I'm
changing the status back from Waiting on Author to Needs Review.
Regards,
James Coleman
ross worker boundaries. However
looking at the diff in the patch at that point (v10) that particular
test query formed a different plan shape (there were two gather nodes
being created, and params crossing between them).
But in the current revision of master with the current patch applied
that's
On Fri, Jan 26, 2024 at 8:33 PM James Coleman wrote:
>
> On Tue, Jan 23, 2024 at 2:46 AM Dilip Kumar wrote:
> >
> > On Tue, Jan 23, 2024 at 7:18 AM James Coleman wrote:
> > >
> > > On Mon, Jan 22, 2024 at 8:21 PM James Coleman wrote:
> > > >
On Tue, Jan 23, 2024 at 2:46 AM Dilip Kumar wrote:
>
> On Tue, Jan 23, 2024 at 7:18 AM James Coleman wrote:
> >
> > On Mon, Jan 22, 2024 at 8:21 PM James Coleman wrote:
> > >
> > > See rebased patch attached.
> >
> > I just realized I left a c
On Mon, Jan 22, 2024 at 8:21 PM James Coleman wrote:
>
> See rebased patch attached.
I just realized I left a change in during the rebase that wasn't necessary.
v4 attached.
Regards,
James Coleman
v4-0002-Opportunistically-prune-to-avoid-building-a-new-p.patch
Description: Binary data
f necessary.
>
> ==
> [1] https://commitfest.postgresql.org/46/4384//
> [2]
> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4384
See rebased patch attached.
Thanks,
James Coleman
v3-0001-Allow-getting-lock-before-calling-heap_page_prune.patch
Description: Binary data
f necessary.
>
> ==
> [1]
> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4355
>
> Kind Regards,
> Peter Smith.
Updated patch attached,
Thanks,
James Coleman
v3-0001-Add-last-commit-s-LSN-to-pg_stat_database.patch
Description: Binary data
On Sat, Jan 20, 2024 at 5:57 PM Tom Lane wrote:
>
> James Coleman writes:
> > On Sat, Jan 20, 2024 at 12:59 PM Tom Lane wrote:
> >> A HINT if the bogus column name (1) matches the relation name and
> >> (2) is field-qualified seems plausible to me. Then it
On Sat, Jan 20, 2024 at 12:59 PM Tom Lane wrote:
>
> James Coleman writes:
> > I do wonder if it's plausible (and sufficiently easy) to improve the
> > error message here. "column 'foo' of relation 'foo'" makes one think
> > that you
On Sat, Jan 20, 2024 at 11:12 AM Tom Lane wrote:
>
> James Coleman writes:
> > Suppose I have this table:
> > create table foo (id int primary key);
>
> > On PG11 this works:
> > postgres=# insert into foo (id) values (1) on conflict (id) do update
> > set
On Fri, Jan 19, 2024 at 1:53 PM David G. Johnston
wrote:
>
> On Fri, Jan 19, 2024 at 10:01 AM James Coleman wrote:
>>
>> Making this more confusing is the fact that if I want to do something
>> like "SET bar = foo.bar + 1" the table qualification cannot be prese
cation on the read column the error is more understandable:
ERROR: column reference "bar" is ambiguous
It seems to me that it'd be desirable to either allow the unnecessary
qualification or give an error that's more easily understood.
Regards,
James Coleman
On Sun, Jan 14, 2024 at 6:01 AM vignesh C wrote:
>
> On Sat, 10 Jun 2023 at 07:57, James Coleman wrote:
> >
> > I've previously noted in "Add last commit LSN to
> > pg_last_committed_xact()" [1] that it's not possible to monitor how
> > many
mit_lsn() is marked as STABLE which is
> probably correct. But I would appreciate a second opinion on this.
Sounds good.
> - Wouldn't it be appropriate to add a test or two?
Added.
> - `if (!XLogRecPtrIsInvalid(commit_lsn))` - I suggest adding
> XLogRecPtrIsValid() macro for better read
On Fri, Dec 22, 2023 at 2:48 PM Tom Lane wrote:
>
> James Coleman writes:
> > I've not yet applied all of your feedback, but I wanted to get an
> > initial read on your thoughts on how using switch statements ends up
> > looking. Attached is a single (pure refa
On Thu, Dec 14, 2023 at 4:38 PM Tom Lane wrote:
>
> James Coleman writes:
> > On Wed, Dec 13, 2023 at 1:36 PM Tom Lane wrote:
> >> I don't have an objection in principle to adding more smarts to
> >> predtest.c. However, we should be wary of slowing down c
Hi all, I'm not sure how to "Specify thread msgid", so I chose one which I think is
close to my new feature request.
query:
SELECT count(1) FROM table1 t1 JOIN table2 t2 ON t1.id = t2.id WHERE
t1.a_indexed_col='some_value' OR t2.a_indexed_col='some_value';
can the server automatically replace the O
Thanks for taking a look!
On Wed, Dec 13, 2023 at 1:36 PM Tom Lane wrote:
>
> James Coleman writes:
> > Attached is a patch that solves that issue. It also teaches predtest about
> > quite a few more cases involving BooleanTest expressions (e.g., how they
> > relate
o run the "x, y" case as well as the "y, x" case
with a single call so as to eliminate a lot of repetition in
clause/expression test cases. If reviewers agree that's desirable, then I
could do that as a precursor.
Regards,
James Coleman
v1-0001-Teach-predtest-about-IS-NOT-bool-proofs.patch
Description: Binary data
auto_explain and
> see its feasibility.
>
> >>> There is a lot of similarity between what this feature does and what
> >>> auto explain does. I see the code is also duplicated. There is some
> >>> merit in avoiding this duplication
> >>> 1. we will get a
On Fri, Oct 6, 2023 at 8:58 AM torikoshia wrote:
>
> On 2023-10-04 03:00, James Coleman wrote:
> > On Thu, Sep 7, 2023 at 2:09 AM torikoshia
> > wrote:
> >>
> >> On 2023-09-06 11:17, James Coleman wrote:
> >>
> >> >> > I
Hi,
Thanks for taking a look!
On Fri, Oct 6, 2023 at 1:18 AM Dilip Kumar wrote:
>
> On Thu, Oct 5, 2023 at 2:35 AM James Coleman wrote:
> >
> > I talked to Andres and Peter again today, and out of that conversation
> > I have some observations and ideas for future impr
On Wed, Oct 4, 2023 at 9:42 AM Robert Haas wrote:
>
> On Wed, Oct 4, 2023 at 9:36 AM James Coleman wrote:
> > Are you thinking we should simply elide the fact that there is pruning
> > that happens outside of HOT? Or add that information onto the HOT
> > page, even though