> In my observation, very few users require an accurate query plan for
> temporary tables to perform manual analyze.
Absolutely not true in my observations or personal experience. It's one of
the main reasons I have needed to use (local) temporary tables rather than
just materializing a CTE when deco
Just wanted to mention that this would be a useful feature for me. Had
previously been bitten by this:
https://www.postgresql.org/message-id/flat/CAMjNa7c4pKLZe%2BZ0V49isKycnXQ6Y%3D3BO-4Gsj3QAwsd2r7Wrw%40mail.gmail.com
Ended up "solving" by putting a where clause on all my exclusion
constraints I
Just wanted to link to the discussion on this from HN for anyone interested:
https://news.ycombinator.com/item?id=26114281
ow
much time this took to prepare, I'll keep it at this for now.
If you need anything clarified or have any issues, just let me know.
On Fri, Oct 23, 2020 at 3:58 AM Yugo NAGATA wrote:
> Hi Adam,
>
> On Thu, 22 Oct 2020 10:07:29 -0400
> Adam Brusselback wrote:
>
> > Hey ther
> How about JOIN WITH?
I'm -1 on this, reusing WITH is just likely to cause confusion because WITH
can appear other places in a query having an entirely different meaning.
I'd just avoid that from the start.
>> Can we think of some other suitable reserved keyword?
>FOREIGN? Or even spell out "
> Does anyone else like the name "Tuple Cache"?
I personally like that name best.
It makes sense to me when thinking about looking at an EXPLAIN and trying
to understand why this node may be there. The way we look up the value
stored in the cache doesn't really matter to me as a user; I'm more
thi
On Tue, Apr 24, 2018 at 9:52 AM, Merlin Moncure wrote:
>
> Why does it have to be completely transparent? As long as the feature
> is optional (say, a .conf setting) the tradeoffs can be managed. It's
> reasonable to expect to exchange some functionality for pooling;
> pgbouncer provides a 're
I hope it's alright to throw in my $0.02 as a user. I've been following
this (and the other thread on reading WAL to find modified blocks,
prefaulting, whatever else) since the start with great excitement and would
love to see the built-in backup capabilities in Postgres greatly improved.
I know th
As a user, I am interested in the optimizer changes for sure, and I
actually had wished they were highlighted more in previous releases.
> I think planner smarts are arguably one of our weakest areas when
> compared to the big commercial databases. The more we can throw in
> there about this sort
This is something I'm very interested in. Very helpful for fixing mistakes
you didn't notice in time.
One question: would it be possible to allow this to be configured on a hot
standby and not the master?
That would be very helpful, letting you keep some arbitrary length of
extra
I would be very interested in an extension which generated sequential UUIDs.
My entire db is keyed with UUIDs, and I have measured some index bloat
related specifically to random UUID generation.
Thanks for bringing this up.
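As a rough illustration of the idea (not code from the thread, and the helper name is made up): prefix the UUID with a timestamp so that consecutively generated keys sort near one another, which keeps inserts landing on the same few btree leaf pages instead of scattering across the index.

```python
import os
import time
import uuid

def sequential_uuid() -> uuid.UUID:
    """Hypothetical sketch: a 48-bit millisecond timestamp in the high
    bits, 80 random bits in the low bits. New keys therefore sort after
    previously generated ones, avoiding the random-leaf-page churn that
    uuid4() keys cause in a btree index."""
    ms = int(time.time() * 1000) & ((1 << 48) - 1)   # 48-bit ms timestamp
    rand = int.from_bytes(os.urandom(10), "big")     # 80 random bits
    return uuid.UUID(int=(ms << 80) | rand)
```

Because the timestamp occupies the most significant bits, two UUIDs generated a few milliseconds apart compare in generation order.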
> I don't know how much what I write on this thread is read by others or
> how useful this is for others who are following this work
I've been following this thread and many others like it, silently soaking
it up, because I don't feel like I'd have anything useful to add in most
cases. It is very i
Thanks Thomas, appreciate the rebase and the work you've done on this.
I should have some time to test this out over the weekend.
-Adam
> ALTER TABLE ... SET STORAGE does not propagate to indexes, even though
> indexes created afterwards get the new storage setting. So depending on
> the order of commands, you can get inconsistent storage settings between
> indexes and tables.
I've absolutely noticed this behavior; I just thought
> What would you actually do with it?
I am one of the users of these do-it-yourself functions, and I use them
heavily in my ETL pipelines.
For me, data gets loaded into a staging table, all columns text, and I run
a whole bunch of validation queries
on the data before it moves to the next sta
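To illustrate the staging pattern described above (a sketch with invented column rules, not the actual pipeline): everything arrives as text, and each row is checked for castability before being promoted.

```python
from datetime import date

# Hypothetical staging rows: every column arrives as text.
staging_rows = [
    {"id": "1", "amount": "19.99", "paid_on": "2020-10-01"},
    {"id": "2", "amount": "oops",  "paid_on": "2020-10-02"},
]

def validate(row):
    """Return a list of problems; an empty list means the row can move on."""
    errors = []
    try:
        int(row["id"])
    except ValueError:
        errors.append("id is not an integer")
    try:
        float(row["amount"])
    except ValueError:
        errors.append("amount is not numeric")
    try:
        date.fromisoformat(row["paid_on"])
    except ValueError:
        errors.append("paid_on is not a date")
    return errors

problems = {r["id"]: validate(r) for r in staging_rows}
```

In a real pipeline the same checks would be run as SQL against the staging table; the Python version just makes the validate-then-promote flow concrete.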
Am I mistaken in thinking that this would allow CREATE DATABASE to run
inside a transaction block now, further reducing the DDL commands that are
non-transactional?
>
> > Why not implement it in the core of Postgres? Are there any
> > disadvantages of implementing it in the core of Postgres?
I was surprised this wasn't a feature when I looked into it a couple years
ago. I'd use it if it were built in, but I am not installing something
extra just for this.
> I’m
> But I didn't feel a big interest in it from the community.
Just FYI, it is something that I use in my database design now (just hacked
together using ranges / exclusion constraints) and
would love a well-supported solution.
I've chimed in a couple times as this feature has popped up in discussio
It's something I know I am interested in. For me, I don't really care if my
statement doesn't cancel until the very end if there is an RI violation. The
benefit of deletes not being slow on tables that are referenced by a fkey
without its own index is huge IMO. I
have
Hi all, just wanted to say I am very happy to see progress made on this,
my codebase has multiple "materialized tables" which are maintained with
statement triggers (transition tables) and custom functions. They are ugly
and a pain to maintain, but they work because I have no other
solution...for
I just wanted to express my excitement that this is being picked up again.
I was very much looking forward to this years ago, and the use case for me
is still there, so I am excited to see this moving again.
> From my point of view it will be very helpful if such "PgExt Store"
> will be available.
> Maybe such resources already exist, but I do not know about them.
https://pgxn.org/
That'd be useful in my book. My scripts just have a hard-coded timestamp I
replace when I call reset so those calculations work, but I would much
prefer to have that data available via a built-in function.
-Adam
dated, and then sometimes have the need to query that
"materialized table" in a subsequent statement and need to see the changes
reflected.
As soon as my coworker gets that example built up I'll send a followup with
it attached.
Thank you,
Adam Brusselback
Here it is formatted a little better.
So a little over 50% performance improvement for a couple of the test cases.
On Wed, Dec 6, 2017 at 11:53 AM, Tom Lane wrote:
> Konstantin Knizhnik writes:
> > Below are some results (1000xTPS) of select-only (-S) pgbench with scale
> > 100 at my des
> "barely a 50% speedup" - Hah. I don't believe the numbers, but that'd be
> huge.
They are numbers from a benchmark that any sane person would run behind
a connection pool in a production environment, but impressive if true
nonetheless.
Bumping this, because I am genuinely interested if there is a better way to
do this.
I'd really like to know if there is a better way than executing dummy
queries...it feels dirty.
I've seen plenty of extensions not handle query cancellation / exceptions
gracefully. Also seems like something to t