As mentioned, terminating the backend is the current way to kill a job.
An alternative, if this is something you do often:
https://github.com/GoSimpleLLC/jpgAgent
jpgAgent supports terminating a job by issuing a NOTIFY command on the
correct channel like this: NOTIFY jpgagent_kill_job, 'job_id_here';
It w
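For reference, a rough sketch of both approaches (the pg_stat_activity lookup assumes you can recognize the job's backend by its query; the job id in the NOTIFY payload is a placeholder):

    -- Find and terminate the backend running the job
    SELECT pid, application_name, query
      FROM pg_stat_activity
     WHERE state = 'active';

    SELECT pg_terminate_backend(12345);   -- replace 12345 with the pid found above

    -- With jpgAgent, ask the agent itself to kill the job
    NOTIFY jpgagent_kill_job, '42';        -- '42' is a placeholder job id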
> Is there some way to make it auto-detect when it should be enabled? If
not, please document that it should be used on ZFS and any other file
system with CoW properties on files.
In addition to this, I'm wondering what kind of performance regression this
would show on something like ext4 (if any).
One thing to note, if this is a query you would like to run on a replica,
temp tables are a non-starter.
I really wish that weren't the case. I have quite a few analytical queries I
had to optimize with temp tables and indexes, and I really wish I could run
them on my hot standby.
In most cases I can
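To illustrate the pattern with made-up names: staging an intermediate result into a temp table and indexing it is a write, which is exactly what a read-only hot standby refuses to do.

    CREATE TEMP TABLE recent_orders AS
        SELECT customer_id, sum(total) AS total_spent
          FROM orders
         WHERE created_at > now() - interval '30 days'
         GROUP BY customer_id;

    CREATE INDEX ON recent_orders (customer_id);
    ANALYZE recent_orders;

    -- The rest of the analysis joins against the staged, indexed data
    SELECT c.name, r.total_spent
      FROM customers c
      JOIN recent_orders r USING (customer_id);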
> How different is a "*temp* materialized view" from a regular view?
If it existed, it would be useful for cases when you need to reference that
view in multiple queries in the same session. I've gotten around this by
just using temp tables.
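A sketch of that workaround, with made-up names; the temp table behaves like a session-local materialized view that later queries can reuse:

    -- "Materialize" the expensive view once per session
    CREATE TEMP TABLE active_accounts AS
        SELECT * FROM account_summary WHERE status = 'active';

    -- Reference it from multiple queries in the same session
    SELECT count(*) FROM active_accounts;
    SELECT region, avg(balance) FROM active_accounts GROUP BY region;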
Why not extract that metadata and store it with the image, rather than trying
to extract it to filter on at query time? That way you can index your
height and width columns to speed up that filtering if necessary.
You may be able to write a wrapper around a command-line tool like ImageMagick
or something
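As a sketch with hypothetical names: extract the dimensions once at ingest time (in the application, or via that wrapper) and keep them in indexed columns, so filtering never has to open the image.

    CREATE TABLE images (
        id     bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        data   bytea   NOT NULL,
        width  integer NOT NULL,   -- extracted when the row is inserted
        height integer NOT NULL
    );

    CREATE INDEX ON images (width, height);

    -- The filter now uses indexed metadata, not the image bytes
    SELECT id FROM images WHERE width >= 1920 AND height >= 1080;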
I have this exact setup, and I use role / schema names that match, so the
$user variable works with the search path when I SET ROLE to my application user.
> When search_path contains “$user”, does it refer to session_user or
current_user ?
It uses current_user, not session_user. Works perfectly with s
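A sketch of that setup (role and schema names are made up); because "$user" resolves against current_user, SET ROLE switches which schema is searched first:

    CREATE ROLE app_user LOGIN;
    CREATE SCHEMA app_user AUTHORIZATION app_user;  -- schema named after the role

    SET search_path = "$user", public;

    SET ROLE app_user;
    SELECT current_user, current_schema;  -- app_user, app_user

    RESET ROLE;
    SELECT current_user, current_schema;  -- back to the session user (and its schema, if any)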
> An interesting answer, if there needs to be shared data, is for the
shared data to go in its own database, and use a Foreign Data Wrapper to
have each tenants' database access it <
https://www.postgresql.org/docs/12/postgres-fdw.html>
For my application I went the schema-per-tenant route, but I
> How good will that be in performance.
In my experience, not great. It's definitely better than not having it at
all, but it did not make for quick queries and caused serious
connection overhead whenever a query referenced that foreign table. I've since
moved to logical replication to improve the s
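For reference, a minimal postgres_fdw setup along the lines of that suggestion (server, database, and table names are placeholders):

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER shared_db
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'shared-host', dbname 'shared', port '5432');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER shared_db
        OPTIONS (user 'tenant_app', password 'secret');

    -- Define the shared tables locally, or IMPORT FOREIGN SCHEMA to pull them all in
    CREATE FOREIGN TABLE shared_settings (
        key   text,
        value text
    ) SERVER shared_db OPTIONS (schema_name 'public', table_name 'settings');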
> For parallelism, there are these options
That only matters if you want to use those extra cores to make individual
queries / commands faster.
If all OP cares about is "will PG use my extra cores", the answer is yes, it
will, without doing anything special.
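If you do want individual queries to use several cores, these are the knobs involved (values here are only illustrative; recent versions enable parallel plans out of the box):

    -- Per-query cap on parallel workers (0 disables parallel plans for the session)
    SET max_parallel_workers_per_gather = 4;

    -- Cluster-wide pools, set in postgresql.conf:
    --   max_worker_processes = 8
    --   max_parallel_workers = 8

    EXPLAIN (ANALYZE)
    SELECT count(*) FROM big_table;   -- look for "Gather" and "Workers Launched"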
Another thing was said that I wasn't aware of and have not been able to
find any evidence to support:
> 10. Blobs don’t participate in Logical replication.
>
> > https://www.postgresql.org/docs/12/logical-replication-restrictions.html
> >
> > "Large objects (see Chapter 34) are not replicated. There is no
> > workaround for that, other than storing data in normal tables."
> >
> > Of course that does not apply to bytea:
> >
> https://www.postgres
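In other words, bytea in an ordinary table replicates normally, while data kept as large objects (pg_largeobject) does not; a made-up example:

    -- bytea lives in a normal table, so it is covered by the publication
    CREATE TABLE documents (
        id   bigint PRIMARY KEY,
        body bytea
    );

    CREATE PUBLICATION docs_pub FOR TABLE documents;

    -- Anything created with lo_create()/lo_import() is stored in
    -- pg_largeobject and is NOT carried by logical replication.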
I would highly suggest you reach out to AWS support for Aurora questions;
that's part of what you're paying for: support.
For reasons you mentioned and more, it's pretty hard to debug issues
because it isn't actually Postgres.
>
> I would point out, however, that using a V1 UUID rather than a V4 can
help with this as it is sequential, not random (based on MAC address and
timestamp + random)
I wanted to make this point as well: using sequential UUIDs helped me reduce write
amplification quite a bit in my application. I didn't u
I mentioned this in another email thread yesterday about a similar topic,
but I'd highly suggest that if you do go the UUID route, you do not use the
standard UUID generation functions; they all suck for database use (v1 also sucks).
I use https://pgxn.org/dist/sequential_uuids/, written by Tomas Vondra.
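As a sketch of how that extension gets used (function names as I remember them from its README; double-check the exact signatures there):

    CREATE EXTENSION sequential_uuids;

    CREATE SEQUENCE order_uuid_seq;

    CREATE TABLE orders (
        -- sequence-based: consecutive ids land in nearby index pages,
        -- which is what cuts down the write amplification
        id uuid PRIMARY KEY DEFAULT uuid_sequence_nextval('order_uuid_seq'::regclass),
        created_at timestamptz NOT NULL DEFAULT now()
    );

    -- or time-based sequential UUIDs:
    --   DEFAULT uuid_time_nextval()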
> how to do "hot backup" (copying files) while database running?
As others have shown, there are ways to do this with PG's internal tooling
(pg_basebackup).
However, I would highly recommend you use an external backup tool like
pgbackrest [1] to save yourself the pain of implementing things incorr
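For completeness, the built-in route looks roughly like this (host, user, and paths are placeholders):

    # Streams a consistent copy of the cluster plus the WAL needed to restore it
    pg_basebackup -h primary-host -U replication_user \
        -D /backups/base_20200101 \
        -Ft -z -X stream -P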
> Admittedly, the system probably should be made to save the text, should
someone wish to write such a patch.
Just wanted to throw $0.02 behind this idea if anyone does want to take it
up later. Using a source control system is obviously better. But even if
you use source control it is still incre
Absolutely, it'd be much easier having this info integrated with my
work/personal calendar, as that's how I try and organize things anyways.
Thanks for the suggestion.
-Adam
On Tue, Oct 4, 2022 at 5:02 PM Bruce Momjian wrote:
> Would people be interesting in subscribing to a Postgres calendar t
I think the main "gotcha" when I moved from SQL Server to Postgres was that I
didn't even realize the amount of inline T-SQL I would use to just get
stuff done for ad-hoc analysis. Postgres doesn't have a good way to emulate
this. DO blocks cannot return result sets, so short of creating a function
and
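The usual workaround is a throwaway function; putting it in pg_temp keeps it session-local so it disappears on disconnect (names below are made up):

    -- A session-local function that can return a result set, unlike a DO block
    CREATE FUNCTION pg_temp.adhoc_report()
    RETURNS TABLE (customer_id bigint, order_count bigint)
    LANGUAGE plpgsql AS $$
    BEGIN
        RETURN QUERY
            SELECT o.customer_id, count(*)
              FROM orders o
             GROUP BY o.customer_id;
    END;
    $$;

    SELECT * FROM pg_temp.adhoc_report();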
Temp tables are not visible outside of a single connection, so the
autovacuum worker connection isn't able to see them.
Are you sure that it's actually an issue with accumulating dead tuples, and
not an issue with bad statistics?
In my processes which are heavy on temp tables, I have to manually r
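As a hypothetical example of that kind of manual step, once the temp table is populated:

    CREATE TEMP TABLE staging AS
        SELECT * FROM big_source_table WHERE batch_id = 42;

    -- autovacuum cannot see this table, so give the planner statistics by hand
    ANALYZE staging;

    -- ...then run the queries that depend on good row estimates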
So my experience isn't with pgagent directly, because I have been using my
re-written version of it for ~5 years (but at least at one point I had a
pretty darn good understanding from doing that rewrite)...please take this
with a grain of salt if I am incorrect on anything.
So the agent is only ab
Hey there everyone,
I am going through the process of writing my first pgTAP tests for my
database, and I wanted to get some feedback on whether my solution seems fine,
is just dumb, or could be accomplished much more easily another way.
The main problem I was trying to work around was that my tests are writt
> Checking data (DML), if functions are doing the right things is
something we do in our code unit tests.
This is exactly what I am writing: unit tests for my code (which is
PL/pgSQL). This is an ELT pipeline for my customers to bulk update their
data in my system, with detailed error reporting f
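For context, a minimal pgTAP test over a hypothetical function looks something like this, run inside a transaction so the data changes roll back:

    BEGIN;
    SELECT plan(2);

    -- exercise the PL/pgSQL code under test (hypothetical function and table)
    SELECT lives_ok(
        $$ SELECT load_customer_batch(42) $$,
        'batch load runs without raising'
    );

    SELECT is(
        (SELECT count(*)::int FROM load_errors WHERE batch_id = 42),
        0,
        'no rows were rejected'
    );

    SELECT * FROM finish();
    ROLLBACK;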
Hey all, first off...
Postgres version: 10.1
OS: Debian 9.0
So I have a database called: authentication
It stores my user table for my application. I have it separated from
the main database of my application to allow the same account to be
used by multiple instances of my application.
From a c
Just bumping this because I posted it right before Thanksgiving and it was
very easy to overlook.
Sorry if this is bad etiquette for the list... Just let me know if it is
and I won't do it in the future.
In my testing, gen_random_uuid() is quite a bit faster than uuid_generate_v4().
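A quick way to reproduce that comparison (gen_random_uuid() comes from pgcrypto before v13 and is built in afterwards; uuid_generate_v4() needs uuid-ossp):

    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

    \timing on
    SELECT count(gen_random_uuid())  FROM generate_series(1, 1000000);
    SELECT count(uuid_generate_v4()) FROM generate_series(1, 1000000);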
There are no built-in tools for this in Postgres.
There are other tools like the one mentioned that you can use instead. I've
used Liquibase for migrations for multiple companies now and it works well
enough.
If you have to support rollbacks for your deployments, that is a pretty
manual process fo
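For what it's worth, in Liquibase's SQL-formatted changelogs that manual part is the --rollback annotation you write yourself for each changeset; an illustrative example:

    --liquibase formatted sql

    --changeset adam:add-orders-table
    CREATE TABLE orders (
        id         bigint PRIMARY KEY,
        created_at timestamptz NOT NULL DEFAULT now()
    );
    --rollback DROP TABLE orders;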