On 11/1/21 17:58, Stephen Frost wrote:
Well, at least one alternative to performing these snapshots would be to
use a tool like pg_basebackup or pgbackrest to perform the backups
instead.
Filesystem-based backups are much slower than snapshots. The
feasibility of a file-based utility like pg_
> On 2/11/2021, at 10:58 AM, Stephen Frost wrote:
>
> Well, at least one alternative to performing these snapshots would be to
> use a tool like pg_basebackup or pgbackrest to perform the backups
> instead. At least with pgbackrest you can run a backup which pushes the
> data directly to s3 fo
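Pushing backups straight to S3 with pgbackrest is configured in its config file rather than in PostgreSQL itself. A minimal sketch — bucket, region, stanza name, and paths here are all placeholders, adjust for your environment:

```shell
# Hypothetical pgbackrest config; bucket/region/paths are placeholders.
# Credentials come from repo1-s3-key/repo1-s3-key-secret or an instance role.
cat > /etc/pgbackrest/pgbackrest.conf <<'EOF'
[global]
repo1-type=s3
repo1-s3-bucket=my-backup-bucket
repo1-s3-endpoint=s3.us-east-1.amazonaws.com
repo1-s3-region=us-east-1
repo1-path=/pgbackrest

[main]
pg1-path=/var/lib/postgresql/data
EOF

# Create the stanza once, then run a backup that streams directly to S3
pgbackrest --stanza=main stanza-create
pgbackrest --stanza=main --type=full backup
```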
Greetings,
* Lucas (r...@sud0.nz) wrote:
> > On 2/11/2021, at 6:00 AM, Stephen Frost wrote:
> > * Lucas (r...@sud0.nz) wrote:
> >> The snapshots are done this way:
> >> 1. Grab the latest applied WAL File for further references, stores that in
> >> a variable in Bash
> >> 2. Stop the Postgres pr
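The first two steps above might be sketched in bash like this — PGDATA is an assumed path, and the "latest applied WAL" is approximated by the last checkpoint's REDO WAL file, since pg_controldata works whether the server is a primary or a standby (pg_walfile_name() is unavailable during recovery):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of steps 1-2; the data directory path is assumed
set -euo pipefail
PGDATA=/var/lib/postgresql/data

# 1. Grab a WAL file reference for later use
#    (last checkpoint's REDO WAL file, via pg_controldata)
LATEST_WAL=$(pg_controldata "$PGDATA" | awk -F': *' '/REDO WAL file/ {print $2}')
echo "Latest WAL: $LATEST_WAL"

# 2. Stop PostgreSQL cleanly before taking the snapshot
pg_ctl -D "$PGDATA" stop -m fast
```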
> On 2/11/2021, at 6:00 AM, Stephen Frost wrote:
>
> Greetings,
>
> * Lucas (r...@sud0.nz) wrote:
>>> On 27/10/2021, at 8:35 AM, Stephen Frost wrote:
>>> I do want to again stress that I don't recommend writing your own tools
>>> for doing backup/restore/PITR and I would caution people against
On 11/1/21 12:00 PM, Stephen Frost wrote:
[snip]
Having the database offline for 10 minutes is a luxury that many don't
have; I'm a bit surprised that it's not an issue here, but if it isn't,
then that's great.
Exactly. Shutting down the database is easy... shutting down the
"application" can
Greetings,
* Lucas (r...@sud0.nz) wrote:
> > On 27/10/2021, at 8:35 AM, Stephen Frost wrote:
> > I do want to again stress that I don't recommend writing your own tools
> > for doing backup/restore/PITR and I would caution people against
> > trying to use this approach you've suggested. A
Florents Tselai writes:
> I have the following simple query
> select row_to_json(d) from documents d
> The output of this goes to a script that expects a newline-delimited
> stream of JSON objects.
> But as-is, it looks like the server’s memory fills up before it starts
> emitting results.
Usually
Hi everyone.
Can someone tell me why these two equivalent queries, one involving a
"naked" EXISTS
versus one involving an EXISTS inside a SELECT statement perform so
differently?
I can see that the slow one scans the entire child table while the fast one
only scans children
that have the same parent
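A hypothetical reconstruction of the two query shapes (table and column names are assumed, not taken from the original message): PostgreSQL's planner can flatten a top-level EXISTS into a semi-join, but wrapping the EXISTS inside a scalar sub-SELECT blocks that transformation, so it is evaluated per row.

```shell
# "Naked" EXISTS: the planner can turn this into a semi-join
psql -c "
SELECT * FROM parent p
WHERE EXISTS (SELECT 1 FROM child c WHERE c.parent_id = p.id);
"

# EXISTS inside a SELECT: a scalar subquery evaluated per parent row,
# which can force scanning far more of the child table
psql -c "
SELECT * FROM parent p
WHERE (SELECT EXISTS (SELECT 1 FROM child c WHERE c.parent_id = p.id));
"
```

Comparing the two with EXPLAIN ANALYZE should show the plan-shape difference directly.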
I have the following simple query
select row_to_json(d) from documents d
The output of this goes to a script that expects a newline-delimited stream of
JSON objects.
But as-is, it looks like the server’s memory fills up before it starts emitting
results.
Any ideas how I could bypass this?
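One way to avoid buffering the whole result (not necessarily what the truncated reply above goes on to suggest): by default psql/libpq collects the entire result set before printing anything, but psql's FETCH_COUNT variable makes it fetch through a cursor in batches, and COPY ... TO STDOUT streams rows as they are produced. A sketch, assuming the documents table from the question:

```shell
# Fetch in 1000-row batches through a cursor instead of buffering everything
psql --variable=FETCH_COUNT=1000 -At \
     -c "SELECT row_to_json(d) FROM documents d" > docs.ndjson

# Alternatively, COPY streams rows to the client as they are produced.
# Caveat: COPY's text format backslash-escapes the output, so JSON values
# containing backslashes or newlines need un-escaping downstream.
psql -c "COPY (SELECT row_to_json(d) FROM documents d) TO STDOUT" > docs.ndjson
```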