Hello list,
I have noticed that the read throughput during a SELECT COUNT(*) command is
much lower than what the device can provide. Parallel workers improve the
situation but for simplicity's sake, I disable parallelism for my
measurements here by setting max_parallel_workers_per_gather to 0.
Stra
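A minimal sketch of the measurement described above, with a hypothetical table name:

    SET max_parallel_workers_per_gather = 0;    -- disable parallel workers for this session
    EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM big_table;   -- big_table is hypothetical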
Hello and thanks for the feedback!
On Wed, 12 Jul 2023, Thomas Munro wrote:
On Wed, Jul 12, 2023 at 1:11 AM Dimitrios Apostolou wrote:
Note that I suspect my setup is a factor (btrfs compression behaving
suboptimally), since the raw device can give me up to a 1GB/s rate. It is however
Thanks, it sounds promising! Are the changes in the 16 branch already,
i.e. is it enough to fetch sources for 16-beta2? If
so do I just configure --with-liburing (I'm on Linux) and run with
io_method=io_uring? Otherwise, if I use io_method=worker, what is a sensible
number of worker threads? Should
this through.
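A hedged sketch of the settings mentioned above, assuming a server built with --with-liburing on a release that actually ships this work (which is exactly what the question asks):

    ALTER SYSTEM SET io_method = 'io_uring';   -- or 'worker'; takes effect after a restart
    ALTER SYSTEM SET io_workers = 8;           -- only used with io_method = 'worker'; the value is a guess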
Dimitris
On 12 April 2024 07:45:52 CEST, Thomas Munro wrote:
>On Wed, Jul 12, 2023 at 1:11 AM Dimitrios Apostolou wrote:
>> So would it make sense for postgres to perform reads in bigger blocks? Is it
>> easy-ish to implement (where would one look for that)? Or must
On Thu, 20 Mar 2025, Tom Lane wrote:
I am betting that the problem is that the dump's TOC (table of
contents) lacks offsets to the actual data of the database objects,
and thus the readers have to reconstruct that information by scanning
the dump file. Normally, pg_dump will back-fill offset da
On Sun, 23 Mar 2025, Tom Lane wrote:
Dimitrios Apostolou writes:
On Thu, 20 Mar 2025, Tom Lane wrote:
I am betting that the problem is that the dump's TOC (table of
contents) lacks offsets to the actual data of the database objects,
and thus the readers have to reconstruct that inform
Hello list,
I noticed the weird behaviour that doing a pg_restore of a huge database
dump leads to constant read I/O (at about 15K IOPS from the NVMe drive
that has the dump file) for about one hour. I believe it happens with
any -j value >= 2.
In particular, I get output like the following in th
Hello again,
I traced the seeking-reading behaviour of parallel pg_restore inside
_skipData() when called from _PrintTocData(). Since most of today's I/O
devices (both rotating and solid state) can read 1MB sequentially faster
than they can seek and then read 4KB, I tried the following change:
dif
Sent proper patch to pgsql-hackers mailing list, at:
https://www.postgresql.org/message-id/flat/2edb7a57-b225-3b23-a680-62ba90658fec%40gmx.net
FWIW I implemented a pg_restore --freeze patch, see attached. It needs
another patch of mine from [1] that implements pg_restore --data-only
--clean, which for parallel restores encases each COPY in its own transaction
and prepends it with a TRUNCATE. All feedback is welcome.
[1]
https://ww
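A rough sketch of what the combined --clean --data-only --freeze behaviour amounts to per table in a parallel restore, as described above (table and file names are hypothetical):

    BEGIN;
    TRUNCATE my_table;                                   -- truncated in the same transaction, so FREEZE is allowed
    COPY my_table FROM '/path/to/table.dat' WITH (FREEZE);
    COMMIT;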
On Thu, 5 Jun 2025, Frédéric Yhuel wrote:
On 6/4/25 16:12, Dimitrios Apostolou wrote:
In general I have noticed most operations are slower after a successful
pg_restore until VACUUM is complete, which is unfortunate as the database
is huge and it takes days to run. Something I have on my
On Fri, 12 Apr 2024, Thomas Munro wrote:
On Wed, Jul 12, 2023 at 1:11 AM Dimitrios Apostolou wrote:
So would it make sense for postgres to perform reads in bigger blocks? Is it
easy-ish to implement (where would one look for that)? Or must the I/O unit be
tied to postgres' page size?
F
Hello list,
I'm debugging the abysmal performance of pg_restoring a huge
(10TB) database, which includes a table with more than 1000 partitions.
As part of pg_restore -j... --section=post-data
I see *days* being spent in the sequential creation of foreign keys:
ALTER TABLE the_master_partit
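To illustrate the general shape of such a statement (all names here are hypothetical, not the ones from the actual dump):

    ALTER TABLE parent_partitioned_table
        ADD CONSTRAINT fk_ref FOREIGN KEY (ref_id) REFERENCES referenced_table (id);
    -- each such command validates the constraint against every partition, one after the other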
On Wed, 4 Jun 2025, Frédéric Yhuel wrote:
On 6/3/25 17:34, Dimitrios Apostolou wrote:
The backend process for each of the above ALTER TABLE commands does not
parallelize the foreign key checks for the different partitions. I
know because in the logs I see gigabytes of temporary
Hello Stepan!
I can see you tested my patch by itself, and with the following command:
On Monday 2025-07-21 07:58, Stepan Neretin wrote:
pg_restore -d "$DB_NAME" -j "$JOBS" --clean --if-exists --freeze
"$DUMP_FILE" > /dev/null
On the other hand, I have tested combining --freeze with -