On Thu, 28/10/2021 at 03:14 +0530, Ashkil Dighin wrote:
> Hi,
> Yes, lock contention was reduced with PostgreSQL v14.
> Lock acquisition dropped from 18% to 10%:
> 10.49% postgres postgres [.] LWLockAcquire
> 5.09% postgres postgres [.] _bt_compare
>
> Can lock contention be reduced to 0?
"David G. Johnston" writes:
> On Monday, December 20, 2021, Tom Lane wrote:
>> It would help if somebody had labeled the units of I/O Time
>> ... but I'm guessing those are microsec vs. the millisec
>> of the other times, because otherwise it's completely wrong.
> Related to my preceding observa
On Monday, December 20, 2021, Tom Lane wrote:
> Justin Pryzby writes:
> > On Mon, Dec 20, 2021 at 08:11:42PM -0800, Lars Bergeson wrote:
> >> Still taking 10X more I/O to read the smaller table. Very odd.
>
> > If I'm not wrong, it's even worse than that?
> > It takes 20 or 30sec to run the que
On Monday, December 20, 2021, Justin Pryzby wrote:
> On Mon, Dec 20, 2021 at 08:11:42PM -0800, Lars Bergeson wrote:
> > ok, here are results after I did:
> > set max_parallel_workers_per_gather = 0;
> >
> > HashAggregate (cost=1676432.13..1676432.16 rows=3 width=15) (actual
> time=19908.343..199
Justin Pryzby writes:
> On Mon, Dec 20, 2021 at 08:11:42PM -0800, Lars Bergeson wrote:
>> Still taking 10X more I/O to read the smaller table. Very odd.
> If I'm not wrong, it's even worse than that?
> It takes 20 or 30sec to run the query - but it says the associated I/O times
> are ~500sec or
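The unit question Tom raises is easy to sanity-check with plain arithmetic (the 532369.898 figure is from the EXPLAIN output in the thread; the comparison below is mine, no PostgreSQL involved):

```python
# Sanity-check the I/O Timings figure from the EXPLAIN output against the
# two candidate units discussed in the thread.
io_timing = 532369.898

seconds_if_ms = io_timing / 1_000        # value read as milliseconds
seconds_if_us = io_timing / 1_000_000    # value read as microseconds

print(f"as milliseconds: {seconds_if_ms:.1f} s")  # ~532 s
print(f"as microseconds: {seconds_if_us:.2f} s")  # ~0.53 s
```

Read as milliseconds the figure (~532 s) dwarfs the ~20-30 s the query actually ran; read as microseconds it is about half a second, which is why the unit matters for interpreting the plan.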
On Mon, Dec 20, 2021 at 08:11:42PM -0800, Lars Bergeson wrote:
> ok, here are results after I did:
> set max_parallel_workers_per_gather = 0;
>
> HashAggregate (cost=1676432.13..1676432.16 rows=3 width=15) (actual
> time=19908.343..19908.345 rows=5 loops=1)
> I/O Timings: read=532369.898
> Exe
ok, here are results after I did:
set max_parallel_workers_per_gather = 0;
no nulls table is 11.462 GB:
QUERY PLAN
HashAggregate (cost=1676432.13..1676432.16 rows=3 width=15) (actual
time=19908.343..19908.345 rows=5 loops=1)
Group Key: roys_creation_user
Batches: 1 Memory Usage: 24kB
I/O T
Lars Bergeson writes:
> What is it about null values in the table that slows down the full table
> scan?
If a row has any nulls, then it contains a "nulls bitmap" [1] that says
which columns are null, and that bitmap has to be consulted while
walking through the row contents. So the most obvious
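The overhead Tom describes can be made concrete with a little arithmetic. This is a sketch based on my reading of the documented on-disk heap-tuple format (23-byte fixed header, a ceil(ncols/8)-byte null bitmap only when some column is NULL, data padded out to MAXALIGN), not output from PostgreSQL itself:

```python
# Sketch of PostgreSQL heap-tuple header sizing; assumes 64-bit MAXALIGN = 8.
HEADER = 23      # fixed HeapTupleHeader size in bytes
MAXALIGN = 8

def align(n, a=MAXALIGN):
    """Round n up to the next multiple of a."""
    return (n + a - 1) // a * a

def tuple_overhead(ncols, has_nulls):
    """Bytes consumed before the column data starts."""
    bitmap = (ncols + 7) // 8 if has_nulls else 0
    return align(HEADER + bitmap)

# e.g. a 20-column table:
print(tuple_overhead(20, False))  # 24 bytes per row
print(tuple_overhead(20, True))   # 32 bytes per row
```

So beyond the per-column bitmap lookups during the scan, a row containing any NULL can also be physically wider, which adds up over a multi-GB table.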
On Monday, December 20, 2021, Lars Bergeson wrote:
>
> What is it about null values in the table that slows down the full table
> scan?
>
> If I populate blank/zero for all of the unused values in columns that are
> NULLable, the query is fast again. So just defining the columns as NULLable
> isn
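A purely illustrative toy model of the effect Lars observes (my Python, not PostgreSQL internals): once any row can contain a NULL, reading an attribute means first testing a bit in that row's null bitmap, whereas with no NULLs anywhere that per-column branch disappears entirely.

```python
# Toy model: summing one column with and without a per-row null bitmap.
def sum_first_col(rows, null_bitmaps=None):
    total = 0
    for i, row in enumerate(rows):
        # With a bitmap present, every access pays for a bit test first.
        if null_bitmaps is not None and null_bitmaps[i] & 1:
            continue                  # column 0 is NULL in this row
        total += row[0]
    return total

rows = [(10,), (20,), (30,)]
print(sum_first_col(rows))                          # 60: no bitmap consulted
print(sum_first_col(rows, null_bitmaps=[0, 1, 0]))  # 40: row 1's column is NULL
```

This is consistent with Lars's finding that storing blank/zero instead of NULL restores scan speed: it is the presence of NULLs in the stored rows, not the NULLable column definition by itself, that adds the per-row work.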
First of all, here is the version of PostgreSQL I'm using:
PostgreSQL 13.3 on x86_64-pc-linux-gnu, compiled by x86_64-pc-linux-gnu-gcc
(GCC) 7.4.0, 64-bit
I'm new to PostgreSQL, and I'm deciding if I should make columns in my
database nullable or not.
I have no need to distinguish between blank/z