On Wed, 15 Jul 2020 at 12:24, David Rowley wrote:
> If we've not seen any performance regressions within 1 week, then I
> propose that we (pending final review) push this to allow wider
> testing. It seems we're early enough in the PG14 cycle that there's a
> large window of time for us to do some
Hi Soumyadeep,
Thanks for re-running the tests.
On Thu, 23 Jul 2020 at 06:01, Soumyadeep Chakraborty
wrote:
> On Tue, Jul 14, 2020 at 8:52 PM David Rowley wrote:
> > It would be good to see EXPLAIN (ANALYZE, BUFFERS) with SET
> > track_io_timing = on; for each value of max_parallel_workers.
>
>
On Wed, Jul 22, 2020 at 10:03 AM Thomas Munro wrote:
>
> On Wed, Jul 22, 2020 at 3:57 PM Amit Kapila wrote:
> > Yeah, that is true but every time before the test the same amount of
> > data should be present in shared buffers (or OS cache) if any which
> > will help in getting consistent results.
On Tue, Jul 21, 2020 at 9:33 PM Thomas Munro wrote:
>
> On Wed, Jul 22, 2020 at 3:57 PM Amit Kapila wrote:
> > Yeah, that is true but every time before the test the same amount of
> > data should be present in shared buffers (or OS cache) if any which
> > will help in getting consistent results.
Hi David,
Apologies for the delay, I had missed these emails.
On Tue, Jul 14, 2020 at 8:52 PM David Rowley wrote:
> It would be good to know if the
> regression is repeatable or if it was affected by some other process.
These are the latest results on the same setup as [1].
(TL;DR: the FreeBSD
On Wednesday, July 22, 2020 2:21 PM (GMT+9), David Rowley wrote:
> On Wed, 22 Jul 2020 at 16:40, k.jami...@fujitsu.com
> wrote:
> > I used the default max_parallel_workers & max_worker_processes which is 8 by
> default in postgresql.conf.
> > IOW, I ran all those tests with a maximum of 8 processes
On Tuesday, July 21, 2020 7:33 PM, Amit Kapila wrote:
> On Tue, Jul 21, 2020 at 3:08 PM k.jami...@fujitsu.com
> wrote:
> >
> > On Tuesday, July 21, 2020 12:18 PM, Amit Kapila wrote:
> > > On Tue, Jul 21, 2020 at 8:06 AM k.jami...@fujitsu.com
> > >
> > > wrote:
> > > >
> > > > I am definitely missing something.
On Wed, 22 Jul 2020 at 18:17, k.jami...@fujitsu.com
wrote:
> Even though I read the documentation [1][2] on parallel query, I might not
> have understood it clearly yet. So thank you very much for explaining more
> simply how the relation size, GUCs, and reloption affect the query planner's
> behavior
On Wed, 22 Jul 2020 at 16:40, k.jami...@fujitsu.com
wrote:
> I used the default max_parallel_workers & max_worker_processes which is 8 by
> default in postgresql.conf.
> IOW, I ran all those tests with a maximum of 8 processes set. But my query
> planner capped both the
> Workers Planned and Launched
On Wed, Jul 22, 2020 at 3:57 PM Amit Kapila wrote:
> Yeah, that is true but every time before the test the same amount of
> data should be present in shared buffers (or OS cache) if any which
> will help in getting consistent results. However, it is fine to
> reboot the machine as well if that is
On Wed, Jul 22, 2020 at 5:25 AM David Rowley wrote:
>
> I understand that Amit wrote:
>
> On Fri, 17 Jul 2020 at 21:18, Amit Kapila wrote:
> > I think recreating the database and restarting the server after each
> > run might help in getting consistent results. Also, you might want to
> > take m
Hi Kirk,
Thank you for doing some testing on this. It's very useful to get some
samples from other hardware / filesystem / os combinations.
On Tue, 21 Jul 2020 at 21:38, k.jami...@fujitsu.com
wrote:
> Query Planner I/O Timings (ms):
> | Worker | I/O READ (Master) | I/O READ (Patch) | I/O WRITE (
On Friday, July 17, 2020 6:18 PM (GMT+9), Amit Kapila wrote:
> On Fri, Jul 17, 2020 at 11:35 AM k.jami...@fujitsu.com
> wrote:
> >
> > On Wednesday, July 15, 2020 12:52 PM (GMT+9), David Rowley wrote:
> >
> > >On Wed, 15 Jul 2020 at 14:51, Amit Kapila wrote:
> > >>
> > >> On Wed, Jul 15, 2020 at
On Tuesday, July 21, 2020 12:18 PM, Amit Kapila wrote:
> On Tue, Jul 21, 2020 at 8:06 AM k.jami...@fujitsu.com
> wrote:
> >
> > Thank you for the advice. I repeated the test as per your advice and took the
> > average of 3 runs per worker(s) planned.
> > It still shows the following similar performance results between Master and Patch V2.
On Tue, Jul 21, 2020 at 3:08 PM k.jami...@fujitsu.com
wrote:
>
> On Tuesday, July 21, 2020 12:18 PM, Amit Kapila wrote:
> > On Tue, Jul 21, 2020 at 8:06 AM k.jami...@fujitsu.com
> >
> > wrote:
> > >
> > > I am definitely missing something. Perhaps I could not
> > > understand why there's
On Tue, Jul 21, 2020 at 8:06 AM k.jami...@fujitsu.com
wrote:
>
> Thank you for the advice. I repeated the test as per your advice and took the
> average of 3 runs
> per worker(s) planned.
> It still shows the following similar performance results between Master and
> Patch V2.
> I wonder why there's no difference
On Wednesday, July 15, 2020 12:52 PM (GMT+9), David Rowley wrote:
>On Wed, 15 Jul 2020 at 14:51, Amit Kapila wrote:
>>
>> On Wed, Jul 15, 2020 at 5:55 AM David Rowley wrote:
>>> If we've not seen any performance regressions within 1 week, then I
>>> propose that we (pending final review) push this to allow wider testing.
On Fri, Jul 17, 2020 at 11:35 AM k.jami...@fujitsu.com
wrote:
>
> On Wednesday, July 15, 2020 12:52 PM (GMT+9), David Rowley wrote:
>
> >On Wed, 15 Jul 2020 at 14:51, Amit Kapila wrote:
> >>
> >> On Wed, Jul 15, 2020 at 5:55 AM David Rowley wrote:
> >>> If we've not seen any performance regressions within 1 week
On Wed, 15 Jul 2020 at 14:51, Amit Kapila wrote:
>
> On Wed, Jul 15, 2020 at 5:55 AM David Rowley wrote:
> > If we've not seen any performance regressions within 1 week, then I
> > propose that we (pending final review) push this to allow wider
> > testing.
>
> I think Soumyadeep has reported a regression
On Wed, Jul 15, 2020 at 5:55 AM David Rowley wrote:
>
> On Tue, 14 Jul 2020 at 19:13, Thomas Munro wrote:
> >
> > On Fri, Jun 26, 2020 at 3:33 AM Robert Haas wrote:
> > > On Tue, Jun 23, 2020 at 11:53 PM David Rowley
> > > wrote:
> > > > In summary, based on these tests, I don't think we're ma
On Tue, 14 Jul 2020 at 19:13, Thomas Munro wrote:
>
> On Fri, Jun 26, 2020 at 3:33 AM Robert Haas wrote:
> > On Tue, Jun 23, 2020 at 11:53 PM David Rowley wrote:
> > > In summary, based on these tests, I don't think we're making anything
> > > worse in regards to synchronize_seqscans if we cap t
On Fri, Jun 26, 2020 at 3:33 AM Robert Haas wrote:
> On Tue, Jun 23, 2020 at 11:53 PM David Rowley wrote:
> > In summary, based on these tests, I don't think we're making anything
> > worse in regards to synchronize_seqscans if we cap the maximum number
> > of blocks to allocate to each worker at
On Tue, Jun 23, 2020 at 11:53 PM David Rowley wrote:
> In summary, based on these tests, I don't think we're making anything
> worse in regards to synchronize_seqscans if we cap the maximum number
> of blocks to allocate to each worker at once to 8192. Perhaps there's
> some argument for using som
On Mon, 22 Jun 2020 at 23:29, David Rowley wrote:
> On Tue, 23 Jun 2020 at 07:33, Robert Haas wrote:
> > On Mon, Jun 22, 2020 at 12:47 PM Ranier Vilela
> wrote:
> > > Questions:
> > > 1. Why acquire and release lock in retry: loop.
> >
> > This is a super-bad idea. Note the coding rule mentioned in spin.h.
On Tue, 23 Jun 2020 at 07:33, Robert Haas wrote:
> On Mon, Jun 22, 2020 at 12:47 PM Ranier Vilela wrote:
> > Questions:
> > 1. Why acquire and release lock in retry: loop.
>
> This is a super-bad idea. Note the coding rule mentioned in spin.h.
> There are many discussions on this mailing list about
On Tue, 23 Jun 2020 at 09:52, Thomas Munro wrote:
>
> On Fri, Jun 19, 2020 at 2:10 PM David Rowley wrote:
> > Here's a patch which caps the maximum chunk size to 131072. If
> > someone doubles the page size then that'll be 2GB instead of 1GB. I'm
> > not personally worried about that.
>
> I wonder how this interacts with the sync scan feature.
On Fri, Jun 19, 2020 at 2:10 PM David Rowley wrote:
> Here's a patch which caps the maximum chunk size to 131072. If
> someone doubles the page size then that'll be 2GB instead of 1GB. I'm
> not personally worried about that.
I wonder how this interacts with the sync scan feature. It has a
conf
On Mon, 22 Jun 2020 at 16:33, Robert Haas wrote:
> Ranier,
>
> This topic is largely unrelated to the current thread. Also...
>
Well, I was trying to improve the patch for the current thread.
Or perhaps, you are referring to something else, which I may not have
understood.
>
> On Mon
Ranier,
This topic is largely unrelated to the current thread. Also...
On Mon, Jun 22, 2020 at 12:47 PM Ranier Vilela wrote:
> Questions:
> 1. Why acquire and release lock in retry: loop.
This is a super-bad idea. Note the coding rule mentioned in spin.h.
There are many discussions on this mailing list
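
For context on the rule being cited: spin.h expects a spinlock to be held only across a handful of straight-line instructions, with no loops, sleeps, or function calls inside the critical section, so putting the acquire/release inside a retry loop works against that. A hedged sketch of the intended shape follows (SpinLockAcquire/SpinLockRelease are PostgreSQL's real macros; the struct and field names are invented for illustration):

#include "postgres.h"
#include "storage/spin.h"

typedef struct SharedCounter
{
    slock_t mutex;
    uint64  value;
} SharedCounter;

uint64
bump_counter(SharedCounter *sc)
{
    uint64  result;

    /* Hold the spinlock for a few instructions only: no loops,
     * no sleeps, no calls that could error out while it is held. */
    SpinLockAcquire(&sc->mutex);
    result = ++sc->value;
    SpinLockRelease(&sc->mutex);

    return result;
}
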
On Mon, 22 Jun 2020 at 02:53, David Rowley wrote:
> On Mon, 22 Jun 2020 at 16:54, David Rowley wrote:
> > I also tested this on an AMD machine running Ubuntu 20.04 on kernel
> > version 5.4.0-37. I used the same 100GB table I mentioned in [1], but
> > with the query "select * from t whe
On Sun, Jun 21, 2020 at 6:52 PM David Rowley wrote:
> Perhaps that's not a problem though, but then again, perhaps just
> keeping it at 131072 regardless of RELSEG_SIZE and BLCKSZ is also ok.
> The benchmarks I did on Windows [1] showed that the returns diminished
> once we started making the step
On Mon, 22 Jun 2020 at 16:54, David Rowley wrote:
> I also tested this on an AMD machine running Ubuntu 20.04 on kernel
> version 5.4.0-37. I used the same 100GB table I mentioned in [1], but
> with the query "select * from t where a < 0;", which saves having to
> do any aggregate work.
I just want
On Sat, 20 Jun 2020 at 08:00, Robert Haas wrote:
>
> On Thu, Jun 18, 2020 at 10:10 PM David Rowley wrote:
> > Here's a patch which caps the maximum chunk size to 131072. If
> > someone doubles the page size then that'll be 2GB instead of 1GB. I'm
> > not personally worried about that.
>
> Maybe use RELSEG_SIZE?
On Thu, Jun 18, 2020 at 10:10 PM David Rowley wrote:
> Here's a patch which caps the maximum chunk size to 131072. If
> someone doubles the page size then that'll be 2GB instead of 1GB. I'm
> not personally worried about that.
Maybe use RELSEG_SIZE?
> I tested the performance on a Windows 10 la
On Fri, 19 Jun 2020 at 11:34, David Rowley wrote:
>
> On Fri, 19 Jun 2020 at 03:26, Robert Haas wrote:
> >
> > On Thu, Jun 18, 2020 at 6:15 AM David Rowley wrote:
> > > With a 32TB relation, the code will make the chunk size 16GB. Perhaps
> > > I should change the code to cap that at 1GB.
> >
>
On Fri, 19 Jun 2020 at 03:26, Robert Haas wrote:
>
> On Thu, Jun 18, 2020 at 6:15 AM David Rowley wrote:
> > With a 32TB relation, the code will make the chunk size 16GB. Perhaps
> > I should change the code to cap that at 1GB.
>
> > It seems pretty hard to believe there's any significant advantage
On Thu, Jun 18, 2020 at 6:15 AM David Rowley wrote:
> With a 32TB relation, the code will make the chunk size 16GB. Perhaps
> I should change the code to cap that at 1GB.
It seems pretty hard to believe there's any significant advantage to a
chunk size >1GB, so I would be in favor of that change
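
To make the arithmetic behind that cap explicit: 131072 blocks at the default 8 kB BLCKSZ is 1 GiB, and 131072 blocks is also the default RELSEG_SIZE, so tying the cap to RELSEG_SIZE would track a different BLCKSZ automatically. A minimal sketch of such a cap follows; the function name and the relation-size scaling factor are illustrative assumptions, not the committed code.

#include <stdint.h>

typedef uint32_t BlockNumber;

#define MAX_CHUNK_BLOCKS 131072     /* 1 GiB at 8 kB BLCKSZ; assumption */

/* Illustrative only: pick a chunk size that grows with the relation
 * but never exceeds the 131072-block (1 GiB) cap discussed above. */
BlockNumber
choose_chunk_size(BlockNumber rel_blocks)
{
    BlockNumber chunk = rel_blocks / 2048;  /* scaling factor assumed */

    if (chunk < 1)
        chunk = 1;
    if (chunk > MAX_CHUNK_BLOCKS)
        chunk = MAX_CHUNK_BLOCKS;
    return chunk;
}
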
On Wed, 17 Jun 2020 at 03:20, Robert Haas wrote:
>
> On Mon, Jun 15, 2020 at 5:09 PM David Rowley wrote:
> > * Perhaps when there are less than 2 full chunks remaining, workers
> > can just take half of what is left. Or more specifically
> > Max(pg_next_power2(remaining_blocks) / 2, 1), which ide
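
A minimal standalone sketch of the ramp-down idea quoted above (the real code would use pg_nextpower2_32() from pg_bitutils.h rather than the local helper here): once less than two full chunks of blocks remain, shrink the hand-out to Max(next_power_of_2(remaining) / 2, 1) so the tail of the scan is split into progressively smaller pieces.

#include <stdint.h>

static uint32_t
next_power_of_2(uint32_t n)     /* stand-in for pg_nextpower2_32() */
{
    uint32_t p = 1;

    while (p < n)
        p <<= 1;
    return p;
}

/* Illustrative: full-sized chunks until the tail, then ramp down. */
uint32_t
chunk_for_tail(uint32_t remaining_blocks, uint32_t chunk_size)
{
    uint32_t shrunk;

    if (remaining_blocks >= 2 * chunk_size)
        return chunk_size;

    shrunk = next_power_of_2(remaining_blocks) / 2;
    return shrunk > 0 ? shrunk : 1;     /* Max(..., 1) */
}
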
On Tue, Jun 16, 2020 at 6:57 AM Amit Kapila wrote:
> I agree that won't be a common scenario but apart from that also I am
> not sure if we can conclude that the proposed patch won't cause any
> regressions. See one of the tests [1] done by Soumyadeep where the
> patch has caused regression in on
On Mon, Jun 15, 2020 at 5:09 PM David Rowley wrote:
> To summarise what's all been proposed so far:
>
> 1. Use a constant, (e.g. 64) as the parallel step size
> 2. Ramp up the step size over time
> 3. Ramp down the step size towards the end of the scan.
> 4. Auto-determine a good stepsize based on
On Mon, Jun 15, 2020 at 8:59 PM Robert Haas wrote:
>
> On Sat, Jun 13, 2020 at 2:13 AM Amit Kapila wrote:
> > The performance can vary based on qualification where some workers
> > discard more rows as compared to others, with the current system with
> > step-size as one, the probability of unequ
On Tue, 16 Jun 2020 at 03:29, Robert Haas wrote:
>
> On Sat, Jun 13, 2020 at 2:13 AM Amit Kapila wrote:
> > The performance can vary based on qualification where some workers
> > discard more rows as compared to others, with the current system with
> > step-size as one, the probability of unequal
On Sat, Jun 13, 2020 at 2:13 AM Amit Kapila wrote:
> The performance can vary based on qualification where some workers
> discard more rows as compared to others, with the current system with
> step-size as one, the probability of unequal work among workers is
> quite low as compared to larger ste
On Fri, Jun 12, 2020 at 11:28 PM Robert Haas wrote:
>
> On Thu, Jun 11, 2020 at 4:54 PM David Rowley wrote:
> > I think someone at some point is not going to like the automatic
> > choice. So perhaps a reloption to allow users to overwrite it is a
> > good idea. -1 should most likely mean use the
On Thu, Jun 11, 2020 at 4:54 PM David Rowley wrote:
> I think someone at some point is not going to like the automatic
> choice. So perhaps a reloption to allow users to overwrite it is a
> good idea. -1 should most likely mean use the automatic choice based
> on relation size. I think for parall
On Fri, Jun 12, 2020 at 2:24 AM David Rowley wrote:
>
> On Thu, 11 Jun 2020 at 23:35, Amit Kapila wrote:
> > Another point I am thinking is that whatever formula we come up here
> > might not be a good fit for every case. For ex. as you mentioned
> > above that larger step-size can impact the pe
On Thu, 11 Jun 2020 at 23:35, Amit Kapila wrote:
> Another point I am thinking is that whatever formula we come up here
> might not be a good fit for every case. For ex. as you mentioned
> above that larger step-size can impact the performance based on
> qualification, similarly there could be ot
On Thu, Jun 11, 2020 at 10:13 AM David Rowley wrote:
>
> On Thu, 11 Jun 2020 at 16:03, Amit Kapila wrote:
> > I think something on these lines would be a good idea especially
> > keeping step-size proportional to relation size. However, I am not
> > completely sure if doubling the step-size with
On Thu, 11 Jun 2020 at 16:03, Amit Kapila wrote:
> I think something on these lines would be a good idea especially
> keeping step-size proportional to relation size. However, I am not
> completely sure if doubling the step-size with equal increase in
> relation size (ex. what is happening betwee
On Thu, Jun 11, 2020 at 8:35 AM David Rowley wrote:
>
> On Thu, 11 Jun 2020 at 14:09, Amit Kapila wrote:
> >
> > On Thu, Jun 11, 2020 at 7:18 AM David Rowley wrote:
> > >
> > > On Thu, 11 Jun 2020 at 01:24, Amit Kapila wrote:
> > > > Can we try the same test with 4, 8, 16 workers as well? I do
On Thu, 11 Jun 2020 at 14:09, Amit Kapila wrote:
>
> On Thu, Jun 11, 2020 at 7:18 AM David Rowley wrote:
> >
> > On Thu, 11 Jun 2020 at 01:24, Amit Kapila wrote:
> > > Can we try the same test with 4, 8, 16 workers as well? I don't
> > > foresee any problem with a higher number of workers but i
On Thu, Jun 11, 2020 at 7:18 AM David Rowley wrote:
>
> On Thu, 11 Jun 2020 at 01:24, Amit Kapila wrote:
> > Can we try the same test with 4, 8, 16 workers as well? I don't
> > foresee any problem with a higher number of workers but it might be
> > better to once check that if it is not too much
On Thu, 11 Jun 2020 at 01:24, Amit Kapila wrote:
> Can we try the same test with 4, 8, 16 workers as well? I don't
> foresee any problem with a higher number of workers but it might be
> better to once check that if it is not too much additional work.
I ran the tests again with up to 7 workers.
On Wed, Jun 10, 2020 at 6:04 PM David Rowley wrote:
>
> On Wed, 10 Jun 2020 at 17:39, David Rowley wrote:
> >
> > On Wed, 10 Jun 2020 at 17:21, Thomas Munro wrote:
> > > I also heard from Andres that he likes this patch with his AIO
> > > prototype, because of the way request merging works. So
On Wed, 10 Jun 2020 at 17:39, David Rowley wrote:
>
> On Wed, 10 Jun 2020 at 17:21, Thomas Munro wrote:
> > I also heard from Andres that he likes this patch with his AIO
> > prototype, because of the way request merging works. So it seems like
> > there are several reasons to want it.
> >
> > B
On Wed, 10 Jun 2020 at 17:21, Thomas Munro wrote:
> I also heard from Andres that he likes this patch with his AIO
> prototype, because of the way request merging works. So it seems like
> there are several reasons to want it.
>
> But ... where should we get the maximum step size from? A GUC?
I
On Wed, Jun 10, 2020 at 5:06 PM David Rowley wrote:
> I repeated this test on an up-to-date Windows 10 machine to see if the
> later kernel is any better at the readahead.
>
> Results for the same test are:
>
> Master:
>
> max_parallel_workers_per_gather = 0: Time: 148481.244 ms (02:28.481)
> (706
On Thu, 21 May 2020 at 17:06, David Rowley wrote:
> create table t (a int, b text);
> -- create a table of 100GB in size.
> insert into t select x,md5(x::text) from
> generate_series(1,100*1572.7381809)x; -- took 1 hr 18 mins
> vacuum freeze t;
>
> query = select count(*) from t;
> Disk = Sams
On Wed, Jun 3, 2020 at 3:18 PM Soumyadeep Chakraborty
wrote:
> Idk if that is a lesser evil than the workers
> being idle..probably not?
Apologies, I meant that the extra atomic fetches are probably a lesser
evil than the workers being idle.
Soumyadeep
On Sat, May 23, 2020 at 12:00 AM Robert Haas
wrote:
> I think there's a significant difference. The idea I remember being
> discussed at the time was to divide the relation into equal parts at
> the very start and give one part to each worker. I think that carries
> a lot of risk of some workers f
> It doesn't look like it's using table_block_parallelscan_nextpage() as
> a block allocator so it's not affected by the patch. It has its own
> thing zs_parallelscan_nextrange(), which does
> pg_atomic_fetch_add_u64(&pzscan->pzs_allocatedtids,
> ZS_PARALLEL_CHUNK_SIZE), and that macro is 0x10
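
For readers unfamiliar with that style, here is a standalone sketch of fetch-and-add range allocation using C11 atomics in place of PostgreSQL's pg_atomic_fetch_add_u64(); the struct, field names, and chunk constant are illustrative, not zedstore's actual code.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define CHUNK 0x100000ULL            /* illustrative range size */

typedef struct SharedScanState
{
    _Atomic uint64_t next_key;       /* next unallocated key */
    uint64_t last_key;               /* one past the end of the key space */
} SharedScanState;

/* Claim the next [*start, *end) range with a single fetch-and-add;
 * no lock is needed.  Returns false once the key space is exhausted. */
bool
next_range(SharedScanState *s, uint64_t *start, uint64_t *end)
{
    uint64_t base = atomic_fetch_add(&s->next_key, CHUNK);

    if (base >= s->last_key)
        return false;
    *start = base;
    *end = base + CHUNK < s->last_key ? base + CHUNK : s->last_key;
    return true;
}
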
On Sat, 23 May 2020 at 06:31, Robert Haas wrote:
>
> On Thu, May 21, 2020 at 6:28 PM Thomas Munro wrote:
> > Right, I think it's safe. I think you were probably right that
> > ramp-up isn't actually useful though, it's only the end of the scan
> > that requires special treatment so we don't get
On Sat, May 23, 2020 at 12:00 AM Robert Haas wrote:
>
> On Tue, May 19, 2020 at 10:23 PM Amit Kapila wrote:
> > Good experiment. IIRC, we have discussed a similar idea during the
> > development of this feature but we haven't seen any better results by
> > allocating in ranges on the systems we
On Thu, May 21, 2020 at 6:28 PM Thomas Munro wrote:
> Right, I think it's safe. I think you were probably right that
> ramp-up isn't actually useful though, it's only the end of the scan
> that requires special treatment so we don't get unfair allocation as
> the work runs out, due to coarse-grained
On Tue, May 19, 2020 at 10:23 PM Amit Kapila wrote:
> Good experiment. IIRC, we have discussed a similar idea during the
> development of this feature but we haven't seen any better results by
> allocating in ranges on the systems we have tried. So, we went with
> the current approach which is m
On Fri, May 22, 2020 at 1:14 PM Soumyadeep Chakraborty
wrote:
> Some more data points:
Thanks!
> max_parallel_workers_per_gather    Time (seconds)
> 0                                  29.04s
> 1                                  29.17s
> 2                                  28.7
Hi Thomas,
Some more data points:
create table t_heap as select generate_series(1, 1) i;
Query: select count(*) from t_heap;
shared_buffers=32MB (so that I don't have to clear buffers, OS page
cache)
OS: FreeBSD 12.1 with UFS on GCP
4 vCPUs, 4GB RAM Intel Skylake
22G Google PersistentDisk
On Fri, May 22, 2020 at 10:00 AM David Rowley wrote:
> On Thu, 21 May 2020 at 17:06, David Rowley wrote:
> > For the patch. I know you just put it together quickly, but I don't
> > think you can do that ramp up the way you have. It looks like there's
> > a risk of torn reads and torn writes and I
On Thu, 21 May 2020 at 17:06, David Rowley wrote:
> For the patch. I know you just put it together quickly, but I don't
> think you can do that ramp up the way you have. It looks like there's
> a risk of torn reads and torn writes and I'm unsure how much that
> could affect the test results here.
On Thu, 21 May 2020 at 14:32, Thomas Munro wrote:
> Thanks. So it seems like Linux, Windows and anything using ZFS are
> OK, which probably explains why we hadn't heard complaints about it.
I tried out a different test on a Windows 8.1 machine I have here. I
was concerned that the test that was
On Thu, May 21, 2020 at 1:38 PM Ranier Vilela wrote:
>> >> On Thu, May 21, 2020 at 11:15 AM Ranier Vilela
>> >> wrote:
>> >> > postgres=# set max_parallel_workers_per_gather = 0;
>> >> > Time: 227238,445 ms (03:47,238)
>> >> > postgres=# set max_parallel_workers_per_gather = 1;
>> >> > Time: 138
On Wed, 20 May 2020 at 21:03, Thomas Munro wrote:
> On Thu, May 21, 2020 at 11:51 AM Ranier Vilela
> wrote:
> > On Wed, 20 May 2020 at 20:48, Thomas Munro <thomas.mu...@gmail.com> wrote:
> >> On Thu, May 21, 2020 at 11:15 AM Ranier Vilela
> wrote:
> >> > postgres=# set
On Thu, May 21, 2020 at 11:51 AM Ranier Vilela wrote:
> On Wed, 20 May 2020 at 20:48, Thomas Munro wrote:
>> On Thu, May 21, 2020 at 11:15 AM Ranier Vilela wrote:
>> > postgres=# set max_parallel_workers_per_gather = 0;
>> > Time: 227238,445 ms (03:47,238)
>> > postgres=# set max_p
On Wed, 20 May 2020 at 20:48, Thomas Munro wrote:
> On Thu, May 21, 2020 at 11:15 AM Ranier Vilela
> wrote:
> > postgres=# set max_parallel_workers_per_gather = 0;
> > Time: 227238,445 ms (03:47,238)
> > postgres=# set max_parallel_workers_per_gather = 1;
> > Time: 138027,351 ms (02:1
On Thu, May 21, 2020 at 11:15 AM Ranier Vilela wrote:
> postgres=# set max_parallel_workers_per_gather = 0;
> Time: 227238,445 ms (03:47,238)
> postgres=# set max_parallel_workers_per_gather = 1;
> Time: 138027,351 ms (02:18,027)
Ok, so it looks like NT/NTFS isn't suffering from this problem.
Tha
On Wed, 20 May 2020 at 18:49, Thomas Munro wrote:
> On Wed, May 20, 2020 at 11:03 PM Ranier Vilela
> wrote:
> > Time: 47767,916 ms (00:47,768)
> > Time: 32645,448 ms (00:32,645)
>
> Just to make sure kernel caching isn't helping here, maybe try making
> the table 2x or 4x bigger? My
On Wed, May 20, 2020 at 11:03 PM Ranier Vilela wrote:
> Time: 47767,916 ms (00:47,768)
> Time: 32645,448 ms (00:32,645)
Just to make sure kernel caching isn't helping here, maybe try making
the table 2x or 4x bigger? My test was on a virtual machine with only
4GB RAM, so the table couldn't be en
On Wed, 20 May 2020 at 00:09, Thomas Munro wrote:
> On Wed, May 20, 2020 at 2:23 PM Amit Kapila
> wrote:
> > Good experiment. IIRC, we have discussed a similar idea during the
> > development of this feature but we haven't seen any better results by
> > allocating in ranges on the sy
On Wed, May 20, 2020 at 2:23 PM Amit Kapila wrote:
> Good experiment. IIRC, we have discussed a similar idea during the
> development of this feature but we haven't seen any better results by
> allocating in ranges on the systems we have tried. So, we went with
> the current approach which is mo
On Wed, May 20, 2020 at 7:24 AM Thomas Munro wrote:
>
> Hello hackers,
>
> Parallel sequential scan relies on the kernel detecting sequential
> access, but we don't make the job easy. The resulting striding
> pattern works terribly on strict next-block systems like FreeBSD UFS,
> and degrades rapidly
Hello hackers,
Parallel sequential scan relies on the kernel detecting sequential
access, but we don't make the job easy. The resulting striding
pattern works terribly on strict next-block systems like FreeBSD UFS,
and degrades rapidly when you add too many workers on sliding window
systems like
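
As a rough illustration of the allocation patterns being contrasted in that opening message (a standalone sketch with made-up worker and chunk counts, not the PostgreSQL code): with one block handed out per request, each worker sees a strided sequence that a strict next-block readahead heuristic cannot recognise as sequential, whereas handing out runs of blocks keeps each worker's reads contiguous.

#include <stdio.h>

#define NWORKERS   4        /* illustrative values only */
#define NBLOCKS    32
#define CHUNK_SIZE 8

int
main(void)
{
    int b, c;

    /* one block per request: worker 0 reads 0, 4, 8, ... (strided) */
    printf("per-block allocation, worker 0 reads:");
    for (b = 0; b < NBLOCKS; b += NWORKERS)
        printf(" %d", b);
    printf("\n");

    /* chunked allocation: worker 0 reads a contiguous run of blocks */
    printf("chunked allocation, worker 0 reads:");
    for (c = 0; c < NBLOCKS / CHUNK_SIZE; c += NWORKERS)
        for (b = c * CHUNK_SIZE; b < (c + 1) * CHUNK_SIZE; b++)
            printf(" %d", b);
    printf("\n");
    return 0;
}
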