Hi Dmitry,
On 1/27/23 16:18, Dmitry Dolgov wrote:
> As I've noted off-list, there was a noticeable difference in the dumped
> bitcode, which I haven't noticed since we were talking mostly about
> differences between executions of the same query.
Thanks for the clarification and also thanks for helping …
> On Fri, Jan 27, 2023 at 10:02:32AM +0100, David Geier wrote:
> It's very curious as to why we didn't really see that when dumping the
> bitcode. It seems like the bitcode is always different enough to not spot
> that.
As I've noted off-list, there was a noticeable difference in the dumped
bitcode, …
Hi,
Thanks for taking a look!
On 12/1/22 22:12, Dmitry Dolgov wrote:
> First to summarize things a bit: from what I understand there are two
> suggestions on the table, one is about caching modules when doing
> inlining, the second is about actual lazy jitting. Are those two tightly
> coupled together, …
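
To make the first of those suggestions concrete, here is a minimal sketch of a
bitcode-module cache for inlining, assuming LLVM's C++ API
(llvm::parseBitcodeFile, llvm::CloneModule); the helper names and cache layout
are illustrative only, not the posted patch:

#include <llvm/Bitcode/BitcodeReader.h>
#include <llvm/IR/LLVMContext.h>
#include <llvm/IR/Module.h>
#include <llvm/Support/MemoryBuffer.h>
#include <llvm/Transforms/Utils/Cloning.h>
#include <string>
#include <unordered_map>

/* Parsed bitcode modules, keyed by .bc path; each file is parsed at most once. */
static std::unordered_map<std::string, std::unique_ptr<llvm::Module>> module_cache;

static llvm::Module *
get_cached_module(llvm::LLVMContext &ctx, const std::string &path)
{
    auto it = module_cache.find(path);
    if (it != module_cache.end())
        return it->second.get();

    auto buf = llvm::MemoryBuffer::getFile(path);
    if (!buf)
        return nullptr;

    auto mod = llvm::parseBitcodeFile((*buf)->getMemBufferRef(), ctx);
    if (!mod)
    {
        llvm::consumeError(mod.takeError());
        return nullptr;
    }
    return (module_cache[path] = std::move(*mod)).get();
}

/*
 * llvm::IRMover consumes its source module, so inlining has to work on a
 * clone and leave the cached copy intact for the next query.
 */
static std::unique_ptr<llvm::Module>
take_module_copy(llvm::LLVMContext &ctx, const std::string &path)
{
    llvm::Module *cached = get_cached_module(ctx, path);
    return cached ? llvm::CloneModule(*cached) : nullptr;
}

Cloning turns the per-query cost into an in-memory copy instead of a disk read
plus deserialization, at the price of keeping the parsed modules resident.
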
> On Thu, Jul 14, 2022 at 02:45:29PM +0200, David Geier wrote:
> On Mon, Jul 4, 2022 at 10:32 PM Andres Freund wrote:
> > On 2022-06-27 16:55:55 +0200, David Geier wrote:
> > > Indeed, the total JIT time increases the more modules are used. The reason
> > > for this to happen is that the inlining …
Can you elaborate a bit more on how you conclude that?
Looking at the numbers I measured in one of my previous e-mails, it
looks to me like the overhead of using multiple modules is fairly low
and only measurable in queries with dozens of modules. Given that JIT is
most useful in queries that …
On Mon, Jul 4, 2022 at 10:32 PM Andres Freund wrote:
> Hi,
>
> On 2022-06-27 16:55:55 +0200, David Geier wrote:
> > Indeed, the total JIT time increases the more modules are used. The reason
> > for this to happen is that the inlining pass loads and deserializes all to
> > be inlined modules …
Hi,
On 2022-06-27 16:55:55 +0200, David Geier wrote:
> Indeed, the total JIT time increases the more modules are used. The reason
> for this to happen is that the inlining pass loads and deserializes all to
> be inlined modules (.bc files) from disk prior to inlining them via
> llvm::IRMover. …
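
For readers following the thread, the costly path being described looks roughly
like this when sketched against LLVM's public C++ API (illustrative, not
PostgreSQL's actual llvmjit_inline.cpp): every inlining run re-reads the
bitcode file, deserializes it into a fresh module, and only then moves the
wanted function across.

#include <llvm/Bitcode/BitcodeReader.h>
#include <llvm/IR/LLVMContext.h>
#include <llvm/IR/Module.h>
#include <llvm/Linker/IRMover.h>
#include <llvm/Support/Error.h>
#include <llvm/Support/MemoryBuffer.h>

static llvm::Error
inline_from_bitcode(llvm::Module &query_mod, llvm::LLVMContext &ctx,
                    const char *bc_path, const char *func_name)
{
    /* Re-read and deserialize on every call: the overhead that grows with
     * the number of modules. */
    auto buf = llvm::MemoryBuffer::getFile(bc_path);
    if (!buf)
        return llvm::createStringError(buf.getError(), "cannot read %s", bc_path);

    auto src = llvm::parseBitcodeFile((*buf)->getMemBufferRef(), ctx);
    if (!src)
        return src.takeError();

    llvm::Function *fn = (*src)->getFunction(func_name);
    if (!fn)
        return llvm::createStringError(llvm::inconvertibleErrorCode(),
                                       "%s not found", func_name);

    /* IRMover pulls the requested global (plus its dependencies) into the
     * destination module and consumes the source module. */
    llvm::IRMover mover(query_mod);
    return mover.move(std::move(*src), {fn},
                      [](llvm::GlobalValue &, llvm::IRMover::ValueAdder) {},
                      /*IsPerformingImport=*/false);
}

With dozens of modules per query, the load-and-parse step dominates, which is
what the caching idea discussed elsewhere in the thread targets.
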
Hi,
On 2022-07-04 06:43:00 +0000, Luc Vlaming Hummel wrote:
> Thanks for reviewing this and the interesting examples!
>
> Wanted to give a bit of extra insight as to why I'd love to have a system
> that can lazily emit JIT code and hence creates roughly a module per function:
> In the end I'm …
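
As a rough illustration of that direction: LLVM's ORC layer already ships lazy
compilation, so a module-per-function design could lean on it. A minimal
sketch, assuming the LLLazyJIT API available in recent LLVM releases (this is
the upstream mechanism, not the proposed patch):

#include <llvm/ExecutionEngine/Orc/LLJIT.h>
#include <llvm/ExecutionEngine/Orc/ThreadSafeModule.h>
#include <llvm/Support/TargetSelect.h>

static llvm::Expected<std::unique_ptr<llvm::orc::LLLazyJIT>>
make_lazy_jit(void)
{
    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();
    return llvm::orc::LLLazyJITBuilder().create();
}

static llvm::Error
add_lazily(llvm::orc::LLLazyJIT &jit, llvm::orc::ThreadSafeModule tsm)
{
    /* Modules added via addLazyIRModule are registered but not compiled;
     * each function is codegen'ed on first call through a call-through stub. */
    return jit.addLazyIRModule(std::move(tsm));
}

With per-function modules, a function that is never called never pays its
optimization or codegen cost.
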
Luc Vlaming (ServiceNow)
From: David Geier
Sent: Wednesday, June 29, 2022 11:03 AM
To: Alvaro Herrera
Cc: Luc Vlaming; Andres Freund;
PostgreSQL-development
Subject: Re: Lazy JIT IR code generation to increase JIT speed with partitions
Hi Alvaro,
That's a very interesting …
Hi Alvaro,
That's a very interesting case and might indeed be fixed or at least
improved by this patch. I tried to reproduce this, but at least when
running a simple, serial query with increasing numbers of functions, the
time spent per function is linear or even slightly sub-linear (same as Tom …
On 2021-Jan-18, Luc Vlaming wrote:
> I would like this topic to somehow progress and was wondering what other
> benchmarks / tests would be needed to have some progress? I've so far
> provided benchmarks for small(ish) queries and some tpch numbers, assuming
> those would be enough.
Hi, some time …
Hi hackers,
I picked this up and had a closer look at how the total JIT time
depends on the number of modules to be jitted.
Indeed, the total JIT time increases the more modules are used. The reason
for this to happen is that the inlining pass loads and deserializes all to
be inlined modules (.bc files) from disk prior to inlining them via
llvm::IRMover. …
On 18-01-2021 08:47, Luc Vlaming wrote:
> Hi everyone, Andres,
> On 03-01-2021 11:05, Luc Vlaming wrote:
> > On 30-12-2020 14:23, Luc Vlaming wrote:
> > > On 30-12-2020 02:57, Andres Freund wrote:
> > > > Hi,
> > > > Great to see work in this area!
> I would like this topic to somehow progress and was wondering what other …
Hi everyone, Andres,
On 03-01-2021 11:05, Luc Vlaming wrote:
> On 30-12-2020 14:23, Luc Vlaming wrote:
> > On 30-12-2020 02:57, Andres Freund wrote:
> > > Hi,
> > > Great to see work in this area!
I would like this topic to somehow progress and was wondering what other
benchmarks / tests would be needed to …
On 30-12-2020 14:23, Luc Vlaming wrote:
> On 30-12-2020 02:57, Andres Freund wrote:
> > Hi,
> > Great to see work in this area!
> > On 2020-12-28 09:44:26 +0100, Luc Vlaming wrote:
> > > I would like to propose a small patch to the JIT machinery which makes the
> > > IR code generation lazy. The reason for postponing …
On 30-12-2020 02:57, Andres Freund wrote:
> Hi,
> Great to see work in this area!
> On 2020-12-28 09:44:26 +0100, Luc Vlaming wrote:
> > I would like to propose a small patch to the JIT machinery which makes the
> > IR code generation lazy. The reason for postponing the generation of the IR
> > code is that with …
Hi,
Great to see work in this area!
On 2020-12-28 09:44:26 +0100, Luc Vlaming wrote:
> I would like to propose a small patch to the JIT machinery which makes the
> IR code generation lazy. The reason for postponing the generation of the IR
> code is that with partitions we get an explosion in the …
Hi,
I would like to propose a small patch to the JIT machinery which makes
the IR code generation lazy. The reason for postponing the generation of
the IR code is that with partitions we get an explosion in the number of
JIT functions generated as many child tables are involved, each with
the …
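
The shape of the proposal can be illustrated without any LLVM at all: give
every expression a thunk that generates and compiles its code on first
evaluation, so a plan touching many partitions only pays for the expressions
that actually run. A toy sketch (all names hypothetical, not the patch):

#include <cstdio>
#include <functional>
#include <vector>

/* Stand-in for a JIT-compiled expression: argument -> result. */
using compiled_fn = std::function<long(long)>;

struct lazy_expr
{
    std::function<compiled_fn()> codegen; /* builds + compiles on demand */
    compiled_fn compiled;                 /* empty until first evaluation */

    long eval(long arg)
    {
        if (!compiled)
            compiled = codegen(); /* first call triggers code generation */
        return compiled(arg);
    }
};

int main()
{
    /* One expression per "partition"; nothing is compiled up front. */
    std::vector<lazy_expr> exprs;
    for (int i = 0; i < 1000; i++)
        exprs.push_back({[i] {
            std::printf("generating code for expression %d\n", i);
            return compiled_fn([i](long x) { return x + i; });
        }, {}});

    /* Only the touched expression pays the codegen cost. */
    std::printf("%ld\n", exprs[3].eval(10)); /* compiles #3 only */
    std::printf("%ld\n", exprs[3].eval(20)); /* reuses cached function */
    return 0;
}
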