On Mon, Mar 07, 2022 at 09:54:22AM -0800, Andres Freund wrote:
> > Initially select didn't break anything, but when I tuned down
> > jit_above_cost so that it will kick in - got fails immediately.
> Could you set jit_debugging_support=on and show a backtrace with that?
Here you go:
Program receive
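The setting Andres asks for is PostgreSQL's developer GUC `jit_debugging_support`, which registers JIT-emitted code with gdb so the otherwise-anonymous `?? ()` frames in a backtrace get symbol names. A minimal session sketch (it only has an effect on an LLVM-enabled build, and should be enabled before the session's first JIT compilation):

```sql
-- Developer option: register JIT-compiled functions with gdb's
-- JIT interface so backtrace frames show names instead of "?? ()".
SET jit_debugging_support = on;
```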
On Mon, Mar 07, 2022 at 12:22:26PM -0500, Tom Lane wrote:
> Neither of those configurations fail for me, so either
> it's been fixed since 12.9, or (more likely) there is
> something to your test case beyond what you've mentioned.
Upgraded to 12.10 from pgdg, same problem.
> (I guess a long-shot
Hi,
On 2022-03-07 16:11:28 +0100, hubert depesz lubaczewski wrote:
> On Sun, Mar 06, 2022 at 11:10:00AM -0500, Tom Lane wrote:
> > > I tore these boxes down, so can't check immediately, but I think
> > > I remember that you're right - single-row queries didn't use JIT.
>
> Got focal box up. Loaded schema for Pg.
hubert depesz lubaczewski writes:
> Got focal box up. Loaded schema for Pg.
> Initially select didn't break anything, but when I tuned down
> jit_above_cost so that it will kick in - got fails immediately.
Hmph. I tried two more builds:
* downgraded llvm to llvm-9, compiled PG 12 from source
*
On Sun, Mar 06, 2022 at 11:10:00AM -0500, Tom Lane wrote:
> > I tore these boxes down, so can't check immediately, but I think
> > I remember that you're right - single-row queries didn't use JIT.
Got focal box up. Loaded schema for Pg.
Initially select didn't break anything, but when I tuned down
jit_above_cost so that it will kick in - got fails immediately.
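Tuning jit_above_cost down so that JIT kicks in even for a trivial single-row SELECT can be done per-session; the exact values used in the thread aren't shown, so zeroing the cost thresholds is just the simplest illustrative choice:

```sql
-- Force the planner to use JIT for any query, regardless of
-- estimated cost, so the failure reproduces on small tables:
SET jit = on;
SET jit_above_cost = 0;
SET jit_inline_above_cost = 0;
SET jit_optimize_above_cost = 0;
```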
hubert depesz lubaczewski writes:
> On Fri, Mar 04, 2022 at 05:03:14PM -0500, Tom Lane wrote:
>> Mmm ... it might have just been that the planner chose not to use
>> JIT when it thought there were fewer rows involved. Did you check
>> with EXPLAIN that these cut-down cases still used JIT?
> I tore these boxes down, so can't check immediately, but I think
> I remember that you're right - single-row queries didn't use JIT.
On Fri, Mar 04, 2022 at 05:03:14PM -0500, Tom Lane wrote:
> hubert depesz lubaczewski writes:
> > On Fri, Mar 04, 2022 at 02:09:52PM -0500, Tom Lane wrote:
> >> I tried and failed to reproduce this on Fedora 35 on aarch64,
> >> but that has what I think is a newer LLVM version:
>
> > I have suspicion that it also kinda depends on number of rows in there.
Mladen Gogala writes:
> On 3/4/22 17:03, Tom Lane wrote:
>> Mmm ... it might have just been that the planner chose not to use
>> JIT when it thought there were fewer rows involved. Did you check
>> with EXPLAIN that these cut-down cases still used JIT?
> This is an interesting and informative answer.
On 3/4/22 17:03, Tom Lane wrote:
Mmm ... it might have just been that the planner chose not to use
JIT when it thought there were fewer rows involved. Did you check
with EXPLAIN that these cut-down cases still used JIT?
This is an interesting and informative answer. How do I check whether JIT
is used?
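To answer the question as asked: EXPLAIN (ANALYZE) appends a JIT summary to the plan whenever JIT was actually used for that execution; if the section is absent, the query ran without JIT. The table name below is a placeholder:

```sql
EXPLAIN (ANALYZE) SELECT count(*) FROM my_table;
-- Look for a trailing section along these lines:
--   JIT:
--     Functions: 4
--     Options: Inlining false, Optimization false, Expressions true, Deforming true
```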
hubert depesz lubaczewski writes:
> On Fri, Mar 04, 2022 at 02:09:52PM -0500, Tom Lane wrote:
>> I tried and failed to reproduce this on Fedora 35 on aarch64,
>> but that has what I think is a newer LLVM version:
> I have suspicion that it also kinda depends on number of rows in there.
> When I d
On Fri, Mar 04, 2022 at 02:09:52PM -0500, Tom Lane wrote:
> arm64, eh? I wonder if that's buggier than the Intel code paths.
>
> I tried and failed to reproduce this on Fedora 35 on aarch64,
> but that has what I think is a newer LLVM version:
I have suspicion that it also kinda depends on number of rows in there.
hubert depesz lubaczewski writes:
> OK. Traced it back to JIT. With JIT enabled:
Hah, that's useful info. Seems like it must be incorrect code
generated by JIT.
> versions of things that I think are relevant:
> =$ dpkg -l | grep -E 'llvm|clang|gcc|glibc'
> ii gcc
On Thu, Mar 03, 2022 at 05:39:21PM +0100, hubert depesz lubaczewski wrote:
> On Thu, Mar 03, 2022 at 04:11:56PM +0100, hubert depesz lubaczewski wrote:
> > On Thu, Mar 03, 2022 at 04:04:28PM +0100, hubert depesz lubaczewski wrote:
> > > and it worked, so I'm kinda at loss here.
> >
> > based on some talk on IRC, I was able to get stack trace from fail:
On Thu, Mar 03, 2022 at 04:11:56PM +0100, hubert depesz lubaczewski wrote:
> On Thu, Mar 03, 2022 at 04:04:28PM +0100, hubert depesz lubaczewski wrote:
> > and it worked, so I'm kinda at loss here.
>
> based on some talk on IRC, I was able to get stack trace from fail:
Based on the stack trace I
On Thu, Mar 03, 2022 at 04:04:28PM +0100, hubert depesz lubaczewski wrote:
> and it worked, so I'm kinda at loss here.
based on some talk on IRC, I was able to get stack trace from fail:
(gdb) bt
#0 0xfffe4a36e4d8 in ?? ()
#1 0xbe03ffb8 in ExecProcNode (node=0xe4f87cf8) at
./bu
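For completeness, the usual recipe for capturing a backtrace like the one above is to attach gdb to the backend that will run the failing query; this is a sketch under generic assumptions, not the exact commands used here:

```
$ psql                          # keep this session open
  SELECT pg_backend_pid();      -- note the PID it prints
$ gdb -p <PID>                  # attach from a second terminal
(gdb) continue
# run the failing query in the psql session; on the crash gdb stops:
(gdb) bt
```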
Hi,
I know it's most likely going to be due to glibc and locales, but I found an
interesting case that I can't figure out how to fix.
We have pg 12.6 on bionic. Works. Added focal replica (binary).
Replicates OK, but then fails when I try to pg_dump -s.
Error is:
pg_dump: error: query failed: serv
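The failing step, then, is a schema-only dump (`-s`) taken against the focal replica; something along these lines, with host and database names as placeholders:

```
$ pg_dump -s -h focal-replica -d mydb > schema.sql
```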