lhames added a comment.

In D107049#3096727 <https://reviews.llvm.org/D107049#3096727>, @uabelho wrote:

> Hi,
>
> We're seeing a problem with this patch in our downstream (not public) 
> buildbots. With an asan-built compiler we see the following:
>
>   ...
>   10:08:55 [ RUN      ] InterpreterTest.CatchException
>   10:08:55 libunwind: __unw_add_dynamic_fde: bad fde: FDE is really a CIE
>   10:08:55 libc++abi: terminating with uncaught exception of type 
> custom_exception
>   ...

I suspect that this is a Linux distro that's using libunwind as the unwinder, 
at least for this test. Linux distros typically use libgcc_s for unwinding. The 
two libraries have different behavior for their `__register_frame` functions: 
libunwind's expects to be passed a single FDE, while libgcc_s's expects to be 
passed an entire .eh_frame section. We try to guess the unwinder in the JIT 
and use the appropriate argument (see [1][2]), but when we get it wrong this is 
often the result: we try to pass an .eh_frame section to libunwind's 
`__register_frame` and it errors out on a CIE at the start of the section.

@uabelho -- In your setup I'm seeing:

  -- Looking for __unw_add_dynamic_fde
  -- Looking for __unw_add_dynamic_fde - not found

So the question is, how are we failing to find `__unw_add_dynamic_fde` during 
config, only to crash in it during the test? Is use of libunwind on your 
platform expected?

Side note: Peter Housel recently posted https://reviews.llvm.org/D111863 to 
add a new registration API with consistent behavior. Hopefully in the future 
we can detect that API dynamically and eliminate this issue for users of 
future libunwinds.

In D107049#3100630 <https://reviews.llvm.org/D107049#3100630>, @rnk wrote:

> So, to back up a bit, do I understand correctly that this change adds tests 
> to the check-clang test suite that JIT compiles C++ code for the host and 
> throws C++ exceptions? Can we reconsider that?
>
> We have a policy of not running execution tests in the clang test suite 
> because we know they always turn out to be unreliable, flaky, and highly 
> dependent on the environment rather than the code under test. Integration 
> tests are great, everyone needs them, but they should definitely not be part 
> of the default set of tests that developers run anywhere and everywhere and 
> expect to work out of the box, regardless of the environment. +@dblaikie to 
> confirm if I am off base here about our testing strategy, and maybe Lang can 
> advise us about past JIT testing strategies.

The JIT has always used execution tests. It's difficult to test the JIT 
properly without them, since it doesn't have persistent output.

In practice the trick has always been to tighten the criteria for where these 
can run until things stabilize. We're seeing more failures on the test cases 
that Vassil is writing because he's pushing the boundaries of ORC's feature 
set, but I think the solution should still be the same: fix the JIT when we 
find bugs there, and restrict the tests to environments where they're expected 
to pass.
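For reference, that restriction is usually expressed with lit directives at 
the top of the test file; the exact feature names depend on the suite's 
lit.cfg, but something along these lines:

```
// REQUIRES: host-supports-jit
// UNSUPPORTED: system-aix
```

The test is then skipped (rather than failed) on configurations that don't 
advertise the required features.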

[1] 
https://github.com/llvm/llvm-project/commit/957334382cd12ec07b46c0ddfdcc220731f6d80f
[2] https://llvm.org/PR44074


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D107049/new/

https://reviews.llvm.org/D107049

_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
