rsmith added a comment.

I think this is the wrong way to handle this issue. We need to give lambdas a mangling if they occur in functions for which there can be definitions in multiple translation units. In regular C++ code, that's inline functions and function template specializations, so that's what we're currently checking for. CUDA adds more cases (in particular, `__host__ __device__` functions, plus anything else that can be emitted for multiple targets), so we should additionally check for those cases when determining whether to number lambdas. I don't see any need for a flag to control this behavior.
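As a minimal sketch of the kind of code this affects (the header and function names below are hypothetical), a lambda inside a `__host__ __device__` inline function defined in a header can be emitted in multiple translation units and for both the host and device targets, so its closure type needs a consistent number in the mangled name:

```cuda
// Hypothetical header "apply.h", included from several .cu files.

template <typename F>
__host__ __device__ int apply(F f) { return f(21); }

// This function can be emitted in every TU that includes the header,
// and for both host and device compilations. The lambda's closure type
// appears in the mangled name of the apply<...> specialization, so it
// must be numbered identically in all of those emissions.
__host__ __device__ inline int doubled() {
  auto twice = [](int x) { return 2 * x; };
  return apply(twice);
}
```

If the numbering differed between the host and device compilations (or between TUs), they would disagree on the mangled name of `apply<...>` and the definitions would fail to match up at link time.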
Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D63164/new/

https://reviews.llvm.org/D63164