On 4/21/21 6:40 AM, Honza Horak wrote:
On 3/19/21 8:15 PM, Tom Lane wrote:
Andres Freund <and...@anarazel.de> writes:
I think the error above comes from a "mismatch" between the clang used
to compile bitcode, and the LLVM version linked to. Normally we're
somewhat tolerant of differences between the two, but there was an ABI
change at some point, leading to that error.  IIRC I hit that, but it
vanished as soon as I used a matching libllvm and clang.

Thanks, I passed that advice on.

            regards, tom lane

Tom Stellard was kind enough to look into this issue more deeply with his LLVM
expertise and found that PostgreSQL is not actually handling LLVM quite right.
He's working on improving the patch, but sharing even the first attempt with
upstream seems like a good idea:

https://src.fedoraproject.org/rpms/postgresql/pull-request/29


I wrote a new patch based on the bug discussion[1].  It works around
the issue specifically on s390x rather than disabling specific CPUs and
features for all targets.  The patch is attached; two short standalone
sketches below illustrate the datalayout check it performs and the
effect of the -vector feature flag.


[1] 
https://www.postgresql.org/message-id/flat/16971-5d004d34742a3d35%40postgresql.org
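

For illustration, here is a minimal standalone sketch (not part of the patch)
of the detection idea: under the z13-and-newer datalayout ("v128:64"), a
<4 x i32> vector has an ABI alignment of 8 bytes, while the pre-z13 layout
aligns it to 16 bytes, which is what the needs_systemz_workaround() check in
the attached patch tests for.  The file name and the literal layout strings
are mine, based on a reading of computeDataLayout() in
SystemZTargetMachine.cpp; the sketch only needs the LLVM C headers and
libraries.

/*
 * layout_probe.c -- minimal sketch, not part of the patch.
 *
 * Compares the ABI alignment of a <4 x i32> vector under the two SystemZ
 * datalayouts.  The pre-z13 layout (no "v128:64") reports 16 bytes, the
 * z13+ layout reports 8 bytes -- the difference the patch detects.
 *
 * Build (illustrative):
 *   cc layout_probe.c $(llvm-config --cflags --ldflags --libs core target)
 */
#include <stdio.h>

#include <llvm-c/Core.h>
#include <llvm-c/Target.h>

int
main(void)
{
	/* Layout strings per computeDataLayout() in SystemZTargetMachine.cpp */
	const char *pre_z13 =
		"E-m:e-i1:8:16-i8:8:16-i64:64-f128:64-a:8:16-n32:64";
	const char *z13_plus =
		"E-m:e-i1:8:16-i8:8:16-i64:64-f128:64-v128:64-a:8:16-n32:64";

	LLVMTypeRef vec_type = LLVMVectorType(LLVMInt32Type(), 4);
	LLVMTargetDataRef old_td = LLVMCreateTargetData(pre_z13);
	LLVMTargetDataRef new_td = LLVMCreateTargetData(z13_plus);

	/* Prints 16 for the pre-z13 layout and 8 for the z13+ layout. */
	printf("pre-z13 <4 x i32> ABI alignment: %u\n",
		   LLVMABIAlignmentOfType(old_td, vec_type));
	printf("z13+    <4 x i32> ABI alignment: %u\n",
		   LLVMABIAlignmentOfType(new_td, vec_type));

	LLVMDisposeTargetData(old_td);
	LLVMDisposeTargetData(new_td);
	return 0;
}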


Regards,
Honza


From 0edaa982336823d4d7af8f10b91579fe0099ef3d Mon Sep 17 00:00:00 2001
From: Tom Stellard <tstel...@redhat.com>
Date: Tue, 20 Apr 2021 20:14:21 -0700
Subject: [PATCH] jit: Workaround potential datalayout mismatch on s390x

LLVM's s390x target uses a different datalayout for z13 and newer processors.
If llvmjit_types.bc is compiled to target a processor older than z13, and
then the JIT runs on a z13 or newer processor, then there will be a mismatch
in datalayouts between llvmjit_types.bc and the JIT engine.  This mismatch
causes the JIT to fail at runtime.
---
 src/backend/jit/llvm/llvmjit.c | 46 ++++++++++++++++++++++++++++++++--
 1 file changed, 44 insertions(+), 2 deletions(-)

diff --git a/src/backend/jit/llvm/llvmjit.c b/src/backend/jit/llvm/llvmjit.c
index 98a27f08bf..05b6438ba8 100644
--- a/src/backend/jit/llvm/llvmjit.c
+++ b/src/backend/jit/llvm/llvmjit.c
@@ -776,6 +776,35 @@ llvm_compile_module(LLVMJitContext *context)
 			 errhidecontext(true)));
 }
 
+/*
+ * For the systemz target, LLVM uses a different datalayout for z13 and newer
+ * CPUs than it does for older CPUs.  This can cause a mismatch in datalayouts
+ * in the case where the llvm_types_module is compiled with a pre-z13 CPU
+ * and the JIT is running on z13 or newer.
+ * See computeDataLayout() function in
+ * llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp for information on the
+ * datalayout differences.
+ */
+static bool
+needs_systemz_workaround(void)
+{
+	bool ret = false;
+	LLVMContextRef llvm_context;
+	LLVMTypeRef vec_type;
+	LLVMTargetDataRef llvm_layoutref;
+	if (strncmp(LLVMGetTargetName(llvm_targetref), "systemz", strlen("systemz")))
+	{
+		return false;
+	}
+
+	llvm_context = LLVMGetModuleContext(llvm_types_module);
+	vec_type = LLVMVectorType(LLVMIntTypeInContext(llvm_context, 32), 4);
+	llvm_layoutref = LLVMCreateTargetData(llvm_layout);
+	ret = (LLVMABIAlignmentOfType(llvm_layoutref, vec_type) == 16);
+	LLVMDisposeTargetData(llvm_layoutref);
+	return ret;
+}
+
 /*
  * Per session initialization.
  */
@@ -785,6 +814,7 @@ llvm_session_initialize(void)
 	MemoryContext oldcontext;
 	char	   *error = NULL;
 	char	   *cpu = NULL;
+	char       *host_features = NULL;
 	char	   *features = NULL;
 	LLVMTargetMachineRef opt0_tm;
 	LLVMTargetMachineRef opt3_tm;
@@ -816,10 +846,17 @@ llvm_session_initialize(void)
 	 * features not all CPUs have (weird, huh).
 	 */
 	cpu = LLVMGetHostCPUName();
-	features = LLVMGetHostCPUFeatures();
+	features = host_features = LLVMGetHostCPUFeatures();
 	elog(DEBUG2, "LLVMJIT detected CPU \"%s\", with features \"%s\"",
 		 cpu, features);
 
+	if (needs_systemz_workaround())
+	{
+		const char *no_vector = ",-vector";
+		features = malloc(sizeof(char) * (strlen(host_features) + strlen(no_vector) + 1));
+		sprintf(features, "%s%s", host_features, no_vector);
+	}
+
 	opt0_tm =
 		LLVMCreateTargetMachine(llvm_targetref, llvm_triple, cpu, features,
 								LLVMCodeGenLevelNone,
@@ -833,8 +870,13 @@ llvm_session_initialize(void)
 
 	LLVMDisposeMessage(cpu);
 	cpu = NULL;
-	LLVMDisposeMessage(features);
+	if (features != host_features)
+	{
+		free(features);
+	}
 	features = NULL;
+	LLVMDisposeMessage(host_features);
+	host_features = NULL;
 
 	/* force symbols in main binary to be loaded */
 	LLVMLoadLibraryPermanently(NULL);
-- 
2.27.0
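
And a second hedged sketch, this time of the workaround side: creating an
s390x target machine with the vector facility disabled in the feature string
(the patch appends ",-vector" to the host features) yields a datalayout
without the "v128:64" component, i.e. the same layout a pre-z13
llvmjit_types.bc was built with.  This assumes an LLVM build that includes
the SystemZ backend; the file name, triple, and CPU choice are illustrative.

/*
 * feature_layout.c -- minimal sketch, not part of the patch.
 *
 * Prints the datalayout computed for an s390x/z13 target machine with and
 * without the vector facility, showing why disabling "-vector" makes the
 * JIT's layout match bitcode built for a pre-z13 CPU.
 *
 * Build (illustrative):
 *   cc feature_layout.c $(llvm-config --cflags --ldflags --libs)
 */
#include <stdio.h>

#include <llvm-c/Core.h>
#include <llvm-c/Target.h>
#include <llvm-c/TargetMachine.h>

static void
print_layout(LLVMTargetRef target, const char *triple, const char *features)
{
	LLVMTargetMachineRef tm;
	LLVMTargetDataRef td;
	char	   *layout;

	tm = LLVMCreateTargetMachine(target, triple, "z13", features,
								 LLVMCodeGenLevelNone,
								 LLVMRelocDefault,
								 LLVMCodeModelDefault);
	td = LLVMCreateTargetDataLayout(tm);
	layout = LLVMCopyStringRepOfTargetData(td);
	printf("features \"%s\": %s\n", features, layout);
	LLVMDisposeMessage(layout);
	LLVMDisposeTargetData(td);
	LLVMDisposeTargetMachine(tm);
}

int
main(void)
{
	const char *triple = "s390x-unknown-linux-gnu";
	LLVMTargetRef target;
	char	   *error = NULL;

	LLVMInitializeAllTargetInfos();
	LLVMInitializeAllTargets();
	LLVMInitializeAllTargetMCs();

	if (LLVMGetTargetFromTriple(triple, &target, &error))
	{
		fprintf(stderr, "no SystemZ target in this LLVM build: %s\n", error);
		LLVMDisposeMessage(error);
		return 1;
	}

	/* z13 enables the vector facility: the layout contains "v128:64". */
	print_layout(target, triple, "");
	/* Disabling it, as the patch's ",-vector" does, drops "v128:64". */
	print_layout(target, triple, "-vector");
	return 0;
}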
