kristof.beyls created this revision.
kristof.beyls added a reviewer: olista01.
Herald added subscribers: aheejin, javed.absar, dschuff, jfb, aemerson.

Recently, Google Project Zero disclosed several classes of attack
against speculative execution. One of these, known as variant-1
(CVE-2017-5753), allows explicit bounds checks to be bypassed under
speculation, providing an arbitrary read gadget. Further details can
be found on the GPZ blog [1].

This patch adds a new builtin function that provides a mechanism for
limiting speculation by a CPU after a bounds-checked memory access.
This patch provides the clang side of the needed functionality; there is
also an LLVM-side patch on which this patch depends.
We have tried to design this in such a way that it can be used for any
target where this might be necessary.  The patch provides a generic
implementation of the builtin, with most of the target-specific
support in the LLVM counterpart to this clang patch.

The signature of the new, polymorphic builtin is:

T __builtin_load_no_speculate(const volatile T *ptr,
                              const volatile void *lower,
                              const volatile void *upper,
                              T failval,
                              const volatile void *cmpptr)

T can be any integral type (signed or unsigned char, int, short, long,
etc) or any pointer type.

The builtin implements the following logical behaviour:

inline T __builtin_load_no_speculate(const volatile T *ptr,
                                     const volatile void *lower,
                                     const volatile void *upper, T failval,
                                     const volatile void *cmpptr) {
  T result;
  if (cmpptr >= lower && cmpptr < upper)
    result = *ptr;
  else
    result = failval;
  return result;
}

In addition, the builtin ensures that speculation using *ptr may
continue only if cmpptr lies within the bounds specified.

To make the builtin easier to use, the final two arguments can both be
omitted: failval defaults to 0, and if cmpptr is omitted, ptr is used
for the range check.  In addition, either lower or upper (but not both)
may be a literal NULL, in which case the expansion ignores that boundary
condition.

This also introduces the predefined preprocessor macro
__HAVE_LOAD_NO_SPECULATE, which allows users to check whether their
version of the compiler supports this intrinsic.

The builtin is defined for all architectures, even those that do not
provide a mechanism for inhibiting speculation.  On such targets the
compiler emits a warning and simply implements the architectural
behaviour of the builtin.

This patch can be used with the header file that Arm recently
published here: https://github.com/ARM-software/speculation-barrier.

Kernel patches are also being developed, e.g.:
https://lkml.org/lkml/2018/1/3/754.  The intent is that eventually
code like this will be able to use support directly from the compiler
in a portable manner.

Similar patches are also being developed for GCC and have been posted to
their development list, see
https://gcc.gnu.org/ml/gcc-patches/2018-01/msg00205.html

[1] More information on the topic can be found here:
https://googleprojectzero.blogspot.co.uk/2018/01/reading-privileged-memory-with-side.html
Arm specific information can be found here:
https://www.arm.com/security-update


https://reviews.llvm.org/D41760

Files:
  include/clang/Basic/Builtins.def
  include/clang/Basic/DiagnosticGroups.td
  include/clang/Basic/DiagnosticSemaKinds.td
  include/clang/Basic/TargetInfo.h
  include/clang/Sema/Sema.h
  lib/Basic/Targets/AArch64.cpp
  lib/Basic/Targets/AArch64.h
  lib/Basic/Targets/ARM.cpp
  lib/Basic/Targets/ARM.h
  lib/CodeGen/CGBuiltin.cpp
  lib/Frontend/InitPreprocessor.cpp
  lib/Sema/SemaChecking.cpp
  test/CodeGen/builtin-load-no-speculate.c
  test/Preprocessor/init.c
  test/Sema/builtin-load-no-speculate-c.c
  test/Sema/builtin-load-no-speculate-cxx.cpp
  test/Sema/builtin-load-no-speculate-target-not-supported.c

Index: test/Sema/builtin-load-no-speculate-target-not-supported.c
===================================================================
--- /dev/null
+++ test/Sema/builtin-load-no-speculate-target-not-supported.c
@@ -0,0 +1,6 @@
+// REQUIRES: arm-registered-target
+// RUN: %clang_cc1 -triple thumbv8m.baseline -fsyntax-only -verify %s
+
+void test_valid(int *ptr, int *lower, int *upper, int failval, int *cmpptr) {
+  __builtin_load_no_speculate(ptr, lower, upper, failval, cmpptr); // expected-warning {{this target does not support anti-speculation operations. Your program will still execute correctly, but speculation will not be inhibited}}
+}
Index: test/Sema/builtin-load-no-speculate-cxx.cpp
===================================================================
--- /dev/null
+++ test/Sema/builtin-load-no-speculate-cxx.cpp
@@ -0,0 +1,144 @@
+// REQUIRES: arm-registered-target
+// REQUIRES: aarch64-registered-target
+// RUN: %clang_cc1 -triple aarch64 -x c++ -std=c++11 -DENABLE_ERRORS -verify %s
+// RUN: %clang_cc1 -triple armv7a -x c++ -std=c++11 -DENABLE_ERRORS -verify %s
+// RUN: %clang_cc1 -triple aarch64 -x c++ -std=c++11 %s -emit-llvm -o -
+// RUN: %clang_cc1 -triple armv7a -x c++ -std=c++11 %s -emit-llvm -o -
+
+void test_valid(int *ptr, int *lower, int *upper, int failval, int *cmpptr) {
+  __builtin_load_no_speculate(ptr, lower, upper);
+  __builtin_load_no_speculate(ptr, lower, upper, failval);
+  __builtin_load_no_speculate(ptr, lower, upper, failval, cmpptr);
+}
+
+void test_null_bounds(int *ptr, int *lower, int *upper, int failval, int *cmpptr) {
+  __builtin_load_no_speculate(ptr, lower, 0);
+  __builtin_load_no_speculate(ptr, lower, (void*)0);
+
+#ifdef ENABLE_ERRORS
+  __builtin_load_no_speculate(ptr, 0, 0); // expected-error {{lower and upper bounds arguments to load_no_speculate builtin must not both be null}}
+  __builtin_load_no_speculate(ptr, (void*)0, (void*)0); // expected-error {{lower and upper bounds arguments to load_no_speculate builtin must not both be null}}
+  __builtin_load_no_speculate(ptr, 0, (void*)0); // expected-error {{lower and upper bounds arguments to load_no_speculate builtin must not both be null}}
+  __builtin_load_no_speculate(ptr, 0, 0, failval, cmpptr); // expected-error {{lower and upper bounds arguments to load_no_speculate builtin must not both be null}}
+#endif
+
+  __builtin_load_no_speculate(ptr, lower, nullptr);
+  __builtin_load_no_speculate(ptr, nullptr, upper);
+#ifdef ENABLE_ERRORS
+  __builtin_load_no_speculate(ptr, nullptr, nullptr); // expected-error {{lower and upper bounds arguments to load_no_speculate builtin must not both be null}}
+  __builtin_load_no_speculate(ptr, 0, nullptr); // expected-error {{lower and upper bounds arguments to load_no_speculate builtin must not both be null}}
+  __builtin_load_no_speculate(ptr, nullptr, (void*)0); // expected-error {{lower and upper bounds arguments to load_no_speculate builtin must not both be null}}
+#endif
+}
+
+void test_load_type(void *lower, void *upper) {
+  char c;
+  c = __builtin_load_no_speculate(&c, lower, upper);
+
+  short s;
+  s = __builtin_load_no_speculate(&s, lower, upper);
+
+  int i;
+  i = __builtin_load_no_speculate(&i, lower, upper);
+
+  long l;
+  l = __builtin_load_no_speculate(&l, lower, upper);
+
+  long long ll;
+  ll = __builtin_load_no_speculate(&ll, lower, upper);
+
+  int *ip;
+  ip = __builtin_load_no_speculate(&ip, lower, upper);
+
+  int (*fp)(int, int);
+  fp = __builtin_load_no_speculate(&fp, lower, upper);
+
+#ifdef ENABLE_ERRORS
+  i = __builtin_load_no_speculate(i, lower, upper); // expected-error {{argument to load_no_speculate builtin must be a pointer to a pointer or integer ('int' invalid)}}
+
+  float f;
+  f = __builtin_load_no_speculate(&f, lower, upper); // expected-error {{argument to load_no_speculate builtin must be a pointer to a pointer or integer ('float *' invalid)}}
+
+  struct S { int a; } S;
+  S = __builtin_load_no_speculate(&S, lower, upper); // expected-error {{argument to load_no_speculate builtin must be a pointer to a pointer or integer ('struct S *' invalid)}}
+
+  union U { int a; } U;
+  U = __builtin_load_no_speculate(&U, lower, upper); // expected-error {{argument to load_no_speculate builtin must be a pointer to a pointer or integer ('union U *' invalid)}}
+
+  char __attribute__((vector_size(16))) v;
+  v = __builtin_load_no_speculate(&v, lower, upper); // expected-error {{argument to load_no_speculate builtin must be a pointer to a pointer or integer ('__attribute__((__vector_size__(16 * sizeof(char)))) char *' invalid)}}
+#endif
+}
+
+void test_bounds_types(void *lower, void *upper, void *cmpptr) {
+  int i;
+
+#ifdef ENABLE_ERRORS
+  __builtin_load_no_speculate(&i, 1234, upper); // expected-error {{cannot initialize a parameter of type 'const volatile void *' with an rvalue of type 'int'}}
+  __builtin_load_no_speculate(&i, lower, 5678); // expected-error {{cannot initialize a parameter of type 'const volatile void *' with an rvalue of type 'int}}
+  __builtin_load_no_speculate(&i, lower, upper, 0, 5678); // expected-error {{cannot initialize a parameter of type 'const volatile void *' with an rvalue of type 'int}}
+
+  __builtin_load_no_speculate(&i, 3.141, upper); // expected-error {{cannot initialize a parameter of type 'const volatile void *' with an rvalue of type 'double'}}
+  __builtin_load_no_speculate(&i, lower, 3.141); // expected-error {{cannot initialize a parameter of type 'const volatile void *' with an rvalue of type 'double'}}
+  __builtin_load_no_speculate(&i, lower, upper, 0, 3.141); // expected-error {{cannot initialize a parameter of type 'const volatile void *' with an rvalue of type 'double'}}
+#endif
+
+  // The bounds pointers are of type 'void *', the pointers do not have to be
+  // compatible with the first arg.
+  struct S {};
+  __builtin_load_no_speculate(&i, (short *)lower, (S *)upper, 0, (float *)0);
+
+  __builtin_load_no_speculate(&i, (const int *)lower, upper);
+  __builtin_load_no_speculate(&i, (volatile float *)lower, upper);
+  __builtin_load_no_speculate(&i, lower, (const volatile char *)upper);
+  __builtin_load_no_speculate(&i, lower, upper, 0, (const volatile char *)cmpptr);
+}
+
+void test_failval_type(void *lower, void *upper) {
+  char c;
+  int i;
+  int *ip;
+  int * const cip = 0;
+  int * volatile vip;
+  char *cp;
+  int (*fp)(int);
+
+  // Exactly correct type
+  __builtin_load_no_speculate(&i, lower, upper, 0);
+  __builtin_load_no_speculate(&i, lower, upper, i);
+
+  // Implicitly convertible type
+  __builtin_load_no_speculate(&i, lower, upper, (char)0);
+  __builtin_load_no_speculate(&i, lower, upper, 0.0);
+  __builtin_load_no_speculate(&ip, lower, upper, cip);
+  __builtin_load_no_speculate(&ip, lower, upper, vip);
+
+#ifdef ENABLE_ERRORS
+  __builtin_load_no_speculate(&i, lower, upper, ip); // expected-error {{assigning to 'int' from incompatible type 'int *'; dereference with *}}
+  __builtin_load_no_speculate(&ip, lower, upper, i); // expected-error {{assigning to 'int *' from incompatible type 'int'; take the address with &}}
+  __builtin_load_no_speculate(&ip, lower, upper, cp); // expected-error {{assigning to 'int *' from incompatible type 'char *'}}
+  __builtin_load_no_speculate(&ip, lower, upper, fp); // expected-error {{assigning to 'int *' from incompatible type 'int (*)(int)'}}
+  __builtin_load_no_speculate(&fp, lower, upper, ip); // expected-error {{assigning to 'int (*)(int)' from incompatible type 'int *'}}
+#endif
+}
+
+#ifdef ENABLE_ERRORS
+template<typename T>
+T load(const T *ptr, const T *upper, const T *lower) {
+  return __builtin_load_no_speculate(ptr, upper, lower); // expected-error {{argument to load_no_speculate builtin must be a pointer to a pointer or integer ('const S *' invalid)}}
+}
+
+void test_templates() {
+  int ia[5];
+  load(&ia[3], &ia[0], &ia[5]);
+
+  struct S { int a; };
+  S Sa[5];
+  load(&Sa[3], &Sa[0], &Sa[5]); // expected-note {{in instantiation of function template specialization 'load<S>' requested here}}
+}
+#endif
+
+void test_convert_failval_type() {
+  bool foo;
+  __builtin_load_no_speculate(&foo, (void *)0x1000, (void *)0x2000, false);
+}
Index: test/Sema/builtin-load-no-speculate-c.c
===================================================================
--- /dev/null
+++ test/Sema/builtin-load-no-speculate-c.c
@@ -0,0 +1,113 @@
+// REQUIRES: arm-registered-target
+// REQUIRES: aarch64-registered-target
+// RUN: %clang_cc1 -triple aarch64 -DENABLE_ERRORS -verify %s
+// RUN: %clang_cc1 -triple armv7a -DENABLE_ERRORS -verify %s
+// RUN: %clang_cc1 -triple aarch64 %s -emit-llvm -o -
+// RUN: %clang_cc1 -triple armv7a %s -emit-llvm -o -
+
+void test_valid(int *ptr, int *lower, int *upper, int failval, int *cmpptr) {
+  __builtin_load_no_speculate(ptr, lower, upper);
+  __builtin_load_no_speculate(ptr, lower, upper, failval);
+  __builtin_load_no_speculate(ptr, lower, upper, failval, cmpptr);
+}
+
+void test_null_bounds(int *ptr, int *lower, int *upper, int failval, int *cmpptr) {
+  __builtin_load_no_speculate(ptr, lower, 0);
+  __builtin_load_no_speculate(ptr, lower, (void*)0);
+
+#ifdef ENABLE_ERRORS
+  __builtin_load_no_speculate(ptr, 0, 0); // expected-error {{lower and upper bounds arguments to load_no_speculate builtin must not both be null}}
+  __builtin_load_no_speculate(ptr, (void*)0, (void*)0); // expected-error {{lower and upper bounds arguments to load_no_speculate builtin must not both be null}}
+  __builtin_load_no_speculate(ptr, 0, (void*)0); // expected-error {{lower and upper bounds arguments to load_no_speculate builtin must not both be null}}
+  __builtin_load_no_speculate(ptr, 0, 0, failval, cmpptr); // expected-error {{lower and upper bounds arguments to load_no_speculate builtin must not both be null}}
+#endif
+}
+
+void test_load_type(void *lower, void *upper) {
+  char c;
+  c = __builtin_load_no_speculate(&c, lower, upper);
+
+  short s;
+  s = __builtin_load_no_speculate(&s, lower, upper);
+
+  int i;
+  i = __builtin_load_no_speculate(&i, lower, upper);
+
+  long l;
+  l = __builtin_load_no_speculate(&l, lower, upper);
+
+  long long ll;
+  ll = __builtin_load_no_speculate(&ll, lower, upper);
+
+  int *ip;
+  ip = __builtin_load_no_speculate(&ip, lower, upper);
+
+  int (*fp)(int, int);
+  fp = __builtin_load_no_speculate(&fp, lower, upper);
+
+#ifdef ENABLE_ERRORS
+  i = __builtin_load_no_speculate(i, lower, upper); // expected-error {{argument to load_no_speculate builtin must be a pointer to a pointer or integer ('int' invalid)}}
+
+  float f;
+  f = __builtin_load_no_speculate(&f, lower, upper); // expected-error {{argument to load_no_speculate builtin must be a pointer to a pointer or integer ('float *' invalid)}}
+
+  struct S { int a; } S;
+  S = __builtin_load_no_speculate(&S, lower, upper); // expected-error {{argument to load_no_speculate builtin must be a pointer to a pointer or integer ('struct S *' invalid)}}
+
+  union U { int a; } U;
+  U = __builtin_load_no_speculate(&U, lower, upper); // expected-error {{argument to load_no_speculate builtin must be a pointer to a pointer or integer ('union U *' invalid)}}
+
+  char __attribute__((vector_size(16))) v;
+  v = __builtin_load_no_speculate(&v, lower, upper); // expected-error {{argument to load_no_speculate builtin must be a pointer to a pointer or integer ('__attribute__((__vector_size__(16 * sizeof(char)))) char *' invalid)}}
+#endif
+}
+
+void test_bounds_types(void *lower, void *upper, void *cmpptr) {
+  int i;
+
+  __builtin_load_no_speculate(&i, 1234, upper); // expected-warning {{incompatible integer to pointer conversion passing 'int' to parameter of type 'const volatile void *'}}
+  __builtin_load_no_speculate(&i, lower, 5678); // expected-warning {{incompatible integer to pointer conversion passing 'int' to parameter of type 'const volatile void *'}}
+  __builtin_load_no_speculate(&i, lower, upper, 0, 5678); // expected-warning {{incompatible integer to pointer conversion passing 'int' to parameter of type 'const volatile void *'}}
+
+#ifdef ENABLE_ERRORS
+  __builtin_load_no_speculate(&i, 3.141, upper); // expected-error {{passing 'double' to parameter of incompatible type 'const volatile void *'}}
+  __builtin_load_no_speculate(&i, lower, 3.141); // expected-error {{passing 'double' to parameter of incompatible type 'const volatile void *'}}
+  __builtin_load_no_speculate(&i, lower, upper, 0, 3.141); // expected-error {{passing 'double' to parameter of incompatible type 'const volatile void *'}}
+#endif
+
+  // The bounds pointers are of type 'void *', the pointers do not have to be
+  // compatible with the first arg.
+  __builtin_load_no_speculate(&i, (short*)lower, (int(*)(int))upper, 0, (float*)0);
+
+  __builtin_load_no_speculate(&i, (const int *)lower, upper);
+  __builtin_load_no_speculate(&i, (volatile float *)lower, upper);
+  __builtin_load_no_speculate(&i, lower, (const volatile char *)upper);
+  __builtin_load_no_speculate(&i, lower, upper, 0, (const volatile char *)cmpptr);
+}
+
+void test_failval_type(void *lower, void *upper) {
+  char c;
+  int i;
+  int *ip;
+  int * const cip;
+  int * volatile vip;
+  char *cp;
+  int (*fp)(int);
+
+  // Exactly correct type
+  __builtin_load_no_speculate(&i, lower, upper, 0);
+  __builtin_load_no_speculate(&i, lower, upper, i);
+
+  // Implicitly convertible type
+  __builtin_load_no_speculate(&i, lower, upper, (char)0);
+  __builtin_load_no_speculate((char*)&c, (char*)lower, (char*)upper, (int)0);
+  __builtin_load_no_speculate(&i, lower, upper, 0.0);
+  __builtin_load_no_speculate(&ip, lower, upper, cip);
+  __builtin_load_no_speculate(&ip, lower, upper, vip);
+
+  __builtin_load_no_speculate(&i, lower, upper, ip); // expected-warning {{incompatible pointer to integer conversion passing 'int *' to parameter of type 'int'; dereference with *}}
+  __builtin_load_no_speculate(&ip, lower, upper, i); // expected-warning {{incompatible integer to pointer conversion passing 'int' to parameter of type 'int *'; take the address with &}}
+  __builtin_load_no_speculate(&ip, lower, upper, cp); // expected-warning {{incompatible pointer types passing 'char *' to parameter of type 'int *'}}
+  __builtin_load_no_speculate(&ip, lower, upper, fp); // expected-warning {{incompatible pointer types passing 'int (*)(int)' to parameter of type 'int *'}}
+  __builtin_load_no_speculate(&fp, lower, upper, ip); // expected-warning {{incompatible pointer types passing 'int *' to parameter of type 'int (*)(int)'}}
+}
Index: test/Preprocessor/init.c
===================================================================
--- test/Preprocessor/init.c
+++ test/Preprocessor/init.c
@@ -96,6 +96,7 @@
 // COMMON:#define __GNUC_STDC_INLINE__ 1
 // COMMON:#define __GNUC__ {{.*}}
 // COMMON:#define __GXX_ABI_VERSION {{.*}}
+// COMMON:#define __HAVE_LOAD_NO_SPECULATE 1
 // COMMON:#define __ORDER_BIG_ENDIAN__ 4321
 // COMMON:#define __ORDER_LITTLE_ENDIAN__ 1234
 // COMMON:#define __ORDER_PDP_ENDIAN__ 3412
@@ -9134,6 +9135,7 @@
 // WEBASSEMBLY32-NEXT:#define __GNUC_STDC_INLINE__ 1
 // WEBASSEMBLY32-NEXT:#define __GNUC__ {{.*}}
 // WEBASSEMBLY32-NEXT:#define __GXX_ABI_VERSION 1002
+// WEBASSEMBLY32-NEXT:#define __HAVE_LOAD_NO_SPECULATE 1
 // WEBASSEMBLY32-NEXT:#define __ILP32__ 1
 // WEBASSEMBLY32-NEXT:#define __INT16_C_SUFFIX__
 // WEBASSEMBLY32-NEXT:#define __INT16_FMTd__ "hd"
@@ -9466,6 +9468,7 @@
 // WEBASSEMBLY64-NEXT:#define __GNUC_STDC_INLINE__ 1
 // WEBASSEMBLY64-NEXT:#define __GNUC__ {{.}}
 // WEBASSEMBLY64-NEXT:#define __GXX_ABI_VERSION 1002
+// WEBASSEMBLY64-NEXT:#define __HAVE_LOAD_NO_SPECULATE 1
 // WEBASSEMBLY64-NOT:#define __ILP32__
 // WEBASSEMBLY64-NEXT:#define __INT16_C_SUFFIX__
 // WEBASSEMBLY64-NEXT:#define __INT16_FMTd__ "hd"
Index: test/CodeGen/builtin-load-no-speculate.c
===================================================================
--- /dev/null
+++ test/CodeGen/builtin-load-no-speculate.c
@@ -0,0 +1,122 @@
+// RUN: %clang_cc1 -triple aarch64-linux-gnu -emit-llvm %s -o - | FileCheck -check-prefix=CHECK-SUPPORTED %s
+// RUN: %clang_cc1 -triple thumbv8m.baseline -emit-llvm %s -o - | FileCheck --check-prefix=CHECK-EXPANDED %s
+
+void test_load_widths(void *ptr, void *lower, void *upper) {
+#ifdef __AARCH64EL__
+  // CHECK-LABEL-SUPPORTED: define void @test_load_widths
+
+  __builtin_load_no_speculate((_Bool*)ptr, lower, upper);
+  // CHECK-SUPPORTED: call i8 @llvm.nospeculateload.i8(i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i8 0, i8* %{{[0-9a-z]+}})
+
+  _Bool FailVal;
+  __builtin_load_no_speculate((_Bool*)ptr, lower, upper, FailVal);
+  // CHECK-SUPPORTED: %[[FAILVALEXT:[0-9a-zA-Z]+]] = zext i1 %{{[0-9a-z]+}} to i8
+  // CHECK-SUPPORTED: call i8 @llvm.nospeculateload.i8(i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i8 %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}})
+
+
+  __builtin_load_no_speculate((char*)ptr, lower, upper);
+  // CHECK-SUPPORTED: call i8 @llvm.nospeculateload.i8(i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i8 0, i8* %{{[0-9a-z]+}})
+
+  __builtin_load_no_speculate((short*)ptr, lower, upper);
+  // CHECK-SUPPORTED: call i16 @llvm.nospeculateload.i16(i16* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i16 0, i8* %{{[0-9a-z]+}})
+
+  __builtin_load_no_speculate((int*)ptr, lower, upper);
+  // CHECK-SUPPORTED: call i32 @llvm.nospeculateload.i32(i32* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i32 0, i8* %{{[0-9a-z]+}})
+
+  __builtin_load_no_speculate((long*)ptr, lower, upper);
+  // CHECK-SUPPORTED: call i64 @llvm.nospeculateload.i64(i64* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i64 0, i8* %{{[0-9a-z]+}})
+
+  __builtin_load_no_speculate((__int128*)ptr, lower, upper);
+  // CHECK-SUPPORTED: call i128 @llvm.nospeculateload.i128(i128* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i128 0, i8* %{{[0-9a-z]+}})
+
+  __builtin_load_no_speculate((int**)ptr, lower, upper);
+  // CHECK-SUPPORTED: call i32* @llvm.nospeculateload.p0i32(i32** %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i32* null, i8* %{{[0-9a-z]+}})
+
+  int arr[4];
+  __builtin_load_no_speculate(arr, lower, upper);
+  // CHECK-SUPPORTED: call i32 @llvm.nospeculateload.i32(i32* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i32 0, i8* %{{[0-9a-z]+}})
+#endif
+}
+
+void test_default_args(int *ptr, void *lower, void *upper, int failval, void *cmpptr) {
+  __builtin_load_no_speculate((int*)ptr, lower, upper);
+  // CHECK-SUPPORTED: %[[CMPPTR:[0-9a-z]+]] = bitcast i32* %[[PTR:[0-9a-z]+]] to i8*
+  // CHECK-SUPPORTED: call i32 @llvm.nospeculateload.i32(i32* %[[PTR]], i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i32 0, i8* %[[CMPPTR]])
+  //
+  // CHECK-EXPANDED:       %[[CMPPTR:[0-9a-z]+]] = bitcast i32* %[[PTR:[0-9a-z]+]] to i8*
+  // CHECK-EXPANDED:       %cmplower = icmp uge i8* %[[CMPPTR]], %[[LOWER:[0-9a-z]+]]
+  // CHECK-EXPANDED-NEXT:  %cmpupper = icmp ult i8* %[[CMPPTR]], %[[UPPER:[0-9a-z]+]]
+  // CHECK-EXPANDED-NEXT:  %cond = and i1 %cmplower, %cmpupper
+  // CHECK-EXPANDED-NEXT:  br i1 %cond, label %[[INBOUNDSBB:[0-9a-zA-Z]+]], label %[[NEXTBB:[0-9a-zA-Z]+]]
+
+  // CHECK-EXPANDED:  [[INBOUNDSBB]]: ; preds = %entry
+  // CHECK-EXPANDED-NEXT:  %PtrDeref = load volatile i32, i32* %[[PTR]], align 4
+  // CHECK-EXPANDED-NEXT:  br label %[[NEXTBB:[0-9a-zA-Z]+]]
+
+  // CHECK-EXPANDED:       [[NEXTBB]]: ; preds = %[[INBOUNDSBB]], %entry
+  // CHECK-EXPANDED-NEXT:  %load_no_speculate_res = phi i32 [ %PtrDeref, %[[INBOUNDSBB]] ], [ 0, %entry ]
+
+  __builtin_load_no_speculate((int*)ptr, lower, upper, 42);
+  // CHECK-SUPPORTED: %[[CMPPTR:[0-9a-z]+]] = bitcast i32* %[[PTR:[0-9a-z]+]] to i8*
+  // CHECK-SUPPORTED: call i32 @llvm.nospeculateload.i32(i32* %[[PTR]], i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i32 42, i8* %[[CMPPTR]])
+
+  // CHECK-EXPANDED:       %[[CMPLOWER:cmplower[0-9]*]] = icmp uge i8* %[[CMPPTR:[0-9a-z]+]], %[[LOWER:[0-9a-z]+]]
+  // CHECK-EXPANDED-NEXT:  %[[CMPUPPER:cmpupper[0-9]*]] = icmp ult i8* %[[CMPPTR]], %[[UPPER:[0-9a-z]+]]
+  // CHECK-EXPANDED-NEXT:  %[[COND:cond[0-9]*]] = and i1 %[[CMPLOWER]], %[[CMPUPPER]]
+  // CHECK-EXPANDED-NEXT:  br i1 %[[COND]], label %[[INBOUNDSBB:[0-9a-zA-Z]+]], label %[[NEXTBB2:[0-9a-zA-Z]+]]
+
+  // CHECK-EXPANDED:  [[INBOUNDSBB]]: ; preds = %[[NEXTBB]]
+  // CHECK-EXPANDED-NEXT:  %[[PTRDEREF:PtrDeref[0-9]*]] = load volatile i32, i32* %[[PTR:[0-9a-z]+]], align 4
+  // CHECK-EXPANDED-NEXT:  br label %[[NEXTBB2]]
+
+  // CHECK-EXPANDED:       [[NEXTBB2]]: ; preds = %[[INBOUNDSBB]], %[[NEXTBB]]
+  // CHECK-EXPANDED-NEXT:  %load_no_speculate_res{{[0-9*]}} = phi i32 [ %[[PTRDEREF]], %[[INBOUNDSBB]] ], [ 42, %[[NEXTBB]] ]
+
+  __builtin_load_no_speculate((int*)ptr, lower, upper, 42, cmpptr);
+  // CHECK-SUPPORTED-NOT: bitcast
+  // CHECK-SUPPORTED: call i32 @llvm.nospeculateload.i32(i32* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i32 42, i8* %{{[0-9a-z]+}})
+
+  // CHECK-EXPANDED-NOT:   bitcast
+  // CHECK-EXPANDED:       %[[CMPLOWER:cmplower[0-9]*]] = icmp uge i8* %[[CMPPTR:[0-9a-z]+]], %[[LOWER:[0-9a-z]+]]
+  // CHECK-EXPANDED-NEXT:  %[[CMPUPPER:cmpupper[0-9]*]] = icmp ult i8* %[[CMPPTR]], %[[UPPER:[0-9a-z]+]]
+  // CHECK-EXPANDED-NEXT:  %[[COND:cond[0-9]*]] = and i1 %[[CMPLOWER]], %[[CMPUPPER]]
+  // CHECK-EXPANDED-NEXT:  br i1 %[[COND]], label %[[INBOUNDSBB3:[0-9a-zA-Z]+]], label %[[NEXTBB3:[0-9a-zA-Z]+]]
+
+  // CHECK-EXPANDED:  [[INBOUNDSBB3]]: ; preds = %[[NEXTBB2]]
+  // CHECK-EXPANDED-NEXT:  %[[PTRDEREF:PtrDeref[0-9]*]] = load volatile i32, i32* %[[PTR:[0-9a-z]+]], align 4
+  // CHECK-EXPANDED-NEXT:  br label %[[NEXTBB3]]
+
+  // CHECK-EXPANDED:       [[NEXTBB3]]: ; preds = %[[INBOUNDSBB3]], %[[NEXTBB2]]
+  // CHECK-EXPANDED-NEXT:  %load_no_speculate_res{{[0-9]*}} = phi i32 [ %[[PTRDEREF]], %[[INBOUNDSBB3]] ], [ 42, %[[NEXTBB2]] ]
+}
+
+void test_nolower_noupper(int *ptr, void *lower, void *upper, int failval, void *cmpptr) {
+  __builtin_load_no_speculate((int*)ptr, 0, upper, 42, cmpptr);
+  // CHECK-SUPPORTED-NOT: bitcast
+  // CHECK-SUPPORTED: call i32 @llvm.nospeculateload.nolower.i32(i32* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i32 42, i8* %{{[0-9a-z]+}})
+
+  // CHECK-EXPANDED-NOT:   bitcast
+  // CHECK-EXPANDED:  %[[CMPUPPER:cmpupper[0-9]*]] = icmp ult i8* %[[CMPPTR:[0-9a-z]+]], %[[UPPER:[0-9a-z]+]]
+  // CHECK-EXPANDED-NEXT:  br i1 %[[CMPUPPER]], label %[[INBOUNDSBB3:[0-9a-zA-Z]+]], label %[[NEXTBB3:[0-9a-zA-Z]+]]
+
+  // CHECK-EXPANDED:  [[INBOUNDSBB3]]: ; preds = %entry
+  // CHECK-EXPANDED-NEXT:  %[[PTRDEREF:PtrDeref[0-9]*]] = load volatile i32, i32* %[[PTR:[0-9a-z]+]], align 4
+  // CHECK-EXPANDED-NEXT:  br label %[[NEXTBB3]]
+
+  // CHECK-EXPANDED:       [[NEXTBB3]]: ; preds = %[[INBOUNDSBB3]], %entry
+  // CHECK-EXPANDED-NEXT:  %load_no_speculate_res{{[0-9]*}} = phi i32 [ %[[PTRDEREF]], %[[INBOUNDSBB3]] ], [ 42, %entry ]
+  __builtin_load_no_speculate((int*)ptr, lower, 0, 42, cmpptr);
+  // CHECK-SUPPORTED-NOT: bitcast
+  // CHECK-SUPPORTED: call i32 @llvm.nospeculateload.noupper.i32(i32* %{{[0-9a-z]+}}, i8* %{{[0-9a-z]+}}, i32 42, i8* %{{[0-9a-z]+}})
+
+  // CHECK-EXPANDED-NOT:   bitcast
+  // CHECK-EXPANDED:       %[[CMPLOWER:cmplower[0-9]*]] = icmp uge i8* %[[CMPPTR:[0-9a-z]+]], %[[LOWER:[0-9a-z]+]]
+  // CHECK-EXPANDED-NEXT:  br i1 %[[CMPLOWER]], label %[[INBOUNDSBB4:[0-9a-zA-Z]+]], label %[[NEXTBB4:[0-9a-zA-Z]+]]
+
+  // CHECK-EXPANDED:  [[INBOUNDSBB4]]: ; preds = %[[NEXTBB3]]
+  // CHECK-EXPANDED-NEXT:  %[[PTRDEREF:PtrDeref[0-9]*]] = load volatile i32, i32* %[[PTR:[0-9a-z]+]], align 4
+  // CHECK-EXPANDED-NEXT:  br label %[[NEXTBB4]]
+
+  // CHECK-EXPANDED:       [[NEXTBB4]]: ; preds = %[[INBOUNDSBB4]], %[[NEXTBB3]]
+  // CHECK-EXPANDED-NEXT:  %load_no_speculate_res{{[0-9]*}} = phi i32 [ %[[PTRDEREF]], %[[INBOUNDSBB4]] ], [ 42, %[[NEXTBB3]] ]
+}
Index: lib/Sema/SemaChecking.cpp
===================================================================
--- lib/Sema/SemaChecking.cpp
+++ lib/Sema/SemaChecking.cpp
@@ -1226,6 +1226,8 @@
     if (SemaBuiltinOSLogFormat(TheCall))
       return ExprError();
     break;
+  case Builtin::BI__builtin_load_no_speculate:
+    return SemaBuiltinLoadNoSpeculate(TheCallResult);
   }
 
   // Since the target specific builtins for each arch overlap, only check those
@@ -3335,7 +3337,7 @@
   InitializedEntity Entity =
     InitializedEntity::InitializeParameter(S.Context, Param);
 
-  ExprResult Arg = E->getArg(0);
+  ExprResult Arg = E->getArg(ArgIndex);
   Arg = S.PerformCopyInitialization(Entity, SourceLocation(), Arg);
   if (Arg.isInvalid())
     return true;
@@ -3784,6 +3786,107 @@
   return TheCallResult;
 }
 
+ExprResult Sema::SemaBuiltinLoadNoSpeculate(ExprResult TheCallResult) {
+  CallExpr *TheCall = (CallExpr *)TheCallResult.get();
+  DeclRefExpr *DRE =
+      cast<DeclRefExpr>(TheCall->getCallee()->IgnoreParenCasts());
+  FunctionDecl *FDecl = cast<FunctionDecl>(DRE->getDecl());
+  unsigned BuiltinID = FDecl->getBuiltinID();
+  assert(BuiltinID == Builtin::BI__builtin_load_no_speculate &&
+         "Unexpected no_speculate_val builtin!");
+
+  // Too few args.
+  if (TheCall->getNumArgs() < 3)
+    return Diag(TheCall->getLocEnd(), diag::err_typecheck_call_too_few_args_at_least)
+           << 0 /*function call*/ << 3 /* min args */ << TheCall->getNumArgs();
+
+  // Too many args.
+  if (TheCall->getNumArgs() > 5)
+    return Diag(TheCall->getArg(5)->getLocStart(),
+                diag::err_typecheck_call_too_many_args_at_most)
+      << 0 /*function call*/ << 5 << TheCall->getNumArgs()
+      << SourceRange(TheCall->getArg(5)->getLocStart(),
+                     (*(TheCall->arg_end()-1))->getLocEnd());
+
+  // Derive the return type from the pointer argument.
+  ExprResult FirstArg = DefaultFunctionArrayLvalueConversion(TheCall->getArg(0));
+  if (FirstArg.isInvalid())
+    return true;
+  TheCall->setArg(0, FirstArg.get());
+  QualType FirstArgType = FirstArg.get()->getType();
+
+  // The first argument must be a pointer type.
+  if (!FirstArgType->isAnyPointerType())
+    return Diag(
+               TheCall->getArg(0)->getLocStart(),
+               diag::
+                   err_nospeculateload_builtin_must_be_pointer_to_pointer_or_integral)
+           << TheCall->getArg(0)->getType()
+           << TheCall->getArg(0)->getSourceRange();
+
+  QualType RetType = FirstArgType->getPointeeType();
+  TheCall->setType(RetType);
+
+  // The first argument must be a pointer to an integer or pointer type.
+  if (!(RetType->isIntegerType() || RetType->isAnyPointerType()))
+    return Diag(
+               TheCall->getArg(0)->getLocStart(),
+               diag::
+                   err_nospeculateload_builtin_must_be_pointer_to_pointer_or_integral)
+           << TheCall->getArg(0)->getType()
+           << TheCall->getArg(0)->getSourceRange();
+
+  // The lower and upper bounds must be pointers.
+  if (checkBuiltinArgument(*this, TheCall, 1))
+    return true;
+  if (checkBuiltinArgument(*this, TheCall, 2))
+    return true;
+
+  // The lower and upper bounds must not both be null.
+  bool LowerBoundNull =
+      TheCall->getArg(1)->IgnoreParenCasts()->isNullPointerConstant(
+          Context, Expr::NPC_ValueDependentIsNotNull);
+  bool UpperBoundNull =
+      TheCall->getArg(2)->IgnoreParenCasts()->isNullPointerConstant(
+          Context, Expr::NPC_ValueDependentIsNotNull);
+  if (LowerBoundNull && UpperBoundNull) {
+    return Diag(DRE->getLocStart(),
+                diag::err_nospeculateload_builtin_bounds_cannot_both_be_null)
+           << TheCall->getArg(1)->getSourceRange()
+           << TheCall->getArg(2)->getSourceRange();
+  }
+
+  // If the failval argument is provided, it must be compatible with the type
+  // being loaded.
+  if (TheCall->getNumArgs() >= 4) {
+    Expr *ValArg = TheCall->getArg(3);
+    ExprResult ValArgResult = DefaultFunctionArrayLvalueConversion(ValArg);
+    AssignConvertType ConvTy =
+        CheckSingleAssignmentConstraints(RetType, ValArgResult);
+
+    if (ValArgResult.isInvalid())
+      return ExprError();
+    if (DiagnoseAssignmentResult(ConvTy, TheCall->getArg(3)->getLocStart(),
+                                 RetType, ValArg->getType(), ValArg,
+                                 AA_Passing))
+      return true;
+    TheCall->setArg(3, ValArgResult.get());
+  }
+
+  // If provided, the cmpptr argument must be a pointer.
+  if (TheCall->getNumArgs() >= 5)
+    if (checkBuiltinArgument(*this, TheCall, 4))
+      return true;
+
+  // If the target doesn't support the non-speculation semantics of the
+  // intrinsic, issue a warning.
+  if (!Context.getTargetInfo().hasBuiltinLoadNoSpeculate())
+    Diag(DRE->getLocStart(),
+         diag::warn_nospeculateload_builtin_target_not_supported);
+
+  return TheCallResult;
+}
+
 /// CheckObjCString - Checks that the argument to the builtin
 /// CFString constructor is correct
 /// Note: It might also make sense to do the UTF-16 conversion here (would
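The logical behaviour that the Sema checks above admit can be sketched as a plain-C model (illustrative only: the function name is made up, it is specialized to `int`, and it models just the architectural result, not the speculation inhibition the real builtin provides):

```c
/* Plain-C model of the architectural semantics of
 * __builtin_load_no_speculate for an int load: the load happens
 * only when cmpptr lies in the half-open range [lower, upper);
 * otherwise failval is returned.  The speculation-inhibiting
 * property of the real builtin cannot be expressed in portable C. */
static int load_no_speculate_model(const volatile int *ptr,
                                   const volatile void *lower,
                                   const volatile void *upper,
                                   int failval,
                                   const volatile void *cmpptr) {
  const volatile char *c = (const volatile char *)cmpptr;
  if (c >= (const volatile char *)lower &&
      c < (const volatile char *)upper)
    return *ptr;
  return failval;
}
```

This is essentially the expansion that `MakeBuiltinLoadNoSpeculate` in CGBuiltin.cpp emits (as a compare, branch, load and PHI) for targets whose `hasBuiltinLoadNoSpeculate()` returns false, alongside the new warning.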
Index: lib/Frontend/InitPreprocessor.cpp
===================================================================
--- lib/Frontend/InitPreprocessor.cpp
+++ lib/Frontend/InitPreprocessor.cpp
@@ -1060,6 +1060,8 @@
     Builder.defineMacro("__GLIBCXX_BITSIZE_INT_N_0", "128");
   }
 
+  Builder.defineMacro("__HAVE_LOAD_NO_SPECULATE");
+
   // Get other target #defines.
   TI.getTargetDefines(LangOpts, Builder);
 }
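Because `__HAVE_LOAD_NO_SPECULATE` is predefined whenever this patch is present, code that must also build with older compilers can feature-test it. A hedged sketch (`checked_read` and its fallback policy are illustrative, not part of the patch):

```c
#include <stddef.h>

/* Use the builtin when the compiler advertises it; otherwise fall
 * back to an ordinary bounds check, which gives the same
 * architectural result but no speculation guarantee. */
static int checked_read(const int *table, size_t n, size_t idx) {
#ifdef __HAVE_LOAD_NO_SPECULATE
  /* failval and cmpptr are omitted, so 0 and the load address are
   * used, per the builtin's defaulting rules. */
  return __builtin_load_no_speculate(&table[idx], table, table + n);
#else
  return (idx < n) ? table[idx] : 0;
#endif
}
```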
Index: lib/CodeGen/CGBuiltin.cpp
===================================================================
--- lib/CodeGen/CGBuiltin.cpp
+++ lib/CodeGen/CGBuiltin.cpp
@@ -772,6 +772,73 @@
   return Fn;
 }
 
+/// @brief Utility to insert expansion for __builtin_load_no_speculate for
+///        targets that don't support llvm.nospeculateload. Expand
+///        __builtin_load_no_speculate(ptr, lower, upper, failval, cmpptr)
+///        to a call to:
+/// inline TYP __builtin_load_no_speculate
+///     (const volatile TYP *ptr,
+///      const volatile void *lower,
+///      const volatile void *upper,
+///      TYP failval,
+///      const volatile void *cmpptr)
+/// {
+///   TYP result;
+///   if (cmpptr >= lower && cmpptr < upper)
+///     result = *ptr;
+///   else
+///     result = failval;
+///   return result;
+/// }
+///
+static Value *MakeBuiltinLoadNoSpeculate(CodeGenFunction &CGF, Value *Ptr,
+                                         Value *LowerBound, Value *UpperBound,
+                                         Value *FailVal, Value *CmpPtr,
+                                         clang::QualType ResultType) {
+  Value *LowerBoundCmp = nullptr;
+  Value *UpperBoundCmp = nullptr;
+  Value *Condition = nullptr;
+  if (LowerBound)
+    LowerBoundCmp = CGF.Builder.CreateICmp(llvm::CmpInst::ICMP_UGE, CmpPtr,
+                                           LowerBound, "cmplower");
+  if (UpperBound)
+    UpperBoundCmp = CGF.Builder.CreateICmp(llvm::CmpInst::ICMP_ULT, CmpPtr,
+                                           UpperBound, "cmpupper");
+  if (LowerBoundCmp && UpperBoundCmp)
+    Condition = CGF.Builder.CreateAnd(LowerBoundCmp, UpperBoundCmp, "cond");
+  else if (LowerBoundCmp)
+    Condition = LowerBoundCmp;
+  else {
+    assert(UpperBoundCmp != nullptr);
+    Condition = UpperBoundCmp;
+  }
+
+  llvm::BasicBlock *InitialBB = CGF.Builder.GetInsertBlock();
+  llvm::BasicBlock *InBoundsBB =
+      CGF.createBasicBlock("inBoundsBB", CGF.CurFn, InitialBB->getNextNode());
+  llvm::BasicBlock *NextBB =
+      CGF.createBasicBlock("loadnospecJoinBB", CGF.CurFn,
+                           InBoundsBB->getNextNode());
+
+  CGF.Builder.CreateCondBr(Condition, InBoundsBB, NextBB);
+
+  CGF.Builder.SetInsertPoint(InBoundsBB);
+  Address PtrAddress(Ptr,
+                     CGF.getContext().getTypeAlignInChars(ResultType));
+  Value *PtrDeref =
+      CGF.Builder.CreateLoad(PtrAddress, true /*IsVolatile*/, "PtrDeref");
+  CGF.Builder.CreateBr(NextBB);
+
+  CGF.Builder.SetInsertPoint(NextBB);
+  llvm::PHINode *Phi =
+      CGF.Builder.CreatePHI(FailVal->getType(), 2, "load_no_speculate_res");
+  Phi->addIncoming(PtrDeref, InBoundsBB);
+  Phi->addIncoming(FailVal, InitialBB);
+
+  return Phi;
+}
+
 RValue CodeGenFunction::emitBuiltinOSLogFormat(const CallExpr &E) {
   assert(E.getNumArgs() >= 2 &&
          "__builtin_os_log_format takes at least 2 arguments");
@@ -3228,6 +3295,71 @@
     Value *ArgPtr = Builder.CreateLoad(SrcAddr, "ap.val");
     return RValue::get(Builder.CreateStore(ArgPtr, DestAddr));
   }
+  case Builtin::BI__builtin_load_no_speculate: {
+    Value *Ptr = EmitScalarExpr(E->getArg(0));
+
+    bool LowerBoundNull =
+        E->getArg(1)->IgnoreParenCasts()->isNullPointerConstant(
+            getContext(), Expr::NPC_ValueDependentIsNotNull);
+    bool UpperBoundNull =
+        E->getArg(2)->IgnoreParenCasts()->isNullPointerConstant(
+            getContext(), Expr::NPC_ValueDependentIsNotNull);
+
+    assert((!LowerBoundNull || !UpperBoundNull) &&
+           "at least one bound must be non-null!");
+
+    Value *LowerBound = LowerBoundNull ? nullptr : EmitScalarExpr(E->getArg(1));
+    Value *UpperBound = UpperBoundNull ? nullptr : EmitScalarExpr(E->getArg(2));
+
+    llvm::Type *T = ConvertType(E->getType());
+    assert((isa<llvm::IntegerType>(T) || isa<llvm::PointerType>(T)) &&
+           "unsupported type");
+
+    if (isa<llvm::IntegerType>(T)) {
+      CharUnits LoadSize = getContext().getTypeSizeInChars(E->getType());
+      T = llvm::IntegerType::get(getLLVMContext(), LoadSize.getQuantity() * 8);
+      Ptr = Builder.CreateBitCast(Ptr, T->getPointerTo());
+    }
+
+    // If failval is omitted, zero of the appropriate type will be used.
+    Value *FailVal = nullptr;
+    if (E->getNumArgs() >= 4) {
+      FailVal = EmitScalarExpr(E->getArg(3));
+      if (T != FailVal->getType()) {
+        assert(T->getScalarSizeInBits() >
+               FailVal->getType()->getScalarSizeInBits());
+        FailVal = Builder.CreateZExt(FailVal, T);
+      }
+    } else {
+      if (isa<llvm::IntegerType>(T)) {
+        FailVal = llvm::ConstantInt::get(T, 0);
+      } else if (isa<llvm::PointerType>(T)) {
+        FailVal = llvm::ConstantPointerNull::get(cast<llvm::PointerType>(T));
+      }
+    }
+    assert(FailVal && "default FailVal not set");
+
+    // If cmpptr is omitted, ptr will be used.
+    Value *CmpPtr = (E->getNumArgs() >= 5) ? EmitScalarExpr(E->getArg(4)) : Ptr;
+    CmpPtr = Builder.CreateBitCast(CmpPtr, Builder.getInt8PtrTy());
+
+    if (!getContext().getTargetInfo().hasBuiltinLoadNoSpeculate())
+      return RValue::get(MakeBuiltinLoadNoSpeculate(
+            *this, Ptr, LowerBound, UpperBound, FailVal, CmpPtr,
+            E->getType()));
+    else if (LowerBoundNull)
+      return RValue::get(Builder.CreateCall(
+          CGM.getIntrinsic(Intrinsic::nospeculateload_nolower, T),
+          {Ptr, UpperBound, FailVal, CmpPtr}));
+    else if (UpperBoundNull)
+      return RValue::get(Builder.CreateCall(
+          CGM.getIntrinsic(Intrinsic::nospeculateload_noupper, T),
+          {Ptr, LowerBound, FailVal, CmpPtr}));
+    else
+      return RValue::get(
+          Builder.CreateCall(CGM.getIntrinsic(Intrinsic::nospeculateload, T),
+                             {Ptr, LowerBound, UpperBound, FailVal, CmpPtr}));
+  }
   }
 
   // If this is an alias for a lib function (e.g. __builtin_sin), emit
Index: lib/Basic/Targets/ARM.h
===================================================================
--- lib/Basic/Targets/ARM.h
+++ lib/Basic/Targets/ARM.h
@@ -140,6 +140,8 @@
 
   ArrayRef<Builtin::Info> getTargetBuiltins() const override;
 
+  bool hasBuiltinLoadNoSpeculate() const override;
+
   bool isCLZForZeroUndef() const override;
   BuiltinVaListKind getBuiltinVaListKind() const override;
 
Index: lib/Basic/Targets/ARM.cpp
===================================================================
--- lib/Basic/Targets/ARM.cpp
+++ lib/Basic/Targets/ARM.cpp
@@ -749,6 +749,17 @@
                                          : TargetInfo::VoidPtrBuiltinVaList);
 }
 
+bool ARMTargetInfo::hasBuiltinLoadNoSpeculate() const {
+  if (!isThumb())
+    return true;
+  return supportsThumb2() && ArchVersion >= 7;
+}
+
 const char *const ARMTargetInfo::GCCRegNames[] = {
     // Integer registers
     "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10", "r11",
Index: lib/Basic/Targets/AArch64.h
===================================================================
--- lib/Basic/Targets/AArch64.h
+++ lib/Basic/Targets/AArch64.h
@@ -67,6 +67,8 @@
 
   CallingConvCheckResult checkCallingConvention(CallingConv CC) const override;
 
+  bool hasBuiltinLoadNoSpeculate() const override;
+
   bool isCLZForZeroUndef() const override;
 
   BuiltinVaListKind getBuiltinVaListKind() const override;
Index: lib/Basic/Targets/AArch64.cpp
===================================================================
--- lib/Basic/Targets/AArch64.cpp
+++ lib/Basic/Targets/AArch64.cpp
@@ -263,6 +263,8 @@
 
 bool AArch64TargetInfo::isCLZForZeroUndef() const { return false; }
 
+bool AArch64TargetInfo::hasBuiltinLoadNoSpeculate() const { return true; }
+
 TargetInfo::BuiltinVaListKind AArch64TargetInfo::getBuiltinVaListKind() const {
   return TargetInfo::AArch64ABIBuiltinVaList;
 }
Index: include/clang/Sema/Sema.h
===================================================================
--- include/clang/Sema/Sema.h
+++ include/clang/Sema/Sema.h
@@ -10337,6 +10337,7 @@
   bool SemaBuiltinAssumeAligned(CallExpr *TheCall);
   bool SemaBuiltinLongjmp(CallExpr *TheCall);
   bool SemaBuiltinSetjmp(CallExpr *TheCall);
+  ExprResult SemaBuiltinLoadNoSpeculate(ExprResult TheCallResult);
   ExprResult SemaBuiltinAtomicOverloaded(ExprResult TheCallResult);
   ExprResult SemaBuiltinNontemporalOverloaded(ExprResult TheCallResult);
   ExprResult SemaAtomicOpsOverloaded(ExprResult TheCallResult,
Index: include/clang/Basic/TargetInfo.h
===================================================================
--- include/clang/Basic/TargetInfo.h
+++ include/clang/Basic/TargetInfo.h
@@ -593,6 +593,10 @@
   /// idea to avoid optimizing based on that undef behavior.
   virtual bool isCLZForZeroUndef() const { return true; }
 
+  /// \brief Check whether this target supports the __builtin_load_no_speculate
+  /// intrinsic.
+  virtual bool hasBuiltinLoadNoSpeculate() const { return false; }
+
   /// \brief Returns the kind of __builtin_va_list type that should be used
   /// with this target.
   virtual BuiltinVaListKind getBuiltinVaListKind() const = 0;
Index: include/clang/Basic/DiagnosticSemaKinds.td
===================================================================
--- include/clang/Basic/DiagnosticSemaKinds.td
+++ include/clang/Basic/DiagnosticSemaKinds.td
@@ -7108,6 +7108,18 @@
   "address argument to nontemporal builtin must be a pointer to integer, float, "
   "pointer, or a vector of such types (%0 invalid)">;
 
+def err_nospeculateload_builtin_must_be_pointer_to_pointer_or_integral : Error<
+  "argument to load_no_speculate builtin must be a pointer to a pointer or integer "
+  "(%0 invalid)">;
+def err_nospeculateload_builtin_bounds_cannot_both_be_null : Error<
+  "lower and upper bounds arguments to load_no_speculate builtin must not both "
+  "be null">;
+def warn_nospeculateload_builtin_target_not_supported : Warning<
+  "this target does not support anti-speculation operations; your program "
+  "will still execute correctly, but speculation will not be inhibited">,
+  InGroup<LoadNoSpeculate>;
+
 def err_deleted_function_use : Error<"attempt to use a deleted function">;
 def err_deleted_inherited_ctor_use : Error<
   "constructor inherited by %0 from base class %1 is implicitly deleted">;
Index: include/clang/Basic/DiagnosticGroups.td
===================================================================
--- include/clang/Basic/DiagnosticGroups.td
+++ include/clang/Basic/DiagnosticGroups.td
@@ -987,3 +987,5 @@
 // A warning group for warnings about code that clang accepts when
 // compiling OpenCL C/C++ but which is not compatible with the SPIR spec.
 def SpirCompat : DiagGroup<"spir-compat">;
+
+def LoadNoSpeculate : DiagGroup<"load-no-speculate">;
Index: include/clang/Basic/Builtins.def
===================================================================
--- include/clang/Basic/Builtins.def
+++ include/clang/Basic/Builtins.def
@@ -1449,6 +1449,13 @@
 BUILTIN(__builtin_ms_va_end, "vc*&", "n")
 BUILTIN(__builtin_ms_va_copy, "vc*&c*&", "n")
 
+// T __builtin_load_no_speculate (const volatile T * ptr,
+//                                const volatile void * lower,
+//                                const volatile void * upper
+//                                [, T failval
+//                                 [, const volatile void * cmpptr]])
+BUILTIN(__builtin_load_no_speculate, "vvCD*vCD*vCD*vvCD*", "t")
+
 #undef BUILTIN
 #undef LIBBUILTIN
 #undef LANGBUILTIN
_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits