Author: kcc
Date: Mon Dec 18 13:40:07 2017
New Revision: 321027

URL: http://llvm.org/viewvc/llvm-project?rev=321027&view=rev
Log:
[hwasan] update the design doc

Modified:
    cfe/trunk/docs/HardwareAssistedAddressSanitizerDesign.rst

Modified: cfe/trunk/docs/HardwareAssistedAddressSanitizerDesign.rst
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/docs/HardwareAssistedAddressSanitizerDesign.rst?rev=321027&r1=321026&r2=321027&view=diff
==============================================================================
--- cfe/trunk/docs/HardwareAssistedAddressSanitizerDesign.rst (original)
+++ cfe/trunk/docs/HardwareAssistedAddressSanitizerDesign.rst Mon Dec 18 13:40:07 2017
@@ -21,7 +21,7 @@ The redzones, the quarantine, and, to a
 sources of AddressSanitizer's memory overhead.
 See the `AddressSanitizer paper`_ for details.
 
-AArch64 has the `Address Tagging`_, a hardware feature that allows
+AArch64 has `Address Tagging`_ (or top-byte-ignore, TBI), a hardware feature that allows
 software to use the 8 most significant bits of a 64-bit pointer as
 a tag. HWASAN uses `Address Tagging`_
 to implement a memory safety tool, similar to :doc:`AddressSanitizer`,
@@ -31,7 +31,7 @@ accuracy guarantees.
 Algorithm
 =========
 * Every heap/stack/global memory object is forcibly aligned by `N` bytes
-  (`N` is e.g. 16 or 64)
+  (`N` is e.g. 16 or 64). We call `N` the **granularity** of tagging.
 * For every such object a random `K`-bit tag `T` is chosen (`K` is e.g. 4 or 8)
 * The pointer to the object is tagged with `T`.
 * The memory for the object is also tagged with `T`
@@ -44,19 +44,35 @@ Instrumentation
 
 Memory Accesses
 ---------------
-All memory accesses are prefixed with a call to a run-time function.
-The function encodes the type and the size of access in its name;
-it receives the address as a parameter, e.g. `__hwasan_load4(void *ptr)`;
-it loads the memory tag, compares it with the
-pointer tag, and executes `__builtin_trap` (or calls `__hwasan_error_load4(void *ptr)`) on mismatch.
+All memory accesses are prefixed with an inline instruction sequence that
+verifies the tags. Currently, the following sequence is used:
 
-It's possible to inline this callback too.
+
+.. code-block:: asm
+
+  // int foo(int *a) { return *a; }
+  // clang -O2 --target=aarch64-linux -fsanitize=hwaddress -c load.c
+  foo:
+       0:      08 dc 44 d3     ubfx    x8, x0, #4, #52  // shadow address
+       4:      08 01 40 39     ldrb    w8, [x8]         // load shadow
+       8:      09 fc 78 d3     lsr     x9, x0, #56      // address tag
+       c:      3f 01 08 6b     cmp     w9, w8           // compare tags
+      10:      61 00 00 54     b.ne    #12              // jump on mismatch
+      14:      00 00 40 b9     ldr     w0, [x0]         // original load
+      18:      c0 03 5f d6     ret             
+      1c:      40 20 40 d4     hlt     #0x102           // halt
+      20:      00 00 40 b9     ldr     w0, [x0]         // original load
+      24:      c0 03 5f d6     ret
+
+
+Alternatively, memory accesses are prefixed with a function call.
 
 Heap
 ----
 
 Tagging the heap memory/pointers is done by `malloc`.
 This can be based on any malloc that forces all objects to be N-aligned.
+`free` tags the memory with a different tag.
 
 Stack
 -----
@@ -75,7 +91,7 @@ TODO: details.
 Error reporting
 ---------------
 
-Errors are generated by `__builtin_trap` and are handled by a signal handler.
+Errors are generated by the `HLT` instruction and are handled by a signal handler.
 
 Attribute
 ---------


_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
