On 07.10.23 06:12, Kees Cook wrote:
On Sat, Oct 07, 2023 at 12:30:01AM +0200, Lukas Loidolt wrote:
In my tests, however, the performance version behaves more or less like the
full version of randstruct.

Can you try this patch?


commit d73a3244700d3c945cedea7e1fb7042243c41e08
Author:     Kees Cook <keesc...@chromium.org>
AuthorDate: Fri Oct 6 21:09:28 2023 -0700
Commit:     Kees Cook <keesc...@chromium.org>
CommitDate: Fri Oct 6 21:09:28 2023 -0700

     randstruct: Fix gcc-plugin performance mode to stay in group

     The performance mode of the gcc-plugin randstruct was shuffling struct
     members outside of the cache-line groups. Limit the range to the
     specified group indexes.

     Cc: linux-hardening@vger.kernel.org
     Reported-by: Lukas Loidolt <e1634...@student.tuwien.ac.at>
     Closes: https://lore.kernel.org/all/f3ca77f0-e414-4065-83a5-ae4c4d255...@student.tuwien.ac.at
     Signed-off-by: Kees Cook <keesc...@chromium.org>

diff --git a/scripts/gcc-plugins/randomize_layout_plugin.c b/scripts/gcc-plugins/randomize_layout_plugin.c
index 951b74ba1b24..178831917f01 100644
--- a/scripts/gcc-plugins/randomize_layout_plugin.c
+++ b/scripts/gcc-plugins/randomize_layout_plugin.c
@@ -191,7 +191,7 @@ static void partition_struct(tree *fields, unsigned long length, struct partitio

  static void performance_shuffle(tree *newtree, unsigned long length, ranctx *prng_state)
  {
-       unsigned long i, x;
+       unsigned long i, x, index;
         struct partition_group size_group[length];
         unsigned long num_groups = 0;
         unsigned long randnum;
@@ -206,11 +206,14 @@ static void performance_shuffle(tree *newtree, unsigned long length, ranctx *prn
         }

         for (x = 0; x < num_groups; x++) {
-               for (i = size_group[x].start + size_group[x].length - 1; i > size_group[x].start; i--) {
+               for (index = size_group[x].length - 1; index > 0; index--) {
                         tree tmp;
+
+                       i = size_group[x].start + index;
                         if (DECL_BIT_FIELD_TYPE(newtree[i]))
                                 continue;
                         randnum = ranval(prng_state) % (i + 1);
+                       randnum += size_group[x].start;
                         // we could handle this case differently if desired
                         if (DECL_BIT_FIELD_TYPE(newtree[randnum]))
                                 continue;

--
Kees Cook

I think this is still missing a change in the randnum calculation to use index 
instead of i.
Without that, randnum can index past the end of newtree, which crashes kernel 
compilation for me.

diff --git a/scripts/gcc-plugins/randomize_layout_plugin.c b/scripts/gcc-plugins/randomize_layout_plugin.c
index 178831917f01..4b4627e3f2ce 100644
--- a/scripts/gcc-plugins/randomize_layout_plugin.c
+++ b/scripts/gcc-plugins/randomize_layout_plugin.c
@@ -212,7 +212,7 @@ static void performance_shuffle(tree *newtree, unsigned long length, ranctx *prn
                        i = size_group[x].start + index;
                        if (DECL_BIT_FIELD_TYPE(newtree[i]))
                                continue;
-                       randnum = ranval(prng_state) % (i + 1);
+                       randnum = ranval(prng_state) % (index + 1);
                        randnum += size_group[x].start;
                        // we could handle this case differently if desired
                        if (DECL_BIT_FIELD_TYPE(newtree[randnum]))


The patch seems to work after that though. For the previous example, I now get 
the following layout:

func1 (offset: 0)
func3 (offset: 8)
func4 (offset: 16)
func6 (offset: 24)
func7 (offset: 32)
func8 (offset: 40)
func5 (offset: 48)
func2 (offset: 56)
func10 (offset: 64)
func9 (offset: 72)

Regarding shuffling the groups/partitions themselves (rather than just 
randomizing the structure members within each partition): I'm not sure whether 
that was intended at some point, but it might be worth looking into.
I'd expect it to improve randomization without sacrificing performance, and 
it's also what the clang implementation of randstruct does.

Lukas Loidolt
