On 2023-08-15 19:07, Alexander Monakov wrote:

On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:

Thanks, this is nicer (see notes below). My main concern is that we
shouldn't pretend there's some method of verifying that arbitrary source
code is "safe" to pass to an unsandboxed compiler, nor should we push
the responsibility of doing that on users.

But responsibility would be pushed to users, wouldn't it?

Making users responsible for verifying that sources are "safe" is not okay
(we cannot teach them how to do that since there's no general method).
Making users responsible for sandboxing the compiler is fine (there's
a range of sandboxing solutions, from which they can choose according
to their requirements and threat model). Sorry about the ambiguity.

No, I understood the distinction you're trying to make; I just wanted to point out that the effect isn't all that different. The intent of the wording is not to prescribe a solution, but to describe what the compiler cannot do and hence what users must find a way to do themselves. I think we have a consensus on this part of the wording, though, since prescribing a solution isn't really our responsibility here, and I'm happy with just asking users to sandbox.

I suppose it's kinda like saying "don't try this at home". You know many will and some will break their leg while others will come out of it feeling invincible. Our job is to let them know that they will likely break their leg :)

inside a sandboxed environment to ensure that it does not compromise the
development environment.  Note that this still does not guarantee safety of
the produced output programs and that such programs should still either be
analyzed thoroughly for safety or run only inside a sandbox or an isolated
system to avoid compromising the execution environment.

The last statement seems to be a new addition. It is too broad and again
makes a reference to analysis that appears quite theoretical. It might be
better to drop this (and instead talk in more specific terms about any
guarantees that produced binary code matches security properties intended
by the sources; I believe Richard Sandiford raised this previously).

OK, so I actually cover this at the end of the section; Richard's point, AFAICT,
was about hardening, for which I added another note to make it explicit that
missed hardening does not constitute a CVE-worthy threat:

Thanks for the reminder. To illustrate what I was talking about, let me give
two examples:

1) safety w.r.t. timing attacks: even if the source code is written in
a manner that looks timing-safe, it may be transformed in a way that
makes a timing attack on the resulting machine code possible;

2) safety w.r.t. information leaks: even if the source code attempts
to discard sensitive data (such as passwords and keys) immediately
after use, (partial) copies of that data may be left on the stack and
in registers, to be leaked later via a different vulnerability (a
sketch of both follows).
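
A minimal sketch of both, with a hypothetical load_secret() helper
standing in for wherever the key material comes from:

  #include <stddef.h>
  #include <string.h>

  void load_secret(unsigned char buf[32]);  /* hypothetical helper */

  /* Written to look timing-safe: no early exit, differences are
     accumulated bitwise over the whole buffer.  */
  static int ct_compare(const unsigned char *a, const unsigned char *b,
                        size_t n)
  {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
      diff |= a[i] ^ b[i];
    return diff != 0;
  }

  int check_guess(const unsigned char *guess)
  {
    unsigned char secret[32];
    load_secret(secret);
    int mismatch = ct_compare(guess, secret, sizeof secret);
    /* Attempt to discard the key material after use.  */
    memset(secret, 0, sizeof secret);
    return mismatch;
  }

Neither property is part of the C abstract machine, so GCC may
legitimately compile ct_compare with data-dependent branches or an
early exit (point 1), and may drop the memset as a dead store or leave
copies of secret in registers and stack spill slots (point 2).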

For both 1) and 2), GCC is not engineered to respect such properties
during optimization and code generation, so it's not appropriate for such
tasks (a possible solution is to isolate such sensitive functions to
separate files, compile to assembly, inspect the assembly to check that it
still has the required properties, and use the inspected asm in subsequent
builds instead of the original high-level source).
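
A sketch of that workflow (the file name and commands are only
illustrative), with the sensitive routine isolated in its own
translation unit so the generated assembly stays small enough to
audit by hand:

  /* ct_compare.c -- keep only the sensitive routine here.

     Illustrative build steps:
       gcc -O2 -S ct_compare.c -o ct_compare.s
     then inspect ct_compare.s (no secret-dependent branches, secrets
     cleared as intended), check the .s file in, and assemble it in
     subsequent builds:
       gcc -c ct_compare.s -o ct_compare.o  */

  #include <stddef.h>

  int ct_compare(const unsigned char *a, const unsigned char *b,
                 size_t n)
  {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
      diff |= a[i] ^ b[i];
    return diff != 0;
  }

The inspected ct_compare.s then replaces ct_compare.c in the build, so
later recompilations cannot silently reintroduce the problem until the
assembly has to be regenerated for a new target or compiler version.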

How about this in the last section, titled "Security features implemented in GCC", since that's where we also deal with security hardening?

    Similarly, GCC may transform code in a way that preserves the
    correctness of the expressed algorithm but does not preserve
    supplementary properties that are observable only outside the
    program or through a vulnerability in the program.  This is not a
    security issue in GCC; in such cases, the vulnerability that caused
    exposure of the supplementary properties must be fixed.

Thanks,
Sid
