On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:

> Does this as the first paragraph address your concerns:

Thanks, this is nicer (see notes below). My main concern is that we shouldn't
pretend there's some method of verifying that arbitrary source code is "safe"
to pass to an unsandboxed compiler, nor should we push the responsibility for
doing that onto users.

> The compiler driver processes source code, invokes other programs such as the
> assembler and linker and generates the output result, which may be assembly
> code or machine code.  It is necessary that all source code inputs to the
> compiler are trusted, since it is impossible for the driver to validate input
> source code for safety.

The statement begins with "It is necessary", but the next sentence offers
an alternative in case the code is untrusted. That is a contradiction:
is it necessary or not in the end?

I'd suggest to drop this statement and instead make a brief note that
compiling crafted/untrusted sources can result in arbitrary code execution
and unconstrained resource consumption in the compiler.

> For untrusted code should compilation should be done
                     ^^^^^^
                     typo (spurious 'should')
                     
> inside a sandboxed environment to ensure that it does not compromise the
> development environment.  Note that this still does not guarantee safety of
> the produced output programs and that such programs should still either be
> analyzed thoroughly for safety or run only inside a sandbox or an isolated
> system to avoid compromising the execution environment.

The last statement seems to be a new addition. It is too broad and again
makes a reference to analysis that appears quite theoretical. It might be
better to drop this (and instead talk in more specific terms about any
guarantees that produced binary code matches security properties intended
by the sources; I believe Richard Sandiford raised this previously).

Thanks.
Alexander
