At 2018-12-17T11:49:58+0000, [email protected] wrote:
> > CAmkES is good for building static embedded systems (excluding stuff
> > running in VMs), but can't really do anything more AFAIK.
> 
> Yes, CAmkES is designed for static architectures, which is where we
> can give an assurance story.
> 
> If you want something dynamic, i.e. more resembling a full OS, then
> have a look at Genode. But then all assurance and ability to reason
> about security properties goes out of the window.

To invert a classic naïve programmer's question, isn't that true only in
practice?

In principle, once enough components are in place for an seL4-based
system to host its own build dependencies, including the theorem
prover, it becomes a development environment general enough to compile
and verify arbitrary extensions.

Naturally, the theorem prover itself has to be added to the list of
"assumed valid" components; we can't use it to verify itself without
disproving Gödel (I think).  At the same time, my intuition is that a
theorem prover should be harder to Trojan than a compiler.  When
Thompson famously subverted the C compiler[1], he apparently did simple
(but effective) pattern-matching on strings (in the symbol table, I
suppose) to recognize whether the program being compiled was "login",
the disassembler, or the C compiler itself.
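
To make the shape of that trick concrete, here is a deliberately crude
sketch in C of the kind of trigger I mean.  It is not Thompson's actual
code (the paper only outlines it), and every identifier and string in
it is invented for illustration:

    #include <stdbool.h>
    #include <string.h>

    /*
     * Toy model of a Thompson-style trigger; NOT his actual code.
     * A hook inside the compiler inspects the source text it has been
     * handed and decides whether to inject something extra.
     */
    static bool looks_like_login(const char *src)
    {
        /* crude matching on distinctive surface strings */
        return strstr(src, "Password:") != NULL
            && strstr(src, "crypt(") != NULL;
    }

    static bool looks_like_self(const char *src)
    {
        /* recognize the compiler's own (hypothetical) source so the
         * trojan can re-insert itself whenever it is rebuilt */
        return strstr(src, "codegen(") != NULL
            && strstr(src, "emit_insn(") != NULL;
    }

    void maybe_inject(const char *src)
    {
        if (looks_like_login(src)) {
            /* ... emit a back-door password check ... */
        } else if (looks_like_self(src)) {
            /* ... emit this recognition-and-injection logic ... */
        }
        /* otherwise compile honestly, so nothing looks amiss */
    }

The point to notice is that every test keys on surface names and
strings, which is exactly what consistent renaming would scramble.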

With that trick in mind, it should be straightforward to produce a
version of Isabelle/HOL with its symbols arbitrarily renamed.  The
challenge for an attacker then becomes developing a backdoor that can
recognize a theorem prover from data-flow analysis or an unlabeled
call graph alone.  That, in turn, seems[2] practically impossible to
do without also invalidating the theorem prover for other purposes,
which would expose the fact of its compromise[3].

This, in turn, is an argument for having proofs for many more
components of the system, not just the kernel; even small and simple
ones can contribute to building confidence that the theorem prover has
not been subverted, and doing so will also help us build a literature
for teaching the next generation of software engineers how to prove
their programs correct.  Imagine an update of Kernighan & Plauger's
_Software Tools_ (1976), wherein the reader is taught not only how to
develop modular, reusable software components, but also how to
validate their correctness.
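
As a toy instance of what such a book might teach, here is a small
reusable component with machine-checked properties.  I have written it
in Lean 4 purely because it is compact to quote in mail; the seL4
proofs themselves are in Isabelle/HOL, and the definition and theorem
names below are my own, chosen only for this example:

    -- A tiny reusable component (list reversal) plus proofs of two
    -- properties about it.
    def rev {α : Type} : List α → List α
      | []      => []
      | x :: xs => rev xs ++ [x]

    -- Reversal distributes over append, swapping the operands.
    theorem rev_append {α : Type} (xs ys : List α) :
        rev (xs ++ ys) = rev ys ++ rev xs := by
      induction xs with
      | nil => simp [rev]
      | cons x xs ih => simp [rev, ih, List.append_assoc]

    -- Reversing twice gives back the original list.
    theorem rev_rev {α : Type} (xs : List α) : rev (rev xs) = xs := by
      induction xs with
      | nil => simp [rev]
      | cons x xs ih => simp [rev, rev_append, ih]

Nothing here is deep, but even an example this small exercises the
specify-implement-prove loop that such a book would teach.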

I can't help thinking that a brighter future lies ahead--a future that
is only eight to fourteen years away.  ;-)

To return from my digression, what is the distance from the status quo
to a trusted computing base sufficient to build and validate seL4?

Regards,
Branden

[1] https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf

[2] I apologize for repeated recourse to my own instinct and intuition
to support my assertions; I raise the issues here not to flaunt my
ignorance but because I hope the experts here can point me and other
interested readers to the real literature on the subject raised.

[3] I note also that running coverage analysis of Thompson's subverted
C compiler against the Unix sources themselves should have revealed the
fact of its compromise.  To truly escape detection, a Thompson-style
Trojan must recognize the source code not only of all the tools that
might be brought to bear against the subverted executable today, but
also of those developed in the future, which is too large a space to
tractably exhaust in anticipation--at least in a thriving software
ecosystem like that of Unix and its descendants.  The pessimism that
Thompson expresses is therefore not warranted on technical grounds--but
it may still be for cultural ones, in that we don't have enough people
subjecting compiled objects to skeptical analysis.  Hence formal
verification.
