On Fri, Aug 21, 2020 at 04:18:03PM -0400, Alex Carter wrote:
> Hi everyone,
>
> My name is Alex. I'm a student at the University of Michigan and I just
> completed an internship at IBM Research, where I worked on a project
> closely related to this topic. I tested Cloud Hypervisor's Rust-based
> vhost-user virtiofs and block devices with QEMU. Bigger picture, I
> wanted to explore the implications of using Rust for vhost-user
> devices in QEMU.
>
> I agree with the points from the original post, namely:
>
> · C programming bugs are responsible for a large number of CVEs, and
> specifically for CVEs coming from the implementations of virtual
> devices.
>
> · As a programming language, Rust has matured to a point where it is
> worth considering more seriously for production use. It has extensive
> libraries and community support, and many big players in the industry
> are already using Rust for production workloads.
>
> Full transparency: the drawbacks.
>
> It would be deceptive to only showcase Rust in an ideal light.
>
> · The benchmarks I ran show a noticeable performance hit from
> switching to a rust-vmm implementation of a virtiofsd device.
I think it'd be interesting to repeat those tests in a different
environment. I ran multiple benchmarks in the past comparing
vhost-user-blk (Rust) vs. qemu/contrib/vhost-user-blk (C), and
vhost-user-fs (Rust) vs. virtiofsd (C), and never found that performance
hit. On the contrary, I found that Rust's zero-cost abstractions live up
to their promise, even in very idiomatic chunks of code (such as
vm-virtio::Queue).

> · While Rust has matured greatly, it is still missing a few things.
> One example that came up is that the Rust compiler does not have
> Control Flow Integrity (CFI) features. While these are not as
> important as in "unsafe" languages such as C, the ability to mark
> portions of code as unsafe does still allow some types of memory bugs
> – although to a much lesser extent (an interesting case of this
> surfaced in Firecracker's handling of MMIO [1]). So further
> protections such as Control Flow Integrity can still be desirable,
> even with Rust code.
>
> · There have been years of optimization work put into the C
> implementations of these devices, and it's hard to evaluate how
> optimized the relatively novel Rust implementations are.
>
> A piece of exciting news is that many of these drawbacks show a
> pathway for future improvement. Improvements to Rust infrastructure
> are very realistic: Rust compiles down to LLVM just like C, so porting
> over C's CFI features should be feasible. If more development
> resources are put into the rust-vmm project, there is no reason its
> implementations can't be as optimized as the C versions, and this
> could be greatly aided by expertise coming from the QEMU community's
> familiarity with these topics.
>
> I believe vhost-user devices are an excellent place to start, since
> they lower the entry barrier for developing in Rust. The device only
> has to interface with the C-based QEMU binary through a standardized
> protocol.
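As an aside on the unsafe point above: safe Rust bounds-checks every slice access, but an `unsafe` block can opt out of that check, which is exactly the kind of escape hatch where memory bugs can creep back in. A minimal sketch with a made-up buffer (nothing to do with the actual Firecracker case):

```rust
fn main() {
    let buf = [1u8, 2, 3, 4];

    // Safe Rust: an out-of-range access is caught. `get` returns None
    // instead of reading past the end, and `buf[10]` would panic.
    assert!(buf.get(10).is_none());

    // `unsafe` opts out of the bounds check. This call is in bounds,
    // so it is fine here, but `get_unchecked(10)` would be undefined
    // behaviour, just like an out-of-bounds read in C.
    let last = unsafe { *buf.get_unchecked(3) };
    assert_eq!(last, 4);
}
```

This is why CFI-style mitigations can still add value for Rust code: the guarantees hold for the safe subset, while `unsafe` regions remain a (much smaller) attack surface.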
> It removes many of the worries of moving entirely away from C, since
> adding a set of Rust devices would simply give us more options and
> room to explore.
>
> I am putting together the scripts I used for all of the tests at this
> repo:
>
> https://github.com/Alex-Carter01/Qemu-Rust-Testing
>
> I am working to standardize everything to make it easier to replicate.
> I would love any community involvement if people want to see how
> results differ based on the hardware setup, build configuration of the
> devices, etc.

Sounds good. What kind of help would you need?

Thanks!
Sergio.

> The repo also has links to a recording of my original presentation and
> the slides I was using, if you would like to look at that format or
> see the discussion which came out of it.
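On the "standardized protocol" point: every vhost-user message exchanged over the Unix socket starts with a small fixed header of three 32-bit fields, optionally followed by a payload, so an external Rust device only needs to speak this wire format to talk to QEMU. A hedged sketch of that header layout (field names are mine; see QEMU's docs/interop/vhost-user.rst for the authoritative definition):

```rust
// Sketch of the fixed vhost-user message header. Every message on the
// vhost-user socket begins with these three 32-bit fields; a payload of
// `size` bytes follows. Names are illustrative, not from any crate.
#[repr(C)]
struct VhostUserMsgHeader {
    request: u32, // which vhost-user request this is
    flags: u32,   // version bits, reply flag, reply-ack flag
    size: u32,    // length of the payload that follows the header
}

fn main() {
    // With #[repr(C)] the header is exactly 12 bytes, matching the
    // three 32-bit fields it carries on the wire.
    assert_eq!(std::mem::size_of::<VhostUserMsgHeader>(), 12);
}
```

Because the protocol boundary is this narrow, the Rust device process and the C QEMU binary never share an address space, which is what makes vhost-user such a low-risk place to introduce Rust.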