> * BICKFORD, JEFFREY E (jb6...@att.com) wrote:
> > > * Daniel P. Berrange (berra...@redhat.com) wrote:
> > > > On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > > > > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > > > > >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > > > > >>"Daniel P. Berrange" <berra...@redhat.com> wrote on 01/20/2016 10:00:41 AM:
> > > > > >>
> > > > > >>>process at all - it would make sense if there was a single
> > > > > >>>swtpm_cuse shared across all QEMU's, but if there's one per
> > > > > >>>QEMU device, it feels like it'd be much simpler to just have
> > > > > >>>the functionality linked in QEMU. That avoids the problem
> > > > > >>I tried having it linked in QEMU before. It was basically rejected.
> > > > > >I remember an impl you did many years(?) ago now, but don't recall
> > > > > >the results of the discussion. Can you elaborate on why it was
> > > > > >rejected as an approach? It just doesn't make much sense to me
> > > > > >to have to create an external daemon, a CUSE device and comms
> > > > > >protocol, simply to be able to read/write a plain file containing
> > > > > >the TPM state. It's massive over-engineering IMHO, adding way
> > > > > >more complexity and thus scope for failure.
> > > > >
> > > > > The TPM 1.2 implementation adds tens of thousands of lines of code, and
> > > > > the TPM 2 implementation is in the same range. The concern was having
> > > > > this code right in the QEMU address space. It's big, it can have bugs,
> > > > > so we don't want it to harm QEMU. So we now put this into an external
> > > > > process implemented by the swtpm project, which builds on libtpms to
> > > > > provide TPM 1.2 functionality (to be extended with TPM 2). We cannot
> > > > > call APIs of libtpms directly anymore, so we need a control channel,
> > > > > which is implemented through ioctls on the CUSE device.
> > > >
> > > > Ok, the security separation concern does make some sense. The use of CUSE
> > > > still seems fairly questionable to me. CUSE makes sense if you want to
> > > > provide a drop-in replacement for the kernel TPM device driver, which
> > > > would avoid the need for a new QEMU backend. If you're not emulating an
> > > > existing kernel driver ABI, though, CUSE + ioctl feels like a really
> > > > awful RPC transport between two userspace processes.
> > >
> > > While I don't really like CUSE, I can see some of the reasoning here.
> > > By providing the existing TPM ioctl interface, I think it means you can use
> > > existing host-side TPM tools to initialise/query the soft-tpm, and those
> > > should be independent of the soft-tpm implementation.
> > > As for the extra set-up interfaces you need because it's a soft-tpm: once
> > > you've already got that ioctl interface as above, it seems to make sense
> > > to extend it to add the extra interfaces needed. The only thing you have
> > > to watch for there is that the extra interfaces don't clash with any
> > > future kernel ioctl extensions, and that the interface defined is generic
> > > enough for different soft-tpm implementations.
> > >
> > > Dave
> > > Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> >
> > Over the past several months, AT&T Security Research has been testing the
> > Virtual TPM software from IBM on the Power (ppc64) platform.
> > Based on our testing results, the vTPM software works well and as expected.
> > Support for libvirt and the CUSE TPM allows us to create VMs with vTPM
> > functionality, and we tested this in a full-fledged OpenStack environment.
> >
> > We believe the vTPM functionality will improve various aspects of VM
> > security in our enterprise-grade cloud environment. AT&T would like to see
> > these patches accepted into the QEMU community as part of the standard
> > build so this technology can be easily adopted in various open source
> > cloud deployments.
>
> Interesting; however, I see Stefan has been contributing other kernel
> patches that create a different vTPM setup without the use of CUSE;
> if that's the case then I guess that's the preferable solution.
>
> Jeffrey: Can you detail a bit more about your setup, and how
> you're managing the life cycle of the vTPM data?
>
> Dave
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
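(For context on the "control channel ... implemented through ioctls on the CUSE device" discussed above: QEMU or a host-side management tool opens the per-VM CUSE character device and drives the external swtpm process with ioctl calls. The following is a minimal sketch of such a client, not the actual swtpm code; the PTM_INIT number, the ptm_init layout and the /dev/vtpm-myvm path are illustrative assumptions modelled on swtpm's tpm_ioctl.h, which is what a real client must use.)

/*
 * Minimal sketch of a client driving the CUSE TPM's control channel.
 * Assumptions: PTM_INIT and struct ptm_init below are placeholders for
 * the definitions in swtpm's tpm_ioctl.h; /dev/vtpm-myvm is an example
 * per-VM device name chosen when swtpm_cuse was started.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct ptm_init {                               /* placeholder layout */
    uint32_t init_flags;
    uint32_t tpm_result;
};
#define PTM_INIT _IOWR('P', 1, struct ptm_init) /* assumed value */

int main(void)
{
    struct ptm_init init = { .init_flags = 0 };

    /* The CUSE device carries the normal TPM command stream via
     * read/write, and doubles as the control channel via ioctls
     * like this one to the external swtpm process. */
    int fd = open("/dev/vtpm-myvm", O_RDWR);
    if (fd < 0) {
        perror("open /dev/vtpm-myvm");
        return 1;
    }
    if (ioctl(fd, PTM_INIT, &init) < 0) {
        perror("PTM_INIT");
        close(fd);
        return 1;
    }
    printf("TPM_Init result: 0x%08x\n", init.tpm_result);
    close(fd);
    return 0;
}

(Standalone host-side tools could use the same channel to initialise or reset vTPM state, which is part of the rationale Dave gives above for keeping an ioctl interface.)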
Sure. We are using the various patches Stefan submitted at the beginning of the year on an IBM Power (ppc64) machine. The machine is running the PowerKVM operating system (Linux with KVM for the Power architecture) with Stefan's vTPM patches installed. This machine also runs as a single OpenStack compute node, which lets us test the vTPM functionality through an OpenStack deployment. Our environment is currently running OpenStack Kilo.

Our main goal has been to test the overall functionality of the vTPM within a VM and compare it to a physical TPM 1.2 running on a separate physical machine. Our main use case has been using the vTPM to boot a VM in a trusted manner and to measure VM runtime integrity through Linux IMA. Based on our testing, we have found that the vTPM software supplied in Stefan's patches works in the same way as a physical TPM at the virtual machine layer. We have tested running an attestation server within a guest network to attest the boot-time and run-time integrity of a set of VMs using the vTPM.

Regarding the life cycle, we have tested the creation and destruction of a VM; in these cases, vTPM state is created and destroyed successfully. We have also tested creating VMs from an image and from a snapshot, in which case a new vTPM device is created for the new instance. vTPMs created from an image or snapshot are unique and contain their own distinct public/private endorsement key pairs. Other VM functions such as pause and resume also seem to work as normal; log files show that the CUSE TPM is stopped and started successfully in response to these commands. We have not tested VM migration yet.

Please let me know if you have any other questions about our environment or testing.

Thanks,
Jeff

Jeffrey Bickford
AT&T Security Research Center
jbickf...@att.com
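(To make "works in the same way as a physical TPM at the virtual machine layer" concrete: inside the guest, the vTPM is reached through the ordinary kernel TPM driver, so a raw TPM 1.2 TPM_PCRRead against /dev/tpm0, for example of PCR 10, which Linux IMA extends by default, looks exactly as it would against hardware. A minimal sketch follows, assuming the guest's TPM driver is bound and no TSS daemon holds the device; the command and response layout come from the TPM 1.2 specification.)

/*
 * Sketch: read PCR 10 (the PCR Linux IMA extends by default) from the
 * guest's /dev/tpm0 using a raw TPM 1.2 TPM_PCRRead command.  The guest
 * cannot tell whether a vTPM or a hardware TPM answers it.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    /* TPM_TAG_RQU_COMMAND, paramSize=14, TPM_ORD_PCRRead, pcrIndex=10 */
    uint8_t cmd[14] = {
        0x00, 0xC1,
        0x00, 0x00, 0x00, 0x0E,
        0x00, 0x00, 0x00, 0x15,
        0x00, 0x00, 0x00, 0x0A,
    };
    uint8_t resp[30];  /* tag(2) + paramSize(4) + returnCode(4) + digest(20) */

    int fd = open("/dev/tpm0", O_RDWR);
    if (fd < 0) { perror("open /dev/tpm0"); return 1; }

    if (write(fd, cmd, sizeof(cmd)) != (ssize_t)sizeof(cmd)) {
        perror("write"); return 1;
    }
    ssize_t n = read(fd, resp, sizeof(resp));
    if (n < 10) { fprintf(stderr, "short TPM response (%zd bytes)\n", n); return 1; }

    uint32_t rc = ((uint32_t)resp[6] << 24) | (resp[7] << 16) |
                  (resp[8] << 8) | resp[9];
    if (rc != 0 || n < 30) {
        fprintf(stderr, "TPM_PCRRead failed, rc=0x%08x\n", (unsigned)rc);
        return 1;
    }

    printf("PCR-10: ");
    for (int i = 10; i < 30; i++)   /* digest follows the 10-byte header */
        printf("%02x", resp[i]);
    printf("\n");
    close(fd);
    return 0;
}

(A physical TPM 1.2 returns the same 30-byte response layout for this request, which is what lets an attestation server treat virtual and physical machines uniformly.)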