On 25/11/2021 15:27, Daniel P. Berrangé wrote:
> On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
>> * Daniel P. Berrangé (berra...@redhat.com) wrote:
>>> On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
>>>> Hi,
>>>>
>>>> We recently discussed a way for remote SEV guest attestation through QEMU.
>>>> My initial approach was to get the data needed for attestation through
>>>> different QMP commands (all of which are already available, so no changes
>>>> required there), deriving hashes and certificate data, and collecting all
>>>> of this into a new QMP struct (SevLaunchStart, which would include the
>>>> VM's policy, secret, and GPA) that would need to be upstreamed into QEMU.
>>>> Once this is provided, QEMU would then need to have support for
>>>> attestation before a VM is started. Upon speaking to Dave about this
>>>> proposal, he mentioned that this may not be the best approach, as some
>>>> situations would render the attestation unavailable, such as when a VM is
>>>> running in a cloud and a guest owner would like to perform attestation
>>>> via QMP (a likely scenario), yet the cloud provider cannot simply let
>>>> anyone pass arbitrary QMP commands, as that would be an issue in itself.
>>>
>>> As a general point, QMP is a low level QEMU implementation detail,
>>> which is generally expected to be consumed exclusively on the host
>>> by a privileged mgmt layer, which will in turn expose its own higher
>>> level APIs to users or other apps. I would not expect to see QMP
>>> exposed to anything outside of the privileged host layer.
>>>
>>> We also use the QAPI protocol for QEMU guest agent communication;
>>> however, that is a distinct service from QMP on the host. It shares
>>> most infra with QMP but has a completely different command set. On the
>>> host it is not consumed inside QEMU, but instead consumed by a
>>> mgmt app like libvirt.
>>>
>>>> So I ask, does anyone involved in QEMU's SEV implementation have any input
>>>> on a quality way to perform guest attestation? If so, I'd be interested.
>>>
>>> I think what's missing is some clearer illustration of how this
>>> feature is expected to be consumed in real world applications,
>>> and of the use cases we're trying to solve.
>>>
>>> I'd like to understand how it should fit in with common libvirt
>>> applications across the different virtualization management
>>> scenarios - eg virsh (command line), virt-manager (local desktop
>>> GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
>>> And of course any non-traditional virt use cases that might be
>>> relevant such as Kata.
>>
>> That's still not that clear; I know Alice and Sergio have some ideas
>> (cc'd).
>> There are also some standardisation efforts (e.g.
>> https://www.potaroo.net/ietf/html/ids-wg-rats.html
>> and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html )
>> that I can't claim to fully understand.
>> However, there are some themes that are emerging:
>>
>>   a) One use is to only allow a VM to access some private data once we
>> prove it's the VM we expect running in a secure/confidential system
>>   b) (a) normally involves requesting some proof from the VM and then
>> providing it some confidential data/a key if it's OK
> 
> I guess I'm wondering what the threat we're protecting against is,
> and / or which pieces of the stack we can trust ?
> 
> eg, if the host has 2 VMs running, we verify the 1st and provide
> its confidential data back to the host, what stops the host giving
> that data to the 2nd non-verified VM ?

The host can't read the injected secret: it is encrypted with a key that
is available only to the PSP.  The PSP receives it and writes it into
guest-encrypted memory (which the host also cannot read; for the guest
it's a simple memory access with C-bit=1).  So it's a per-VM-invocation
secret.
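
For reference, here is a rough sketch of what that flow looks like with the
QMP commands that already exist (query-sev-launch-measure and
sev-inject-launch-secret). This is illustrative only: the socket path is a
placeholder, the guest is assumed to have been started paused (-S), and
wrap_secret_for_psp() stands in for the guest-owner tooling that actually
verifies the measurement and encrypts the secret for the PSP.

import base64
import json
import socket

QMP_SOCKET = "/tmp/qmp.sock"   # placeholder; wherever -qmp unix:...,server points

def wrap_secret_for_psp(measurement, secret):
    """Hypothetical helper: real guest-owner tooling verifies the launch
    measurement and wraps the secret per the SEV LAUNCH_SECRET flow
    (encrypted with the TEK, integrity-protected with the TIK), returning
    the base64 packet header and base64 secret blob."""
    raise NotImplementedError("plug in your guest-owner attestation tooling")

def qmp_command(f, execute, arguments=None):
    """Send one QMP command and return its 'return' payload."""
    cmd = {"execute": execute}
    if arguments:
        cmd["arguments"] = arguments
    f.write(json.dumps(cmd) + "\n")
    f.flush()
    while True:
        reply = json.loads(f.readline())
        if "return" in reply:
            return reply["return"]
        if "error" in reply:
            raise RuntimeError(reply["error"])
        # anything else is an asynchronous event; ignore it here

def main():
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(QMP_SOCKET)
    f = sock.makefile("rw")
    json.loads(f.readline())               # consume the QMP greeting
    qmp_command(f, "qmp_capabilities")     # negotiate capabilities

    # 1. Fetch the launch measurement the PSP produced for this boot.
    measure_b64 = qmp_command(f, "query-sev-launch-measure")["data"]

    # 2. Verify it and wrap the secret so that only the PSP can unwrap it.
    header_b64, secret_b64 = wrap_secret_for_psp(
        base64.b64decode(measure_b64), b"disk passphrase or similar")

    # 3. Inject the secret; the PSP writes it into guest-encrypted memory
    #    that the host cannot read.
    qmp_command(f, "sev-inject-launch-secret",
                {"packet-header": header_b64, "secret": secret_b64})

    # 4. Let the guest run.
    qmp_command(f, "cont")

if __name__ == "__main__":
    main()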


> 
> Presumably the data has to be encrypted with a key that is uniquely
> tied to this specific boot attempt of the verified VM, and not
> accessible to any other VM, or to future boots of this VM ?

Yes, that's the launch blob, which (if I recall correctly) the Guest Owner
should generate and give to the Cloud Provider so it can start a VM with it
(it is passed as one of the options on the sev-guest object).
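
Roughly, the Cloud Provider side then ends up invoking QEMU with something
like the following (just a sketch, shown as a Python argv list; the file
names are placeholders, cbitpos depends on the CPU, and the policy value is
whatever the Guest Owner chose):

# Sketch of the relevant QEMU arguments.  session.b64 is the
# guest-owner-generated launch blob, godh.b64 the guest owner's DH
# certificate; both names are placeholders.
qemu_args = [
    "qemu-system-x86_64",
    "-machine", "q35,confidential-guest-support=sev0",
    "-object",
    "sev-guest,id=sev0,cbitpos=47,reduced-phys-bits=1,policy=0x1,"
    "dh-cert-file=godh.b64,session-file=session.b64",
    "-S",   # start paused so the secret can be injected before the guest runs
]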

-Dov


> 
> 
>>   c) RATS splits the problem up:
>>     
>> https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
>>     I don't fully understand the split yet, but in principle there are
>> at least a few different things:
>>
>>   d) The comms layer
>>   e) Something that validates the attestation message (i.e. the
>> signatures are valid, the hashes all add up etc)
>>   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
>> 8.4 kernel, or that's a valid kernel command line)
>>   g) Something that holds some secrets that can be handed out if e & f
>> are happy.
>>
>>   There have also been proposals (e.g. Intel HTTPA) for an attestable
>> connection after a VM is running; that's probably quite different from
>> (g) but still involves (e) & (f).
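
(Just to make the split concrete for myself: a toy sketch of how (d)-(g)
could plug together. Every name below is made up for illustration; nothing
here is an existing API.)

class CommsLayer:                       # (d) transport to the guest/hypervisor
    def fetch_evidence(self):
        raise NotImplementedError       # e.g. QMP today, a guest agent for SNP/TDX

class Verifier:                         # (e) checks signatures, hashes add up
    def verify(self, evidence):
        raise NotImplementedError       # returns the measured values if valid

class ReferenceValues:                  # (f) knows which hashes to expect
    def is_expected(self, measured):
        raise NotImplementedError       # e.g. "that's a RHEL 8.4 kernel"

class SecretStore:                      # (g) hands out secrets if (e)+(f) pass
    def release(self):
        raise NotImplementedError

def attest_and_release(comms, verifier, refs, store):
    evidence = comms.fetch_evidence()       # over (d)
    measured = verifier.verify(evidence)    # (e); raises if signatures are bad
    if not refs.is_expected(measured):      # (f)
        raise PermissionError("unexpected measurements")
    return store.release()                  # (g); sent back to the guest via (d)
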
>>
>> In the simpler setups (d), (e), (f) and (g) probably live in one place;
>> but it's not clear where that is - for example one scenario says that
>> your cloud management layer holds some of them, another says you don't
>> trust your cloud management layer and you keep them separate.
> 
> Yep, again I'm wondering what the specific threats are that we're
> trying to mitigate. Whether we trust the cloud mgmt APIs, but don't
> trust the compute hosts, or whether we trust neither the cloud
> mgmt APIs nor the compute hosts.
> 
> If we don't trust the compute hosts, does that include the part
> of the cloud mgmt API that is running on the compute host, or
> does that just mean the execution environment of the VM, or something
> else?
> 
>> So I think all we're actually interested in at the moment is (d) and
>> (e), and the way for (g) to get the secret back to the guest.
>>
>> Unfortunately the comms and their contents vary heavily with the
>> technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
>> while in others you're talking to the guest after boot (SEV-SNP/TDX,
>> maybe SEV-ES in some cases).
>>
>> So my expectation at the moment is that libvirt needs to provide a transport
>> layer for the comms, to enable an external validator to retrieve the
>> measurements from the guest/hypervisor and provide data back if
>> necessary.  Once this shakes out a bit, we might want libvirt to be
>> able to invoke the validator; however I expect (f) and (g) to be much
>> more complex things that don't feel like they belong in libvirt.
> 
> Yep, I don't think (f) & (g) belong in libvirt, since libvirt is
> deployed per compute host, while (f) / (g) are something that is
> likely to be deployed on a separate trusted host, at least for
> data center / cloud deployments. Maybe there's a case where they
> can all be on the same host for more specialized use cases.
> 
> Regards,
> Daniel
> 
