Yeah, there is a way to do it today, but it really sucks for most users. The
task is complex enough that most users have gotten into the terrible habit of
ignoring the "this host's ssh key changed" warning and just blindly accepting
the change. I hate to say it this way, but because of the way things are done
today, OpenStack is training folks to ignore man-in-the-middle attacks. This
is not good. We shouldn't just shrug it off and say folks should be more
careful. We should try to make the edge less sharp, so they are less likely
to stab themselves and later give OpenStack a bad name because OpenStack was
involved.

(Yeah, I get that it is not exactly OpenStack's fault when people use it in
an unsafe manner. But still, if OpenStack can do something about it, everyone
involved would be better off.)

This is one thing I think k8s does really well. "kubectl exec <pod>" uses the
chain of trust built up all the way from the user to the pod. There isn't
anything manual the user has to do to secure the path. OpenStack could really
benefit from something similar for client-to-VM access.
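
For comparison, here's roughly what that looks like with the official
kubernetes Python client. The pod name and namespace are made up, but the
point is that the TLS trust anchors and user credentials all come from the
kubeconfig, with no host-key prompt anywhere:

    # Illustrative only: pod/namespace names are hypothetical.
    from kubernetes import client, config
    from kubernetes.stream import stream

    config.load_kube_config()      # CA cert + user creds from ~/.kube/config
    v1 = client.CoreV1Api()

    # Equivalent of "kubectl exec mypod -- hostname"; the session rides the
    # same authenticated, server-verified TLS channel as every other API call.
    out = stream(v1.connect_get_namespaced_pod_exec,
                 "mypod", "default",
                 command=["hostname"],
                 stderr=True, stdin=False, stdout=True, tty=False)
    print(out)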

Thanks,
Kevin
________________________________________
From: Clint Byrum [cl...@fewbar.com]
Sent: Friday, October 06, 2017 3:24 PM
To: openstack-dev
Subject: Re: [openstack-dev] Supporting SSH host certificates

Excerpts from Giuseppe de Candia's message of 2017-10-06 13:49:43 -0500:
> Hi Clint,
>
> Isn't user-data by definition available via the Metadata API, which isn't
> considered secure:
> https://wiki.openstack.org/wiki/OSSN/OSSN-0074
>

Correct! The thinking is to account for the MITM attack vector, not
host or instance security as a whole. One would hope the box comes up
in a mostly drone-like state until it can be hardened with a new secret
host key.
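
A minimal sketch of that flow, assuming a cloud-init based image and the
"cryptography" Python library (nothing here is an existing Nova API): the
client generates the host key, hands it over in user-data, and pins the
public half before first contact.

    # Sketch only: generate an instance host key locally, embed it in
    # cloud-init user-data, and pre-compute the known_hosts pin.
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.hazmat.primitives.serialization import (
        Encoding, NoEncryption, PrivateFormat, PublicFormat)

    key = ed25519.Ed25519PrivateKey.generate()
    priv = key.private_bytes(Encoding.PEM, PrivateFormat.OpenSSH,
                             NoEncryption()).decode()
    pub = key.public_key().public_bytes(Encoding.OpenSSH,
                                        PublicFormat.OpenSSH).decode()

    # cloud-init's documented ssh_keys config installs this as the initial
    # host key (remember: readable via the metadata API, hence "drone-like").
    user_data = "#cloud-config\nssh_keys:\n  ed25519_private: |\n"
    user_data += "".join("    %s\n" % l for l in priv.splitlines())
    user_data += "  ed25519_public: %s\n" % pub

    # The client-side pin; swap in the real address once the VM has one.
    print("<instance-ip> %s" % pub)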

> Or is there a way to specify that certain user-data should only be
> available via config-drive (and not metadata api)?
>
> Otherwise, the only difference I see compared to using the metadata API is
> that the process you describe is driven by the user rather than automated.
>
> Regarding the extra plumbing, I'm not trying to avoid it. I'm thinking of
> eventually tying this all into Keystone. For example, a project should have
> Host CA and User CA keys. Let's assume OpenStack manages these for now;
> later we can consider OpenStack simply proxying signature requests and
> vouching that a public key does actually belong to a specific instance (and
> host-name) or Keystone user. So what I think should happen is: when a
> project is enabled for SSHaaS support, any VM instance automatically gets a
> host certificate and authorized-principals files based on Keystone roles
> for the project, and users can call an API (or Dashboard form) to get a
> public key signed (and assigned appropriate SSH principals).
>

Fascinating, but it's hard for me to get excited about this when I can
just handle MITM security myself.
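
(For reference, the host-certificate half of this is already doable by hand
with stock OpenSSH; a rough sketch, where the CA path, identity, and
hostnames are all made up:)

    # Rough manual equivalent using stock ssh-keygen; all names hypothetical.
    import subprocess

    # One-time: create the host CA keypair (host_ca, host_ca.pub).
    subprocess.run(["ssh-keygen", "-t", "ed25519", "-f", "host_ca",
                    "-N", "", "-C", "example host CA"], check=True)

    # Sign the instance's host public key; this writes
    # ssh_host_ed25519_key-cert.pub alongside the input.
    subprocess.run(["ssh-keygen", "-s", "host_ca",     # CA signing key
                    "-I", "instance-1234",             # certificate identity
                    "-h",                              # host, not user, cert
                    "-n", "vm1.example.com",           # allowed principals
                    "-V", "+52w",                      # validity period
                    "ssh_host_ed25519_key.pub"], check=True)

    # Clients trust the CA once instead of pinning each host:
    with open("host_ca.pub") as f:
        print("@cert-authority *.example.com " + f.read().strip())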

Note that the other existing techniques are simpler too. Most instances
will print the public host key to the console. The API offers console
access, so it can be scraped for the host key.
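
Something like this, say, with openstacksdk; the cloud and server names are
placeholders, and the BEGIN/END markers are what cloud-init based images
print on first boot:

    # Sketch: scrape the console log for the host keys cloud-init prints.
    import re
    import openstack

    conn = openstack.connect(cloud="mycloud")          # placeholder cloud
    server = conn.compute.find_server("my-instance")   # placeholder name
    log = conn.compute.get_server_console_output(server)["output"]

    m = re.search(r"-----BEGIN SSH HOST KEY KEYS-----\n(.*?)"
                  r"\n-----END SSH HOST KEY KEYS-----", log, re.S)
    if m:
        addr = "203.0.113.10"   # the instance's address, however you find it
        for line in m.group(1).splitlines():
            keytype, blob = line.split()[:2]
            print("%s %s %s" % (addr, keytype, blob))  # known_hosts entries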

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
