And k8s has the benefit of already having been installed with certs that
had to get there somehow... through a trust bootstrap, usually SSH. ;)

Excerpts from Fox, Kevin M's message of 2017-10-09 17:37:17 +0000:
> Yeah, there is a way to do it today. It really sucks for most users, though. 
> Due to the complexity of the task, most users have just gotten into the 
> terrible habit of ignoring the "this host's ssh key changed" warning and 
> blindly accepting the change. I kind of hate to say it this way, but 
> because of the way things are done today, OpenStack is training folks to 
> ignore man-in-the-middle attacks. This is not good. We shouldn't just shrug 
> it off and say folks should be more careful. We should try to make the edge 
> less sharp so they are less likely to stab themselves and, later, give 
> OpenStack a bad name because OpenStack was involved.
> 

I agree that we could do better.

I think there _is_ a standardized method, which is to print the host
public keys to the console and scrape them out of the console log on
first access.
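To illustrate the scraping step: on many images, cloud-init echoes the
instance's SSH host public keys to the console between marker lines, and a
client can fetch the console log (e.g. `openstack console log show`) and
turn those lines into known_hosts entries before the first SSH connection.
A minimal sketch; the exact marker text is an assumption based on typical
cloud-init output, so adjust it to match your image:

```python
# Markers cloud-init commonly uses when echoing host keys to the console.
# NOTE: the exact marker text is an assumption; verify against your image.
BEGIN = "-----BEGIN SSH HOST KEY KEYS-----"
END = "-----END SSH HOST KEY KEYS-----"


def host_keys_from_console(console_log: str, address: str) -> list:
    """Extract public host keys from a console log and format them as
    known_hosts lines for the given address."""
    lines = console_log.splitlines()
    try:
        start = lines.index(BEGIN) + 1
        stop = lines.index(END, start)
    except ValueError:
        return []  # markers not found in this console log
    entries = []
    for line in lines[start:stop]:
        line = line.strip()
        # A public key line looks like: "<type> <base64-blob> [comment]"
        parts = line.split()
        if len(parts) >= 2 and parts[0].startswith(("ssh-", "ecdsa-")):
            entries.append("{} {} {}".format(address, parts[0], parts[1]))
    return entries
```

The resulting lines can be appended to `~/.ssh/known_hosts`, so the first
connection is verified against keys obtained out of band through the
(already authenticated) cloud API rather than trusted blindly.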

> (Yeah, I get it is not exactly OpenStack's fault that they use it in an 
> unsafe manner. But still, if OpenStack can do something about it, it would be 
> better for everyone involved)
> 

We could still do better, though: we could have an API for that.
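Hypothetically, such an API might expose host keys as a server sub-resource,
e.g. GET /servers/{server_id}/host-keys. Nothing like this exists in the
Compute API today; the endpoint and response shape below are invented purely
for illustration of how a client could populate known_hosts from a trusted
API path instead of scraping the console:

```python
# Invented response shape for a hypothetical endpoint such as
#   GET /servers/{server_id}/host-keys
# This is a sketch of the idea, not an existing Nova API.
def known_hosts_entries(response: dict, address: str) -> list:
    """Turn a hypothetical host-key API response into known_hosts lines."""
    return [
        "{} {} {}".format(address, k["type"], k["key"])
        for k in response.get("host_keys", [])
    ]
```

The point is the same as the console-scraping approach: the user's existing
authenticated session with the cloud API becomes the trust anchor for the
SSH connection, with no manual fingerprint checking.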

> This is one thing I think k8s is doing really well. kubectl exec <pod>   uses 
> the chain of trust built up from user all the way to the pod. There isn't 
> anything manual the user has to do to secure the path. OpenStack really could 
> benefit from something similar for client to vm.
> 

This is an unfair comparison. k8s runs in the user's space and, as such,
rides on the bootstrap trust of whatever was used to install it.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
