Tempest has the TestEncryptedCinderVolumes scenario test [1] which creates an encrypted volume type, creates a volume from that volume type, boots a server instance and then attaches/detaches the 'encrypted' volume to/from the server instance.
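
For context, the flow of that scenario is roughly the following (a paraphrased sketch of [1], not the actual tempest code; the client helper names are hypothetical stand-ins):

    # Paraphrased flow of the scenario in [1]; these client helper names are
    # hypothetical stand-ins, not the real tempest client methods.
    def encrypted_volume_scenario(volumes_client, servers_client):
        vtype = volumes_client.create_encrypted_volume_type()  # e.g. a LUKS provider
        volume = volumes_client.create_volume(volume_type=vtype)
        server = servers_client.boot_server()
        # attach triggers os-initialize_connection in cinder and, in nova,
        # the encryption provider for the volume type
        servers_client.attach_volume(server, volume)
        servers_client.detach_volume(server, volume)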

This works fine in the integrated gate because LVM is used as the backend and the encryption providers used in the test are implemented in nova to work with the libvirt iSCSI volume driver - that driver sets the 'device_path' key in the connection_info['data'] dict, which is what the encryption provider code checks.
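
In other words, for the LVM/iSCSI case the connection info handed back to nova looks roughly like this (an illustrative example with made-up values, not copied from a real run):

    # Illustrative connection_info for an iSCSI attach; values are made up.
    connection_info = {
        'driver_volume_type': 'iscsi',
        'data': {
            'device_path': '/dev/disk/by-path/ip-10.0.0.5:3260-iscsi-example-lun-1',
            'encrypted': True,  # see the next paragraph for where this comes from
            # target portal/IQN/auth details omitted
        },
    }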

The calls to the encryption providers in nova during volume attach are based on whether or not the 'encrypted' key is set in connection_info['data'] returned from the os-initialize_connection cinder API. In the case of iscsi and several other volume drivers in cinder, this key is set to True if the volume's 'encryption_key_id' field is set in the volume object.
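
A minimal sketch of that driver-side behavior, assuming a volume object that exposes encryption_key_id (names simplified, not the real cinder driver code):

    # Simplified sketch of a cinder driver's initialize_connection(); not the
    # real driver code, just the 'encrypted' flag behavior described above.
    def initialize_connection(volume, connector):
        conn_info = {
            'driver_volume_type': 'iscsi',
            'data': {},  # target details built from the connector omitted
        }
        # Volumes created from an encrypted type carry an encryption_key_id,
        # and that is what flips the flag nova keys off of.
        conn_info['data']['encrypted'] = volume.encryption_key_id is not None
        return conn_info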

It was noticed that the encrypted volume tests were passing in the ceph job even though the libvirt volume driver in nova wasn't setting the device_path key, so no encryption is actually being done on the attached volume - but the test wasn't failing, so it's a big false positive.

Upon further inspection, it is passing because it isn't doing anything, and it isn't doing anything because the rbd volume driver in cinder isn't setting the 'encrypted' key in connection_info['data'] in its initialize_connection() method.

So we got to this cinder change [2], which originally just set the encrypted key for the rbd volume driver, until it was pointed out that we should set that key globally in the volume manager if the volume driver isn't setting it - and that's what the latest version of the change does.
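
The idea in the latest revision of [2] is roughly the following (a sketch of the approach, not the actual patch):

    # Rough sketch of the volume-manager-level fallback in [2]; not the
    # actual cinder patch.
    def initialize_connection(driver, volume, connector):
        conn_info = driver.initialize_connection(volume, connector)
        data = conn_info.setdefault('data', {})
        if 'encrypted' not in data:
            # The driver didn't say either way, so decide centrally from the
            # volume's encryption_key_id.
            data['encrypted'] = volume.encryption_key_id is not None
        return conn_info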

The check-tempest-dsvm-full-ceph job is passing on that change because of a series of dependent changes [3]. Basically, a config option is needed in tempest to tell it whether or not to run the TestEncryptedCinderVolumes tests; this defaults to True for backwards compatibility. Then there is a devstack change to set the flag in tempest.conf based on an environment variable passed to devstack. Then there is a devstack-gate change to set that flag to False for the Ceph job. Finally, the cinder change depends on the devstack-gate change so everything is in order and it doesn't blow up after marking the rbd volume connection as encrypted - which would fail if we didn't skip the test.
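
On the tempest side that boils down to a feature flag the test checks before running; a minimal self-contained sketch of that check (the flag name and FakeConf are just shorthand for whatever the tempest change in [3] adds):

    # Minimal sketch of a feature-flag skip; 'attach_encrypted_volume' is
    # shorthand for whatever option the tempest change in [3] adds, and
    # FakeConf stands in for tempest's real CONF object.
    import unittest

    class FakeConf(object):
        attach_encrypted_volume = True  # defaults to True for backwards compat

    CONF = FakeConf()

    class TestEncryptedCinderVolumes(unittest.TestCase):
        def setUp(self):
            super(TestEncryptedCinderVolumes, self).setUp()
            if not CONF.attach_encrypted_volume:
                self.skipTest('attaching encrypted volumes is not supported '
                              'by this backend')

        def test_attach_detach_encrypted_volume(self):
            pass  # create encrypted type, create volume, boot, attach/detach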

Now the issue is going to be that there are lots of other volume drivers in cinder that will get this 'encrypted' key set to True, which is going to blow up without support in nova for encrypting the volume during attach.

The glusterfs and sheepdog jobs are actually failing on that patch for different reasons, but we expect third party CI to fail if they don't configure tempest by setting TEMPEST_ATTACH_ENCRYPTED_VOLUME=False in their devstack run.

So the question is, is everyone OK with this and ready to make that change?

An alternative that avoids the explosion: when nova detects that it should use an encryption provider but the 'device_path' key isn't set in connection_info, it could fall back to the noop encryption provider and just ignore it. But that's putting our heads in the sand, and the test keeps passing with a false positive - you're not actually getting encrypted volumes attached to your server instances, which is the point of the test.
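
To spell out why that's a false positive, the fallback would behave something like this (the encryptor classes below are minimal stand-ins for nova's encryption providers, not the real ones):

    # Stand-ins for nova's encryption providers, only to illustrate why the
    # noop fallback hides the problem; not nova's actual classes.
    class NoOpEncryptor(object):
        def attach_volume(self, context):
            pass  # does nothing, so the volume is attached unencrypted

    class LuksStyleEncryptor(object):
        def __init__(self, connection_info):
            # the real provider sets up dm-crypt on this device
            self.device_path = connection_info['data']['device_path']

        def attach_volume(self, context):
            pass  # placeholder for the real crypto setup

    def get_encryptor(connection_info):
        # only reached when connection_info['data'].get('encrypted') is True
        data = connection_info.get('data', {})
        if 'device_path' not in data:
            # The driver says the volume is encrypted but gave nova no device
            # to encrypt on; silently doing nothing keeps the test green.
            return NoOpEncryptor()
        return LuksStyleEncryptor(connection_info)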

I'll get this on the cinder meeting agenda for next week for discussion before the cinder change is approved, unless we come up with other alternatives - like a 'supports_encryption' capability flag in cinder (or something like that) which could tell the cinder API, during a request to create a volume from an encrypted volume type, that the volume driver doesn't support it, so the request fails with a 400. That'd be an API change, but it might be acceptable given the API is pretty much broken today already.
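
For completeness, that check would conceptually sit in the volume create path and look something along these lines ('supports_encryption' is just the name floated in this thread, not an existing cinder capability):

    # Strawman for the capability-flag alternative; 'supports_encryption' is
    # just the name floated in this thread, not an existing cinder interface.
    def validate_encrypted_volume_create(backend_capabilities, volume_type):
        wants_encryption = volume_type.get('encryption') is not None
        if wants_encryption and not backend_capabilities.get('supports_encryption'):
            # The API layer would turn this into a 400 Bad Request.
            raise ValueError('backend does not support encrypted volume types')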

[1] http://git.openstack.org/cgit/openstack/tempest/tree/tempest/scenario/test_encrypted_cinder_volumes.py
[2] https://review.openstack.org/#/c/193673/
[3] https://review.openstack.org/#/q/status:open+branch:master+topic:bug/1463525,n,z

--

Thanks,

Matt Riedemann

