A bionic-backports test build is available in:

  https://launchpad.net/~james-page/+archive/ubuntu/bionic

** Description changed:

- In a scenario where vaultlocker decrypted the device but it ended up not
- being mounted (either due to bug
- https://bugs.launchpad.net/vaultlocker/+bug/1838607 or some other reason
- that I am still investigating), the system is in a state where both
- "var-lib-nova-instances.mount" and "vaultlocker-decrypt@4fe5e652-ca6b-
- 4c14-aa5f-1085714cf3f1.service" are either failed/failed or
- inactive/dead.
+ [Impact]
+ Restarting a vaultlocker-decrypt systemd unit after it has successfully
+ executed results in a hanging vaultlocker process.
+ 
+ [Test Case]
+ Encrypt a block device using vaultlocker.
+ Restart the systemd unit associated with the device.
+ 
+ [Regression Potential]
+ 
+ 
+ [Original Bug Report]
+ In a scenario where vaultlocker decrypted the device but it ended up not
+ being mounted (either due to bug
+ https://bugs.launchpad.net/vaultlocker/+bug/1838607 or some other reason
+ that I am still investigating), the system is in a state where both
+ "var-lib-nova-instances.mount" and "vaultlocker-decrypt@4fe5e652-ca6b-
+ 4c14-aa5f-1085714cf3f1.service" are either failed/failed or
+ inactive/dead.
  
  To cleanly mount the device, I attempted to:
  
  - Restart vaultlocker-decrypt@4fe5e652-ca6b-4c14-aa5f-
  1085714cf3f1.service. This causes the service to become stuck in the
  "activating" state, continuously printing:
  
  Feb 12 18:20:16 juju-cc9161-bionic-queens-vault-6 sh[30272]: DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 10.5.0.138
  Feb 12 18:20:16 juju-cc9161-bionic-queens-vault-6 sh[30272]: DEBUG:urllib3.connectionpool:http://10.5.0.138:8200 "POST /v1/auth/approle/login HTTP/1.1" 200 512
  Feb 12 18:20:16 juju-cc9161-bionic-queens-vault-6 sh[30272]: DEBUG:urllib3.connectionpool:http://10.5.0.138:8200 "GET /v1/charm-vaultlocker/juju-cc9161-bionic-queens-vault-6/4fe5e652-ca6b-4c14-aa5f-1085714cf3f1 HTTP/1.1" 200 866
  Feb 12 18:20:16 juju-cc9161-bionic-queens-vault-6 sh[30272]: INFO:vaultlocker.dmcrypt:LUKS opening 4fe5e652-ca6b-4c14-aa5f-1085714cf3f1
  Feb 12 18:20:16 juju-cc9161-bionic-queens-vault-6 sh[30272]: Device crypt-4fe5e652-ca6b-4c14-aa5f-1085714cf3f1 already exists.
  
  I stopped the service and killed the two processes it spawns in order to
  stop vaultlocker.
  
  - Restart "var-lib-nova-instances.mount". This caused the same behavior
  as above because of systemd.requires dependency of the fstab mount
  entry. Since vaultlocker-decrypt service never transitions to "active",
  the dependency is never satisfied, so it never tries to mount.
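  
  For reference, the fstab entry behind "var-lib-nova-instances.mount"
  typically looks like the line below. This is a sketch based on the unit
  names in this report, not a line copied from the affected system, and
  the filesystem type is an assumption:
  
  $ grep x-systemd.requires /etc/fstab
  /dev/mapper/crypt-4fe5e652-ca6b-4c14-aa5f-1085714cf3f1 /var/lib/nova/instances ext4 defaults,x-systemd.requires=vaultlocker-decrypt@4fe5e652-ca6b-4c14-aa5f-1085714cf3f1.service 0 0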
  
  If vaultlocker were patched to transition directly to "active" (and
  then to inactive/dead), restarting either unit would *only* mount the
  already-decrypted device in this scenario.
  
  The workaround that worked for me was to run "cryptsetup luksClose
  /dev/mapper/crypt-4fe5e652-ca6b-4c14-aa5f-1085714cf3f1" and then
  restart vaultlocker-decrypt@4fe5e652-ca6b-4c14-aa5f-
  1085714cf3f1.service, as sketched after this diff.
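
Putting the [Test Case] steps and the workaround together as one shell
session; this is a sketch, where the device path is a placeholder and
<uuid> stands for the UUID vaultlocker prints when encrypting:

  # encrypt a block device; vaultlocker stores the key in Vault and
  # opens the device as /dev/mapper/crypt-<uuid>
  vaultlocker encrypt /dev/vdb

  # restarting the unit after a successful first run reproduces the
  # hang: vaultlocker retries luksOpen on a mapping that already exists
  systemctl restart vaultlocker-decrypt@<uuid>.service

  # workaround: close the existing mapping, then restart the unit
  cryptsetup luksClose /dev/mapper/crypt-<uuid>
  systemctl restart vaultlocker-decrypt@<uuid>.service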

** Description changed:

  [Impact]
  Restarting a vaultlocker-decrypt systemd unit after it has successfully
  executed results in a hanging vaultlocker process.
  
  [Test Case]
  Encrypt a block device using vaultlocker.
  Restart the systemd unit associated with the device.
  
  [Regression Potential]
- 
+ Low - the change simply skips dmcrypt opening of the block device if
+ it is already open on the local system.
  
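The fix referenced under [Regression Potential] lives in vaultlocker's
Python dmcrypt code; as a rough shell equivalent of the guard it adds
(an illustration of the shape of the change, not the actual patch):

  # skip the luksOpen when the mapping is already present
  if [ -e "/dev/mapper/crypt-${UUID}" ]; then
      echo "crypt-${UUID} already exists, skipping luksOpen"
  else
      # key is supplied on stdin by vaultlocker in the real code path
      cryptsetup luksOpen --key-file - \
          "/dev/disk/by-uuid/${UUID}" "crypt-${UUID}"
  fi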

https://bugs.launchpad.net/bugs/1863014

Title:
  skip trying to decrypt device if it already exists

