GitHub user tuanhoangth1603 added a comment to the discussion: Failed to create 
RBD storage pool after KVM agent upgrade from 4.20 to 4.22: 
"org.libvirt.LibvirtException: failed to create the RBD IoCTX"

@weizhouapache Today I ran into a strange problem that seems related to this issue.
I had to correct a wrong slave interface name inside a bond. After editing the 
netplan YAML and running `netplan apply`, the bond came up correctly and the 
network was fully restored, and then I restarted the cloudstack-agent.
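For reference, the sequence was roughly the following (the netplan file name and the bond/interface name are placeholders, not my actual configuration):
```
# Fix the wrong slave interface name in the bond definition
# (file name and interface name below are placeholders)
sudo vi /etc/netplan/01-bond.yaml

# Re-apply the network configuration; the bond came back up fine
sudo netplan apply

# Check the bond and its slaves before touching the agent
ip -d link show bond0
cat /proc/net/bonding/bond0

# Restart the CloudStack KVM agent
sudo systemctl restart cloudstack-agent
```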
However, immediately after that, CloudStack permanently lost the Ceph RBD 
primary storage mount with this repeated agent error:
```
Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to 
create the RBD IoCTX. Does the pool 'cloudstack-prod' exist?: No such file or 
directory
```
and the output of `virsh pool-list --all` showed the line:
```
error: Could not retrieve pool information
```
Meanwhile, the other agents are still connected normally, so I think the issue is 
not on the Ceph side.
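For anyone who wants to reproduce the check, something like this can confirm whether the Ceph side is reachable from the affected host (the pool name is taken from the error above; it assumes the host has a working Ceph client config and keyring):
```
# Cluster health as seen from this KVM host
ceph -s

# Check that the pool named in the error is reachable and listable
rbd ls -p cloudstack-prod | head

# Libvirt's own view of the storage pools on this host
virsh pool-list --all
virsh pool-info cloudstack-prod
```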
Interestingly, if I reboot the entire host, the problem disappears and the RBD 
pool is mounted normally again.
Could a simple `netplan apply` (which only briefly interrupts the network) 
permanently break libvirt's ability to create the RBD pool, while a full host 
reboot fixes it?
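A lighter experiment than a full reboot, to narrow down whether the stale state lives in libvirt rather than in the host network stack, might look roughly like this (just a sketch using the usual Ubuntu service names, not a confirmed fix):
```
# Restart only libvirtd instead of rebooting the whole host
sudo systemctl restart libvirtd

# See whether the RBD pool can be started again afterwards
virsh pool-list --all
virsh pool-start cloudstack-prod   # pool name taken from the error message

# Then restart the agent so it reconnects and sets up the storage pool again
sudo systemctl restart cloudstack-agent
```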

GitHub link: 
https://github.com/apache/cloudstack/discussions/12154#discussioncomment-15095582
