Good news: while I was writing the previous message I found the solution to
recover my VMs:
ceph osd tier remove cold-storage
I've been thinking about how it could have caused what happened, but I still
do not understand why the overlay option behaves so strangely.
I know that the overlay option sets the overlay
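For reference, the sequence below is a sketch of how a cache tier is normally detached, based on the standard `ceph osd tier` subcommands: disable the cache mode, remove the overlay so clients go back to the base pool, and only then remove the tier. The pool name `nova_cache` is an assumption for illustration (only `nova_pool` appears in this thread), and the commands are echoed as a dry run; drop the `run` wrapper to execute them against a real cluster.

```shell
# Dry-run wrapper: prints each command instead of executing it.
# Remove the 'echo' (or call ceph directly) on a real cluster.
run() { echo "$@"; }

# 1. Stop the cache tier from handling client I/O
#    (for a writeback cache, flush/evict dirty objects first with
#    'rados -p nova_cache cache-flush-evict-all').
run ceph osd tier cache-mode nova_cache none

# 2. Remove the overlay so client requests hit the base pool again.
#    Skipping this step is a likely cause of the "not a bootable disk"
#    symptom described below.
run ceph osd tier remove-overlay nova_pool

# 3. Detach the cache tier from the base pool.
run ceph osd tier remove nova_pool nova_cache
```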
Hi, last night I had the same issue on Hammer LTS.
I think this is a Ceph bug.
My setup:
Ceph version: 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
Distro: Debian 7 (Proxmox 3.4)
Kernel: 2.6.32-39-pve
We have 9x 6TB SAS drives in the main pool and 6x 128GB PCIe SSDs in the cache
pool on 3
Hi,
I have a Ceph cluster as the Nova backend storage, and I enabled a cache tier
with readonly cache-mode for the nova_pool. Now the Nova instances cannot boot
after removing the nova_pool cache tier.
The instances show the error "boot failed: not a bootable disk".
I used the below command to