Have you checked this thread:
https://lists.ovirt.org/pipermail/users/2016-April/039277.html

You can switch to the postgres user, then 'source /opt/rhn/postgresql10/enable' and 
then 'psql engine'.

As per the thread, you can find illegal snapshots via 'select 
image_group_id, imagestatus from images where imagestatus = 4;'

And then update them via 'update images set imagestatus = 1 where imagestatus = 
4 and <other criteria>; commit;'
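Putting those steps together, a rough sketch of the session might look like the 
following. This is only an illustration of the commands above, not a tested 
procedure: the SCL path comes from the thread and may differ on your host, and 
you should take an engine DB backup (e.g. with engine-backup) before touching 
anything.

```shell
# Sketch only -- run as root on the engine host, after backing up the DB.
# The /opt/rhn/postgresql10/enable path is the one from the thread; adjust
# it if your PostgreSQL SCL lives elsewhere.
su - postgres -c '
source /opt/rhn/postgresql10/enable
psql engine <<SQL
-- List disks stuck in ILLEGAL state (imagestatus = 4)
select image_group_id, imagestatus from images where imagestatus = 4;
-- Once you have identified the affected disks, mark them OK again.
-- Narrow the WHERE clause to the specific disks before running:
-- update images set imagestatus = 1
--   where imagestatus = 4 and <other criteria>;
-- commit;
SQL
'
```

The update is left commented out on purpose: run the select first, verify the 
image_group_id values belong to the broken VMs, and only then uncomment and 
constrain the update.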

Best Regards,
Strahil Nikolov


On Oct 13, 2019 15:45, Leo David <[email protected]> wrote:
>
> Hi Everyone,
> I'm still not able to start the VMs... Could anyone give me advice on 
> sorting this out?
> I'm still getting the "Bad volume specification" error, although the disk is 
> present on the storage.
> This issue would force me to reinstall a 10-node OpenShift cluster from 
> scratch, which would not be so funny..
> Thanks,
>
> Leo.
>
> On Fri, Oct 11, 2019 at 7:12 AM Strahil <[email protected]> wrote:
>>
>> Nah...
>> It's done directly on the DB, and I wouldn't recommend such an action on a 
>> production cluster.
>> I've done it only once, and it was based on some old mailing lists.
>>
>> Maybe someone from the dev can assist?
>>
>> On Oct 10, 2019 13:31, Leo David <[email protected]> wrote:
>>>
>>> Thank you Strahil,
>>> Could you tell me what do you mean by changing status ? Is this something 
>>> to be done in the UI ?
>>>
>>> Thanks,
>>>
>>> Leo
>>>
>>> On Thu, Oct 10, 2019, 09:55 Strahil <[email protected]> wrote:
>>>>
>>>> Maybe you can change the status of the VM so that the engine knows it has 
>>>> to blockcommit the snapshots.
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>> On Oct 9, 2019 09:02, Leo David <[email protected]> wrote:
>>>>>
>>>>> Hi Everyone,
>>>>> Please let me know if you have any thoughts or recommendations that could 
>>>>> help me solve this issue..
>>>>> The real bad luck in this outage is that these 5 VMs are part of an 
>>>>> OpenShift deployment, and now we are not able to start it up...
>>>>> Before trying to sort this out at the OCP platform level by replacing the 
>>>>> failed nodes with new VMs, I would rather prefer to fix it at the oVirt 
>>>>> level and have the VMs starting, since the disks are still present on Gluster.
>>>>> Thank you so much !
>>>>>
>>>>>
>>>>> Leo
>
>
>
> -- 
> Best regards, Leo David
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/VIGEIWX7VOZGLRFSWKHVSA3PPHZ3DBNT/