> On 10 May 2016, at 16:20, Luciano Natale <[email protected]> wrote:
>
> Ok, here are the relevant vdsm logs!
it seems to be indeed a storage-related problem. I could only find that the
export task failed due to:
106774fb-1093-495a-a996-48f48e066a6f::ERROR::2016-05-07 18:04:15,998::blockVolume::429::Storage.Volume::(validateImagePath) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/blockVolume.py", line 427, in validateImagePath
    os.mkdir(imageDir, 0o755)
OSError: [Errno 17] File exists: '/rhev/data-center/58aa23b5-9680-4ff2-991c-8f8952cfa13c/2a1cc76e-9ba2-4586-a612-049894467470/images/63e58057-00fd-4597-a258-64558d45c155'
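For anyone reading along: the traceback boils down to os.mkdir hitting a directory that already exists, e.g. one left behind by an earlier failed export. A minimal standalone reproduction of that failure mode (the paths below are made up for illustration, not the real /rhev tree):

```python
import errno
import os
import tempfile

# Simulate a leftover image directory from a previous failed export
# (hypothetical path, not the real /rhev layout).
image_dir = os.path.join(tempfile.mkdtemp(), "images", "some-image-uuid")
os.makedirs(image_dir)

# This is essentially what validateImagePath does, and why it fails:
try:
    os.mkdir(image_dir, 0o755)
except OSError as e:
    # errno 17 == EEXIST: "File exists"
    assert e.errno == errno.EEXIST
    print("mkdir failed: directory already exists")

# A tolerant variant would treat an existing directory as success:
os.makedirs(image_dir, 0o755, exist_ok=True)  # no error on Python 3.2+
```

This only illustrates the error; whether vdsm should tolerate the existing directory, or whether the leftover directory should be cleaned up, is for the storage folks to say.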
Tal?
Thanks,
michal
>
> Thanks!
> Luciano.
>
> On Tue, May 10, 2016 at 5:46 AM, Michal Skrivanek
> <[email protected] <mailto:[email protected]>> wrote:
>
>> On 09 May 2016, at 23:09, Luciano Natale <[email protected]> wrote:
>>
>> Ok, I've filtered out Saturday. At midnight the backup starts
>> automatically with a custom-made script. On that run, I had trouble with the vm
>> named "operaciones-ad". Then I started working on the problem (among other
>> things, I added a new backup storage domain) and you can see around 8 PM that
>> another VM failed the backup, this one called "biblioteca". Other VMs I
>> remember failing were "NS1" and "NS2".
>
> Tal, can someone take a look and investigate why the task is failing?
> engine.log failure excerpt below
>
> Luciano, I suppose vdsm.log from that time would help further
>
> Thanks,
> michal
>
> 2016-05-07 18:04:52,386 INFO [org.ovirt.engine.core.bll.ExportVmCommand] (org.ovirt.thread.pool-8-thread-15) [7a94a972] Running command: ExportVmCommand internal: false. Entities affected : ID: c9bddba0-c553-4a90-b97c-bdc1e88a333e Type: StorageAction group IMPORT_EXPORT_VM with role type ADMIN
> 2016-05-07 18:04:52,388 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (org.ovirt.thread.pool-8-thread-15) [7a94a972] START, SetVmStatusVDSCommand( vmId = 59e1be99-c37e-431d-9b4c-1d039fd667a7, status = ImageLocked, exit status = Normal), log id: 7261d03c
> 2016-05-07 18:04:52,392 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (org.ovirt.thread.pool-8-thread-15) [7a94a972] FINISH, SetVmStatusVDSCommand, log id: 7261d03c
> 2016-05-07 18:04:52,453 INFO [org.ovirt.engine.core.bll.ExportVmCommand] (org.ovirt.thread.pool-8-thread-15) [7a94a972] Lock freed to object EngineLock [exclusiveLocks = key: 59e1be99-c37e-431d-9b4c-1d039fd667a7 value: VM
> 2016-05-07 18:04:52,465 INFO [org.ovirt.engine.core.bll.CopyImageGroupCommand] (org.ovirt.thread.pool-8-thread-15) [6f66db4a] Running command: CopyImageGroupCommand internal: true. Entities affected : ID: c9bddba0-c553-4a90-b97c-bdc1e88a333e Type: Storage
> 2016-05-07 18:04:52,745 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand] (org.ovirt.thread.pool-8-thread-15) [6f66db4a] START, MoveImageGroupVDSCommand( storagePoolId = 58aa23b5-9680-4ff2-991c-8f8952cfa13c, ignoreFailoverLimit = false, storageDomainId = 2a1cc76e-9ba2-4586-a612-049894467470, imageGroupId = 63e58057-00fd-4597-a258-64558d45c155, dstDomainId = c9bddba0-c553-4a90-b97c-bdc1e88a333e, vmId = 59e1be99-c37e-431d-9b4c-1d039fd667a7, op = Copy, postZero = false, force = true), log id: 781cc2f3
> 2016-05-07 18:04:53,135 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand] (org.ovirt.thread.pool-8-thread-15) [6f66db4a] FINISH, MoveImageGroupVDSCommand, log id: 781cc2f3
> 2016-05-07 18:04:53,233 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-15) [6f66db4a] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 1711f190-c188-419c-acc2-25edc8b8d1cf
> 2016-05-07 18:04:53,234 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (org.ovirt.thread.pool-8-thread-15) [6f66db4a] CommandMultiAsyncTasks::AttachTask: Attaching task 106774fb-1093-495a-a996-48f48e066a6f to command 1711f190-c188-419c-acc2-25edc8b8d1cf.
> 2016-05-07 18:04:53,421 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (org.ovirt.thread.pool-8-thread-15) [6f66db4a] Adding task 106774fb-1093-495a-a996-48f48e066a6f (Parent Command ExportVm, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet.
> 2016-05-07 18:04:54,164 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-15) [6f66db4a] Correlation ID: 7a94a972, Job ID: a312c127-45ef-44b4-8bee-b36fe7f251ee, Call Stack: null, Custom Event ID: -1, Message: Starting export Vm operaciones-ad to vms-backups
> 2016-05-07 18:04:54,166 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-15) [6f66db4a] BaseAsyncTask::startPollingTask: Starting to poll task 106774fb-1093-495a-a996-48f48e066a6f.
> 2016-05-07 18:04:54,945 WARN [org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistributionBalancePolicyUnit] (DefaultQuartzScheduler_Worker-53) [662bbf2a] There is no host with less than 4 running guests
> 2016-05-07 18:04:54,946 WARN [org.ovirt.engine.core.bll.scheduling.PolicyUnitImpl] (DefaultQuartzScheduler_Worker-53) [662bbf2a] All hosts are over-utilized, cant balance the cluster main
> 2016-05-07 18:05:00,647 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler_Worker-90) [3b5cd3b6] Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
> 2016-05-07 18:05:00,656 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-90) [3b5cd3b6] Failed in HSMGetAllTasksStatusesVDS method
> 2016-05-07 18:05:00,657 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler_Worker-90) [3b5cd3b6] SPMAsyncTask::PollTask: Polling task 106774fb-1093-495a-a996-48f48e066a6f (Parent Command ExportVm, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'.
> 2016-05-07 18:05:00,709 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler_Worker-90) [3b5cd3b6] BaseAsyncTask::logEndTaskFailure: Task 106774fb-1093-495a-a996-48f48e066a6f (Parent Command ExportVm, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
> 2016-05-07 18:05:00,712 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler_Worker-90) [3b5cd3b6] CommandAsyncTask::endActionIfNecessary: All tasks of command 1711f190-c188-419c-acc2-25edc8b8d1cf has ended -> executing endAction
> 2016-05-07 18:05:00,714 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler_Worker-90) [3b5cd3b6] CommandAsyncTask::endAction: Ending action for 1 tasks (command ID: 1711f190-c188-419c-acc2-25edc8b8d1cf): calling endAction .
> 2016-05-07 18:05:00,716 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-30) [3b5cd3b6] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction ExportVm, executionIndex: 0
> 2016-05-07 18:05:00,820 ERROR [org.ovirt.engine.core.bll.ExportVmCommand] (org.ovirt.thread.pool-8-thread-30) [3b5cd3b6] Ending command with failure: org.ovirt.engine.core.bll.ExportVmCommand
> 2016-05-07 18:05:00,876 ERROR [org.ovirt.engine.core.bll.CopyImageGroupCommand] (org.ovirt.thread.pool-8-thread-30) [6f66db4a] Ending command with failure: org.ovirt.engine.core.bll.CopyImageGroupCommand
> 2016-05-07 18:05:00,881 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (org.ovirt.thread.pool-8-thread-30) START, SetVmStatusVDSCommand( vmId = 59e1be99-c37e-431d-9b4c-1d039fd667a7, status = Down, exit status = Normal), log id: 311b6c1c
> 2016-05-07 18:05:00,886 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (org.ovirt.thread.pool-8-thread-30) FINISH, SetVmStatusVDSCommand, log id: 311b6c1c
> 2016-05-07 18:05:00,987 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-30) Correlation ID: 7a94a972, Call Stack: null, Custom Event ID: -1, Message: Failed to export Vm operaciones-ad to vms-backups
>
>>
>> Thanks,
>>
>> Luciano.
>>
>> On Mon, May 9, 2016 at 5:05 AM, Michal Skrivanek <[email protected]> wrote:
>>
>> > On 08 May 2016, at 02:14, Luciano Natale <[email protected]> wrote:
>> >
>> > Hi everyone. I've been having trouble when exporting VMs: I get an error
>> > when moving the image. I've created a whole new storage domain exclusively
>> > for this issue, and the same thing happens. It's not always the same VM that
>> > fails, but once it fails on a certain storage domain, I cannot export it
>> > anymore. Please tell me which logs are relevant so I can post them, and any
>> > other relevant information I can provide, and maybe someone can help me get
>> > through this problem.
>>
>> Hi,
>> please send /var/log/ovirt-engine/engine.log and let’s see.
>>
>> Thanks,
>> michal
>>
>> >
>> > oVirt version is 3.5.4.2-1.el6. The hosted engine is CentOS 6. Hosts are
>> > CentOS 7. VMs are all CentOS 7, except for two that are CentOS 6 and
>> > Windows 7.
>> >
>> > Please excuse my bad English!
>> > Thanks in advance!
>> >
>> > --
>> > Luciano Natale
>> > _______________________________________________
>> > Users mailing list
>> > [email protected]
>> > http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>>
>> --
>> Luciano Natale
>> <engine.log>
>
>
>
>
> --
> Luciano Natale
> <hyper1-vdsm.log.33.xz><hyper2-vdsm.log.65.xz>