On 27-1-2016 10:55, Gianluca Cecchi wrote:
> Hello,
> single physical host with oVirt 3.6.1 on CentOS 7.2
> Configured with SH Engine on NFS.
> Also NFS iso and NFS data domain present.
> All 3 NFS are provided by the host itself.
> 2 VMs with CentOS 7.2.
>
> Starting point has all ok in webadmin GUI, also the hosted_storage
> without exceptions.
>
> Updated to 3.6.2 following these steps:
>
> On host
> - global maintenance
>
> On engine VM
> - yum update "ovirt-engine-setup*"
> - engine-setup
>
> On host
> - exit maintenance
>
> Verified correct access to webadmin GUI again. From GUI
> - shutdown of the 2 powered on VMs
>
> On host
> - local maintenance
>
> Shutdown of engine VM
>
> On host
> - yum update
>   BTW: should I stop vdsmd before running the update or not?
> - reboot
> - exit maintenance
>
> Verified access to webadmin GUI and all storage domains are shown as up.
> But when I try to power on one of the VMs I get this error:
>
> VM racclient1 is down with error. Exit message: Cannot access storage file
> '/rhev/data-center/00000001-0001-0001-0001-0000000000ec/556abaa8-0fcc-4042-963b-f27db5e03837/images/d2f2b967-c0c3-4648-9daa-553e8ef4652b/e0524893-1f78-4e6b-8e5e-83217bc0dc5b'
> (as uid:107, gid:107): No such file or directory.
>
> Indeed the file, that actually seems to be a symbolic link, doesn't exist.
> On host:
>
> [root@ractor ~]# ll /rhev/data-center/00000001-0001-0001-0001-0000000000ec
> total 0
> lrwxrwxrwx. 1 vdsm kvm 99 Jan 27 07:31 2025c2ea-6205-4bc1-b29d-745b47f8f806 -> /rhev/data-center/mnt/ractor.datacenter.mydomain.dom:_SHE__DOMAIN/2025c2ea-6205-4bc1-b29d-745b47f8f806
> lrwxrwxrwx. 1 vdsm kvm 99 Jan 27 07:31 fd5754f1-bd00-4337-ad64-1abde35438ae -> /rhev/data-center/mnt/ractor.datacenter.mydomain.dom:_ISO__DOMAIN/fd5754f1-bd00-4337-ad64-1abde35438ae
> lrwxrwxrwx. 1 vdsm kvm 99 Jan 27 07:31 mastersd -> /rhev/data-center/mnt/ractor.datacenter.mydomain.dom:_SHE__DOMAIN/2025c2ea-6205-4bc1-b29d-745b47f8f806
>
> [root@ractor ~]# ll /rhev/data-center/mnt/
> total 0
> drwxr-xr-x. 3 vdsm kvm 74 Nov 19 15:47 ractor.datacenter.mydomain.dom:_ISO__DOMAIN
> drwxr-xr-x. 3 vdsm kvm 74 Nov 19 15:46 ractor.datacenter.mydomain.dom:_NFS__DOMAIN
> drwxr-xr-x. 3 vdsm kvm 74 Nov 19 15:25 ractor.datacenter.mydomain.dom:_SHE__DOMAIN
> drwxr-xr-x. 2 vdsm kvm  6 Nov 19 15:25 _var_lib_ovirt-hosted-engine-setup_tmp8InOft
>
> [root@ractor ~]# ll /rhev/data-center/mnt/ractor.datacenter.mydomain.dom\:_NFS__DOMAIN/556abaa8-0fcc-4042-963b-f27db5e03837/
> total 4
> drwxr-xr-x.  2 vdsm kvm   69 Nov 19 15:46 dom_md
> drwxr-xr-x. 10 vdsm kvm 4096 Dec  4 00:23 images
> drwxr-xr-x.  4 vdsm kvm   28 Nov 19 15:47 master
>
> [root@ractor ~]# ll /rhev/data-center/mnt/ractor.datacenter.mydomain.dom\:_NFS__DOMAIN/556abaa8-0fcc-4042-963b-f27db5e03837/images/
> total 32
> drwxr-xr-x. 2 vdsm kvm 4096 Dec 10 15:45 3b641c29-5196-4b2f-b1a5-fb31f8064780
> drwxr-xr-x. 2 vdsm kvm 4096 Dec  1 12:06 7d5dd44f-f5d1-4984-9e76-2b2f5e42a915
> drwxr-xr-x. 2 vdsm kvm 4096 Nov 25 16:34 83799ab5-055f-445c-8b6c-496d19bce921
> drwxr-xr-x. 2 vdsm kvm 4096 Nov 29 17:44 9a3fcdf7-75f4-4605-ba34-665dae9a4e0d
> drwxr-xr-x. 2 vdsm kvm 4096 Dec  4 00:19 ba11e44c-7cf5-4ef1-a9cd-0a6630bda801
> drwxr-xr-x. 2 vdsm kvm 4096 Dec  4 00:22 d2f2b967-c0c3-4648-9daa-553e8ef4652b
> drwxr-xr-x. 2 vdsm kvm 4096 Dec 10 15:45 d49f6096-96a2-4ef2-926a-a54a8246d303
> drwxr-xr-x. 2 vdsm kvm 4096 Dec  4 00:24 d9d2cd3f-0b1c-4e04-bd6a-1ffafc723344
>
> [root@ractor ~]# ll /rhev/data-center/mnt/ractor.datacenter.mydomain.dom\:_NFS__DOMAIN/556abaa8-0fcc-4042-963b-f27db5e03837/images/d2f2b967-c0c3-4648-9daa-553e8ef4652b/
> total 3338720
> -rw-rw----. 1 vdsm kvm 8589934592 Jan 27 06:57 e0524893-1f78-4e6b-8e5e-83217bc0dc5b
> -rw-rw----. 1 vdsm kvm    1048576 Dec  4 00:21 e0524893-1f78-4e6b-8e5e-83217bc0dc5b.lease
> -rw-r--r--. 1 vdsm kvm        259 Dec  4 00:22 e0524893-1f78-4e6b-8e5e-83217bc0dc5b.meta
>
> Is this problem already known? Should it be sufficient to just create
> the symbolic link? What process/service is in charge of creating the link?

I have that problem too once in a while. Restarting vdsmd will solve your problem, BUT it happens because vdsmd is active before your NFS is up. A fix is to require NFS in the vdsmd service unit; see the sketch below.
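For example, a minimal systemd drop-in along these lines (a sketch only, not tested on your setup; the drop-in file name is arbitrary, and nfs-server.service is assumed because the host itself exports the NFS shares on CentOS 7 - adjust the unit name if your NFS comes from another service):

  # /etc/systemd/system/vdsmd.service.d/10-require-nfs.conf  (example path)
  [Unit]
  # Make sure the local NFS server is started before vdsmd, so the
  # storage domain symlinks can be created when the pool is activated.
  Requires=nfs-server.service
  After=nfs-server.service

Then reload systemd and restart vdsmd:

  systemctl daemon-reload
  systemctl restart vdsmd

Restarting vdsmd should also recreate the missing symlink under /rhev/data-center/<pool-uuid>/ right away, which you can verify with another "ll" of that directory.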
Regards,

Joop

