Upgrade from 4.1.1 to 4.3.0 (KVM, Traffic labels, Adv. VLAN) VR bug
Hi,

After upgrading and restarting the system VMs, all VRs came up with a broken network configuration: egress rules stopped working, and so did some static NAT rules. Note in the output below that the same public IPs are configured, as primary and secondary addresses, on eth3 through eth7.

Here is "ip addr show" from one of the VRs:

root@r-256-VM:~# ip addr show
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 02:00:6b:16:00:09 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.1/24 brd 10.1.1.255 scope global eth0
    inet6 fe80::6bff:fe16:9/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 0e:00:a9:fe:01:38 brd ff:ff:ff:ff:ff:ff
    inet 169.254.1.56/16 brd 169.254.255.255 scope global eth1
    inet6 fe80::c00:a9ff:fefe:138/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 06:06:ec:00:00:0e brd ff:ff:ff:ff:ff:ff
    inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth2
    inet6 fe80::406:ecff:fe00:e/64 scope link
       valid_lft forever preferred_lft forever
5: eth3: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 06:81:44:00:00:0e brd ff:ff:ff:ff:ff:ff
    inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth3
    inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth3
    inet XXX.XXX.XXX.228/26 brd 46.165.231.255 scope global secondary eth3
    inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth3
    inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth3
    inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth3
    inet6 fe80::481:44ff:fe00:e/64 scope link
       valid_lft forever preferred_lft forever
6: eth4: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 06:e5:36:00:00:0e brd ff:ff:ff:ff:ff:ff
    inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth4
    inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth4
    inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth4
    inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth4
    inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth4
    inet6 fe80::4e5:36ff:fe00:e/64 scope link
       valid_lft forever preferred_lft forever
7: eth5: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 06:6f:3a:00:00:0e brd ff:ff:ff:ff:ff:ff
    inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth5
    inet XXX.XXX.XXX.228/26 brd 46.165.231.255 scope global secondary eth5
    inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth5
    inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth5
    inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth5
    inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth5
    inet6 fe80::46f:3aff:fe00:e/64 scope link
       valid_lft forever preferred_lft forever
8: eth6: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 06:b0:30:00:00:0e brd ff:ff:ff:ff:ff:ff
    inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth6
    inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth6
    inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth6
    inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth6
    inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth6
    inet6 fe80::4b0:30ff:fe00:e/64 scope link
       valid_lft forever preferred_lft forever
9: eth7: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 06:26:b4:00:00:0e brd ff:ff:ff:ff:ff:ff
    inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth7
    inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth7
    inet XXX.XXX.XXX.228/26 brd 46.165.231.255 scope global secondary eth7
    inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth7
    inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth7
    inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth7
    inet6 fe80::426:b4ff:fe00:e/64 scope link
       valid_lft forever preferred_lft forever

--
ttyv0 "/usr/libexec/gmail Pc" webcons on secure
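To see the duplication at a glance, the dump can be summarized with standard iproute2 and coreutils (a minimal sketch, run inside the VR; it counts how many interfaces carry each global IPv4 address):

    # one line per address; field 4 is the CIDR address in `ip -o` output
    ip -o -4 addr show scope global | awk '{print $4}' | sort | uniq -c | sort -rn

Any count above 1 means an IP is programmed onto more than one NIC, which matches the broken egress and static NAT behaviour described above.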
Re: Upgrade from 4.1.1 to 4.3.0 (KVM, Traffic labels, Adv. VLAN) VR bug
No idea, but have you verified that the VM is running the new system VM template? What happens if you destroy the router and let it recreate?

On Sun, Apr 20, 2014 at 6:20 PM, Serg Senko wrote:
> Hi
>
> After upgrading and restarting the system VMs, all VRs came up with a
> broken network configuration: egress rules stopped working, and so did
> some static NAT rules.
>
> [full "ip addr show" output elided; see the original post above]
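For reference, destroying a router so that CloudStack recreates it from the current system VM template can be done from the UI or via the destroyRouter API. A sketch with CloudMonkey, assuming it is already configured against the management server (the UUID is a placeholder):

    # find the router's UUID, then destroy it; CloudStack rebuilds the VR
    # on the next network restart or when the network next needs it
    cloudmonkey list routers name=r-256-VM
    cloudmonkey destroy router id=<router-uuid>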
Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent
No, it has nothing to do with ssh or the libvirt daemon. It's the literal unix socket that is created for virtio-serial communication when the qemu process starts. The question is why the system is refusing access to the socket. I assume this is being attempted as root.

On Sat, Apr 19, 2014 at 9:58 AM, Nux! wrote:
> On 19.04.2014 15:24, Giri Prasad wrote:
>
>> # grep listen_ /etc/libvirt/libvirtd.conf
>> listen_tls=0
>> listen_tcp=1
>> #listen_addr = "192.XX.XX.X"
>> listen_addr = "0.0.0.0"
>>
>> # /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n v-1-VM -p %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1
>> .
>> ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent -
>> Connection refused
>
> Do you have "-l" or "--listen" as LIBVIRTD_ARGS in /etc/sysconfig/libvirtd?
>
> (kind of stabbing in the dark)
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
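A quick way to check the socket from the host is to probe it directly (a minimal sketch; it assumes socat is installed and should be run as root while the v-1-VM qemu process is up):

    # confirm the socket file exists and really is a socket ("s" in the mode bits)
    ls -l /var/lib/libvirt/qemu/v-1-VM.agent
    # try to open it; "Connection refused" here means nothing is listening on it
    socat - UNIX-CONNECT:/var/lib/libvirt/qemu/v-1-VM.agent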
Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent
You may want to look in the qemu log of the VM to see if there's something deeper going on; perhaps the qemu process is not fully starting due to some other issue. /var/log/libvirt/qemu/v-1-VM.log, or something like that.

On Sun, Apr 20, 2014 at 11:22 PM, Marcus wrote:
> No, it has nothing to do with ssh or the libvirt daemon. It's the literal
> unix socket that is created for virtio-serial communication when the
> qemu process starts.
>
> [rest of quoted thread elided; see above]
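Concretely, that check might look like the following (a sketch; the path follows the usual libvirt layout, so confirm the exact file name first):

    ls /var/log/libvirt/qemu/                     # confirm the exact log file name
    tail -n 50 /var/log/libvirt/qemu/v-1-VM.log   # look for chardev/socket or startup errors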
RE: Upgrade from 4.1.1 to 4.3.0 (KVM, Traffic labels, Adv. VLAN) VR bug
Type brctl show and check whether the public interface of your router is plugged into cloudbr0 or cloudbr1. If it is plugged into cloudbr0, you need to detach it from cloudbr0, attach that interface to cloudbr1, and re-apply all the iptables rules. Take a backup of the iptables rules with the iptables-save command before attaching/detaching interfaces.

Regards
Sadhu

-----Original Message-----
From: Marcus [mailto:shadow...@gmail.com]
Sent: 21 April 2014 10:46
To: dev@cloudstack.apache.org
Cc: us...@cloudstack.apache.org
Subject: Re: Upgrade from 4.1.1 to 4.3.0 (KVM, Traffic labels, Adv. VLAN) VR bug

> No idea, but have you verified that the vm is running the new system vm
> template? What happens if you destroy the router and let it recreate?
>
> [rest of quoted thread elided; see above]
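A sketch of the reattachment Sadhu describes, assuming the stock cloudbr0/cloudbr1 bridge names and with vnet12 standing in for whatever interface `brctl show` reports as the router's public NIC:

    iptables-save > /root/iptables-backup-$(date +%F)     # back up the rules first
    brctl show                                            # see which bridge each vnet sits on
    brctl delif cloudbr0 vnet12                           # detach the public NIC from cloudbr0
    brctl addif cloudbr1 vnet12                           # attach it to cloudbr1
    iptables-restore < /root/iptables-backup-$(date +%F)  # re-apply rules if anything was lost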
Re: Upgrade from 4.1.1 to 4.3.0 (KVM, Traffic labels, Adv. VLAN) VR bug
Hi,

Yes, sure:

root@r-256-VM:~# cat /etc/cloudstack-release
Cloudstack Release 4.3.0 (64-bit) Wed Jan 15 00:27:19 UTC 2014

I also tried destroying the VR and re-creating it; the VR came up with the same problem. The "cloudstack-sysvmadm" script never received a success answer from the VRs.

I have now finished rolling back to 4.1.1; the VRs started successfully and everything works again. But how do I upgrade to 4.3? This bug is not documented in the known issues.

On Mon, Apr 21, 2014 at 8:16 AM, Marcus wrote:
> No idea, but have you verified that the vm is running the new system
> vm template? What happens if you destroy the router and let it
> recreate?
>
> [rest of quoted thread elided; see above]
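For reference, the restart script mentioned above is normally run on the management server after the upgrade, along these lines (a sketch following the standard upgrade notes; the IP and database password are placeholders):

    nohup cloudstack-sysvmadm -d <mgmt-server-ip> -u cloud -p <db-password> -a > sysvm.log 2>&1 &
    tail -f sysvm.log   # each system VM and router is stopped and restarted in turn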
Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent
Sorry, actually I see the 'connection refused' is just your own test after the fact. By that time the VM may be shut down, so connection refused would make sense. What happens if you do this:

 'virsh dumpxml v-1-VM > /tmp/v-1-VM.xml' while it is running
 stop the cloudstack agent
 'virsh destroy v-1-VM'
 'virsh create /tmp/v-1-VM.xml'

Then try connecting to that VM via VNC to watch it boot up, or run that command manually, repeatedly. Does it time out?

In the end this may not mean much, because on CentOS 6.x that command is retried over and over while the system VM is coming up anyway (in other words, some failures are expected). It could be related, but it could also be that the system VM is failing to come up for some other reason, and this is just the thing you noticed.

On Sun, Apr 20, 2014 at 11:25 PM, Marcus wrote:
> You may want to look in the qemu log of the vm to see if there's
> something deeper going on, perhaps the qemu process is not fully
> starting due to some other issue.
>
> [rest of quoted thread elided; see above]
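Put together as one session, the test above looks roughly like this (a sketch; the agent service name assumes a CentOS-style install, and <boot-args> stands in for the %-delimited payload shown in the earlier error report):

    virsh dumpxml v-1-VM > /tmp/v-1-VM.xml    # capture the domain XML while it runs
    service cloudstack-agent stop             # keep the agent from restarting the VM
    virsh destroy v-1-VM                      # hard-stop the VM
    virsh create /tmp/v-1-VM.xml              # boot it again outside CloudStack's control
    # retry the socket patch by hand and watch whether it connects or times out
    /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl \
        -n v-1-VM -p '<boot-args>'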
Unable to create System VMs on CS 4.3
Hi,

I am using CS 4.3 with ESXi and vCenter 5.5. While creating system VMs it gives the error below. With reference to the bug fix https://issues.apache.org/jira/browse/CLOUDSTACK-4875, I thought the fix should already be in CS 4.3.

2014-04-21 11:09:25,158 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-82:ctx-9d7c063a) Seq 1-1056899145: Executing request
2014-04-21 11:09:25,305 INFO [c.c.s.r.VmwareStorageProcessor] (DirectAgent-82:ctx-9d7c063a 10.129.151.67) creating full clone from template
2014-04-21 11:09:28,741 ERROR [c.c.s.r.VmwareStorageProcessor] (DirectAgent-82:ctx-9d7c063a 10.129.151.67) clone volume from base image failed due to Exception: java.lang.RuntimeException
Message: The name 'ROOT-15' already exists.

java.lang.RuntimeException: The name 'ROOT-15' already exists.
        at com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:336)
        at com.cloud.hypervisor.vmware.mo.VirtualMachineMO.createFullClone(VirtualMachineMO.java:619)
        at com.cloud.storage.resource.VmwareStorageProcessor.createVMFullClone(VmwareStorageProcessor.java:266)
        at com.cloud.storage.resource.VmwareStorageProcessor.cloneVolumeFromBaseTemplate(VmwareStorageProcessor.java:338)
        at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:78)
        at com.cloud.storage.resource.VmwareStorageSubsystemCommandHandler.execute(VmwareStorageSubsystemCommandHandler.java:171)
        at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:50)
        at com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:571)
        at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:216)
        at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
        at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:679)
2014-04-21 11:09:28,742 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-82:ctx-9d7c063a) Seq 1-1056899145: Response Received:
2014-04-21 11:09:28,742 DEBUG [c.c.a.t.Request] (DirectAgent-82:ctx-9d7c063a) Seq 1-1056899145: Processing: { Ans: , MgmtId: 345049296663, via: 1, Ver: v1, Flags: 10, [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":false,"details":"java.lang.RuntimeException: The name 'ROOT-15' already exists.","wait":0}}] }
2014-04-21 11:09:28,742 DEBUG [c.c.a.t.Request] (consoleproxy-1:ctx-b084e44b) Seq 1-1056899145: Received: { Ans: , MgmtId: 345049296663, via: 1, Ver: v1, Flags: 10, { CopyCmdAnswer } }
2014-04-21 11:09:28,743 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-cc7f41cc) Found 0 routers to update status.
2014-04-21 11:09:28,744 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-cc7f41cc) Found 0 networks to update RvR status.
2014-04-21 11:09:28,751 WARN [o.a.c.s.d.ObjectInDataStoreManagerImpl] (consoleproxy-1:ctx-b084e44b) Unsupported data object (VOLUME, org.apache.cloudstack.storage.datastore.PrimaryDataStoreImpl@14ca1ff7), no need to delete from object in store ref table
2014-04-21 11:09:28,751 DEBUG [o.a.c.e.o.VolumeOrchestrator] (consoleproxy-1:ctx-b084e44b) Unable to create Vol[15|vm=15|ROOT]:java.lang.RuntimeException: The name 'ROOT-15' already exists.
2014-04-21 11:09:28,751 DEBUG [o.a.c.e.o.VolumeOrchestrator] (consoleproxy-1:ctx-b084e44b) Unable to create Vol[15|vm=15|ROOT]:java.lang.RuntimeException: The name 'ROOT-15' already exists.
2014-04-21 11:09:28,752 INFO [c.c.v.VirtualMachineManagerImpl] (consoleproxy-1:ctx-b084e44b) Unable to contact resource.
com.cloud.exception.StorageUnavailableException: Resource [StoragePool:1] is unreachable: Unable to create Vol[15|vm=15|ROOT]:java.lang.RuntimeException: The name 'ROOT-15' already exists.
        at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.recreateVolume(VolumeOrchestrator.java:
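The "already exists" error usually points at a leftover ROOT-15 object in vCenter from an earlier failed attempt. One way to cross-check from the management server (a sketch; the volumes table and its id/name/state/removed columns are part of the standard cloud schema, and the credentials are placeholders):

    mysql -u cloud -p cloud -e \
        "SELECT id, name, state, removed FROM volumes WHERE name = 'ROOT-15';"

If CloudStack already considers the volume removed, the stale ROOT-15 VM or disk files typically have to be cleaned out of the vSphere datastore by hand before the system VM can be recreated.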
Re: Upgrade from 4.1.1 to 4.3.0 (KVM, Traffic labels, Adv. VLAN) VR bug
Hi Suresh,

Thanks for your answer, but that isn't a realistic workaround in my case: I have more than ~150 VRs, and I have already finished rolling back to 4.1.1.

Do you have a patch, or a link to an updated systemvm template (if this is a systemvm template issue)?

P.S. Will the same issue happen when upgrading to 4.2.1, or is it a bug in 4.3.0 only?

On Mon, Apr 21, 2014 at 8:35 AM, Suresh Sadhu wrote:
> Type brctl show and check whether the public interface of your router
> is plugged into cloudbr0 or cloudbr1. If it is plugged into cloudbr0,
> you need to detach it from cloudbr0, attach that interface to cloudbr1,
> and re-apply all the iptables rules. Take a backup of the iptables rules
> with the iptables-save command before attaching/detaching interfaces.
>
> [rest of quoted thread elided; see above]
Review Request 20516: CLOUDSTACK-6463: password is not set for VMs created from password enabled template
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/20516/
---

Review request for cloudstack and Jayapal Reddy.

Bugs: CLOUDSTACK-6463
    https://issues.apache.org/jira/browse/CLOUDSTACK-6463

Repository: cloudstack-git

Description
---
CLOUDSTACK-6463: password is not set for VMs created from password enabled template

Diffs
---
  api/src/com/cloud/vm/VirtualMachineProfile.java c098e85

Diff: https://reviews.apache.org/r/20516/diff/

Testing
---
Tested locally

Thanks,
Harikrishna Patnala