Thanks for the reply. This is the content of my agent.properties:

#Storage
#Tue Jun 25 16:07:21 CEST 2013
guest.network.device=public_guest
workers=5
private.network.device=management
port=8250
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
pod=1
zone=1
guid=9b1d387e-11bd-3ab0-a673-c655eb4c8f66
cluster=1
public.network.device=public_guest
local.storage.uuid=1bc23df8-6012-4909-a3af-2719255caf3c
domr.scripts.dir=scripts/network/domr/kvm
host=192.168.0.2
LibvirtComputingResource.id=0


Following the findings of the previous answers given below, the problem seems to be in the names the physical interfaces are given in RHEL and derivatives, not in the bridges. By the way, I had a previous CS 4.0.1 installation working with the same bridge configuration, but under Ubuntu 12.04.



On 25/06/13 15:21, WXR wrote:
In the file "/etc/cloudstack/agent/agent.properties" you can see two properties
which are commented out:
#public.network.device=cloudbr0
#private.network.device=cloudbr1
The commented lines indicate that you should create two bridges called
cloudbr0 and cloudbr1, or you can uncomment the two lines and change them to
other values.
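For example, on CentOS such a bridge is typically defined with ifcfg files; a minimal sketch, assuming the physical NIC is named em1 and reusing the 192.168.0.2 address from the agent.properties above (adjust device names and addressing to your host):

```
# /etc/sysconfig/network-scripts/ifcfg-cloudbr0
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.2
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE=em1
ONBOOT=yes
BRIDGE=cloudbr0
```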



------------------ Original ------------------
From:  "Fernando Guillén Camb"<[email protected]>;
Date:  Tue, Jun 25, 2013 08:14 PM
To:  "users"<[email protected]>;

Subject:  Re: can't add host to cloud: "Nics are not configured!" / "Failed to
get public nic name"



Hi,
I'm trying to install CS 4.1 on a host with CentOS 6.4, and I'm hitting
exactly the same problem:

2013-06-25 14:07:14,703 DEBUG [kvm.resource.LibvirtComputingResource]
(main:null) failing to get physical interface from bridgemanagement, did
not find an eth*, bond*, or vlan* in
/sys/devices/virtual/net/management/brif

The interface list on the server:
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
qlen 1000
      link/ether e0:db:55:21:1f:3c brd ff:ff:ff:ff:ff:ff
3: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
qlen 1000
      link/ether e0:db:55:21:1f:3e brd ff:ff:ff:ff:ff:ff
4: p3p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
qlen 1000
      link/ether 00:0a:f7:0d:fb:e0 brd ff:ff:ff:ff:ff:ff
5: p3p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
qlen 1000
      link/ether 00:0a:f7:0d:fb:e2 brd ff:ff:ff:ff:ff:ff
6: public_guest: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UNKNOWN
      link/ether 00:0a:f7:0d:fb:e0 brd ff:ff:ff:ff:ff:ff
7: management: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UNKNOWN
      link/ether 00:0a:f7:0d:fb:e2 brd ff:ff:ff:ff:ff:ff
9: cloud0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UNKNOWN
      link/ether b2:64:55:9e:b8:33 brd ff:ff:ff:ff:ff:ff
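None of the ports of the bridges listed above match the names the agent looks for. The check that produces the "failing to get physical interface" message can be sketched roughly as follows (a simplified reconstruction, not the actual LibvirtComputingResource code, demonstrated against a temporary directory standing in for /sys/devices/virtual/net/management/brif):

```shell
set -eu

brif=$(mktemp -d)
touch "$brif/p3p2"    # the bridge's only port on this host

find_phys_if() {
    # The agent only accepts eth*, bond*, or vlan* port names.
    ls "$1" | grep -E '^(eth|bond|vlan)' \
        || echo "FAILED: no eth*, bond*, or vlan* in $1"
}

find_phys_if "$brif"    # a port named p3p2 is not matched
```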


Is there any other solution than the one found by Javier (removing the
biosdevname package, renaming all em* ifcfg scripts to eth*, and
deleting 70-persistent-net.rules)?

Thanks.

On 23/05/13 17:51, Prasanna Santhanam wrote:
Yes, the consistent naming scheme problem was, I *think*, already fixed
by Marcus and should be in 4.1, IIRC. Glad to hear that your problem is
solved.

On Thu, May 23, 2013 at 05:17:04PM +0200, Javier Rodriguez wrote:
Hi Prasanna,

Thanks very much for your response,

[root@mnode-1 ~]# rpm -qa | grep qemu
gpxe-roms-qemu-0.9.7-6.9.el6.noarch
qemu-kvm-0.12.1.2-2.355.0.1.el6.centos.2.x86_64
qemu-img-0.12.1.2-2.355.0.1.el6.centos.2.x86_64

[root@mnode-1 ~]# rpm -qa | grep virt
libvirt-client-0.10.2-18.el6_4.4.x86_64
libvirt-0.10.2-18.el6_4.4.x86_64
virt-what-1.11-1.2.el6.x86_64

[root@mnode-1 ~]# rpm -qa | grep cloud
cloud-deps-4.0.2-1.el6.x86_64
cloud-utils-4.0.2-1.el6.x86_64
cloud-scripts-4.0.2-1.el6.x86_64
cloud-agent-libs-4.0.2-1.el6.x86_64
cloud-python-4.0.2-1.el6.x86_64
cloud-agent-4.0.2-1.el6.x86_64
cloud-core-4.0.2-1.el6.x86_64


I enabled DEBUG mode as you asked (I did not find
/etc/cloudstack/agent/log4j.xml, so I changed it in
/etc/cloud/agent/log4j-cloud.xml instead), and I found something
interesting:

2013-05-23 12:26:26,210 DEBUG
[kvm.resource.LibvirtComputingResource] (main:null) failing to get
physical interface from bridgecloudbr0, did not find an eth*, bond*,
or vlan* in /sys/devices/virtual/net/cloudbr0/brif

The only file in /sys/devices/virtual/net/cloudbr0/brif is a symlink
named em1.200, and apparently the cloud agent is expecting to find
eth* devices.

Apparently this has something to do with the Consistent Network Device
Naming feature introduced in recent RH-based distributions (if eth0 is
embedded in the motherboard, it is now called em1 by default).

After investigating a bit I managed to rename the NICs by removing
the biosdevname package, renaming all em* ifcfg scripts to eth*, and
deleting 70-persistent-net.rules in /etc/udev/rules.d (which gets
automatically regenerated with proper values taken from the ifcfg
scripts by write_net_rules).
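The renaming step of that workaround can be sketched as below. This is demonstrated on a temporary copy of the files so it is safe to run as-is; on a real host the files live in /etc/sysconfig/network-scripts, you would also run "yum remove biosdevname", delete /etc/udev/rules.d/70-persistent-net.rules, and reboot. The em1-to-eth0 mapping is an assumption (pick whatever numbering matches your wiring):

```shell
set -eu

# Stand-in for /etc/sysconfig/network-scripts on a real host.
scripts=$(mktemp -d)
printf 'DEVICE=em1\nBOOTPROTO=none\n' > "$scripts/ifcfg-em1"
printf 'DEVICE=em2\nBOOTPROTO=none\n' > "$scripts/ifcfg-em2"

i=0
for f in "$scripts"/ifcfg-em*; do
    new="$scripts/ifcfg-eth$i"
    mv "$f" "$new"
    # The DEVICE= line inside each file must match the new name too,
    # or write_net_rules will regenerate the wrong udev rules.
    sed -i "s/^DEVICE=em[0-9]*/DEVICE=eth$i/" "$new"
    i=$((i + 1))
done

ls "$scripts"
```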

After that I could add the host with no problem :). I think the cloud
agent's inability to handle Consistent Network Device names (em* and
p*p*) is probably a bug in the cloud agent.

Thanks for your help,

-Javier



--
Fernando Guillén Camba
Unidade de Xestión de Infraestruturas TIC
Centro de Investigación en Tecnoloxías da Información (CITIUS)
Phone: 8818 16409
Email: [email protected]
