Hi Patrick,
In Apex, configuring bonds is not supported yet.  It is possible in TripleO, 
and with some advanced config you could use a workaround, but for now let's 
just focus on getting the deployment to work.  When you execute the deployment 
(use the --debug arg to opnfv-deploy), can you console into one of the servers 
and see if it actually PXE boots into Linux?  If it does, it means your 
network connectivity on the admin network is correct; take note of which NIC's 
MAC is used to PXE.  Now go to your host and do:
opnfv-util undercloud
. stackrc
nova list
ping <each node's ip>
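If you want to script that check, here is a quick sketch that pulls the 
ctlplane IPs out of the `nova list` output and pings each one (the sample rows 
below are hard-coded from your output for illustration; on the undercloud you 
would capture `nova list` directly):

```shell
# Sketch: extract each node's ctlplane IP from `nova list` output and ping it.
# Sample rows are hard-coded here; in a real run use: nova_output=$(nova list)
nova_output='| b1efb2e1-6e78-4fc0-9d5a-5b323088bebb | overcloud-controller-0  | ACTIVE | - | Running | ctlplane=10.66.20.24 |
| a4c77e61-6d18-4e1d-94ca-cd007282856a | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=10.66.20.23 |'

# Grab the address after "ctlplane=" on each row
ips=$(echo "$nova_output" | grep -o 'ctlplane=[0-9.]*' | cut -d= -f2)

for ip in $ips; do
    # -c 2: two probes; -W 2: two-second timeout per probe
    if ping -c 2 -W 2 "$ip" > /dev/null 2>&1; then
        echo "$ip reachable"
    else
        echo "$ip UNREACHABLE"
    fi
done
```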
If one or more pings fail, it means the network configuration of the box 
post-pxe boot is wrong.  At this point you need to console into the overcloud 
node.  Since you used --debug, a default root password of 'opnfvapex' was 
applied to all the overcloud nodes.  So you can login to the node and look at 
/var/log/messages for:

Nov 15 09:58:26 localhost os-collect-config: [2016/11/15 09:58:26 AM] [INFO] 
nic1 mapped to: eth0
Nov 15 09:58:26 localhost os-collect-config: [2016/11/15 09:58:26 AM] [INFO] 
nic2 mapped to: eth1
Nov 15 09:58:26 localhost os-collect-config: [2016/11/15 09:58:26 AM] [INFO] 
nic3 mapped to: eth2
Nov 15 09:58:26 localhost os-collect-config: [2016/11/15 09:58:26 AM] [INFO] 
nic4 mapped to: eth3
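A quick way to collect that mapping on the node is to grep for it; the sed 
expressions below just pull the two fields out of the log line format shown 
above (this is only a convenience sketch, not part of Apex):

```shell
# Pull the logical-to-physical NIC mapping out of /var/log/messages.
# (|| true so the grep does not abort the shell when the file is absent.)
grep 'mapped to' /var/log/messages 2>/dev/null || true

# Parsing one of those lines into its two fields (sample line from above):
line='Nov 15 09:58:26 localhost os-collect-config: [2016/11/15 09:58:26 AM] [INFO] nic1 mapped to: eth0'
logical=$(echo "$line"  | sed -n 's/.*\[INFO\] \(nic[0-9]*\) mapped to: \(.*\)/\1/p')
physical=$(echo "$line" | sed -n 's/.*\[INFO\] \(nic[0-9]*\) mapped to: \(.*\)/\2/p')
echo "$logical -> $physical"   # → nic1 -> eth0
```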

In your network settings file, if you are using logical NIC mapping (meaning 
nic<#>), you will need to double-check that the admin (ctlplane) network is 
correctly wired from your undercloud VM (host admin network NIC) to the 
logical NIC mapping in your network settings file.  The logical NIC name 
resolves to a physical NIC name on the host using the mapping above.  If it is 
wrong, you can either fix it by providing the correct logical NIC name in your 
network settings file or use the real physical NIC name (in this case eth0) in 
your settings.  You can compare the MAC of the PXE boot interface from earlier 
to determine which physical NIC is your admin network NIC.  The NIC settings 
are per network and per profile (compute/control).
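To match the PXE MAC you noted against the host's physical NICs, something 
like this works (find_pxe_nic is just an illustrative helper, not an Apex 
tool, and the MACs below are made up):

```shell
# Given the MAC noted during PXE boot and "name mac" pairs on stdin,
# print the matching interface name.
find_pxe_nic() {
    pxe_mac="$1"
    while read -r name mac; do
        [ "$mac" = "$pxe_mac" ] && echo "$name"
    done
}

# On the node you would feed it from sysfs, e.g.:
#   for n in /sys/class/net/*; do
#       echo "$(basename "$n") $(cat "$n/address")"
#   done | find_pxe_nic <mac-you-noted>
# Illustration with made-up MACs:
printf 'eth0 52:54:00:aa:bb:cc\neth1 52:54:00:dd:ee:ff\n' | find_pxe_nic 52:54:00:aa:bb:cc   # → eth0
```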

Tim Rozet
Red Hat SDN Team

----- Original Message -----
From: "Patrick Lemay" <patrick.le...@bell.ca>
To: "Tim Rozet" <tro...@redhat.com>, opnfv-tech-discuss@lists.opnfv.org
Cc: "Jamo Luhrsen" <jluhr...@redhat.com>, "Jocelyn Poulin (6007251)" 
<jocelyn.pou...@bell.ca>, "Daniel Bernier (520165)" <daniel.bern...@bell.ca>, 
"Francois Guay (A214312)" <francois.g...@bell.ca>, "Brian Smith (3010640)" 
<b.sm...@bell.ca>
Sent: Tuesday, November 15, 2016 11:58:01 AM
Subject: RE: [opnfv-tech-discuss] [Apex] deployment error

Hi Tim, adding configuration on enp1s0f0 and f1 worked well; it used this 
config to create br-admin and br-public. Thanks for that workaround. The 
undercloud now deploys successfully. IPMI with SOL works fine also; I'm able 
to power off and power on the servers with ironic. The only thing is that the 
deployment script failed. It's probably related to the network between the 
nodes and the undercloud not being configured correctly. We are supposed to 
create a bond interface on 10G, but this part has not been configured because 
there is no example. How can I confirm that? Is there a journal with more 
details? Also, my ctlplane interface is configured correctly but unreachable 
from the undercloud after sourcing stackrc.


[root@undercloud ~]# ironic node-list
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
| 9f069885-b9cb-4dec-85ab-945fb08a8753 | None | b1efb2e1-6e78-4fc0-9d5a-5b323088bebb | power on    | active             | False       |
| f273975c-06ba-4c36-b7e6-25872d7efce0 | None | a4c77e61-6d18-4e1d-94ca-cd007282856a | power on    | active             | False       |
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+

[root@undercloud ~]# nova list
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks             |
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
| b1efb2e1-6e78-4fc0-9d5a-5b323088bebb | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=10.66.20.24 |
| a4c77e61-6d18-4e1d-94ca-cd007282856a | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=10.66.20.23 |
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+

When I try to source overcloudrc, the file is not present, but stackrc is.

[stack@undercloud ~]$ source overcloudrc 

-bash: overcloudrc: No such file or directory



[stack@undercloud ~]$ source stackrc

[stack@undercloud ~]$ openstack server list
+--------------------------------------+-------------------------+--------+----------------------+
| ID                                   | Name                    | Status | Networks             |
+--------------------------------------+-------------------------+--------+----------------------+
| b1efb2e1-6e78-4fc0-9d5a-5b323088bebb | overcloud-controller-0  | ACTIVE | ctlplane=10.66.20.24 |
| a4c77e61-6d18-4e1d-94ca-cd007282856a | overcloud-novacompute-0 | ACTIVE | ctlplane=10.66.20.23 |
+--------------------------------------+-------------------------+--------+----------------------+

[stack@undercloud ~]$ ssh heat-admin@10.66.20.24

ssh: connect to host 10.66.20.24 port 22: No route to host

[stack@undercloud ~]$ ssh heat-admin@10.66.20.23

ssh: connect to host 10.66.20.23 port 22: No route to host

[stack@undercloud ~]$ ping 10.66.20.24

PING 10.66.20.24 (10.66.20.24) 56(84) bytes of data.

^C

--- 10.66.20.24 ping statistics ---

2 packets transmitted, 0 received, 100% packet loss, time 1000ms

Pinging the undercloud works:

[stack@undercloud ~]$ ping 10.66.20.101

PING 10.66.20.101 (10.66.20.101) 56(84) bytes of data.

64 bytes from 10.66.20.101: icmp_seq=1 ttl=64 time=0.055 ms

64 bytes from 10.66.20.101: icmp_seq=2 ttl=64 time=0.049 ms

-----Original Message-----
From: Tim Rozet [mailto:tro...@redhat.com] 
Sent: November-09-16 11:14 AM
To: Guay, Francois (A214312)
Cc: Jamo Luhrsen; Lemay, Patrick; opnfv-tech-discuss@lists.opnfv.org; Poulin, 
Jocelyn (6007251)
Subject: Re: [opnfv-tech-discuss] [Apex] deployment error

Hi Patrick,
For the Undercloud VM to bridge to your host, it needs to be able to get the IP 
information off of the host interfaces.  It does this by checking your ifcfg 
files under /etc/sysconfig/network-scripts for the interface you specify for 
each network.  Can you check that enp1s0f1 ifcfg file is not set to dhcp and 
has IP and NETMASK/PREFIX settings in the file?
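For reference, a statically configured ifcfg file generally looks like the 
following (the addresses here are placeholders only; substitute your real 
ones):

```shell
# /etc/sysconfig/network-scripts/ifcfg-enp1s0f1  (example values only)
DEVICE=enp1s0f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static       # not "dhcp" -- the IP info must be readable from the file
IPADDR=192.0.2.10      # placeholder address
NETMASK=255.255.255.0  # or equivalently PREFIX=24
```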

Thanks,

Tim Rozet
Red Hat SDN Team

----- Original Message -----
From: "Francois Guay (A214312)" <francois.g...@bell.ca>
To: "Jamo Luhrsen" <jluhr...@redhat.com>, "Patrick Lemay" 
<patrick.le...@bell.ca>, opnfv-tech-discuss@lists.opnfv.org
Cc: "Jocelyn Poulin (6007251)" <jocelyn.pou...@bell.ca>
Sent: Friday, November 4, 2016 2:59:14 PM
Subject: Re: [opnfv-tech-discuss] [Apex] deployment error

Patrick, Jamo,

I did the libvirt-python install and it solved the problem. (see attached)

Now, we need to add the two missing nodes to get the five required nodes. I 
will give it a try and let you know.

Thanks Jamo.

Patrick, I'll let you share with Jamo what you did with the admin_network and 
public_network stuff.



François

-----Original Message-----
From: Jamo Luhrsen [mailto:jluhr...@redhat.com]
Sent: 4 novembre 2016 12:04
To: Lemay, Patrick; opnfv-tech-discuss@lists.opnfv.org
Cc: Poulin, Jocelyn (6007251); Guay, Francois (A214312)
Subject: Re: [opnfv-tech-discuss] [Apex] deployment error

Patrick,

I've had similar problems trying to get apex baremetal to work on the jumphost 
I'm working with.  I hit the libvirt issue the other day.  I think I just did 
a "yum install libvirt-python"; try that.
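A quick way to confirm the binding is importable afterwards is a one-liner 
like this (just a convenience sketch; it picks whichever python interpreter is 
on PATH):

```shell
# Report whether a python module imports cleanly, without dumping a traceback.
PY=$(command -v python || command -v python3)
check_import() {
    "$PY" -c "import $1" 2>/dev/null && echo "$1: OK" || echo "$1: MISSING"
}
check_import sys       # stdlib module, should always report OK
check_import libvirt   # the module configure-vm fails to import
```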

I'm still stuck on what I need to do with the admin_network, public_network 
stuff that I think you have overcome recently.  Can you recap the steps you 
took?

JamO

On 11/04/2016 08:47 AM, Lemay, Patrick wrote:
> Hi, I finally created br-public by hand instead of with the script. It 
> works, but now I'm stuck on another error: «ImportError: No module named 
> libvirt». I'm using your deployment CD. All the dependencies should be 
> installed. I started to troubleshoot the problem, but I'm sure that you've 
> already seen that bug.
> 
>  
> 
> All python version installed:
> 
> [root@jumphost opnfv-apex]# python
> 
> python      python2     python2.7   python3     python3.4   python3.4m 
> 
>  
> 
> Output from the opnfv-deploy script:
> 
> INFO: virsh networks set:
> 
>  Name                 State      Autostart     Persistent
> 
> ----------------------------------------------------------
> 
> admin_network        active     yes           yes
> 
> default              active     yes           yes
> 
> public_network       active     yes           yes
> 
>  
> 
> All dependencies installed and running
> 
> 4 Nov 10:58:33 ntpdate[27293]: adjust time server 206.108.0.133 offset
> -0.002592 sec
> 
> Volume undercloud exists. Deleting Existing Volume
> /var/lib/libvirt/images/undercloud.qcow2
> 
> Vol undercloud.qcow2 deleted
> 
>  
> 
> Vol undercloud.qcow2 created
> 
>  
> 
> Traceback (most recent call last):
> 
>   File "/usr/libexec/openstack-tripleo/configure-vm", line 8, in 
> <module>
> 
>    import libvirt
> 
> ImportError: No module named libvirt
> 
>  
> 
>  
> 
> Thanks,
> 
>  
> 
>  
> 
>  
> 
>  
> 
> *From:*Lemay, Patrick
> *Sent:* November-01-16 11:42 AM
> *To:* 'opnfv-tech-discuss@lists.opnfv.org'
> *Cc:* Bernier, Daniel (520165); Guay, Francois (A214312); Poulin, 
> Jocelyn (6007251)
> *Subject:* Bell deployment config
> 
>  
> 
> Hi guys, I have some issues regarding the OPNFV baremetal deployment. I 
> installed a jumphost from the OPNFV CD. I configured the inventory with the 
> IPMI IPs, MACs and users.
> 
>  
> 
> I’m not sure that I configured the network_settings correctly for 
> public_network.
> 
>  
> 
>  
> 
> For the deployment I use pod 2 and pod 3 from the drawing. The server 
> Catherine is used for the jumphost. Interface enp1s0f0 is for PXE and 
> enp1s0f1 is for public_network. The other 5 servers are for deployment and 
> are IPMI ready. All interfaces in VLAN 1020 are PXE ready and the disks are 
> configured RAID 1. I have a problem deploying the undercloud.
> 
>  
> 
>  
> 
> I have this error related to the undercloud deployment:
> 
> INFO: Creating Virsh Network: admin_network & OVS Bridge: br-admin
> 
> INFO: Creating Virsh Network: public_network & OVS Bridge: br-public
> 
> INFO: Bridges set:
> 
> br-admin
> 
> br-public
> 
> enp1s0f0
> 
> INFO: Interface enp1s0f0 bridged to bridge br-admin for enabled
> network: admin_network
> 
> /var/opt/opnfv/lib/common-functions.sh: line 18: 5 - ( / 8) : syntax
> error: operand expected (error token is "/ 8) ")
> 
> ERROR: IPADDR or NETMASK/PREFIX missing for enp1s0f1
> 
> ERROR: Unable to bridge interface enp1s0f1 to bridge br-public for 
> enabled network: public_network
> 
>  
> 
>  
> 
> Could you help please? There is no VMware at all in the setup, only baremetal.
> 
>  
> 
> Regards,
> 
>  
> 
>  
> 
>  
> 
> Patrick Lemay
> 
> Consultant Managed Services Engineering Bell Canada
> 
> 671 de la Gauchetière O. Bur. 610, Montreal, Quebec  H3B 2M8
> 
> Tel:  (514) 870-1540
> 
>  
> 
> 
> 
> _______________________________________________
> opnfv-tech-discuss mailing list
> opnfv-tech-discuss@lists.opnfv.org
> https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
> 

