It looks like the problem might be IPMI connectivity from your jumphost to at least that compute node. Can you try running the ipmitool command line from your jumphost to make sure you can connect to the nodes?
For example:

    ipmitool -I lanplus -H <host ip> -L ADMINISTRATOR -U <username> -P <password> power status

Tim Rozet
Red Hat SDN Team

----- Original Message -----
From: "liyin (F)" <[email protected]>
To: "Tim Rozet" <[email protected]>
Cc: [email protected]
Sent: Friday, January 6, 2017 10:50:35 PM
Subject: RE: [opnfv-tech-discuss] Apex bare metel deploy problem

Hi Tim,

I could only connect to the jumphost by IPMI, so I can only provide you some pictures. I think it is also a problem during deployment; I have no access to this jumphost. By the way, this ISO is master, dated 2016-12-21.

Stack_list.png is the output of step 3. Nova_list.png is the output of step 4.

Thank you for your kindness.

-----Original Message-----
From: Tim Rozet [mailto:[email protected]]
Sent: Friday, January 06, 2017 9:01 AM
To: liyin (F) <[email protected]>
Cc: [email protected]
Subject: Re: [opnfv-tech-discuss] Apex bare metel deploy problem

Hi Ace,

Can you please run the following on your jumphost:

1. opnfv-util undercloud
2. . stackrc
3. openstack stack failures list overcloud --long
4. nova list

Please send me the output of 3 and 4.

Thanks,

Tim Rozet
Red Hat SDN Team

----- Original Message -----
From: "liyin (F)" <[email protected]>
To: [email protected]
Sent: Tuesday, December 27, 2016 3:41:57 AM
Subject: [opnfv-tech-discuss] Apex bare metel deploy problem

Hi all,

We have an environment of bare-metal pods, and we want to use Apex to deploy OpenStack. I used the CentOS ISO from the Apex artifacts site to install the jump server. I have tried several ISOs to deploy the environment and get the same result, shown in the appendix. The log cannot help me find where the problem is. Another thing: when I use opnfv-deploy os-nosdn-nofeature-ha.yaml to deploy, it takes a very long time. This puzzles me a lot; I need your help. Thanks in advance.

Best Regards,
Ace.
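To run Tim's IPMI check against every node at once, a small loop like the following can be used on the jumphost. This is only a sketch: the node IPs, username, and password below are placeholders (they are not from this thread), and `check_power_output` is a hypothetical helper that classifies the ipmitool output.

```shell
#!/bin/sh
# Sketch: verify IPMI connectivity from the jumphost to each node's BMC.
# NODE_IPS, IPMI_USER, IPMI_PASS are placeholders -- substitute your own.
NODE_IPS="${NODE_IPS:-192.0.2.10 192.0.2.11 192.0.2.12}"
IPMI_USER="${IPMI_USER:-admin}"
IPMI_PASS="${IPMI_PASS:-password}"

# Hypothetical helper: on success ipmitool prints e.g. "Chassis Power is on";
# anything else (including empty output on a failed connection) is unreachable.
check_power_output() {
    case "$1" in
        "Chassis Power is "*) echo reachable ;;
        *) echo unreachable ;;
    esac
}

for ip in $NODE_IPS; do
    out=$(ipmitool -I lanplus -H "$ip" -L ADMINISTRATOR \
          -U "$IPMI_USER" -P "$IPMI_PASS" power status 2>/dev/null)
    printf '%s: %s\n' "$ip" "$(check_power_output "$out")"
done
```

Any node reported as unreachable here would explain an introspection or deployment hang, since the undercloud uses the same IPMI path to power-cycle the overcloud nodes.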
_______________________________________________
opnfv-tech-discuss mailing list
[email protected]
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
