On 02 Feb 2014, at 23:10 , Alessandro Pilotti <apilo...@cloudbasesolutions.com> 
wrote:

> 
> On 02 Feb 2014, at 23:01 , Michael Still <mi...@stillhq.com> wrote:
> 
>> It seems like there were a lot of failing Hyper-V CI jobs for nova
>> yesterday. Is there some systemic problem or did all those patches
>> really fail? An example: https://review.openstack.org/#/c/66291/
>> 
> 
> 
> We’re aware of this issue and looking into it. The failure happens in devstack 
> before the Hyper-V compute nodes are added and before Tempest starts.
> 
> I’ll post an update as soon as we get it sorted out.
> 

Fixed the issue. The cause was the following devstack patch, which now binds the 
keystone private port 35357 to $SERVICE_HOST by default. Our config was erroneously 
using the private port instead of the public one in OS_AUTH_URL when connecting to 
127.0.0.1, hence the sudden failures. 

https://github.com/openstack-dev/devstack/commit/6c57fbab26e40af5c5b19b46fb3da39341f34dab
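
For reference, a minimal sketch of the kind of change this required in our job 
config (the values below are illustrative, not the actual contents of our scripts):

    # Before: OS_AUTH_URL pointed at the keystone admin/private port 35357,
    # which devstack now binds to $SERVICE_HOST rather than 127.0.0.1.
    export OS_AUTH_URL=http://127.0.0.1:35357/v2.0

    # After: point OS_AUTH_URL at the keystone public port instead.
    export OS_AUTH_URL=http://127.0.0.1:5000/v2.0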

Alessandro


> 
> Thanks,
> 
> Alessandro
> 
> 
>> Michael
>> 
>> On Mon, Feb 3, 2014 at 7:21 AM, Alessandro Pilotti
>> <apilo...@cloudbasesolutions.com> wrote:
>>> Hi Michael,
>>> 
>>> 
>>> On 02 Feb 2014, at 06:19 , Michael Still <mi...@stillhq.com> wrote:
>>> 
>>>> I saw another case of the "build succeeded" message for a failure just
>>>> now... https://review.openstack.org/#/c/59101/ has a rebase failure
>>>> but was marked as successful.
>>>> 
>> Is this another case of Hyper-V not being voting and therefore being a
>> bit confusing? The text of the comment clearly indicates this is a
>> failure at least.
>>>> 
>>> 
>>> Yes, all the Hyper-V CI messages start with "build succeeded", while the 
>>> next lines show the actual job result.
>>> I asked on infra about how to get rid of that message, but from what I gathered 
>>> in the chat it is not possible as long as the CI is non-voting, regardless of 
>>> the return status of the individual jobs.
>>> 
>>> Alessandro
>>> 
>>> 
>>>> Thanks,
>>>> Michael
>>>> 
>>>> On Tue, Jan 28, 2014 at 12:17 AM, Alessandro Pilotti
>>>> <apilo...@cloudbasesolutions.com> wrote:
>>>>> On 25 Jan 2014, at 16:51 , Matt Riedemann <mrie...@linux.vnet.ibm.com> 
>>>>> wrote:
>>>>> 
>>>>>> 
>>>>>> 
>>>>>> On 1/24/2014 3:41 PM, Peter Pouliot wrote:
>>>>>>> Hello OpenStack Community,
>>>>>>> 
>>>>>>> I am excited at this opportunity to make the community aware that the
>>>>>>> Hyper-V CI infrastructure is now up and running.  Let's first start with
>>>>>>> some housekeeping details.  Our Tempest logs are publicly available here:
>>>>>>> http://64.119.130.115. You will see them show up in any Nova Gerrit commit
>>>>>>> from this moment on.
>>>>>>> <snip>
>>>>>> 
>>>>>> So now some questions. :)
>>>>>> 
>>>>>> I saw this failed on one of my nova patches [1].  It says the build 
>>>>>> succeeded but that the tests failed.  I talked with Alessandro about 
>>>>>> this yesterday and he said that's working as designed, something with 
>>>>>> how the scoring works with zuul?
>>>>> 
>>>>> I spoke with clarkb on infra, since we were also very puzzled by this 
>>>>> behaviour. I've been told that when the job is non-voting, it's always 
>>>>> reported as succeeded, which makes sense, although it is slightly misleading.
>>>>> The message in the Gerrit comment clearly states: "Test run failed in 
>>>>> ..m ..s (non-voting)", so this should be fair enough. It'd be great to 
>>>>> have a way to get rid of the "Build succeeded" message above.
>>>>> 
>>>>>> The problem I'm having is figuring out why it failed.  I looked at the 
>>>>>> compute logs but didn't find any errors.  Can someone help me figure out 
>>>>>> what went wrong here?
>>>>>> 
>>>>> 
>>>>> The reason for the failure of this job can be found here:
>>>>> 
>>>>> http://64.119.130.115/69047/1/devstack_logs/screen-n-api.log.gz
>>>>> 
>>>>> Please search for "(1054, "Unknown column 'instances.locked_by' in 'field 
>>>>> list'")"
>>>>> 
>>>>> In this case the job failed when "nova service-list" got called to verify 
>>>>> whether the compute nodes had been properly added to the devstack 
>>>>> instance in the overcloud.
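>>>>> 
>>>>> As a purely illustrative example (assuming standard curl/zcat/grep tooling), 
>>>>> something like this would pull the error out of the gzipped n-api log:
>>>>> 
>>>>>     curl -s http://64.119.130.115/69047/1/devstack_logs/screen-n-api.log.gz \
>>>>>         | zcat | grep "Unknown column 'instances.locked_by'"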
>>>>> 
>>>>> During the weekend we also added a console.log to help simplify 
>>>>> debugging, especially in the rare cases in which the job fails before 
>>>>> getting to run Tempest:
>>>>> 
>>>>> http://64.119.130.115/69047/1/console.log.gz
>>>>> 
>>>>> 
>>>>> Let me know if this helps in tracking down your issue!
>>>>> 
>>>>> Alessandro
>>>>> 
>>>>> 
>>>>>> [1] https://review.openstack.org/#/c/69047/1
>>>>>> 
>>>>>> --
>>>>>> 
>>>>>> Thanks,
>>>>>> 
>>>>>> Matt Riedemann
>>>>>> 
>>>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> Rackspace Australia
>>>> 
>>> 
>>> 
>> 
>> 
>> 
>> -- 
>> Rackspace Australia
>> 
> 
> 


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
