On Dec 2, 2013, at 10:19 PM, Joe Gordon <joe.gord...@gmail.com> wrote:

> 
> On Dec 2, 2013 3:39 AM, "Maru Newby" <ma...@redhat.com> wrote:
> >
> >
> > On Dec 2, 2013, at 2:07 AM, Anita Kuno <ante...@anteaya.info> wrote:
> >
> > > Great initiative putting this plan together, Maru. Thanks for doing
> > > this. Thanks for volunteering to help, Salvatore (I'm thinking of asking
> > > for you to be cloned - once that becomes available). If you add your
> > > patch URLs (as you create them) to the blueprint Maru started [0], that
> > > would help to track the work.
> > >
> > > Armando, thanks for doing this work as well. Could you add the urls of
> > > the patches you reference to the exceptional-conditions blueprint?
> > >
> > > For icehouse-1 to be a realistic goal for this assessment and clean-up,
> > > patches for this would need to be up by Tuesday Dec. 3 at the latest
> > > (does 13:00 UTC sound like a reasonable target?) so that they can make
> > > it through review and check testing, gate testing and merging prior to
> > > the Thursday Dec. 5 deadline for icehouse-1. I would really like to see
> > > this happen; I just want us to be conscious of the timeline.
> >
> > My mistake - getting this done by Tuesday does not seem realistic.
> > icehouse-2, then.
> >
> 
> With icehouse-2 being the nova-network feature freeze reevaluation point
> (possibly lifting it), I think gating on new stacktraces by icehouse-2 is too
> late.  Even a huge whitelist of errors is better than letting new errors in.

No question that it needs to happen ASAP.  If we're talking about milestones,
though, and icehouse-1 patches need to be in by Tuesday, I don't think
icehouse-1 is realistic.  It will have to be early in icehouse-2.


m.

> >
> > m.
> >
> > >
> > > I would like to say "talk to me tomorrow in -neutron" to ensure you are
> > > getting the support you need to achieve this, but I will be flying (wifi
> > > uncertain). I do hope that some additional individuals come forward to
> > > help with this.
> > >
> > > Thanks Maru, Salvatore and Armando,
> > > Anita.
> > >
> > > [0]
> > > https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
> > >
> > > On 11/30/2013 08:24 PM, Maru Newby wrote:
> > >>
> > >> On Nov 28, 2013, at 1:08 AM, Salvatore Orlando <sorla...@nicira.com> 
> > >> wrote:
> > >>
> > >>> Thanks Maru,
> > >>>
> > >>> This is something my team had on the backlog for a while.
> > >>> I will push some patches to contribute towards this effort in the next 
> > >>> few days.
> > >>>
> > >>> Let me know if you're already thinking of targeting the completion of 
> > >>> this job for a specific deadline.
> > >>
> > >> I'm thinking this could be a task for those not involved in fixing race
> > >> conditions, and could be done in parallel.  I guess that would be for
> > >> icehouse-1, then?  My hope is that early signs of race conditions would
> > >> then be caught sooner.
> > >>
> > >>
> > >> m.
> > >>
> > >>>
> > >>> Salvatore
> > >>>
> > >>>
> > >>> On 27 November 2013 17:50, Maru Newby <ma...@redhat.com> wrote:
> > >>> Just a heads-up: the console output for neutron gate jobs is about to
> > >>> get a lot noisier.  Any log output that contains 'ERROR' is going to be
> > >>> dumped into the console so that we can identify and eliminate
> > >>> unnecessary error logging.  Once we've cleaned things up, the presence
> > >>> of unexpected (non-whitelisted) error output can be used to fail jobs,
> > >>> as per the following Tempest blueprint:
> > >>>
> > >>> https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors
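> > >>>
> > >>> To make this concrete, the check can be as simple as the rough sketch
> > >>> below: scan the collected logs for lines containing 'ERROR', drop
> > >>> anything matching a whitelist of known, tolerated messages, and fail
> > >>> if anything is left over.  This is only an illustration of the idea,
> > >>> not the actual gate script, and the whitelist patterns are made up.
> > >>>
> > >>> import re
> > >>> import sys
> > >>>
> > >>> # Hypothetical whitelist of error messages we tolerate until they are
> > >>> # cleaned up; the real list would live alongside the gate job.
> > >>> WHITELIST = [
> > >>>     re.compile(r'Connection to \S+ failed.*retrying'),
> > >>>     re.compile(r'Duplicate entry .* ignoring'),
> > >>> ]
> > >>>
> > >>> def unexpected_errors(log_lines):
> > >>>     """Yield ERROR lines that do not match any whitelisted pattern."""
> > >>>     for line in log_lines:
> > >>>         if 'ERROR' not in line:
> > >>>             continue
> > >>>         if any(pattern.search(line) for pattern in WHITELIST):
> > >>>             continue
> > >>>         yield line
> > >>>
> > >>> if __name__ == '__main__':
> > >>>     errors = list(unexpected_errors(sys.stdin))
> > >>>     for line in errors:
> > >>>         sys.stderr.write(line)
> > >>>     # A non-zero exit is what would fail the job once we start gating.
> > >>>     sys.exit(1 if errors else 0)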
> > >>>
> > >>> I've filed a related Neutron blueprint for eliminating the unnecessary 
> > >>> error logging:
> > >>>
> > >>> https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
> > >>>
> > >>> I'm looking for volunteers to help with this effort; please reply in
> > >>> this thread if you're willing to assist.
> > >>>
> > >>> Thanks,
> > >>>
> > >>>
> > >>> Maru
> > >>
> > >>
> > >>
> > >
> > >
> >
> >


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
