Thanks for the input from everyone. Since this is an issue that spans multiple projects (Genesis, Pharos, Relneg, etc.), I think an Infra WG wiki page is where I will start, then farm out JIRA tasks per project as needed. I think most of the JIRA tasks are already there; what is most needed initially is an organization/coordination effort, which would be done via a wiki page. I will work on something to present in next week's Infra WG meeting. Then we can solicit additional feedback from the community as well as the participating projects.


On 09/06/2016 08:04 PM, Daniel Smith wrote:

Hey All

 

Just as a question of JIRA: this is/was a defined goal of Genesis, back when the original PHAROS specification was written, which basically called for support of the 5 basic OpenStack networks (tagged/untagged), a 3-Controller/2-Compute topology, and either VLAN or VXLAN (so as to support ODL) as the underlay.


This was the compromise, and it provides the basis for the bare-metal PODs that are out there today, demonstrably augmented further in Colorado as components have shifted in line with both OpenStack changes in Mitaka and other component changes (Boron and Neutron IRPs, for example). At the Arno, port-B time, this issue was visited again within the context of Genesis and we tried to come to a common set of simple network naming amongst the installers (so that we could have some common ‘installer framework’ established). The current INFRA discussion is key to this as well, since we are now talking about creating some tech pieces to allow installer jumphosts to essentially be imaged and swapped around – each of these requiring said baseline specification that is common across installers.

 

How would we approach using JIRA to track this issue? We can write up the tickets and do a comparison amongst the installers as well (we were looking at a CR or some similar process for adopting adaptations of this specification, which would then be propagated to community labs so as to “farm out” the work to labs that could meet that spec).

 

Should the tickets be in Genesis, or is INFRA/PHAROS now picking up the task of communalizing the topologies – which would most likely also play into the scenario naming discussion as well?

 

What would you like to see in the contents of the JIRA?

 

Cheers,
D

 

From: [email protected] [mailto:[email protected]] On Behalf Of Yujun Zhang
Sent: Tuesday, September 06, 2016 10:49 PM
To: SULLIVAN, BRYAN L <[email protected]>; Jack Morgan <[email protected]>; TSC OPNFV <[email protected]>
Cc: TECH-DISCUSS OPNFV <[email protected]>
Subject: Re: [opnfv-tech-discuss] [opnfv-tsc] harmonized configuration set for OPNFV

 

I agree. We should track this issue in JIRA. Which project shall we put it in? Pharos?

 

Besides the configuration harmonization, we have also spotted some performance deviations between environments deployed by different installers.

 

Theoretically, a consistent environment should be the target no matter which installer the user has chosen.

--

Yujun

 

On Tue, Sep 6, 2016 at 11:52 PM SULLIVAN, BRYAN L <[email protected]> wrote:

I’d suggest a Jira issue be created that we can add details to. I agree that this will be a very helpful effort. Right now there is substantial variation in the deployed platform e.g. even to such things as:

- Number and names of networks, routers, etc. created by default

- Default images created in Glance

- Names of projects (e.g. “services” vs “service”)

 

These variations add complexity to install and test scripts for features. As we eliminate the variations, we do need a process to identify and address the impact to existing code and tests.

 

Thanks,

Bryan Sullivan | AT&T

 

From: [email protected] [mailto:[email protected]] On Behalf Of Jack Morgan
Sent: Tuesday, September 06, 2016 8:41 AM
To: TSC OPNFV
Cc: TECH-DISCUSS OPNFV
Subject: Re: [opnfv-tech-discuss] harmonized configuration set for OPNFV

 

Yujun,

Thanks for raising this issue. This is a longer-term issue to solve, but for now I've added it as a topic for the Infra WG weekly meeting. I'm hoping to solve this for the D release and have been tasked by the TSC to help drive this effort. Please join the Infra WG meeting to provide your input.

 

On 09/05/2016 12:11 AM, Yujun Zhang wrote:

Dear TSC,

 

We have encountered some issues with the OpenStack environment configuration for some projects that could not be resolved within the project scope, so I have to escalate them to the TSC to look for a solution.

 

Some OPNFV projects require dedicated configuration of common services, but the environment deployed by the installers may not always come with a valid configuration.

 

For example, the Doctor project requires `notifier://?topic=alarm.all` in the Ceilometer event pipeline configuration, but the environment deployed by Fuel does not include this configuration by default. There have been long debates between the two teams on where the modification should be made [1][2]. The contradiction is that if we enable this notifier topic, Doctor will work, but nobody can guarantee that other projects are not affected.
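For reference, the pipeline entry in question would sit in Ceilometer's `event_pipeline.yaml`. The fragment below is only a sketch of what such an entry could look like; the exact file layout and sink names vary between Ceilometer releases and installer defaults:

```yaml
# Sketch (not the actual installer output): a Ceilometer event pipeline
# sink that publishes to the notifier topic Doctor depends on.
---
sources:
    - name: event_source
      events:
          - "*"
      sinks:
          - event_sink
sinks:
    - name: event_sink
      transformers:
      publishers:
          - notifier://
          - notifier://?topic=alarm.all
```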

 

OPNFV is targeting to deliver a "de facto standard open source NFV platform for the industry" [3]. On the software side, the platform includes not only the installed services but also a common configuration for all projects to run against.

 

So the first step should be working out a harmonized configuration set which allows all existing projects to run normally. It can then be used to validate the environment deployed by each installer.
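In spirit, validating a deployed environment against such a harmonized set is a diff between what the installer created and what the baseline demands. The sketch below illustrates the idea; the baseline categories and names (projects, networks, image names) are illustrative assumptions, not the actual spec:

```python
# Illustrative sketch: check a deployed environment against a harmonized
# baseline. All names below are assumptions for the example, not the
# real OPNFV baseline.
BASELINE = {
    "projects": {"admin", "service"},       # e.g. "service", not "services"
    "default_networks": {"external"},
    "glance_images": {"cirros-0.3.4"},
}

def validate(deployed, baseline=BASELINE):
    """Return deviations per category; an empty dict means compliant."""
    deviations = {}
    for category, expected in baseline.items():
        found = set(deployed.get(category, ()))
        missing = expected - found
        extra = found - expected
        if missing or extra:
            deviations[category] = {"missing": missing, "extra": extra}
    return deviations

# An installer that created project "services" instead of "service"
# would be flagged in the "projects" category.
deployed = {
    "projects": {"admin", "services"},
    "default_networks": {"external"},
    "glance_images": {"cirros-0.3.4"},
}
print(validate(deployed))
```

A real check would pull the deployed sets from the OpenStack APIs rather than a hard-coded dict, but the comparison logic stays the same.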

 

I'm sincerely hoping the TSC can help resolve this issue and lead OPNFV to success.

 



_______________________________________________
opnfv-tech-discuss mailing list
[email protected]
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss

 

-- 
Jack Morgan
OPNFV Pharos Intel Lab

_______________________________________________
opnfv-tsc mailing list
[email protected]
https://lists.opnfv.org/mailman/listinfo/opnfv-tsc



