The yardstick danube.3.0 tag was applied prematurely, before the release was 
postponed. Since we can't delete git tags, the new git tag will be danube.3.1, 
with a matching docker tag danube.3.1.

The danube.3.0 docker image was a custom build for Dovetail. We have projects 
consuming other projects' docker images, so there are dependencies that way.

We are also going to start git cloning storperf inside the yardstick docker 
container, so we will have to track two versions: the yardstick git tag and the 
storperf git tag.
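A minimal sketch of how both versions could be recorded at build time (the variable and build-arg names here are illustrative only, not the actual job parameters):

```shell
# Illustrative only: YARDSTICK_TAG and STORPERF_TAG are hypothetical names.
YARDSTICK_TAG="danube.3.1"
STORPERF_TAG="danube.3.1"
DOCKER_TAG="${YARDSTICK_TAG}"

# Build args would let the Dockerfile check out each repo at a known tag;
# the command is echoed rather than run, since this is only a sketch.
echo "docker build --build-arg YARDSTICK_TAG=${YARDSTICK_TAG}" \
     "--build-arg STORPERF_TAG=${STORPERF_TAG}" \
     "-t opnfv/yardstick:${DOCKER_TAG} ."
```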


From: [email protected] [mailto:[email protected]] 
On Behalf Of Fatih Degirmenci
Sent: Monday, July 10, 2017 2:04 PM
To: Alec Hothan (ahothan) <[email protected]>; Beierl, Mark 
<[email protected]>
Cc: [email protected]; [email protected]
Subject: Re: [test-wg] docker container versioning

Hi Alec,

Your understanding of the docker image tags (latest vs. stable) seems correct 
to me, but I'll let someone from the test projects answer that.

When it comes to artifact versioning in general, you are asking really good 
questions. Let me go back in time and summarize what plans we had (i.e. what we 
haven't been able to implement fully) with regard to it.

The questions you ask about tagging docker images are not limited to them. We 
have similar issues with the other artifacts we produce (rpms, isos, etc.), 
maybe not to the same degree as with the docker images, but we have them.

In order to achieve some level of traceability and reproducibility, we record 
the metadata for the artifacts (rpms, isos, etc.) we build, so we can go back to 
the source and find the exact version (commit) that was used to build the 
artifact in question. [1]
We also had plans to tag the corresponding commits in our git repos, but we 
haven't managed to implement that. [2] This includes docker images as well.
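The metadata recording could be sketched roughly as follows; the property names mirror the apex example in [1] but are assumptions here, not a spec:

```shell
# Record the exact commit that went into a build so it can be traced later.
# Falls back to a placeholder when not run inside a git repo.
GIT_SHA=$(git rev-parse HEAD 2>/dev/null || echo "0000000000000000000000000000000000000000")

cat > opnfv.properties <<EOF
OPNFV_GIT_SHA1=${GIT_SHA}
OPNFV_BUILD_DATE=$(date -u +%Y-%m-%d)
EOF

cat opnfv.properties
```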

Apart from our own (OPNFV) repos, some of the artifacts we build include stuff 
from other sources, making it tricky to achieve full traceability and making 
the traceability even more important.
We had many discussions about how to capture this information in order to 
ensure we can go back to a specific commit in any upstream project we consume. 
(locking/pinning versions etc.)
But since we have different ways of doing things and different practices 
employed by different projects, this hasn't happened either. (I can talk about 
this for hours...)

By the way, I am not saying we totally failed, as some projects take care of 
this themselves, but as OPNFV we do not have a common practice, apart from the 
metadata files for ISOs and the docker tags, which do not help at all.

Long story short, this can be achieved in different ways, as you described: if 
a tag is applied to a repo, we trigger a build automatically and store & tag the 
produced artifact in the artifact repo; and/or, if we are building periodically, 
we apply the tag to the git repo once the artifact is built successfully.
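The second of those options can be sketched as follows (the variable names and the tag message are illustrative, not an existing job):

```shell
ARTIFACT_VERSION="danube.3.1"
build_ok=true   # stand-in for the real build job's result

# Only tag the repo after the artifact built successfully; the command
# is echoed rather than executed, since this is only a sketch.
if [ "$build_ok" = true ]; then
    TAG_CMD="git tag -a ${ARTIFACT_VERSION} -m 'built artifact ${ARTIFACT_VERSION}'"
    echo "$TAG_CMD"
fi
```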

No matter which way we go, we need to fix this, so thank you for questioning 
things; hopefully it will result in improvements, starting with the test 
projects.

[1] http://artifacts.opnfv.org/apex/opnfv-2017-07-05.properties
[2] https://jira.opnfv.org/browse/RELENG-77

/Fatih

From: "Alec Hothan (ahothan)" <[email protected]<mailto:[email protected]>>
Date: Monday, 10 July 2017 at 21:45
To: Fatih Degirmenci 
<[email protected]<mailto:[email protected]>>, "Beierl, 
Mark" <[email protected]<mailto:[email protected]>>
Cc: 
"[email protected]<mailto:[email protected]>" 
<[email protected]<mailto:[email protected]>>,
 "[email protected]<mailto:[email protected]>" 
<[email protected]<mailto:[email protected]>>
Subject: [test-wg] docker container versioning


[ cc test-wg – was [opnfv-tech-discuss] Multiple docker containers from one 
project ]


Hi Fatih

It is generally not easy to deal with container tags that do not include any 
information linking easily to a git repo commit (e.g. a “version” number 
increased by 1 for each build does not tell which git commit was used – which 
might be one reason why this was removed).

For example, if we look at the published yardstick containers as of today:

“latest” is generally used to refer to the latest on master at the time of the 
build (so whoever does not care about the exact version and just wants the 
bleeding edge will pick “latest”) – apparently this container is not linked to 
any particular OPNFV release? Or is it implicitly linked to the current latest 
release (Euphrates)?

“stable” is supposed to be the latest stable version (presumably more stable 
than latest). In the current script it is not clear under what conditions a 
build is triggered with BRANCH not set to master and RELEASE_VERSION unset 
(does the project owner control that?). It is apparently unrelated to any 
particular OPNFV release as well, although I would have thought a 
“danube.3.0-stable” would make sense.

“danube.3.0”: related to Danube 3.0 but does not indicate what yardstick repo 
tag it was built from. The git repo has a git tag with the same name 
“danube.3.0”, how are those 2 tags correlated? For example, there is no 
matching git tag for container “colorado.0.15” and there is no matching 
container for git tag “colorado.3.0”.
It is also not clear what the yardstick project will do to publish a newer 
version of the yardstick container for Danube 3.0.

Project owners should be able to publish finer-grained versions of containers 
at a faster pace than the overall OPNFV release (e.g. as frequently as more 
than once a day), and these need to be tracked properly.

The best practice – as seen in most popular container images – is to tag the 
container using a version string that reflects the container's source code 
version.
Translated to a project workflow, this typically looks like:

  *   The project repo uses git tags to version the source code (e.g. “3.2.5”) 
independently of the OPNFV release versioning (e.g. “Danube 3.0”). Such 
versioning should be left to the discretion of the project owners (e.g. many 
OpenStack projects use the pbr library to take care of component versioning)
  *   Optionally the project repo can have 1 branch per OPNFV release if 
desired (e.g. “danube”, …) – noting that some projects will not require such 
branches and will support every OPNFV release (or a good subset of them) from a 
single master branch (simpler)
  *   The simplest approach (and this is how the dockerhub automated build 
works) is to trigger a new build either on demand (by project owners) or 
automatically whenever a new git tag is published (by whoever has permission to 
do so on that project)
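The tag-driven flow above can be sketched as follows (the image name opnfv/myproject is a placeholder): the docker tag is derived directly from the git tag, so every published image maps back to an exact commit.

```shell
# Use the git tag at HEAD as the docker tag; fall back to an example
# version when HEAD is not tagged (or we are not in a git repo).
GIT_TAG=$(git describe --tags --exact-match 2>/dev/null || echo "3.2.5")
IMAGE="opnfv/myproject:${GIT_TAG}"

# Echoed rather than run: this is only a sketch of the trigger step.
echo "docker build -t ${IMAGE} ."
```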

I am not familiar with the OPNFV release packaging process (for example how are 
containers tied to a particular release), could someone explain or point to the 
relevant documentation? If you look at the OpenStack model, each release (e.g. 
Newton, Ocata,…) is made of a large number of separate git repos that are all 
versioned independently (i.e. neutron or tempest don’t have version tags that 
contain the OpenStack release), and each release has a list of all the project 
versions that come with it. Example for Ocata: 
https://releases.openstack.org/ocata/

Is there a similar scheme in OPNFV?


Mark:

  *   Container versioning is orthogonal to the support for multiple containers 
per project (though the fact that a project has more than one container makes 
versioning a bit more relevant).
  *   Not being able to rebuild a container from a tag is problematic, and I 
agree it needs to be supported



Gabriel:

  *   Using the dockerhub automated build is fine as long as the versioning of 
the built containers is aligned with the OPNFV versioning scheme



Thanks

  Alec




From: Fatih Degirmenci 
<[email protected]<mailto:[email protected]>>
Date: Monday, July 10, 2017 at 9:23 AM
To: "Beierl, Mark" <[email protected]<mailto:[email protected]>>, "Alec 
Hothan (ahothan)" <[email protected]<mailto:[email protected]>>
Cc: 
"[email protected]<mailto:[email protected]>" 
<[email protected]<mailto:[email protected]>>
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

Hi,

About the tagging question: in the past we tagged all the images we built and 
stored on Docker Hub. The tags for intermediate versions were incremented and 
applied automatically by the build job, and the release tag was applied 
manually for the release.
But then (some of the) test projects decided not to do that and got rid of it. 
(I don't exactly remember who, why, and so on.)

We obviously failed to flag this at that time. This should be discussed by the 
Test WG and fixed.

/Fatih

From: 
<[email protected]<mailto:[email protected]>>
 on behalf of "Beierl, Mark" <[email protected]<mailto:[email protected]>>
Date: Monday, 10 July 2017 at 18:10
To: "Alec Hothan (ahothan)" <[email protected]<mailto:[email protected]>>
Cc: 
"[email protected]<mailto:[email protected]>" 
<[email protected]<mailto:[email protected]>>
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

Sorry, Alec, for not responding.  I'm not a releng committer so I thought 
someone from there would have replied.  You are correct that the tag is 
provided by the person running the job in Jenkins and passed through to 
opnfv-docker.sh.

As for the git clone issue, or pip install from git, there is no tag provided.  
This is a concern I have with the way the docker build (in releng) and the git 
clone are separated.  We cannot actually rebuild from a label at this time.

Perhaps this is a bigger issue that needs to be discussed before we can 
properly address multiple docker builds.

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106<tel:1-613-314-8106>
[email protected]<mailto:[email protected]>

On Jul 10, 2017, at 11:34, Alec Hothan (ahothan) 
<[email protected]<mailto:[email protected]>> wrote:


Projects that do not have PyPI packages (or the right version of the PyPI 
package published) might prefer to do a git clone instead and either install 
directly or use pip install from the clone in the container.
Some Dockerfiles may prefer to install directly from the current (cloned) repo 
(avoiding a git clone), but this might accidentally (or purposely) include 
local patches in the built container.
There are many valid ways to skin the cat…

I did not get any feedback on a previous question I had on container 
versioning/tagging.
The container versioning currently used is based on the branch name followed by 
a release version (e.g. “danube.3.0”), with the addition of latest, stable and 
master.

From opnfv-docker.sh:

# Get tag version
echo "Current branch: $BRANCH"

BUILD_BRANCH=$BRANCH

if [[ "$BRANCH" == "master" ]]; then
    DOCKER_TAG="latest"
elif [[ -n "${RELEASE_VERSION-}" ]]; then
    DOCKER_TAG=${BRANCH##*/}.${RELEASE_VERSION}
    # e.g. danube.1.0, danube.2.0, danube.3.0
else
    DOCKER_TAG="stable"
fi

if [[ -n "${COMMIT_ID-}" && -n "${RELEASE_VERSION-}" ]]; then
    DOCKER_TAG=$RELEASE_VERSION
    BUILD_BRANCH=$COMMIT_ID
fi

If the branch is master, the tag is “latest”; otherwise, if RELEASE_VERSION is 
defined, it is <branch>.<release-version>; else it is “stable”.
And lastly, the above is overridden to RELEASE_VERSION alone when both 
RELEASE_VERSION and COMMIT_ID are set (I wonder how that works with 2 branches 
that have the same RELEASE_VERSION?).

There are a few gaps that don’t seem to be covered by this versioning – perhaps 
project owners who publish containers have a way to work around them?

  *   How are the containers for multiple versions of master at various commits 
published? They all seem to have the “master” tag
  *   For a given branch (say Danube), the same question applies for a given 
release (say, for Danube 3.0, one might have multiple versions of a container 
with various patches)
  *   Some projects may have containers that actually work with multiple OPNFV 
releases; will they be forced to publish the same container image under 
different tags (e.g. danube.3.0 and euphrates.1.0)?
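One possible workaround for these gaps (purely illustrative, not an existing OPNFV convention) is to suffix the release tag with the short commit SHA, so multiple builds of the same branch/release remain distinguishable:

```shell
BRANCH="stable/danube"
RELEASE_VERSION="3.0"
# Fall back to a fixed placeholder when not run inside a git repo.
SHORT_SHA=$(git rev-parse --short HEAD 2>/dev/null || echo "abc1234")

# Reuses the ${BRANCH##*/} expansion from opnfv-docker.sh, plus a SHA suffix.
DOCKER_TAG="${BRANCH##*/}.${RELEASE_VERSION}-${SHORT_SHA}"
echo "$DOCKER_TAG"
```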

In general a docker container tag would have a version in it (e.g. 3.2.1), 
sometimes along with text describing some classification (indicating, for 
example, variations of the same source code version). This is not the case for 
OPNFV.

From the look of the script, I’m not quite sure when stable is used.
I’d be interested to know how current project docker owners deal with the above 
issues, and whether there is any interest in addressing them.

Thanks

  Alec



From: 
<[email protected]<mailto:[email protected]>>
 on behalf of Cedric OLLIVIER 
<[email protected]<mailto:[email protected]>>
Date: Monday, July 10, 2017 at 12:20 AM
To: "Beierl, Mark" <[email protected]<mailto:[email protected]>>
Cc: 
"[email protected]<mailto:[email protected]>" 
<[email protected]<mailto:[email protected]>>
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

I'm sorry, I don't understand the point of the git clone.
Here we simply install Functest via the Python package.
Pip install makes a local copy, because it's not published on PyPI yet, and 
then removes it after installing the package.

Why should we clone the repository again?

Cédric

2017-07-10 3:10 GMT+02:00 Beierl, Mark 
<[email protected]<mailto:[email protected]>>:
Why should we avoid copy?  Why do a git clone of the existing git clone?  
Almost every Dockerfile example I have seen uses copy, not a second git 
checkout of the same code.
Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106<tel:1-613-314-8106>
[email protected]<mailto:[email protected]>

On Jul 9, 2017, at 21:00, Cedric OLLIVIER 
<[email protected]<mailto:[email protected]>> wrote:
No, we cannot (parent directory), and we should mostly avoid copying files 
(except for configuration).

For instance, you could have a look at 
https://gerrit.opnfv.org/gerrit/#/c/36963/.
All Dockerfiles simply download Alpine packages, Python packages (Functest + 
its dependencies) and upper-constraints files.
testcases.yaml is copied from the host as it differs between our containers 
(smoke, healthcheck...).
Cédric


2017-07-10 1:25 GMT+02:00 Beierl, Mark 
<[email protected]<mailto:[email protected]>>:
My only concern is with Dockerfiles that do a "COPY . /home" in them. That 
means all the code would have to be located under the docker directory.

I suppose multiple ../ paths can be used instead.

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106<tel:1-613-314-8106>
[email protected]<mailto:[email protected]>

On Jul 9, 2017, at 19:03, Julien 
<[email protected]<mailto:[email protected]>> wrote:
Hi Cédric,

The patch in https://gerrit.opnfv.org/gerrit/#/c/36963/ is exactly what I mean. 
Let's collect opinions from the releng team.

Julien



Cedric OLLIVIER 
<[email protected]<mailto:[email protected]>>于2017年7月10日周一 
上午4:15写道:
Hello,

Please see https://gerrit.opnfv.org/gerrit/#/c/36963/, which introduces several 
containers for Functest too.
I think the tree conforms to the previous requirements.

Automating builds on Docker Hub is a good solution too.
Cédric

2017-07-09 12:10 GMT+02:00 Julien 
<[email protected]<mailto:[email protected]>>:
Hi Jose,

According to the current implementation, the current script only supports one 
Dockerfile. My personal suggestion is:
1. list all the sub-directories that contain a "Dockerfile"
2. build for each sub-directory found in #1
3. for the names, use the project name for the top directory and 
project_name-sub_directory_name for sub-directories
This means not too many changes to the current script, and it is easy for 
projects to manage.
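The three steps above can be sketched like this (the demo/ tree and project name are placeholders created just for the example, not the real repo layout):

```shell
PROJECT="storperf"

# Fake layout standing in for a real repo checkout.
mkdir -p demo/docker/reporting
touch demo/docker/Dockerfile demo/docker/reporting/Dockerfile

# 1. list sub-directories containing a Dockerfile; 2. build each;
# 3. name the top-level image after the project, others project-subdir.
for df in $(find demo/docker -name Dockerfile | sort); do
    subdir=$(basename "$(dirname "$df")")
    if [ "$subdir" = "docker" ]; then
        image="$PROJECT"
    else
        image="${PROJECT}-${subdir}"
    fi
    echo "docker build -t opnfv/${image} -f ${df} ."
done
```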

/Julien

Beierl, Mark <[email protected]<mailto:[email protected]>>于2017年7月7日周五 
下午11:35写道:
Hello,

Having looked over the docker-hub build service, I also think this might be the 
better approach.  Less code for us to maintain, and the merge job from OPNFV 
Jenkins can use the web hook to remotely trigger the job on docker-hub.

Who has the opnfv credentials for docker-hub, and the credentials for the 
GitHub mirror that can set this up?  Is that the LF Helpdesk?

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106<tel:1-613-314-8106>
[email protected]<mailto:[email protected]>

On Jul 7, 2017, at 11:01, Xuan Jia 
<[email protected]<mailto:[email protected]>> wrote:

+1 Using build service from docker-hub

On Thu, Jul 6, 2017 at 11:42 PM, Yujun Zhang (ZTE) 
<[email protected]<mailto:[email protected]>> wrote:
Has anybody considered using the build service from docker-hub [1]?

It supports multiple Dockerfiles from the same repository and is easy to 
integrate with the OPNFV GitHub mirror.

[1]: https://docs.docker.com/docker-hub/builds/


On Thu, Jul 6, 2017 at 11:02 PM Jose Lausuch 
<[email protected]<mailto:[email protected]>> wrote:
Hi Mark,

I would be inclined toward option 1); it sounds better than searching for a 
file. We could define specific values of the DOCKERFILE variable for each 
project.

/Jose


From: Beierl, Mark [mailto:[email protected]<mailto:[email protected]>]
Sent: Thursday, July 06, 2017 16:18
To: 
[email protected]<mailto:[email protected]>
Cc: Julien <[email protected]<mailto:[email protected]>>; Fatih Degirmenci 
<[email protected]<mailto:[email protected]>>; Jose 
Lausuch <[email protected]<mailto:[email protected]>>
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

Ideas:


  *   Change the DOCKERFILE parameter in the releng jjb so that it can accept a 
comma-delimited list of Dockerfile names and paths.  The problem with this, of 
course, is how do I default it differently for StorPerf vs. Functest, etc.?
  *   Change opnfv-docker.sh to search for the named DOCKERFILE in all 
subdirectories.  This should cover the .aarch64 and vanilla Dockerfile cases.
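The first idea can be sketched like this (the list handling is hypothetical; only the DOCKERFILE parameter name comes from the existing releng jjb):

```shell
# A comma-delimited DOCKERFILE parameter, split and built one by one.
DOCKERFILE="docker/Dockerfile,docker/Dockerfile.aarch64"

count=0
for f in $(echo "$DOCKERFILE" | tr ',' ' '); do
    # Echoed rather than run: this is only a sketch of the loop.
    echo "docker build -f ${f} ."
    count=$((count + 1))
done
echo "built ${count} dockerfiles"
```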

Please +1/-1 or propose other ideas, thanks!

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106<tel:1-613-314-8106>
[email protected]<mailto:[email protected]>

On Jun 24, 2017, at 04:05, Jose Lausuch 
<[email protected]<mailto:[email protected]>> wrote:

+1

No need for an additional repo; the logic can live in Releng.
Functest will probably move to multiple containers some time soon, so that is 
something we could also leverage.

-Jose-


On 23 Jun 2017, at 18:39, Julien 
<[email protected]<mailto:[email protected]>> wrote:

Agree,

If StorPerf can list some rules and examples, the current scripts can be 
adapted to build multiple docker images, and other projects can use the same 
type of change. It is not worth adding a new repo just to build a new image.






_______________________________________________
opnfv-tech-discuss mailing list
[email protected]
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
