Thanks! (A bit late responding, I know!) Can I just clarify that we're no longer planning to bundle DEB/RPM packages into our source release?
Thanks,

On Sat, Sep 8, 2012 at 7:09 PM, Chip Childers <chip.child...@sungard.com> wrote:
> Noah,
>
> First, welcome to the community!
>
> Obviously we are still finding our way on how to get RM activities working correctly. I completely understand (and agree with) the intent to have the release manager be able to produce all artifacts from a single command (or set of commands... but I like having one!). Right now, I'm using the script tools/build/build_asf.sh as the starting point to achieve this.
>
> The challenge we have right now is that we wanted to start allowing the test engineers participating in the community to help test the functionality of the software. As you can imagine, it's a project whose primary purpose drives a ton of integration testing with various hardware and software systems. To that end, Edison got the jenkins.cloudstack.org server to produce the required RPM and DEB package bundles via a build process against our release branch. I've combined the build artifacts from that source with the results of the build_asf.sh script and signed all artifacts. I then posted them to p.a.o/~chipchilders/cloudstack/4.0 as a sample of the eventual results we want to release.
>
> Concurrent with the testing I noted above, we're trying to sort out how best to deal with the mechanics of doing the release process. I believe I have the source process ironed out (but wouldn't mind another set of eyes on the build_asf.sh script). What's left are the binary distributions.
>
> The way that CloudStack had previously been distributed in "binary form" from Citrix is actually quite good from a user's standpoint (IMO as a user), and we seem to all be in continual agreement that a good installation process remains a community priority. To that end, we have several outstanding issues to work through (see [1], [2] and [3]), as well as all of the QA time being spent auditing and correcting dependency and source code licensing issues. That being said, we have also agreed that we expect the Linux distro packaging community to create their own packages from the source (and they would probably use the sample spec and deb files to help them with that). This doesn't, however, mean that we don't want to offer "convenience builds" for the agreed-upon target OSes [4].
>
> So the question is (given that I too would like to see a single command to build everything that we call a release): are there tools that allow us to simultaneously build RPMs and Debian packages on a single system? I haven't found one, but would LOVE it if someone had an idea there. Until we can do that, though, I think that using the same script that jenkins.c.o uses on local VMs that the release manager controls will have to be the primary method of creating the release binaries.
>
> Thoughts?
>
> -chip
>
> [1] - https://issues.apache.org/jira/browse/CLOUDSTACK-41
> [2] - https://issues.apache.org/jira/browse/CLOUDSTACK-42
> [3] - https://issues.apache.org/jira/browse/CLOUDSTACK-44
> [4] - http://mail-archives.apache.org/mod_mbox/incubator-cloudstack-dev/201208.mbox/%3C501A7954.8020604%40widodh.nl%3E
>
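On the single-machine question: a rough sketch of what a combined wrapper could look like is below. This is not the project's actual tooling; it assumes both the rpm-build and dpkg-dev/debhelper toolchains are installed on the same box, and the spec path, working tree, and version string are placeholders. A chroot or VM per target OS, as Chip suggests, is probably still the safer route.

    #!/bin/sh
    # Hypothetical wrapper: build both package types from one checkout.
    # Assumes rpmbuild and dpkg-buildpackage are both installed locally.
    set -e

    VERSION=4.0.0-incubating            # placeholder version string
    TOPDIR=$(pwd)/pkgbuild

    # RPM side: rpmbuild wants the source tarball under SOURCES plus a spec file.
    mkdir -p "$TOPDIR/SOURCES"
    cp dist/apache-cloudstack-$VERSION-src.tar.gz "$TOPDIR/SOURCES/"
    rpmbuild -ba packaging/cloudstack.spec --define "_topdir $TOPDIR"   # spec path is illustrative

    # DEB side: dpkg-buildpackage builds from an unpacked tree containing a debian/ dir.
    (cd build/debian-work && dpkg-buildpackage -us -uc -b)              # unsigned binary build; sign later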
> On Sat, Sep 8, 2012 at 12:54 PM, Noah Slater <nsla...@tumbolia.org> wrote:
> > Hmm. Just reading through the ASF Maven stuff. There's a good chance this takes care of things for you. Excuse my rashness. My release management experience at Apache comes from Autotools, and not Java. The policy (or theory behind it) is the same, but the practicalities of it obviously are not.
> >
> > On Sat, Sep 8, 2012 at 5:44 PM, Noah Slater <nsla...@tumbolia.org> wrote:
> >> Chip,
> >>
> >> If we're shipping binary distribution artefacts, there needs to be a single (preferably) command (or set of commands) (that are well documented) that a user could run that will produce the bit-for-bit identical file or files. Is this currently the case?
> >>
> >> If it is, then this command should (preferably) be the same command that the release manager uses, before signing and uploading to p.a.o. (With CouchDB, we have a signing step integrated into Autotools. Perhaps something similar might be possible for CloudStack.)
> >>
> >> Thanks,
> >>
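For what it's worth, the sign-and-checksum step can be quite small. A minimal sketch follows, assuming GnuPG and coreutils are available on the release manager's machine; the artifact names are illustrative only, and the exact checksum files expected should be confirmed against the current ASF release policy.

    # Sign each release artifact and publish checksums alongside it.
    for f in apache-cloudstack-4.0.0-incubating-src.tar.gz \
             apache-cloudstack-4.0.0-incubating-*.rpm \
             apache-cloudstack-4.0.0-incubating-*.deb; do
        gpg --armor --detach-sign "$f"        # produces $f.asc
        md5sum "$f"    > "$f.md5"
        sha512sum "$f" > "$f.sha512"
    done

    # A verifier who rebuilds from source can then compare their bits:
    sha512sum -c apache-cloudstack-4.0.0-incubating-src.tar.gz.sha512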
> >>
> >> On Fri, Sep 7, 2012 at 1:35 AM, Chip Childers <chip.child...@sungard.com> wrote:
> >>> On Thu, Sep 6, 2012 at 7:40 PM, Edison Su <edison...@citrix.com> wrote:
> >>> >
> >>> >> -----Original Message-----
> >>> >> From: Chip Childers [mailto:chip.child...@sungard.com]
> >>> >> Sent: Thursday, September 06, 2012 1:56 PM
> >>> >> To: cloudstack-dev@incubator.apache.org
> >>> >> Subject: Re: [ASFCS40] Specifically what should be our "binary distribution"?
> >>> >>
> >>> >> On Thu, Sep 6, 2012 at 4:46 PM, Wido den Hollander <w...@widodh.nl> wrote:
> >>> >> > On 09/06/2012 09:56 PM, Chip Childers wrote:
> >>> >> >> On Thu, Sep 6, 2012 at 3:44 PM, Joe Brockmeier <j...@zonker.net> wrote:
> >>> >> >>> On Thu, Sep 6, 2012, at 02:19 PM, Chip Childers wrote:
> >>> >> >>>> Hi all,
> >>> >> >>>>
> >>> >> >>>> Looking at previous CloudStack releases on sourceforge, I see that the "binary" distributions are tar.gz rpm/deb packages for RHEL and Ubuntu. I've looked at other Apache projects, and I see that they usually include the built jar files as their "binary" release artifacts.
> >>> >> >>>>
> >>> >> >>>> So my question for everyone is: what specifically do you think we should be distributing as an RC (and eventually as a release)? Do we want to do a set of the jar files in a tar.gz archive? Do we want to do RPM and DEB packages? Do we want both?
> >>> >> >>>
> >>> >> >>> How useful are a set of .jar packages in a tarball?
> >>> >> >>>
> >>> >> >>> Ideally, we can provide something that lets people get set up in as few steps as possible.
> >>> >> >>
> >>> >> >> Agreed - I was raising the question, but I don't think it's needed or useful.
> >>> >> >>
> >>> >> >>>> If we do the RPM and DEB packages, what OS should we be building on for each?
> >>> >> >>>
> >>> >> >>> At a minimum, the latest RHEL/CentOS and Ubuntu LTS.
> >>> >> >>
> >>> >> >> OK - so should we agree specifically on building on CentOS 6.3 and Ubuntu 12.04?
> >>> >> >
> >>> >> > This was already discussed about a month ago:
> >>> >> > http://mail-archives.apache.org/mod_mbox/incubator-cloudstack-dev/201208.mbox/%3C501A7954.8020604%40widodh.nl%3E
> >>> >> >
> >>> >> > We came to the conclusion:
> >>> >> >
> >>> >> > - Ubuntu 12.04
> >>> >> > - CentOS/RHEL 6.2 and 6.3
> >>> >> >
> >>> >> > I still think our binary distribution should be in the form of RPM and DEB files; that makes life for admins so much easier.
> >>> >>
> >>> >> Right, OK on that. For this first RC, I'm going to use CentOS 6.2 and 6.3.
> >>> >>
> >>> >> I'm also able to easily do Ubuntu 12.04, but I haven't tested the ./waf deb process yet. Do you know if the deb build is working right now?
> >>> >>
> >>> >> > I'll be setting up a Debian/Ubuntu repository soon for the Debian packages.
> >>> >>
> >>> >> So I think that's great, but I also would like us to release the final 4.0 RPMs and DEBs via the ASF mirrors. Perhaps similar to the previous sourceforge packaging structure?
> >>> >>
> >>> >> Does anyone know where the install.sh that was included with the Citrix cloudstack distro lives? Is there a packaging process to create that tarball?
> >>> >
> >>> > Here is our internal build system: https://github.com/CloudStack/hudsonbuild
> >>> > It can build debs/rpms. I am trying to integrate it with http://jenkins.cloudstack.org/
> >>>
> >>> Great! So if that's the case, then I'm proposing that the release process use the build artifacts from the http://jenkins.cloudstack.org/ build jobs for each target OS (to provide the bundled installer package) as the binary distros.
> >>>
> >>> I'll still follow the process of using local bundling for the source release, since there are lots of notes in the ASF release documentation about NOT doing the release signing on ASF infrastructure. I'm assuming that the spirit of those comments is to protect the integrity of a source packaging and signing process by not allowing them to occur on public / shared systems.
> >>>
> >>> I'll also assume that the artifacts produced by Jenkins still have to be signed locally after being downloaded from the Jenkins server.
> >>>
> >>> Please shout if there are disagreements!
> >>>
> >>> >> >>> (Long term we need to focus on being included with the distros, but that's a different discussion.)
> >>> >> >
> >>> >> > These are the platforms we build binaries for, not the platforms it's only going to work on.
> >>> >> >
> >>> >> >>>> I know these questions might be obvious to some people, but I wanted to get a clear consensus from the list.
> >>> >> >>>
> >>> >> >>> Thanks!
> >>> >> >>> --
> >>> >> >>> Joe Brockmeier
> >>> >> >>> j...@zonker.net
> >>> >> >>> Twitter: @jzb
> >>> >> >>> http://www.dissociatedpress.net/
> >>
> >> --
> >> NS
> >>
> >
> > --
> > NS
>

--
NS
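As a footnote on the Debian/Ubuntu repository mentioned above: the simplest possible form is a flat repository, sketched below. This is not the repository Wido describes setting up, just an illustration; it assumes dpkg-dev is installed, the hosting URL is a placeholder, and a production repository would want signed Release metadata (e.g. via reprepro) rather than this bare layout.

    # Minimal flat apt repository sketch.
    mkdir -p repo && cp *.deb repo/
    cd repo
    dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

    # Clients would then point a sources.list entry at it, e.g.:
    #   deb http://packages.example.org/cloudstack ./
    # (placeholder URL; apt will warn until the repository metadata is signed)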