On 09/11/2012 08:56 PM, Noah Slater wrote:
Wido, see my comments about the problems with hosting an APT repository on
ASF infra.


Yes, our e-mails crossed. Yours arrived while I was typing mine.

* Can't think of a way it would work with the mirrors.
* If you don't use the mirrors, infra might not like the bandwidth.

However:

* Do we already host repositories for Java?

No, we don't.

* Can we set something up for debs?


I'm offering bandwidth on our company servers under cloudstack.apt-get.eu; there is more than enough bandwidth available.

We should probably use a cloudstack.org hostname, just to make sure we can switch mirrors later without users noticing.
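For example (the hostname here is purely hypothetical, nothing has been set up), a single CNAME in the cloudstack.org zone would do:

deb.cloudstack.org.    IN    CNAME    cloudstack.apt-get.eu.

Users would then point their sources.list at deb.cloudstack.org, and we could move the backend later without anyone having to edit their configuration.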

Wido


On Tue, Sep 11, 2012 at 7:50 PM, Wido den Hollander <w...@widodh.nl> wrote:



On 09/11/2012 05:45 PM, David Nalley wrote:

On Tue, Sep 11, 2012 at 8:23 AM, Wido den Hollander <w...@widodh.nl>
wrote:

On 09/11/2012 12:16 PM, Suresh Sadhu wrote:


Hi all,

The installer fails to read the cloud packages, and the MS installation on Ubuntu
12.04 was not successful (no packages were installed). Raised a blocker bug.
Please find the details in the issue mentioned below:


I'd like to bring this up again: do we REALLY want this install.sh
script?



This really deserves its own thread, because it won't receive the
attention it deserves in the original thread.

I talked with infra about this a few weeks back, and while they said
they really wanted downstreams to package, they weren't vehemently
opposed to us creating our own repo, but we'd have to figure out how
to make it work with the mirror system.


A Debian/Ubuntu repository is just a bunch of directories and files, so that
could be distributed through the mirrors, I think?
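For example, the layout is roughly this (paths are illustrative, not our actual repo):

ubuntu/
  dists/precise/4.0/binary-amd64/Packages.gz
  dists/precise/4.0/binary-amd64/Release
  dists/precise/Release
  dists/precise/Release.gpg
  pool/4.0/cloud-agent_4.0.0_all.deb

rsync can replicate a tree like that like any other set of static files.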

The question is: do we want this to go on ASF infra or use an external
mirror for it?


Personally, the packages as they exist are great for people doing a
first, small-scale install, but they don't scale. While I am not
necessarily opposed to the installer, I also recognize the problems
from a real world deployment perspective.


I disagree on the first point. When manually installing packages with dpkg
you will run into dependency hell. You (you = the install script) have to
manually "apt-get install" several packages.

The problem you run into here is that you start doing redundant work. In
the "control" file you specify which packages you depend on. If you use
apt(itude), it resolves those dependencies for you. But when doing a
manual install with dpkg, it will complain about every single package that
is missing.
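To illustrate (the dependency list here is made up, not copied from our actual control file):

Package: cloud-agent
Depends: openjdk-6-jre-headless, libvirt-bin, qemu-kvm

$ dpkg -i cloud-agent_4.0.0_all.deb
dpkg: dependency problems prevent configuration of cloud-agent ...

$ apt-get install cloud-agent    # from a repo: apt pulls in the Depends line automatically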

This leads to install.sh containing a couple of directives to install packages
we already specified in the control file. In the long run you end up with packages
installed by install.sh which are no longer required, but apt has no way of
knowing they can be removed.

Packages should always enter a Debian system through apt, so it knows which
package depends on which and apt(itude) can do its work.
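You also get the clean-up for free then. For example (again with made-up package names), if cloud-agent came in through apt and pulled in qemu-kvm as a dependency:

$ apt-get remove cloud-agent
$ apt-get autoremove    # removes qemu-kvm and anything else that was only installed as a dependency

With install.sh and dpkg those packages just linger.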

Adding a repository and installing CloudStack is just 4 commands, isn't that
simple enough?

$ echo "deb 
http://cloudstack.apt-get.eu/**ubuntu<http://cloudstack.apt-get.eu/ubuntu>$(lsb_release -s 
-c) 4.0" > /etc/apt/sources.list.d/
**cloudstack.list
$ wget -O - 
http://cloudstack.apt-get.eu/**release.asc|apt-key<http://cloudstack.apt-get.eu/release.asc%7Capt-key>add
 -
$ apt-get update
$ apt-get install cloud-agent

Again, my repo is just an example :)


However, there is an impact: at a minimum, all of our documentation
will need rewriting. So personally, for 4.0.0 I'd prefer that we do
repos if we can figure it out in time, and keep the installer as
an option as well.



Re-writing the docs is a couple of hours of work that I'd be more than happy to do
for 4.0 if we go for a repo.

I honestly must admit that in some recent docs I already assumed there
would be a repo for 4.0...

It would be awesome if Jenkins could produce packages and send them to the
mirror, but it's more than doable to build the packages locally and upload
them; it's not like we are doing 10 releases a month.

It's just a matter of placing the packages in the "pool" directory and having a
script re-scan the repo.
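As a rough sketch of what that script could do (directory names are assumptions; reprepro would be an alternative that manages the pool layout for you):

$ cd /srv/cloudstack/ubuntu
$ apt-ftparchive packages pool/ > dists/precise/4.0/binary-amd64/Packages
$ gzip -9c dists/precise/4.0/binary-amd64/Packages > dists/precise/4.0/binary-amd64/Packages.gz
$ apt-ftparchive release dists/precise > dists/precise/Release
$ gpg -abs -o dists/precise/Release.gpg dists/precise/Release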

The question remains: Do we want this to be on ASF infra or do we host
this externally?

Wido



