Because every snapshot for every architecture is done on a different
tree, and some are even done 5-6 times a day.  So this would require,
if I can guess this right, 2.6GB per day.  Supplied over a T1.
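For scale, here is a rough sanity check of those numbers (the 1.544 Mbit/s T1 rate is the standard figure; the 2.6GB/day is the estimate above):

```python
# Rough sanity check: how much of a T1 would 2.6GB/day of
# snapshot tarballs consume?  (Assumed: T1 = 1.544 Mbit/s.)
T1_BITS_PER_SEC = 1.544e6
SECONDS_PER_DAY = 86400

t1_bytes_per_day = T1_BITS_PER_SEC / 8 * SECONDS_PER_DAY
snapshot_bytes_per_day = 2.6e9

print(f"T1 capacity: {t1_bytes_per_day / 1e9:.1f} GB/day")
print(f"Snapshots would eat {snapshot_bytes_per_day / t1_bytes_per_day:.0%} of the link")
```

So the tarballs alone would take roughly a sixth of the line's total capacity, before serving anything else.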

Obviously a full tarball isn't the answer, but how about enough
information to reproduce the source code used to make the snapshot?

Sure, it does not look like it's a lot of work.
(It isn't.)

Now imagine what happens in reality.

Architecture A is 10% in the build.
Architecture B is 30% in the build.
Architecture C is 75% in the build.
All three from the same, NFS mounted, source tree.

A spiffy userland diff arrives, which will be put in the snapshot. It
affects src/bin/foo, src/usr.bin/bar and src/usr.sbin/baz.

After the diff is applied, it is probably too late for architecture C,
which will only pick up these changes in its next snapshot, while all
or part of them will land in the A and B snapshots.

If you want "enough information to reproduce" the snapshot, this means
that every time a diff is added to the common source tree, or every
time a partial cvs update is made in this common source tree, one has
to check all the currently running snapshots to see how far they are and
what part of the update will really end up in the tarballs.
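To make the bookkeeping concrete, here is a hypothetical sketch of the check such tracking would need; every name and value below is made up for illustration, and the real obstacle is that a build's exact progress can't be reliably observed from outside:

```python
# Hypothetical sketch only: deciding whether a diff fully lands in the
# snapshot currently being built.  In reality you cannot reliably know
# which files a running build has already compiled.

def diff_fully_in_next_snapshot(diff_paths, already_built):
    """A diff only lands cleanly in the NEXT snapshot if none of the
    files it touches were already compiled into the current build."""
    return not any(p in already_built for p in diff_paths)

# Architecture C is 75% through: say foo was built, bar/baz not yet.
built_on_c = {"src/bin/foo"}
diff = {"src/bin/foo", "src/usr.bin/bar", "src/usr.sbin/baz"}
print(diff_fully_in_next_snapshot(diff, built_on_c))  # False: partial inclusion
```

The False answer is exactly the messy case: C's current snapshot gets part of the diff, and "enough information to reproduce" would have to record that split for every arch and every diff.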

This is not something you can do with scripts only.

The only way to accede to your request is to change the process used
to make snapshots.

Guess what? This will not happen, because we are satisfied with the
current process.

This can be frustrating to end users, but given that unpublished diffs
don't stay long in snapshots (they either get dropped or committed soon),
this is something we developers think you can live with.

> It is well known in the free software community that the more
> eyeballs look at source code, the more bugs get found and fixed.

BTW, this is one of the most successful lies in the free software
community.

Miod
